Nov 25 11:36:29 crc systemd[1]: Starting Kubernetes Kubelet...
Nov 25 11:36:29 crc restorecon[4685]: Relabeled /var/lib/kubelet/config.json from system_u:object_r:unlabeled_t:s0 to system_u:object_r:container_var_lib_t:s0
Nov 25 11:36:29 crc restorecon[4685]: /var/lib/kubelet/device-plugins not reset as customized by admin to system_u:object_r:container_file_t:s0
Nov 25 11:36:29 crc restorecon[4685]: /var/lib/kubelet/device-plugins/kubelet.sock not reset as customized by admin to system_u:object_r:container_file_t:s0
Nov 25 11:36:29 crc restorecon[4685]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/volumes/kubernetes.io~configmap/nginx-conf/..2025_02_23_05_40_35.4114275528/nginx.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Nov 25 11:36:29 crc restorecon[4685]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Nov 25 11:36:29 crc restorecon[4685]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/22e96971 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Nov 25 11:36:29 crc restorecon[4685]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/21c98286 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Nov 25 11:36:29 crc restorecon[4685]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/0f1869e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Nov 25 11:36:29 crc restorecon[4685]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Nov 25 11:36:29 crc restorecon[4685]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/46889d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Nov 25 11:36:29 crc restorecon[4685]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/5b6a5969 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Nov 25 11:36:29 crc restorecon[4685]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/6c7921f5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Nov 25 11:36:29 crc restorecon[4685]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4804f443 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Nov 25 11:36:29 crc restorecon[4685]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/2a46b283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Nov 25 11:36:29 crc restorecon[4685]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/a6b5573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Nov 25 11:36:29 crc restorecon[4685]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4f88ee5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Nov 25 11:36:29 crc restorecon[4685]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/5a4eee4b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Nov 25 11:36:29 crc restorecon[4685]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/cd87c521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Nov 25 11:36:29 crc restorecon[4685]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 25 11:36:29 crc restorecon[4685]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 25 11:36:29 crc restorecon[4685]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 25 11:36:29 crc restorecon[4685]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 25 11:36:29 crc restorecon[4685]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 25 11:36:29 crc restorecon[4685]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 25 11:36:29 crc restorecon[4685]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/38602af4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 25 11:36:29 crc restorecon[4685]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/1483b002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 25 11:36:29 crc restorecon[4685]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/0346718b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 25 11:36:29 crc restorecon[4685]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/d3ed4ada not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 25 11:36:29 crc restorecon[4685]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/3bb473a5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 25 11:36:29 crc restorecon[4685]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/8cd075a9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 25 11:36:29 crc restorecon[4685]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/00ab4760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 25 11:36:29 crc restorecon[4685]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/54a21c09 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 25 11:36:29 crc restorecon[4685]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Nov 25 11:36:29 crc restorecon[4685]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/70478888 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Nov 25 11:36:29 crc restorecon[4685]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/43802770 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Nov 25 11:36:29 crc restorecon[4685]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/955a0edc not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Nov 25 11:36:29 crc restorecon[4685]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/bca2d009 not reset as customized by admin to system_u:object_r:container_file_t:s0:c140,c1009
Nov 25 11:36:29 crc restorecon[4685]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/b295f9bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Nov 25 11:36:29 crc restorecon[4685]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 25 11:36:29 crc restorecon[4685]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 25 11:36:29 crc restorecon[4685]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 25 11:36:29 crc restorecon[4685]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 25 11:36:29 crc restorecon[4685]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 25 11:36:29 crc restorecon[4685]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 25 11:36:29 crc restorecon[4685]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 25 11:36:29 crc restorecon[4685]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 25 11:36:29 crc restorecon[4685]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 25 11:36:29 crc restorecon[4685]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 25 11:36:29 crc restorecon[4685]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 25 11:36:29 crc restorecon[4685]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/bc46ea27 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Nov 25 11:36:29 crc restorecon[4685]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5731fc1b not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Nov 25 11:36:29 crc restorecon[4685]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5e1b2a3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 25 11:36:29 crc restorecon[4685]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/943f0936 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Nov 25 11:36:29 crc restorecon[4685]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/3f764ee4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Nov 25 11:36:29 crc restorecon[4685]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/8695e3f9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 25 11:36:29 crc restorecon[4685]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/aed7aa86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Nov 25 11:36:29 crc restorecon[4685]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/c64d7448 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Nov 25 11:36:29 crc restorecon[4685]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/0ba16bd2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 25 11:36:29 crc restorecon[4685]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/207a939f not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Nov 25 11:36:29 crc restorecon[4685]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/54aa8cdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Nov 25 11:36:29 crc restorecon[4685]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/1f5fa595 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 25 11:36:29 crc restorecon[4685]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/bf9c8153 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Nov 25 11:36:29 crc restorecon[4685]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/47fba4ea not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Nov 25 11:36:29 crc restorecon[4685]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/7ae55ce9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 25 11:36:29 crc restorecon[4685]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7906a268 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Nov 25 11:36:29 crc restorecon[4685]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/ce43fa69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Nov 25 11:36:29 crc restorecon[4685]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7fc7ea3a not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 25 11:36:29 crc restorecon[4685]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/d8c38b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Nov 25 11:36:29 crc restorecon[4685]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/9ef015fb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Nov 25 11:36:29 crc restorecon[4685]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/b9db6a41 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 25 11:36:29 crc restorecon[4685]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Nov 25 11:36:29 crc restorecon[4685]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/b1733d79 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Nov 25 11:36:29 crc restorecon[4685]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/afccd338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Nov 25 11:36:29 crc restorecon[4685]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/9df0a185 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Nov 25 11:36:29 crc restorecon[4685]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/18938cf8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Nov 25 11:36:29 crc restorecon[4685]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/7ab4eb23 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Nov 25 11:36:29 crc restorecon[4685]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/56930be6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Nov 25 11:36:29 crc restorecon[4685]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Nov 25 11:36:29 crc restorecon[4685]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_35.630010865 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Nov 25 11:36:29 crc restorecon[4685]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Nov 25 11:36:29 crc restorecon[4685]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Nov 25 11:36:29 crc restorecon[4685]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Nov 25 11:36:29 crc restorecon[4685]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Nov 25 11:36:29 crc restorecon[4685]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Nov 25 11:36:29 crc restorecon[4685]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Nov 25 11:36:29 crc restorecon[4685]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Nov 25 11:36:29 crc restorecon[4685]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/0d8e3722 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Nov 25 11:36:29 crc restorecon[4685]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/d22b2e76 not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Nov 25 11:36:29 crc restorecon[4685]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/e036759f not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Nov 25 11:36:29 crc restorecon[4685]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/2734c483 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Nov 25 11:36:29 crc restorecon[4685]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/57878fe7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Nov 25 11:36:29 crc restorecon[4685]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/3f3c2e58 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Nov 25 11:36:29 crc restorecon[4685]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/375bec3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Nov 25 11:36:29 crc restorecon[4685]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/7bc41e08 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Nov 25 11:36:29 crc restorecon[4685]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 25 11:36:29 crc restorecon[4685]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/48c7a72d not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 25 11:36:29 crc restorecon[4685]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/4b66701f not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 25 11:36:29 crc restorecon[4685]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/a5a1c202 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 25 11:36:29 crc restorecon[4685]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 25 11:36:29 crc restorecon[4685]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 25 11:36:29 crc restorecon[4685]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 25 11:36:29 crc restorecon[4685]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 25 11:36:29 crc restorecon[4685]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 25 11:36:29 crc restorecon[4685]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 25 11:36:29 crc restorecon[4685]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 25 11:36:29 crc restorecon[4685]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 25 11:36:29 crc restorecon[4685]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_40.1388695756 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 25 11:36:29 crc restorecon[4685]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 25 11:36:29 crc restorecon[4685]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 25 11:36:29 crc restorecon[4685]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/26f3df5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 25 11:36:29 crc restorecon[4685]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/6d8fb21d not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 25 11:36:29 crc restorecon[4685]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/50e94777 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 25 11:36:29 crc restorecon[4685]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208473b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 25 11:36:29 crc restorecon[4685]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/ec9e08ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 25 11:36:29 crc restorecon[4685]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3b787c39 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 25 11:36:29 crc restorecon[4685]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208eaed5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 25 11:36:29 crc restorecon[4685]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/93aa3a2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 25 11:36:29 crc restorecon[4685]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3c697968 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 25 11:36:29 crc restorecon[4685]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Nov 25 11:36:29 crc restorecon[4685]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/ba950ec9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Nov 25 11:36:29 crc restorecon[4685]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/cb5cdb37 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Nov 25 11:36:29 crc restorecon[4685]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/f2df9827 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Nov 25 11:36:29 crc restorecon[4685]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 25 11:36:29 crc restorecon[4685]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 25 11:36:29 crc restorecon[4685]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 25 11:36:29 crc restorecon[4685]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 25 11:36:29 crc restorecon[4685]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 25 11:36:29 crc restorecon[4685]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 25 11:36:29 crc restorecon[4685]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 25 11:36:29 crc restorecon[4685]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 25 11:36:29 crc restorecon[4685]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 25 11:36:29 crc restorecon[4685]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 25 11:36:29 crc restorecon[4685]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 25 11:36:29 crc restorecon[4685]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/fedaa673 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 25 11:36:29 crc restorecon[4685]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/9ca2df95 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 25 11:36:29 crc restorecon[4685]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/b2d7460e not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 25 11:36:29 crc restorecon[4685]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2207853c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 25 11:36:29 crc restorecon[4685]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/241c1c29 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 25 11:36:29 crc restorecon[4685]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2d910eaf not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 25 11:36:29 crc restorecon[4685]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 25 11:36:29 crc restorecon[4685]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 25 11:36:29 crc restorecon[4685]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 25 11:36:29 crc restorecon[4685]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 25 11:36:29 crc restorecon[4685]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 25 11:36:29 crc restorecon[4685]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 25 11:36:29 crc restorecon[4685]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 25 11:36:29 crc restorecon[4685]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 25 11:36:29 crc restorecon[4685]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 25 11:36:29 crc restorecon[4685]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 25 11:36:29 crc restorecon[4685]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 25 11:36:29 crc restorecon[4685]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 25 11:36:29 crc restorecon[4685]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 25 11:36:29 crc restorecon[4685]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 25 11:36:29 crc restorecon[4685]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 25 11:36:29 crc restorecon[4685]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 25 11:36:29 crc restorecon[4685]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/c6c0f2e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Nov 25 11:36:29 crc restorecon[4685]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/399edc97 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Nov 25 11:36:29 crc restorecon[4685]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8049f7cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Nov 25 11:36:29 crc restorecon[4685]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/0cec5484 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Nov 25 11:36:29 crc restorecon[4685]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/312446d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c406,c828
Nov 25 11:36:29 crc restorecon[4685]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8e56a35d not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 25 11:36:29 crc restorecon[4685]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Nov 25 11:36:29 crc restorecon[4685]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Nov 25 11:36:29 crc restorecon[4685]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Nov 25 11:36:29 crc restorecon[4685]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Nov 25 11:36:29 crc restorecon[4685]:
/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Nov 25 11:36:29 crc restorecon[4685]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Nov 25 11:36:29 crc restorecon[4685]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/2d30ddb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909 Nov 25 11:36:29 crc restorecon[4685]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/eca8053d not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909 Nov 25 11:36:29 crc restorecon[4685]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/c3a25c9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c168,c522 Nov 25 11:36:29 crc restorecon[4685]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/b9609c22 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Nov 25 11:36:29 crc restorecon[4685]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Nov 25 11:36:29 crc restorecon[4685]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/e8b0eca9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418 Nov 25 11:36:29 crc restorecon[4685]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/b36a9c3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711 Nov 25 11:36:29 crc restorecon[4685]: 
/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/38af7b07 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Nov 25 11:36:29 crc restorecon[4685]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/ae821620 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418 Nov 25 11:36:29 crc restorecon[4685]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/baa23338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711 Nov 25 11:36:29 crc restorecon[4685]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/2c534809 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Nov 25 11:36:29 crc restorecon[4685]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Nov 25 11:36:29 crc restorecon[4685]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Nov 25 11:36:29 crc restorecon[4685]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Nov 25 11:36:29 crc restorecon[4685]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Nov 25 11:36:29 crc restorecon[4685]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c661,c999 Nov 25 11:36:29 crc restorecon[4685]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Nov 25 11:36:29 crc restorecon[4685]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/59b29eae not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381 Nov 25 11:36:29 crc restorecon[4685]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/c91a8e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381 Nov 25 11:36:29 crc restorecon[4685]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/4d87494a not reset as customized by admin to system_u:object_r:container_file_t:s0:c442,c857 Nov 25 11:36:29 crc restorecon[4685]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/1e33ca63 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Nov 25 11:36:29 crc restorecon[4685]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 25 11:36:29 crc restorecon[4685]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/8dea7be2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 25 11:36:29 crc restorecon[4685]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d0b04a99 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 25 11:36:29 crc restorecon[4685]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d84f01e7 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c12,c18 Nov 25 11:36:29 crc restorecon[4685]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/4109059b not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 25 11:36:29 crc restorecon[4685]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/a7258a3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 25 11:36:29 crc restorecon[4685]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/05bdf2b6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 25 11:36:29 crc restorecon[4685]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 25 11:36:29 crc restorecon[4685]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/f3261b51 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 25 11:36:29 crc restorecon[4685]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/315d045e not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 25 11:36:29 crc restorecon[4685]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/5fdcf278 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 25 11:36:29 crc restorecon[4685]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/d053f757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 25 11:36:29 crc restorecon[4685]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/c2850dc7 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 25 11:36:29 crc restorecon[4685]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:29 crc restorecon[4685]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:29 crc restorecon[4685]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:29 crc restorecon[4685]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:29 crc restorecon[4685]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:29 crc restorecon[4685]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:29 crc restorecon[4685]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fcfb0b2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:29 crc restorecon[4685]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c7ac9b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:29 crc 
restorecon[4685]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fa0c0d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:29 crc restorecon[4685]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c609b6ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:29 crc restorecon[4685]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/2be6c296 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:29 crc restorecon[4685]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/89a32653 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:29 crc restorecon[4685]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/4eb9afeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:29 crc restorecon[4685]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/13af6efa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:29 crc restorecon[4685]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 25 11:36:29 crc restorecon[4685]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/b03f9724 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 25 11:36:29 crc restorecon[4685]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/e3d105cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 25 11:36:29 crc restorecon[4685]: 
/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/3aed4d83 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 25 11:36:29 crc restorecon[4685]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Nov 25 11:36:29 crc restorecon[4685]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Nov 25 11:36:29 crc restorecon[4685]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Nov 25 11:36:29 crc restorecon[4685]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Nov 25 11:36:29 crc restorecon[4685]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Nov 25 11:36:29 crc restorecon[4685]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Nov 25 11:36:29 crc restorecon[4685]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/0765fa6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Nov 25 11:36:29 crc restorecon[4685]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/2cefc627 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c2,c18 Nov 25 11:36:29 crc restorecon[4685]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/3dcc6345 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Nov 25 11:36:29 crc restorecon[4685]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/365af391 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Nov 25 11:36:29 crc restorecon[4685]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Nov 25 11:36:29 crc restorecon[4685]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Nov 25 11:36:29 crc restorecon[4685]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Nov 25 11:36:29 crc restorecon[4685]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Nov 25 11:36:29 crc restorecon[4685]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Nov 25 11:36:29 crc restorecon[4685]: 
/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Nov 25 11:36:29 crc restorecon[4685]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Nov 25 11:36:29 crc restorecon[4685]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Nov 25 11:36:29 crc restorecon[4685]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b1130c0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Nov 25 11:36:29 crc restorecon[4685]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/236a5913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b9432e26 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/5ddb0e3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/986dc4fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/8a23ff9a not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c9,c12 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/9728ae68 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/665f31d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Nov 25 11:36:30 crc restorecon[4685]: 
/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Nov 25 11:36:30 crc 
restorecon[4685]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/136c9b42 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/98a1575b not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/cac69136 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/5deb77a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/2ae53400 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Nov 25 
11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/e46f2326 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/dc688d3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/3497c3cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/177eb008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c2,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/af5a2afa not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/d780cb1f not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/49b0f374 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Nov 25 11:36:30 crc restorecon[4685]: 
/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/26fbb125 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/cf14125a not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/b7f86972 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c5,c11 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/e51d739c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/88ba6a69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/669a9acf not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/5cd51231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/75349ec7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/15c26839 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/45023dcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/2bb66a50 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/64d03bdd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 25 11:36:30 crc 
restorecon[4685]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/ab8e7ca0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/bb9be25f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/9a0b61d3 not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/d471b9d2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/8cb76b8e not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/11a00840 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/ec355a92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/992f735e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797/config.yaml not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d59cdbbc not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/72133ff0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/c56c834c not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d13724c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/0a498258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Nov 25 11:36:30 crc restorecon[4685]: 
/var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa471982 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fc900d92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa7d68da not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/4bacf9b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/424021b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/fc2e31a3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/f51eefac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/c8997f2f not reset as customized 
by admin to system_u:object_r:container_file_t:s0:c9,c22 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/7481f599 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/fdafea19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Nov 25 11:36:30 crc restorecon[4685]: 
/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/d0e1c571 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/ee398915 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/682bb6b8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a3e67855 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a989f289 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/915431bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/7796fdab not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/dcdb5f19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/a3aaa88c not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/5508e3e6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/160585de not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/e99f8da3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/8bc85570 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/a5861c91 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/84db1135 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/9e1a6043 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/c1aba1c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/d55ccd6d not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Nov 25 11:36:30 crc restorecon[4685]: 
/var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/971cc9f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/8f2e3dcf not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/ceb35e9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/1c192745 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/5209e501 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/f83de4df not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/e7b978ac not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/c64304a1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/5384386b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/etc-hosts not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c268,c620 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/cce3e3ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/8fb75465 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/740f573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/32fd1134 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/0a861bd3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/80363026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/bfa952a8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c129,c158 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..2025_02_23_05_33_31.333075221 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Nov 25 11:36:30 crc 
restorecon[4685]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/793bf43d not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/7db1bb6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/4f6a0368 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/c12c7d86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/36c4a773 not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/4c1e98ae not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/a4c8115c not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/setup/7db1802e not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Nov 25 11:36:30 crc restorecon[4685]: 
/var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver/a008a7ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-syncer/2c836bac not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-regeneration-controller/0ce62299 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-insecure-readyz/945d2457 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-check-endpoints/7d5c1dd8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/index.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/bundle-v1.15.0.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/channel.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/package.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 11:36:30
crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/bc8d0691 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/6b76097a not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/34d1af30 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/312ba61c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/645d5dd1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/16e825f0 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/4cf51fc9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/2a23d348 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/075dbd49 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c842,c986 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/dd585ddd not reset as customized by admin to system_u:object_r:container_file_t:s0:c377,c642 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/17ebd0ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c343 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/005579f4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c842,c986 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_23_11.1287037894 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c764,c897 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/bf5f3b9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/af276eb7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Nov 25 11:36:30 crc restorecon[4685]: 
/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/ea28e322 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/692e6683 not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/871746a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/4eb2e958 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/console-config.yaml not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 25 11:36:30 crc restorecon[4685]: 
/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 25 11:36:30 crc restorecon[4685]: 
/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/ca9b62da not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/0edd6fce not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 25 11:36:30 crc restorecon[4685]: 
/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c14,c22 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/tls-ca-bundle.pem not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c14,c22 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/containers/controller-manager/89b4555f not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/655fcd71 not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/0d43c002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/e68efd17 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/9acf9b65 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/5ae3ff11 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/1e59206a not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/27af16d1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c304,c1017 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/7918e729 not reset as customized by admin to system_u:object_r:container_file_t:s0:c853,c893 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/5d976d0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c585,c981 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c5,c25 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c5,c25 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/d7f55cbb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/f0812073 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/1a56cbeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/7fdd437e not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/cdfb5652 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c336,c787 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 25 
11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 25 11:36:30 crc restorecon[4685]: 
/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/fix-audit-permissions/fb93119e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver/f1e8fc0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver-check-endpoints/218511f3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 25 11:36:30 crc restorecon[4685]: 
/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server/serving-certs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/ca8af7b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/72cc8a75 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/6e8a3760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Nov 25 11:36:30 crc 
restorecon[4685]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4c3455c0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/2278acb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4b453e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/3ec09bda not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 11:36:30 crc restorecon[4685]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2/cacerts.bin not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java/cacerts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl/ca-bundle.trust.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 11:36:30 crc 
restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/email-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/objsign-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2ae6433e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fde84897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75680d2e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/openshift-service-serving-signer_1740288168.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/facfc4fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f5a969c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CFCA_EV_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9ef4a08a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ingress-operator_1740288202.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2f332aed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/248c8271.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d10a21f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ACCVRAIZ1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a94d09e5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c9a4d3b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40193066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd8c0d63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b936d1c6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CA_Disig_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4fd49c6c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM_SERVIDORES_SEGUROS.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b81b93f0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f9a69fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b30d5fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ANF_Secure_Server_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b433981b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93851c9e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9282e51c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7dd1bc4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Actalis_Authentication_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/930ac5d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f47b495.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e113c810.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5931b5bc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Commercial.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2b349938.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e48193cf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/302904dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a716d4ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Networking.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93bc0acc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/86212b19.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b727005e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbc54cab.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f51bb24c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c28a8a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9c8dfbd4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ccc52f49.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cb1c3204.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ce5e74ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd08c599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6d41d539.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb5fa911.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e35234b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8cb5ee0f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a7c655d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f8fc53da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/de6d66f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d41b5e2a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/41a3f684.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1df5a75f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_2011.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e36a6752.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b872f2b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9576d26b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/228f89db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_ECC_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb717492.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d21b73c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b1b94ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/595e996b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_RSA_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b46e03d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/128f4b91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_3_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81f2d2b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Autoridad_de_Certificacion_Firmaprofesional_CIF_A62634068.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3bde41ac.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d16a5865.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_EC-384_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0179095f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ffa7f1eb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9482e63a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4dae3dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e359ba6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7e067d03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/95aff9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7746a63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Baltimore_CyberTrust_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/653b494a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3ad48a91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_2_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/54657681.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/82223c44.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8de2f56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d9dafe4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d96b65e2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee64a828.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40547a79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5a3f0ff8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a780d93.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/34d996fb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/eed8c118.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/89c02a45.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b1159c4c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d6325660.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4c339cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8312c4c1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_E1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8508e720.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5fdd185d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48bec511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/69105f4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b9bc432.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/32888f65.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b03dec0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/219d9499.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5acf816d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbf06781.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc99f41e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AAA_Certificate_Services.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/985c1f52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8794b4e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_BR_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7c037b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 11:36:30 crc restorecon[4685]:
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ef954a4e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_EV_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2add47b6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/90c5a3c8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0f3e76e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/53a1b57a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 11:36:30 crc restorecon[4685]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_EV_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5ad8a5d6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/68dd7389.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d04f354.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 11:36:30 crc restorecon[4685]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d6437c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/062cdee6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bd43e1dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7f3d5d1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c491639e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 11:36:30 crc restorecon[4685]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3513523f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/399e7759.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/feffd413.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d18e9066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/607986c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c90bc37d.0 not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1b0f7e5c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e08bfd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dd8e9d41.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed39abd0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a3418fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bc3f2570.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 11:36:30 crc restorecon[4685]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_High_Assurance_EV_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/244b5494.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81b9768f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4be590e0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_ECC_P384_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9846683b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 11:36:30 crc restorecon[4685]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/252252d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e8e7201.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_RSA4096_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d52c538d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c44cc0c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 11:36:30 crc restorecon[4685]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Trusted_Root_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75d1b2ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a2c66da8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ecccd8db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust.net_Certification_Authority__2048_.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/aee5f10d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 11:36:30 crc restorecon[4685]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e7271e8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0e59380.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4c3982f2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b99d060.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf64f35b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0a775a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/002c0b4f.0 not reset 
as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cc450945.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_EC1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/106f3e4d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b3fb433b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4042bcee.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 11:36:30 crc 
restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/02265526.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/455f1b52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0d69c7e1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9f727ac7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5e98733a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0cd152c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 11:36:30 crc restorecon[4685]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc4d6a89.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6187b673.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/FIRMAPROFESIONAL_CA_ROOT-A_WEB.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ba8887ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/068570d1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f081611a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48a195d8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GDCA_TrustAUTH_R5_ROOT.pem 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f6fa695.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab59055e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b92fd57f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GLOBALTRUST_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fa5da96b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ec40989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7719f463.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 11:36:30 crc restorecon[4685]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1001acf7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f013ecaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/626dceaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c559d742.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1d3472b9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9479c8c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a81e292b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4bfab552.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e071171e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/57bcb2da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_ECC_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab5346f4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5046c355.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_RSA_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/865fbdf9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da0cfd1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/85cde254.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_ECC_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbb3f32b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureSign_RootCA11.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5860aaa6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/31188b5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HiPKI_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c7f1359b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f15c80c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hongkong_Post_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/09789157.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/18856ac4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e09d511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Commercial_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cf701eeb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d06393bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Public_Sector_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/10531352.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Izenpe.com.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureTrust_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0ed035a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsec_e-Szigno_Root_CA_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8160b96c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8651083.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2c63f966.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_ECC_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d89cda1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/01419da9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_RSA_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7a5b843.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_RSA_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf53fb88.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9591a472.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3afde786.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Gold_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NAVER_Global_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3fb36b73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d39b0a2c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a89d74c2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd58d51e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7db1890.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NetLock_Arany__Class_Gold__F__tan__s__tv__ny.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/988a38cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/60afe812.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f39fc864.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5443e9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GB_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e73d606e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dfc0fe80.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b66938e9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e1eab7c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GC_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/773e07ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c899c73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d59297b8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ddcda989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_1_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/749e9e03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/52b525c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7e8dc79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a819ef2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/08063a00.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b483515.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/064e0aa9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1f58a078.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6f7454b3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7fa05551.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76faf6c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9339512a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f387163d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee37c333.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e18bfb83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e442e424.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fe8a2cd8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/23f4c490.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5cd81ad7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0c70a8d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7892ad52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SZAFIR_ROOT_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4f316efb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_RSA_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/06dc52d5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/583d0756.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0bf05006.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/88950faa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9046744a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c860d51.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_RSA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6fa5da56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/33ee480d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Secure_Global_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/63a2c897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_ECC_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bdacca6f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ff34af3f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbff3a01.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_ECC_RootCA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_C1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/406c9bb1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_C3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Services_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Silver_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/99e1b953.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/14bc7599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TUBITAK_Kamu_SM_SSL_Kok_Sertifikasi_-_Surum_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a3adc42.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f459871d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_ECC_Root_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_RSA_Root_2023.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TeliaSonera_Root_CA_v1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telia_Root_CA_v2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 11:36:30 crc restorecon[4685]:
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f103249.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f058632f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-certificates.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9bf03295.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/98aaf404.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 11:36:30 crc restorecon[4685]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1cef98f5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/073bfcc5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2923b3f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f249de83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/edcbddb5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 11:36:30 crc restorecon[4685]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P256_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b5697b0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ae85e5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b74d2bd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P384_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d887a5bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 11:36:30 crc restorecon[4685]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9aef356c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TunTrust_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd64f3fc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e13665f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Extended_Validation_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f5dc4f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da7377f6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 11:36:30 crc restorecon[4685]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Global_G2_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c01eb047.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/304d27c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed858448.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f30dd6ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/04f60c28.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 11:36:30 crc restorecon[4685]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_ECC_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fc5a8f99.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/35105088.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee532fd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/XRamp_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/706f604c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 11:36:30 crc restorecon[4685]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76579174.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d86cdd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/882de061.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f618aec.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a9d40e02.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e-Szigno_Root_CA_2017.pem 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e868b802.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/83e9984f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ePKI_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca6e4ad9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d6523ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4b718d9b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/869fbf79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 11:36:30 crc restorecon[4685]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/containers/registry/f8d22bdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/6e8bbfac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/54dd7996 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/a4f1bb05 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/207129da not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/c1df39e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/15b8f1cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Nov 25 11:36:30 crc restorecon[4685]: 
/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 25 11:36:30 crc restorecon[4685]: 
/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/77bd6913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/2382c1b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/704ce128 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/70d16fe0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/bfb95535 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/57a8e8e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 25 11:36:30 crc restorecon[4685]: 
/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/1b9d3e5e not reset as customized by admin to system_u:object_r:container_file_t:s0:c107,c917 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/fddb173c not reset as customized by admin to system_u:object_r:container_file_t:s0:c202,c983 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/95d3c6c4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c219,c404 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/bfb5fff5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/2aef40aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/c0391cad not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/1119e69d not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/660608b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/8220bd53 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/85f99d5c not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Nov 25 11:36:30 crc restorecon[4685]: 
/var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/4b0225f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/9c2a3394 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/e820b243 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/1ca52ea0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/e6988e45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/6655f00b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/98bc3986 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/08e3458a not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/2a191cb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/6c4eeefb not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/f61a549c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/24891863 not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/fbdfd89c not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/9b63b3bc not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/8acde6d6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/node-driver-registrar/59ecbba3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/csi-provisioner/685d4be3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/containers/route-controller-manager/feaea55e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 
crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc 
restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/63709497 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/d966b7fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/f5773757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/81c9edb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/57bf57ee not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/86f5e6aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/0aabe31d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/d2af85c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/09d157d9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller/catalog.json not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc 
restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator/catalog.json not reset as
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc 
restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc 
restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator/catalog.json not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c0fe7256 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c30319e4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/e6b1dd45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/2bb643f0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/920de426 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/70fa1e87 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/a1c12a2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/9442e6c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/5b45ec72 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/3c9f3a59 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/1091c11b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/9a6821c6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/ec0c35e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/517f37e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/6214fe78 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/ba189c8b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/351e4f31 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/c0f219ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/etc-hosts 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/8069f607 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/559c3d82 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/605ad488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/148df488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/3bf6dcb4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/022a2feb not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/938c3924 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/729fe23e not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/1fd5cbd4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c247,c522 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/a96697e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/e155ddca not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/10dd0e0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 25 11:36:30 crc restorecon[4685]: 
/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 25 11:36:30 crc 
restorecon[4685]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 25 11:36:30 crc restorecon[4685]: 
/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/6f2c8392 not reset as customized by admin to system_u:object_r:container_file_t:s0:c267,c588 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/bd241ad9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/plugins not reset as customized by admin to system_u:object_r:container_file_t:s0 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/plugins/csi-hostpath not reset as customized by admin to system_u:object_r:container_file_t:s0 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/plugins/csi-hostpath/csi.sock not reset as customized by admin to system_u:object_r:container_file_t:s0 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/plugins/kubernetes.io not reset as customized by admin to system_u:object_r:container_file_t:s0 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/plugins/kubernetes.io/csi not reset as customized by admin to system_u:object_r:container_file_t:s0 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983 not reset as customized by admin to 
system_u:object_r:container_file_t:s0 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount not reset as customized by admin to system_u:object_r:container_file_t:s0 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/vol_data.json not reset as customized by admin to system_u:object_r:container_file_t:s0 Nov 25 11:36:30 crc restorecon[4685]: /var/lib/kubelet/plugins_registry not reset as customized by admin to system_u:object_r:container_file_t:s0 Nov 25 11:36:30 crc restorecon[4685]: Relabeled /var/usrlocal/bin/kubenswrapper from system_u:object_r:bin_t:s0 to system_u:object_r:kubelet_exec_t:s0 Nov 25 11:36:31 crc kubenswrapper[4706]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 25 11:36:31 crc kubenswrapper[4706]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version. Nov 25 11:36:31 crc kubenswrapper[4706]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 25 11:36:31 crc kubenswrapper[4706]: Flag --register-with-taints has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Nov 25 11:36:31 crc kubenswrapper[4706]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Nov 25 11:36:31 crc kubenswrapper[4706]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 25 11:36:31 crc kubenswrapper[4706]: I1125 11:36:31.682770 4706 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Nov 25 11:36:31 crc kubenswrapper[4706]: W1125 11:36:31.687812 4706 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Nov 25 11:36:31 crc kubenswrapper[4706]: W1125 11:36:31.687827 4706 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Nov 25 11:36:31 crc kubenswrapper[4706]: W1125 11:36:31.687835 4706 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. 
Nov 25 11:36:31 crc kubenswrapper[4706]: W1125 11:36:31.687840 4706 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Nov 25 11:36:31 crc kubenswrapper[4706]: W1125 11:36:31.687844 4706 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Nov 25 11:36:31 crc kubenswrapper[4706]: W1125 11:36:31.687848 4706 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Nov 25 11:36:31 crc kubenswrapper[4706]: W1125 11:36:31.687852 4706 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Nov 25 11:36:31 crc kubenswrapper[4706]: W1125 11:36:31.687857 4706 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Nov 25 11:36:31 crc kubenswrapper[4706]: W1125 11:36:31.687861 4706 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Nov 25 11:36:31 crc kubenswrapper[4706]: W1125 11:36:31.687865 4706 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Nov 25 11:36:31 crc kubenswrapper[4706]: W1125 11:36:31.687869 4706 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Nov 25 11:36:31 crc kubenswrapper[4706]: W1125 11:36:31.687873 4706 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
Nov 25 11:36:31 crc kubenswrapper[4706]: W1125 11:36:31.687878 4706 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Nov 25 11:36:31 crc kubenswrapper[4706]: W1125 11:36:31.687882 4706 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Nov 25 11:36:31 crc kubenswrapper[4706]: W1125 11:36:31.687886 4706 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Nov 25 11:36:31 crc kubenswrapper[4706]: W1125 11:36:31.687889 4706 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Nov 25 11:36:31 crc kubenswrapper[4706]: W1125 11:36:31.687897 4706 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Nov 25 11:36:31 crc kubenswrapper[4706]: W1125 11:36:31.687901 4706 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Nov 25 11:36:31 crc kubenswrapper[4706]: W1125 11:36:31.687905 4706 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Nov 25 11:36:31 crc kubenswrapper[4706]: W1125 11:36:31.687909 4706 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Nov 25 11:36:31 crc kubenswrapper[4706]: W1125 11:36:31.687912 4706 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Nov 25 11:36:31 crc kubenswrapper[4706]: W1125 11:36:31.687922 4706 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Nov 25 11:36:31 crc kubenswrapper[4706]: W1125 11:36:31.687925 4706 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Nov 25 11:36:31 crc kubenswrapper[4706]: W1125 11:36:31.687929 4706 feature_gate.go:330] unrecognized feature gate: OVNObservability Nov 25 11:36:31 crc kubenswrapper[4706]: W1125 11:36:31.687933 4706 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Nov 25 11:36:31 crc kubenswrapper[4706]: W1125 11:36:31.687937 4706 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Nov 25 11:36:31 crc kubenswrapper[4706]: W1125 11:36:31.687940 
4706 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Nov 25 11:36:31 crc kubenswrapper[4706]: W1125 11:36:31.687943 4706 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Nov 25 11:36:31 crc kubenswrapper[4706]: W1125 11:36:31.687947 4706 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Nov 25 11:36:31 crc kubenswrapper[4706]: W1125 11:36:31.687951 4706 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Nov 25 11:36:31 crc kubenswrapper[4706]: W1125 11:36:31.687954 4706 feature_gate.go:330] unrecognized feature gate: SignatureStores Nov 25 11:36:31 crc kubenswrapper[4706]: W1125 11:36:31.687959 4706 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Nov 25 11:36:31 crc kubenswrapper[4706]: W1125 11:36:31.687962 4706 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Nov 25 11:36:31 crc kubenswrapper[4706]: W1125 11:36:31.687966 4706 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Nov 25 11:36:31 crc kubenswrapper[4706]: W1125 11:36:31.687970 4706 feature_gate.go:330] unrecognized feature gate: InsightsConfig Nov 25 11:36:31 crc kubenswrapper[4706]: W1125 11:36:31.687974 4706 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Nov 25 11:36:31 crc kubenswrapper[4706]: W1125 11:36:31.687977 4706 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Nov 25 11:36:31 crc kubenswrapper[4706]: W1125 11:36:31.687980 4706 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Nov 25 11:36:31 crc kubenswrapper[4706]: W1125 11:36:31.687984 4706 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Nov 25 11:36:31 crc kubenswrapper[4706]: W1125 11:36:31.687988 4706 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Nov 25 11:36:31 crc kubenswrapper[4706]: W1125 11:36:31.687991 4706 feature_gate.go:330] unrecognized 
feature gate: AdditionalRoutingCapabilities Nov 25 11:36:31 crc kubenswrapper[4706]: W1125 11:36:31.687995 4706 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Nov 25 11:36:31 crc kubenswrapper[4706]: W1125 11:36:31.687998 4706 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Nov 25 11:36:31 crc kubenswrapper[4706]: W1125 11:36:31.688002 4706 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Nov 25 11:36:31 crc kubenswrapper[4706]: W1125 11:36:31.688005 4706 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Nov 25 11:36:31 crc kubenswrapper[4706]: W1125 11:36:31.688010 4706 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Nov 25 11:36:31 crc kubenswrapper[4706]: W1125 11:36:31.688014 4706 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Nov 25 11:36:31 crc kubenswrapper[4706]: W1125 11:36:31.688019 4706 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Nov 25 11:36:31 crc kubenswrapper[4706]: W1125 11:36:31.688023 4706 feature_gate.go:330] unrecognized feature gate: GatewayAPI Nov 25 11:36:31 crc kubenswrapper[4706]: W1125 11:36:31.688027 4706 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Nov 25 11:36:31 crc kubenswrapper[4706]: W1125 11:36:31.688030 4706 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Nov 25 11:36:31 crc kubenswrapper[4706]: W1125 11:36:31.688034 4706 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Nov 25 11:36:31 crc kubenswrapper[4706]: W1125 11:36:31.688039 4706 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. 
Nov 25 11:36:31 crc kubenswrapper[4706]: W1125 11:36:31.688043 4706 feature_gate.go:330] unrecognized feature gate: NewOLM Nov 25 11:36:31 crc kubenswrapper[4706]: W1125 11:36:31.688047 4706 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Nov 25 11:36:31 crc kubenswrapper[4706]: W1125 11:36:31.688053 4706 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Nov 25 11:36:31 crc kubenswrapper[4706]: W1125 11:36:31.688057 4706 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Nov 25 11:36:31 crc kubenswrapper[4706]: W1125 11:36:31.688061 4706 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Nov 25 11:36:31 crc kubenswrapper[4706]: W1125 11:36:31.688066 4706 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Nov 25 11:36:31 crc kubenswrapper[4706]: W1125 11:36:31.688070 4706 feature_gate.go:330] unrecognized feature gate: PlatformOperators Nov 25 11:36:31 crc kubenswrapper[4706]: W1125 11:36:31.688075 4706 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Nov 25 11:36:31 crc kubenswrapper[4706]: W1125 11:36:31.688079 4706 feature_gate.go:330] unrecognized feature gate: PinnedImages Nov 25 11:36:31 crc kubenswrapper[4706]: W1125 11:36:31.688084 4706 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Nov 25 11:36:31 crc kubenswrapper[4706]: W1125 11:36:31.688088 4706 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Nov 25 11:36:31 crc kubenswrapper[4706]: W1125 11:36:31.688092 4706 feature_gate.go:330] unrecognized feature gate: Example Nov 25 11:36:31 crc kubenswrapper[4706]: W1125 11:36:31.688096 4706 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Nov 25 11:36:31 crc kubenswrapper[4706]: W1125 11:36:31.688100 4706 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Nov 25 11:36:31 crc kubenswrapper[4706]: W1125 11:36:31.688104 4706 feature_gate.go:330] 
unrecognized feature gate: ImageStreamImportMode Nov 25 11:36:31 crc kubenswrapper[4706]: W1125 11:36:31.688109 4706 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Nov 25 11:36:31 crc kubenswrapper[4706]: W1125 11:36:31.688113 4706 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Nov 25 11:36:31 crc kubenswrapper[4706]: W1125 11:36:31.688118 4706 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Nov 25 11:36:31 crc kubenswrapper[4706]: I1125 11:36:31.688858 4706 flags.go:64] FLAG: --address="0.0.0.0" Nov 25 11:36:31 crc kubenswrapper[4706]: I1125 11:36:31.688878 4706 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]" Nov 25 11:36:31 crc kubenswrapper[4706]: I1125 11:36:31.688889 4706 flags.go:64] FLAG: --anonymous-auth="true" Nov 25 11:36:31 crc kubenswrapper[4706]: I1125 11:36:31.688897 4706 flags.go:64] FLAG: --application-metrics-count-limit="100" Nov 25 11:36:31 crc kubenswrapper[4706]: I1125 11:36:31.688905 4706 flags.go:64] FLAG: --authentication-token-webhook="false" Nov 25 11:36:31 crc kubenswrapper[4706]: I1125 11:36:31.688910 4706 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s" Nov 25 11:36:31 crc kubenswrapper[4706]: I1125 11:36:31.688917 4706 flags.go:64] FLAG: --authorization-mode="AlwaysAllow" Nov 25 11:36:31 crc kubenswrapper[4706]: I1125 11:36:31.688923 4706 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s" Nov 25 11:36:31 crc kubenswrapper[4706]: I1125 11:36:31.688928 4706 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s" Nov 25 11:36:31 crc kubenswrapper[4706]: I1125 11:36:31.688932 4706 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id" Nov 25 11:36:31 crc kubenswrapper[4706]: I1125 11:36:31.688937 4706 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig" Nov 25 11:36:31 crc kubenswrapper[4706]: I1125 11:36:31.688941 4706 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki" Nov 25 11:36:31 
crc kubenswrapper[4706]: I1125 11:36:31.688946 4706 flags.go:64] FLAG: --cgroup-driver="cgroupfs" Nov 25 11:36:31 crc kubenswrapper[4706]: I1125 11:36:31.688950 4706 flags.go:64] FLAG: --cgroup-root="" Nov 25 11:36:31 crc kubenswrapper[4706]: I1125 11:36:31.688954 4706 flags.go:64] FLAG: --cgroups-per-qos="true" Nov 25 11:36:31 crc kubenswrapper[4706]: I1125 11:36:31.688959 4706 flags.go:64] FLAG: --client-ca-file="" Nov 25 11:36:31 crc kubenswrapper[4706]: I1125 11:36:31.688963 4706 flags.go:64] FLAG: --cloud-config="" Nov 25 11:36:31 crc kubenswrapper[4706]: I1125 11:36:31.688967 4706 flags.go:64] FLAG: --cloud-provider="" Nov 25 11:36:31 crc kubenswrapper[4706]: I1125 11:36:31.688971 4706 flags.go:64] FLAG: --cluster-dns="[]" Nov 25 11:36:31 crc kubenswrapper[4706]: I1125 11:36:31.688978 4706 flags.go:64] FLAG: --cluster-domain="" Nov 25 11:36:31 crc kubenswrapper[4706]: I1125 11:36:31.688982 4706 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf" Nov 25 11:36:31 crc kubenswrapper[4706]: I1125 11:36:31.688986 4706 flags.go:64] FLAG: --config-dir="" Nov 25 11:36:31 crc kubenswrapper[4706]: I1125 11:36:31.688990 4706 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json" Nov 25 11:36:31 crc kubenswrapper[4706]: I1125 11:36:31.688995 4706 flags.go:64] FLAG: --container-log-max-files="5" Nov 25 11:36:31 crc kubenswrapper[4706]: I1125 11:36:31.689001 4706 flags.go:64] FLAG: --container-log-max-size="10Mi" Nov 25 11:36:31 crc kubenswrapper[4706]: I1125 11:36:31.689006 4706 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock" Nov 25 11:36:31 crc kubenswrapper[4706]: I1125 11:36:31.689010 4706 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock" Nov 25 11:36:31 crc kubenswrapper[4706]: I1125 11:36:31.689015 4706 flags.go:64] FLAG: --containerd-namespace="k8s.io" Nov 25 11:36:31 crc kubenswrapper[4706]: I1125 11:36:31.689019 4706 flags.go:64] FLAG: --contention-profiling="false" Nov 25 11:36:31 crc 
kubenswrapper[4706]: I1125 11:36:31.689023 4706 flags.go:64] FLAG: --cpu-cfs-quota="true" Nov 25 11:36:31 crc kubenswrapper[4706]: I1125 11:36:31.689028 4706 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms" Nov 25 11:36:31 crc kubenswrapper[4706]: I1125 11:36:31.689032 4706 flags.go:64] FLAG: --cpu-manager-policy="none" Nov 25 11:36:31 crc kubenswrapper[4706]: I1125 11:36:31.689036 4706 flags.go:64] FLAG: --cpu-manager-policy-options="" Nov 25 11:36:31 crc kubenswrapper[4706]: I1125 11:36:31.689043 4706 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s" Nov 25 11:36:31 crc kubenswrapper[4706]: I1125 11:36:31.689047 4706 flags.go:64] FLAG: --enable-controller-attach-detach="true" Nov 25 11:36:31 crc kubenswrapper[4706]: I1125 11:36:31.689053 4706 flags.go:64] FLAG: --enable-debugging-handlers="true" Nov 25 11:36:31 crc kubenswrapper[4706]: I1125 11:36:31.689057 4706 flags.go:64] FLAG: --enable-load-reader="false" Nov 25 11:36:31 crc kubenswrapper[4706]: I1125 11:36:31.689061 4706 flags.go:64] FLAG: --enable-server="true" Nov 25 11:36:31 crc kubenswrapper[4706]: I1125 11:36:31.689065 4706 flags.go:64] FLAG: --enforce-node-allocatable="[pods]" Nov 25 11:36:31 crc kubenswrapper[4706]: I1125 11:36:31.689071 4706 flags.go:64] FLAG: --event-burst="100" Nov 25 11:36:31 crc kubenswrapper[4706]: I1125 11:36:31.689076 4706 flags.go:64] FLAG: --event-qps="50" Nov 25 11:36:31 crc kubenswrapper[4706]: I1125 11:36:31.689080 4706 flags.go:64] FLAG: --event-storage-age-limit="default=0" Nov 25 11:36:31 crc kubenswrapper[4706]: I1125 11:36:31.689084 4706 flags.go:64] FLAG: --event-storage-event-limit="default=0" Nov 25 11:36:31 crc kubenswrapper[4706]: I1125 11:36:31.689088 4706 flags.go:64] FLAG: --eviction-hard="" Nov 25 11:36:31 crc kubenswrapper[4706]: I1125 11:36:31.689094 4706 flags.go:64] FLAG: --eviction-max-pod-grace-period="0" Nov 25 11:36:31 crc kubenswrapper[4706]: I1125 11:36:31.689098 4706 flags.go:64] FLAG: --eviction-minimum-reclaim="" Nov 25 11:36:31 crc 
kubenswrapper[4706]: I1125 11:36:31.689103 4706 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s" Nov 25 11:36:31 crc kubenswrapper[4706]: I1125 11:36:31.689107 4706 flags.go:64] FLAG: --eviction-soft="" Nov 25 11:36:31 crc kubenswrapper[4706]: I1125 11:36:31.689111 4706 flags.go:64] FLAG: --eviction-soft-grace-period="" Nov 25 11:36:31 crc kubenswrapper[4706]: I1125 11:36:31.689116 4706 flags.go:64] FLAG: --exit-on-lock-contention="false" Nov 25 11:36:31 crc kubenswrapper[4706]: I1125 11:36:31.689121 4706 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false" Nov 25 11:36:31 crc kubenswrapper[4706]: I1125 11:36:31.689126 4706 flags.go:64] FLAG: --experimental-mounter-path="" Nov 25 11:36:31 crc kubenswrapper[4706]: I1125 11:36:31.689131 4706 flags.go:64] FLAG: --fail-cgroupv1="false" Nov 25 11:36:31 crc kubenswrapper[4706]: I1125 11:36:31.689136 4706 flags.go:64] FLAG: --fail-swap-on="true" Nov 25 11:36:31 crc kubenswrapper[4706]: I1125 11:36:31.689143 4706 flags.go:64] FLAG: --feature-gates="" Nov 25 11:36:31 crc kubenswrapper[4706]: I1125 11:36:31.689150 4706 flags.go:64] FLAG: --file-check-frequency="20s" Nov 25 11:36:31 crc kubenswrapper[4706]: I1125 11:36:31.689155 4706 flags.go:64] FLAG: --global-housekeeping-interval="1m0s" Nov 25 11:36:31 crc kubenswrapper[4706]: I1125 11:36:31.689159 4706 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge" Nov 25 11:36:31 crc kubenswrapper[4706]: I1125 11:36:31.689163 4706 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1" Nov 25 11:36:31 crc kubenswrapper[4706]: I1125 11:36:31.689168 4706 flags.go:64] FLAG: --healthz-port="10248" Nov 25 11:36:31 crc kubenswrapper[4706]: I1125 11:36:31.689172 4706 flags.go:64] FLAG: --help="false" Nov 25 11:36:31 crc kubenswrapper[4706]: I1125 11:36:31.689176 4706 flags.go:64] FLAG: --hostname-override="" Nov 25 11:36:31 crc kubenswrapper[4706]: I1125 11:36:31.689180 4706 flags.go:64] FLAG: --housekeeping-interval="10s" Nov 25 11:36:31 crc 
kubenswrapper[4706]: I1125 11:36:31.689184 4706 flags.go:64] FLAG: --http-check-frequency="20s" Nov 25 11:36:31 crc kubenswrapper[4706]: I1125 11:36:31.689189 4706 flags.go:64] FLAG: --image-credential-provider-bin-dir="" Nov 25 11:36:31 crc kubenswrapper[4706]: I1125 11:36:31.689193 4706 flags.go:64] FLAG: --image-credential-provider-config="" Nov 25 11:36:31 crc kubenswrapper[4706]: I1125 11:36:31.689197 4706 flags.go:64] FLAG: --image-gc-high-threshold="85" Nov 25 11:36:31 crc kubenswrapper[4706]: I1125 11:36:31.689202 4706 flags.go:64] FLAG: --image-gc-low-threshold="80" Nov 25 11:36:31 crc kubenswrapper[4706]: I1125 11:36:31.689207 4706 flags.go:64] FLAG: --image-service-endpoint="" Nov 25 11:36:31 crc kubenswrapper[4706]: I1125 11:36:31.689211 4706 flags.go:64] FLAG: --kernel-memcg-notification="false" Nov 25 11:36:31 crc kubenswrapper[4706]: I1125 11:36:31.689215 4706 flags.go:64] FLAG: --kube-api-burst="100" Nov 25 11:36:31 crc kubenswrapper[4706]: I1125 11:36:31.689219 4706 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf" Nov 25 11:36:31 crc kubenswrapper[4706]: I1125 11:36:31.689223 4706 flags.go:64] FLAG: --kube-api-qps="50" Nov 25 11:36:31 crc kubenswrapper[4706]: I1125 11:36:31.689227 4706 flags.go:64] FLAG: --kube-reserved="" Nov 25 11:36:31 crc kubenswrapper[4706]: I1125 11:36:31.689231 4706 flags.go:64] FLAG: --kube-reserved-cgroup="" Nov 25 11:36:31 crc kubenswrapper[4706]: I1125 11:36:31.689235 4706 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig" Nov 25 11:36:31 crc kubenswrapper[4706]: I1125 11:36:31.689239 4706 flags.go:64] FLAG: --kubelet-cgroups="" Nov 25 11:36:31 crc kubenswrapper[4706]: I1125 11:36:31.689244 4706 flags.go:64] FLAG: --local-storage-capacity-isolation="true" Nov 25 11:36:31 crc kubenswrapper[4706]: I1125 11:36:31.689247 4706 flags.go:64] FLAG: --lock-file="" Nov 25 11:36:31 crc kubenswrapper[4706]: I1125 11:36:31.689251 4706 flags.go:64] FLAG: --log-cadvisor-usage="false" Nov 25 
11:36:31 crc kubenswrapper[4706]: I1125 11:36:31.689255 4706 flags.go:64] FLAG: --log-flush-frequency="5s" Nov 25 11:36:31 crc kubenswrapper[4706]: I1125 11:36:31.689259 4706 flags.go:64] FLAG: --log-json-info-buffer-size="0" Nov 25 11:36:31 crc kubenswrapper[4706]: I1125 11:36:31.689266 4706 flags.go:64] FLAG: --log-json-split-stream="false" Nov 25 11:36:31 crc kubenswrapper[4706]: I1125 11:36:31.689270 4706 flags.go:64] FLAG: --log-text-info-buffer-size="0" Nov 25 11:36:31 crc kubenswrapper[4706]: I1125 11:36:31.689274 4706 flags.go:64] FLAG: --log-text-split-stream="false" Nov 25 11:36:31 crc kubenswrapper[4706]: I1125 11:36:31.689278 4706 flags.go:64] FLAG: --logging-format="text" Nov 25 11:36:31 crc kubenswrapper[4706]: I1125 11:36:31.689282 4706 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id" Nov 25 11:36:31 crc kubenswrapper[4706]: I1125 11:36:31.689288 4706 flags.go:64] FLAG: --make-iptables-util-chains="true" Nov 25 11:36:31 crc kubenswrapper[4706]: I1125 11:36:31.689291 4706 flags.go:64] FLAG: --manifest-url="" Nov 25 11:36:31 crc kubenswrapper[4706]: I1125 11:36:31.689310 4706 flags.go:64] FLAG: --manifest-url-header="" Nov 25 11:36:31 crc kubenswrapper[4706]: I1125 11:36:31.689316 4706 flags.go:64] FLAG: --max-housekeeping-interval="15s" Nov 25 11:36:31 crc kubenswrapper[4706]: I1125 11:36:31.689321 4706 flags.go:64] FLAG: --max-open-files="1000000" Nov 25 11:36:31 crc kubenswrapper[4706]: I1125 11:36:31.689326 4706 flags.go:64] FLAG: --max-pods="110" Nov 25 11:36:31 crc kubenswrapper[4706]: I1125 11:36:31.689331 4706 flags.go:64] FLAG: --maximum-dead-containers="-1" Nov 25 11:36:31 crc kubenswrapper[4706]: I1125 11:36:31.689335 4706 flags.go:64] FLAG: --maximum-dead-containers-per-container="1" Nov 25 11:36:31 crc kubenswrapper[4706]: I1125 11:36:31.689339 4706 flags.go:64] FLAG: --memory-manager-policy="None" Nov 25 11:36:31 crc kubenswrapper[4706]: I1125 11:36:31.689343 4706 flags.go:64] FLAG: 
--minimum-container-ttl-duration="6m0s" Nov 25 11:36:31 crc kubenswrapper[4706]: I1125 11:36:31.689347 4706 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s" Nov 25 11:36:31 crc kubenswrapper[4706]: I1125 11:36:31.689352 4706 flags.go:64] FLAG: --node-ip="192.168.126.11" Nov 25 11:36:31 crc kubenswrapper[4706]: I1125 11:36:31.689356 4706 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhcos" Nov 25 11:36:31 crc kubenswrapper[4706]: I1125 11:36:31.689366 4706 flags.go:64] FLAG: --node-status-max-images="50" Nov 25 11:36:31 crc kubenswrapper[4706]: I1125 11:36:31.689371 4706 flags.go:64] FLAG: --node-status-update-frequency="10s" Nov 25 11:36:31 crc kubenswrapper[4706]: I1125 11:36:31.689376 4706 flags.go:64] FLAG: --oom-score-adj="-999" Nov 25 11:36:31 crc kubenswrapper[4706]: I1125 11:36:31.689382 4706 flags.go:64] FLAG: --pod-cidr="" Nov 25 11:36:31 crc kubenswrapper[4706]: I1125 11:36:31.689389 4706 flags.go:64] FLAG: --pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:33549946e22a9ffa738fd94b1345f90921bc8f92fa6137784cb33c77ad806f9d" Nov 25 11:36:31 crc kubenswrapper[4706]: I1125 11:36:31.689398 4706 flags.go:64] FLAG: --pod-manifest-path="" Nov 25 11:36:31 crc kubenswrapper[4706]: I1125 11:36:31.689403 4706 flags.go:64] FLAG: --pod-max-pids="-1" Nov 25 11:36:31 crc kubenswrapper[4706]: I1125 11:36:31.689408 4706 flags.go:64] FLAG: --pods-per-core="0" Nov 25 11:36:31 crc kubenswrapper[4706]: I1125 11:36:31.689413 4706 flags.go:64] FLAG: --port="10250" Nov 25 11:36:31 crc kubenswrapper[4706]: I1125 11:36:31.689419 4706 flags.go:64] FLAG: --protect-kernel-defaults="false" Nov 25 11:36:31 crc kubenswrapper[4706]: I1125 11:36:31.689424 4706 flags.go:64] FLAG: --provider-id="" Nov 25 11:36:31 crc kubenswrapper[4706]: I1125 11:36:31.689429 4706 flags.go:64] FLAG: --qos-reserved="" Nov 25 11:36:31 crc kubenswrapper[4706]: I1125 11:36:31.689434 4706 
flags.go:64] FLAG: --read-only-port="10255" Nov 25 11:36:31 crc kubenswrapper[4706]: I1125 11:36:31.689438 4706 flags.go:64] FLAG: --register-node="true" Nov 25 11:36:31 crc kubenswrapper[4706]: I1125 11:36:31.689443 4706 flags.go:64] FLAG: --register-schedulable="true" Nov 25 11:36:31 crc kubenswrapper[4706]: I1125 11:36:31.689447 4706 flags.go:64] FLAG: --register-with-taints="node-role.kubernetes.io/master=:NoSchedule" Nov 25 11:36:31 crc kubenswrapper[4706]: I1125 11:36:31.689456 4706 flags.go:64] FLAG: --registry-burst="10" Nov 25 11:36:31 crc kubenswrapper[4706]: I1125 11:36:31.689460 4706 flags.go:64] FLAG: --registry-qps="5" Nov 25 11:36:31 crc kubenswrapper[4706]: I1125 11:36:31.689465 4706 flags.go:64] FLAG: --reserved-cpus="" Nov 25 11:36:31 crc kubenswrapper[4706]: I1125 11:36:31.689469 4706 flags.go:64] FLAG: --reserved-memory="" Nov 25 11:36:31 crc kubenswrapper[4706]: I1125 11:36:31.689475 4706 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf" Nov 25 11:36:31 crc kubenswrapper[4706]: I1125 11:36:31.689479 4706 flags.go:64] FLAG: --root-dir="/var/lib/kubelet" Nov 25 11:36:31 crc kubenswrapper[4706]: I1125 11:36:31.689485 4706 flags.go:64] FLAG: --rotate-certificates="false" Nov 25 11:36:31 crc kubenswrapper[4706]: I1125 11:36:31.689489 4706 flags.go:64] FLAG: --rotate-server-certificates="false" Nov 25 11:36:31 crc kubenswrapper[4706]: I1125 11:36:31.689493 4706 flags.go:64] FLAG: --runonce="false" Nov 25 11:36:31 crc kubenswrapper[4706]: I1125 11:36:31.689497 4706 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service" Nov 25 11:36:31 crc kubenswrapper[4706]: I1125 11:36:31.689502 4706 flags.go:64] FLAG: --runtime-request-timeout="2m0s" Nov 25 11:36:31 crc kubenswrapper[4706]: I1125 11:36:31.689506 4706 flags.go:64] FLAG: --seccomp-default="false" Nov 25 11:36:31 crc kubenswrapper[4706]: I1125 11:36:31.689511 4706 flags.go:64] FLAG: --serialize-image-pulls="true" Nov 25 11:36:31 crc kubenswrapper[4706]: I1125 11:36:31.689515 4706 
flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s" Nov 25 11:36:31 crc kubenswrapper[4706]: I1125 11:36:31.689519 4706 flags.go:64] FLAG: --storage-driver-db="cadvisor" Nov 25 11:36:31 crc kubenswrapper[4706]: I1125 11:36:31.689525 4706 flags.go:64] FLAG: --storage-driver-host="localhost:8086" Nov 25 11:36:31 crc kubenswrapper[4706]: I1125 11:36:31.689530 4706 flags.go:64] FLAG: --storage-driver-password="root" Nov 25 11:36:31 crc kubenswrapper[4706]: I1125 11:36:31.689534 4706 flags.go:64] FLAG: --storage-driver-secure="false" Nov 25 11:36:31 crc kubenswrapper[4706]: I1125 11:36:31.689539 4706 flags.go:64] FLAG: --storage-driver-table="stats" Nov 25 11:36:31 crc kubenswrapper[4706]: I1125 11:36:31.689543 4706 flags.go:64] FLAG: --storage-driver-user="root" Nov 25 11:36:31 crc kubenswrapper[4706]: I1125 11:36:31.689547 4706 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s" Nov 25 11:36:31 crc kubenswrapper[4706]: I1125 11:36:31.689551 4706 flags.go:64] FLAG: --sync-frequency="1m0s" Nov 25 11:36:31 crc kubenswrapper[4706]: I1125 11:36:31.689556 4706 flags.go:64] FLAG: --system-cgroups="" Nov 25 11:36:31 crc kubenswrapper[4706]: I1125 11:36:31.689560 4706 flags.go:64] FLAG: --system-reserved="cpu=200m,ephemeral-storage=350Mi,memory=350Mi" Nov 25 11:36:31 crc kubenswrapper[4706]: I1125 11:36:31.689567 4706 flags.go:64] FLAG: --system-reserved-cgroup="" Nov 25 11:36:31 crc kubenswrapper[4706]: I1125 11:36:31.689572 4706 flags.go:64] FLAG: --tls-cert-file="" Nov 25 11:36:31 crc kubenswrapper[4706]: I1125 11:36:31.689576 4706 flags.go:64] FLAG: --tls-cipher-suites="[]" Nov 25 11:36:31 crc kubenswrapper[4706]: I1125 11:36:31.689582 4706 flags.go:64] FLAG: --tls-min-version="" Nov 25 11:36:31 crc kubenswrapper[4706]: I1125 11:36:31.689586 4706 flags.go:64] FLAG: --tls-private-key-file="" Nov 25 11:36:31 crc kubenswrapper[4706]: I1125 11:36:31.689590 4706 flags.go:64] FLAG: --topology-manager-policy="none" Nov 25 11:36:31 crc kubenswrapper[4706]: I1125 
11:36:31.689595 4706 flags.go:64] FLAG: --topology-manager-policy-options="" Nov 25 11:36:31 crc kubenswrapper[4706]: I1125 11:36:31.689599 4706 flags.go:64] FLAG: --topology-manager-scope="container" Nov 25 11:36:31 crc kubenswrapper[4706]: I1125 11:36:31.689603 4706 flags.go:64] FLAG: --v="2" Nov 25 11:36:31 crc kubenswrapper[4706]: I1125 11:36:31.689611 4706 flags.go:64] FLAG: --version="false" Nov 25 11:36:31 crc kubenswrapper[4706]: I1125 11:36:31.689617 4706 flags.go:64] FLAG: --vmodule="" Nov 25 11:36:31 crc kubenswrapper[4706]: I1125 11:36:31.689623 4706 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec" Nov 25 11:36:31 crc kubenswrapper[4706]: I1125 11:36:31.689628 4706 flags.go:64] FLAG: --volume-stats-agg-period="1m0s" Nov 25 11:36:31 crc kubenswrapper[4706]: W1125 11:36:31.689756 4706 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Nov 25 11:36:31 crc kubenswrapper[4706]: W1125 11:36:31.689762 4706 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Nov 25 11:36:31 crc kubenswrapper[4706]: W1125 11:36:31.689768 4706 feature_gate.go:330] unrecognized feature gate: OVNObservability Nov 25 11:36:31 crc kubenswrapper[4706]: W1125 11:36:31.689772 4706 feature_gate.go:330] unrecognized feature gate: PlatformOperators Nov 25 11:36:31 crc kubenswrapper[4706]: W1125 11:36:31.689777 4706 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Nov 25 11:36:31 crc kubenswrapper[4706]: W1125 11:36:31.689782 4706 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Nov 25 11:36:31 crc kubenswrapper[4706]: W1125 11:36:31.689787 4706 feature_gate.go:330] unrecognized feature gate: Example Nov 25 11:36:31 crc kubenswrapper[4706]: W1125 11:36:31.689793 4706 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. 
Nov 25 11:36:31 crc kubenswrapper[4706]: W1125 11:36:31.689799 4706 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Nov 25 11:36:31 crc kubenswrapper[4706]: W1125 11:36:31.689804 4706 feature_gate.go:330] unrecognized feature gate: NewOLM Nov 25 11:36:31 crc kubenswrapper[4706]: W1125 11:36:31.689808 4706 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Nov 25 11:36:31 crc kubenswrapper[4706]: W1125 11:36:31.689813 4706 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Nov 25 11:36:31 crc kubenswrapper[4706]: W1125 11:36:31.689818 4706 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Nov 25 11:36:31 crc kubenswrapper[4706]: W1125 11:36:31.689823 4706 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Nov 25 11:36:31 crc kubenswrapper[4706]: W1125 11:36:31.689828 4706 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Nov 25 11:36:31 crc kubenswrapper[4706]: W1125 11:36:31.689832 4706 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Nov 25 11:36:31 crc kubenswrapper[4706]: W1125 11:36:31.689835 4706 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Nov 25 11:36:31 crc kubenswrapper[4706]: W1125 11:36:31.689840 4706 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Nov 25 11:36:31 crc kubenswrapper[4706]: W1125 11:36:31.689845 4706 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
Nov 25 11:36:31 crc kubenswrapper[4706]: W1125 11:36:31.689849 4706 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Nov 25 11:36:31 crc kubenswrapper[4706]: W1125 11:36:31.689853 4706 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Nov 25 11:36:31 crc kubenswrapper[4706]: W1125 11:36:31.689858 4706 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Nov 25 11:36:31 crc kubenswrapper[4706]: W1125 11:36:31.689861 4706 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Nov 25 11:36:31 crc kubenswrapper[4706]: W1125 11:36:31.689866 4706 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Nov 25 11:36:31 crc kubenswrapper[4706]: W1125 11:36:31.689871 4706 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Nov 25 11:36:31 crc kubenswrapper[4706]: W1125 11:36:31.689874 4706 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Nov 25 11:36:31 crc kubenswrapper[4706]: W1125 11:36:31.689878 4706 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Nov 25 11:36:31 crc kubenswrapper[4706]: W1125 11:36:31.689882 4706 feature_gate.go:330] unrecognized feature gate: GatewayAPI Nov 25 11:36:31 crc kubenswrapper[4706]: W1125 11:36:31.689885 4706 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Nov 25 11:36:31 crc kubenswrapper[4706]: W1125 11:36:31.689889 4706 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Nov 25 11:36:31 crc kubenswrapper[4706]: W1125 11:36:31.689892 4706 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Nov 25 11:36:31 crc kubenswrapper[4706]: W1125 11:36:31.689896 4706 feature_gate.go:330] unrecognized feature gate: SignatureStores Nov 25 11:36:31 crc kubenswrapper[4706]: W1125 11:36:31.689900 4706 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Nov 25 11:36:31 crc kubenswrapper[4706]: W1125 11:36:31.689904 4706 
feature_gate.go:330] unrecognized feature gate: PinnedImages Nov 25 11:36:31 crc kubenswrapper[4706]: W1125 11:36:31.689907 4706 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Nov 25 11:36:31 crc kubenswrapper[4706]: W1125 11:36:31.689911 4706 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Nov 25 11:36:31 crc kubenswrapper[4706]: W1125 11:36:31.689915 4706 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Nov 25 11:36:31 crc kubenswrapper[4706]: W1125 11:36:31.689918 4706 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Nov 25 11:36:31 crc kubenswrapper[4706]: W1125 11:36:31.689922 4706 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Nov 25 11:36:31 crc kubenswrapper[4706]: W1125 11:36:31.689926 4706 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Nov 25 11:36:31 crc kubenswrapper[4706]: W1125 11:36:31.689930 4706 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Nov 25 11:36:31 crc kubenswrapper[4706]: W1125 11:36:31.689934 4706 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Nov 25 11:36:31 crc kubenswrapper[4706]: W1125 11:36:31.689937 4706 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Nov 25 11:36:31 crc kubenswrapper[4706]: W1125 11:36:31.689942 4706 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Nov 25 11:36:31 crc kubenswrapper[4706]: W1125 11:36:31.689945 4706 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Nov 25 11:36:31 crc kubenswrapper[4706]: W1125 11:36:31.689949 4706 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Nov 25 11:36:31 crc kubenswrapper[4706]: W1125 11:36:31.689952 4706 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Nov 25 11:36:31 crc kubenswrapper[4706]: W1125 11:36:31.689956 4706 feature_gate.go:330] unrecognized feature gate: InsightsConfig Nov 25 
11:36:31 crc kubenswrapper[4706]: W1125 11:36:31.689960 4706 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Nov 25 11:36:31 crc kubenswrapper[4706]: W1125 11:36:31.689963 4706 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Nov 25 11:36:31 crc kubenswrapper[4706]: W1125 11:36:31.689967 4706 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Nov 25 11:36:31 crc kubenswrapper[4706]: W1125 11:36:31.689971 4706 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Nov 25 11:36:31 crc kubenswrapper[4706]: W1125 11:36:31.689974 4706 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Nov 25 11:36:31 crc kubenswrapper[4706]: W1125 11:36:31.689992 4706 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Nov 25 11:36:31 crc kubenswrapper[4706]: W1125 11:36:31.689996 4706 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Nov 25 11:36:31 crc kubenswrapper[4706]: W1125 11:36:31.690000 4706 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Nov 25 11:36:31 crc kubenswrapper[4706]: W1125 11:36:31.690004 4706 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Nov 25 11:36:31 crc kubenswrapper[4706]: W1125 11:36:31.690009 4706 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
Nov 25 11:36:31 crc kubenswrapper[4706]: W1125 11:36:31.690013 4706 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Nov 25 11:36:31 crc kubenswrapper[4706]: W1125 11:36:31.690018 4706 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Nov 25 11:36:31 crc kubenswrapper[4706]: W1125 11:36:31.690021 4706 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Nov 25 11:36:31 crc kubenswrapper[4706]: W1125 11:36:31.690026 4706 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Nov 25 11:36:31 crc kubenswrapper[4706]: W1125 11:36:31.690030 4706 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Nov 25 11:36:31 crc kubenswrapper[4706]: W1125 11:36:31.690034 4706 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Nov 25 11:36:31 crc kubenswrapper[4706]: W1125 11:36:31.690038 4706 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Nov 25 11:36:31 crc kubenswrapper[4706]: W1125 11:36:31.690042 4706 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Nov 25 11:36:31 crc kubenswrapper[4706]: W1125 11:36:31.690045 4706 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Nov 25 11:36:31 crc kubenswrapper[4706]: W1125 11:36:31.690049 4706 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Nov 25 11:36:31 crc kubenswrapper[4706]: W1125 11:36:31.690053 4706 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Nov 25 11:36:31 crc kubenswrapper[4706]: W1125 11:36:31.690056 4706 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Nov 25 11:36:31 crc kubenswrapper[4706]: W1125 11:36:31.690060 4706 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Nov 25 11:36:31 crc kubenswrapper[4706]: I1125 11:36:31.690071 4706 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true 
DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Nov 25 11:36:31 crc kubenswrapper[4706]: I1125 11:36:31.701172 4706 server.go:491] "Kubelet version" kubeletVersion="v1.31.5" Nov 25 11:36:31 crc kubenswrapper[4706]: I1125 11:36:31.701229 4706 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Nov 25 11:36:31 crc kubenswrapper[4706]: W1125 11:36:31.701382 4706 feature_gate.go:330] unrecognized feature gate: SignatureStores Nov 25 11:36:31 crc kubenswrapper[4706]: W1125 11:36:31.701398 4706 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Nov 25 11:36:31 crc kubenswrapper[4706]: W1125 11:36:31.701407 4706 feature_gate.go:330] unrecognized feature gate: GatewayAPI Nov 25 11:36:31 crc kubenswrapper[4706]: W1125 11:36:31.701416 4706 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Nov 25 11:36:31 crc kubenswrapper[4706]: W1125 11:36:31.701424 4706 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Nov 25 11:36:31 crc kubenswrapper[4706]: W1125 11:36:31.701432 4706 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Nov 25 11:36:31 crc kubenswrapper[4706]: W1125 11:36:31.701442 4706 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Nov 25 11:36:31 crc kubenswrapper[4706]: W1125 11:36:31.701450 4706 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Nov 25 11:36:31 crc kubenswrapper[4706]: W1125 11:36:31.701458 4706 feature_gate.go:330] unrecognized feature gate: OVNObservability Nov 25 11:36:31 crc kubenswrapper[4706]: W1125 11:36:31.701467 4706 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Nov 25 11:36:31 crc 
kubenswrapper[4706]: W1125 11:36:31.701475 4706 feature_gate.go:330] unrecognized feature gate: PlatformOperators Nov 25 11:36:31 crc kubenswrapper[4706]: W1125 11:36:31.701483 4706 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Nov 25 11:36:31 crc kubenswrapper[4706]: W1125 11:36:31.701494 4706 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Nov 25 11:36:31 crc kubenswrapper[4706]: W1125 11:36:31.701505 4706 feature_gate.go:330] unrecognized feature gate: NewOLM Nov 25 11:36:31 crc kubenswrapper[4706]: W1125 11:36:31.701514 4706 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Nov 25 11:36:31 crc kubenswrapper[4706]: W1125 11:36:31.701523 4706 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Nov 25 11:36:31 crc kubenswrapper[4706]: W1125 11:36:31.701532 4706 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Nov 25 11:36:31 crc kubenswrapper[4706]: W1125 11:36:31.701541 4706 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Nov 25 11:36:31 crc kubenswrapper[4706]: W1125 11:36:31.701550 4706 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Nov 25 11:36:31 crc kubenswrapper[4706]: W1125 11:36:31.701558 4706 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Nov 25 11:36:31 crc kubenswrapper[4706]: W1125 11:36:31.701566 4706 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Nov 25 11:36:31 crc kubenswrapper[4706]: W1125 11:36:31.701574 4706 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Nov 25 11:36:31 crc kubenswrapper[4706]: W1125 11:36:31.701582 4706 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Nov 25 11:36:31 crc kubenswrapper[4706]: W1125 11:36:31.701589 4706 feature_gate.go:330] unrecognized feature gate: Example Nov 25 11:36:31 crc kubenswrapper[4706]: W1125 
11:36:31.701597 4706 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Nov 25 11:36:31 crc kubenswrapper[4706]: W1125 11:36:31.701606 4706 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Nov 25 11:36:31 crc kubenswrapper[4706]: W1125 11:36:31.701618 4706 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Nov 25 11:36:31 crc kubenswrapper[4706]: W1125 11:36:31.701629 4706 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Nov 25 11:36:31 crc kubenswrapper[4706]: W1125 11:36:31.701637 4706 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Nov 25 11:36:31 crc kubenswrapper[4706]: W1125 11:36:31.701648 4706 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Nov 25 11:36:31 crc kubenswrapper[4706]: W1125 11:36:31.701657 4706 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Nov 25 11:36:31 crc kubenswrapper[4706]: W1125 11:36:31.701665 4706 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Nov 25 11:36:31 crc kubenswrapper[4706]: W1125 11:36:31.701673 4706 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Nov 25 11:36:31 crc kubenswrapper[4706]: W1125 11:36:31.701681 4706 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Nov 25 11:36:31 crc kubenswrapper[4706]: W1125 11:36:31.701691 4706 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Nov 25 11:36:31 crc kubenswrapper[4706]: W1125 11:36:31.701699 4706 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Nov 25 11:36:31 crc kubenswrapper[4706]: W1125 11:36:31.701707 4706 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Nov 25 11:36:31 crc kubenswrapper[4706]: W1125 11:36:31.701716 4706 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Nov 25 11:36:31 crc 
kubenswrapper[4706]: W1125 11:36:31.701724 4706 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Nov 25 11:36:31 crc kubenswrapper[4706]: W1125 11:36:31.701732 4706 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Nov 25 11:36:31 crc kubenswrapper[4706]: W1125 11:36:31.701740 4706 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Nov 25 11:36:31 crc kubenswrapper[4706]: W1125 11:36:31.701750 4706 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Nov 25 11:36:31 crc kubenswrapper[4706]: W1125 11:36:31.701762 4706 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Nov 25 11:36:31 crc kubenswrapper[4706]: W1125 11:36:31.701772 4706 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Nov 25 11:36:31 crc kubenswrapper[4706]: W1125 11:36:31.701781 4706 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Nov 25 11:36:31 crc kubenswrapper[4706]: W1125 11:36:31.701789 4706 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Nov 25 11:36:31 crc kubenswrapper[4706]: W1125 11:36:31.701798 4706 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Nov 25 11:36:31 crc kubenswrapper[4706]: W1125 11:36:31.701806 4706 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Nov 25 11:36:31 crc kubenswrapper[4706]: W1125 11:36:31.701814 4706 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Nov 25 11:36:31 crc kubenswrapper[4706]: W1125 11:36:31.701823 4706 feature_gate.go:330] unrecognized feature gate: InsightsConfig Nov 25 11:36:31 crc kubenswrapper[4706]: W1125 11:36:31.701831 4706 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Nov 25 11:36:31 crc kubenswrapper[4706]: W1125 11:36:31.701840 4706 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Nov 
25 11:36:31 crc kubenswrapper[4706]: W1125 11:36:31.701847 4706 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Nov 25 11:36:31 crc kubenswrapper[4706]: W1125 11:36:31.701856 4706 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Nov 25 11:36:31 crc kubenswrapper[4706]: W1125 11:36:31.701863 4706 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Nov 25 11:36:31 crc kubenswrapper[4706]: W1125 11:36:31.701871 4706 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Nov 25 11:36:31 crc kubenswrapper[4706]: W1125 11:36:31.701879 4706 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Nov 25 11:36:31 crc kubenswrapper[4706]: W1125 11:36:31.701887 4706 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Nov 25 11:36:31 crc kubenswrapper[4706]: W1125 11:36:31.701898 4706 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Nov 25 11:36:31 crc kubenswrapper[4706]: W1125 11:36:31.701906 4706 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Nov 25 11:36:31 crc kubenswrapper[4706]: W1125 11:36:31.701914 4706 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Nov 25 11:36:31 crc kubenswrapper[4706]: W1125 11:36:31.701921 4706 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Nov 25 11:36:31 crc kubenswrapper[4706]: W1125 11:36:31.701929 4706 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Nov 25 11:36:31 crc kubenswrapper[4706]: W1125 11:36:31.701938 4706 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Nov 25 11:36:31 crc kubenswrapper[4706]: W1125 11:36:31.701945 4706 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Nov 25 11:36:31 crc kubenswrapper[4706]: W1125 11:36:31.701953 4706 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Nov 25 11:36:31 crc kubenswrapper[4706]: W1125 11:36:31.701961 4706 feature_gate.go:330] 
unrecognized feature gate: GCPLabelsTags Nov 25 11:36:31 crc kubenswrapper[4706]: W1125 11:36:31.701969 4706 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Nov 25 11:36:31 crc kubenswrapper[4706]: W1125 11:36:31.701977 4706 feature_gate.go:330] unrecognized feature gate: PinnedImages Nov 25 11:36:31 crc kubenswrapper[4706]: W1125 11:36:31.701984 4706 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Nov 25 11:36:31 crc kubenswrapper[4706]: W1125 11:36:31.701993 4706 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Nov 25 11:36:31 crc kubenswrapper[4706]: I1125 11:36:31.702006 4706 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Nov 25 11:36:31 crc kubenswrapper[4706]: W1125 11:36:31.702230 4706 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Nov 25 11:36:31 crc kubenswrapper[4706]: W1125 11:36:31.702246 4706 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Nov 25 11:36:31 crc kubenswrapper[4706]: W1125 11:36:31.702255 4706 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Nov 25 11:36:31 crc kubenswrapper[4706]: W1125 11:36:31.702264 4706 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Nov 25 11:36:31 crc kubenswrapper[4706]: W1125 11:36:31.702272 4706 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Nov 25 11:36:31 crc kubenswrapper[4706]: W1125 11:36:31.702282 4706 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Nov 25 11:36:31 crc 
kubenswrapper[4706]: W1125 11:36:31.702294 4706 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Nov 25 11:36:31 crc kubenswrapper[4706]: W1125 11:36:31.702339 4706 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Nov 25 11:36:31 crc kubenswrapper[4706]: W1125 11:36:31.702353 4706 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Nov 25 11:36:31 crc kubenswrapper[4706]: W1125 11:36:31.702364 4706 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Nov 25 11:36:31 crc kubenswrapper[4706]: W1125 11:36:31.702376 4706 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Nov 25 11:36:31 crc kubenswrapper[4706]: W1125 11:36:31.702389 4706 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Nov 25 11:36:31 crc kubenswrapper[4706]: W1125 11:36:31.702401 4706 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Nov 25 11:36:31 crc kubenswrapper[4706]: W1125 11:36:31.702412 4706 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Nov 25 11:36:31 crc kubenswrapper[4706]: W1125 11:36:31.702422 4706 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Nov 25 11:36:31 crc kubenswrapper[4706]: W1125 11:36:31.702432 4706 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Nov 25 11:36:31 crc kubenswrapper[4706]: W1125 11:36:31.702442 4706 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Nov 25 11:36:31 crc kubenswrapper[4706]: W1125 11:36:31.702453 4706 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Nov 25 11:36:31 crc kubenswrapper[4706]: W1125 11:36:31.702463 4706 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Nov 25 11:36:31 crc kubenswrapper[4706]: W1125 11:36:31.702472 4706 feature_gate.go:330] unrecognized feature gate: 
CSIDriverSharedResource Nov 25 11:36:31 crc kubenswrapper[4706]: W1125 11:36:31.702480 4706 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Nov 25 11:36:31 crc kubenswrapper[4706]: W1125 11:36:31.702488 4706 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Nov 25 11:36:31 crc kubenswrapper[4706]: W1125 11:36:31.702496 4706 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Nov 25 11:36:31 crc kubenswrapper[4706]: W1125 11:36:31.702504 4706 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Nov 25 11:36:31 crc kubenswrapper[4706]: W1125 11:36:31.702511 4706 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Nov 25 11:36:31 crc kubenswrapper[4706]: W1125 11:36:31.702519 4706 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Nov 25 11:36:31 crc kubenswrapper[4706]: W1125 11:36:31.702527 4706 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Nov 25 11:36:31 crc kubenswrapper[4706]: W1125 11:36:31.702535 4706 feature_gate.go:330] unrecognized feature gate: PlatformOperators Nov 25 11:36:31 crc kubenswrapper[4706]: W1125 11:36:31.702542 4706 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Nov 25 11:36:31 crc kubenswrapper[4706]: W1125 11:36:31.702550 4706 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Nov 25 11:36:31 crc kubenswrapper[4706]: W1125 11:36:31.702558 4706 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Nov 25 11:36:31 crc kubenswrapper[4706]: W1125 11:36:31.702566 4706 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Nov 25 11:36:31 crc kubenswrapper[4706]: W1125 11:36:31.702576 4706 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
Nov 25 11:36:31 crc kubenswrapper[4706]: W1125 11:36:31.702586 4706 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Nov 25 11:36:31 crc kubenswrapper[4706]: W1125 11:36:31.702597 4706 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Nov 25 11:36:31 crc kubenswrapper[4706]: W1125 11:36:31.702605 4706 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Nov 25 11:36:31 crc kubenswrapper[4706]: W1125 11:36:31.702613 4706 feature_gate.go:330] unrecognized feature gate: SignatureStores Nov 25 11:36:31 crc kubenswrapper[4706]: W1125 11:36:31.702621 4706 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Nov 25 11:36:31 crc kubenswrapper[4706]: W1125 11:36:31.702630 4706 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Nov 25 11:36:31 crc kubenswrapper[4706]: W1125 11:36:31.702637 4706 feature_gate.go:330] unrecognized feature gate: InsightsConfig Nov 25 11:36:31 crc kubenswrapper[4706]: W1125 11:36:31.702646 4706 feature_gate.go:330] unrecognized feature gate: GatewayAPI Nov 25 11:36:31 crc kubenswrapper[4706]: W1125 11:36:31.702653 4706 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Nov 25 11:36:31 crc kubenswrapper[4706]: W1125 11:36:31.702663 4706 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Nov 25 11:36:31 crc kubenswrapper[4706]: W1125 11:36:31.702671 4706 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Nov 25 11:36:31 crc kubenswrapper[4706]: W1125 11:36:31.702679 4706 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Nov 25 11:36:31 crc kubenswrapper[4706]: W1125 11:36:31.702687 4706 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Nov 25 11:36:31 crc kubenswrapper[4706]: W1125 11:36:31.702695 4706 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Nov 25 11:36:31 crc kubenswrapper[4706]: W1125 11:36:31.702703 4706 feature_gate.go:330] 
unrecognized feature gate: PersistentIPsForVirtualization Nov 25 11:36:31 crc kubenswrapper[4706]: W1125 11:36:31.702711 4706 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Nov 25 11:36:31 crc kubenswrapper[4706]: W1125 11:36:31.702720 4706 feature_gate.go:330] unrecognized feature gate: OVNObservability Nov 25 11:36:31 crc kubenswrapper[4706]: W1125 11:36:31.702729 4706 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Nov 25 11:36:31 crc kubenswrapper[4706]: W1125 11:36:31.702738 4706 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Nov 25 11:36:31 crc kubenswrapper[4706]: W1125 11:36:31.702746 4706 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Nov 25 11:36:31 crc kubenswrapper[4706]: W1125 11:36:31.702754 4706 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Nov 25 11:36:31 crc kubenswrapper[4706]: W1125 11:36:31.702762 4706 feature_gate.go:330] unrecognized feature gate: Example Nov 25 11:36:31 crc kubenswrapper[4706]: W1125 11:36:31.702770 4706 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Nov 25 11:36:31 crc kubenswrapper[4706]: W1125 11:36:31.702778 4706 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Nov 25 11:36:31 crc kubenswrapper[4706]: W1125 11:36:31.702788 4706 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. 
Nov 25 11:36:31 crc kubenswrapper[4706]: W1125 11:36:31.702798 4706 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Nov 25 11:36:31 crc kubenswrapper[4706]: W1125 11:36:31.702808 4706 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Nov 25 11:36:31 crc kubenswrapper[4706]: W1125 11:36:31.702817 4706 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Nov 25 11:36:31 crc kubenswrapper[4706]: W1125 11:36:31.702825 4706 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Nov 25 11:36:31 crc kubenswrapper[4706]: W1125 11:36:31.702834 4706 feature_gate.go:330] unrecognized feature gate: NewOLM Nov 25 11:36:31 crc kubenswrapper[4706]: W1125 11:36:31.702842 4706 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Nov 25 11:36:31 crc kubenswrapper[4706]: W1125 11:36:31.702851 4706 feature_gate.go:330] unrecognized feature gate: PinnedImages Nov 25 11:36:31 crc kubenswrapper[4706]: W1125 11:36:31.702861 4706 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Nov 25 11:36:31 crc kubenswrapper[4706]: W1125 11:36:31.702872 4706 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Nov 25 11:36:31 crc kubenswrapper[4706]: W1125 11:36:31.702883 4706 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Nov 25 11:36:31 crc kubenswrapper[4706]: W1125 11:36:31.702893 4706 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Nov 25 11:36:31 crc kubenswrapper[4706]: W1125 11:36:31.702903 4706 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Nov 25 11:36:31 crc kubenswrapper[4706]: W1125 11:36:31.702914 4706 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Nov 25 11:36:31 crc kubenswrapper[4706]: I1125 11:36:31.702931 4706 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true 
DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Nov 25 11:36:31 crc kubenswrapper[4706]: I1125 11:36:31.703959 4706 server.go:940] "Client rotation is on, will bootstrap in background" Nov 25 11:36:31 crc kubenswrapper[4706]: I1125 11:36:31.710102 4706 bootstrap.go:85] "Current kubeconfig file contents are still valid, no bootstrap necessary" Nov 25 11:36:31 crc kubenswrapper[4706]: I1125 11:36:31.710240 4706 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Nov 25 11:36:31 crc kubenswrapper[4706]: I1125 11:36:31.712037 4706 server.go:997] "Starting client certificate rotation" Nov 25 11:36:31 crc kubenswrapper[4706]: I1125 11:36:31.712089 4706 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate rotation is enabled Nov 25 11:36:31 crc kubenswrapper[4706]: I1125 11:36:31.712325 4706 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2026-02-24 05:52:08 +0000 UTC, rotation deadline is 2026-01-16 15:10:47.129961896 +0000 UTC Nov 25 11:36:31 crc kubenswrapper[4706]: I1125 11:36:31.712401 4706 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Waiting 1251h34m15.417563664s for next certificate rotation Nov 25 11:36:31 crc kubenswrapper[4706]: I1125 11:36:31.741291 4706 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Nov 25 11:36:31 crc kubenswrapper[4706]: I1125 11:36:31.744248 4706 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Nov 25 11:36:31 crc 
kubenswrapper[4706]: I1125 11:36:31.763240 4706 log.go:25] "Validated CRI v1 runtime API" Nov 25 11:36:31 crc kubenswrapper[4706]: I1125 11:36:31.806652 4706 log.go:25] "Validated CRI v1 image API" Nov 25 11:36:31 crc kubenswrapper[4706]: I1125 11:36:31.810012 4706 server.go:1437] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Nov 25 11:36:31 crc kubenswrapper[4706]: I1125 11:36:31.816995 4706 fs.go:133] Filesystem UUIDs: map[0b076daa-c26a-46d2-b3a6-72a8dbc6e257:/dev/vda4 2025-11-25-11-32-12-00:/dev/sr0 7B77-95E7:/dev/vda2 de0497b0-db1b-465a-b278-03db02455c71:/dev/vda3] Nov 25 11:36:31 crc kubenswrapper[4706]: I1125 11:36:31.817042 4706 fs.go:134] Filesystem partitions: map[/dev/shm:{mountpoint:/dev/shm major:0 minor:22 fsType:tmpfs blockSize:0} /dev/vda3:{mountpoint:/boot major:252 minor:3 fsType:ext4 blockSize:0} /dev/vda4:{mountpoint:/var major:252 minor:4 fsType:xfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /run/user/1000:{mountpoint:/run/user/1000 major:0 minor:42 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:30 fsType:tmpfs blockSize:0} /var/lib/etcd:{mountpoint:/var/lib/etcd major:0 minor:43 fsType:tmpfs blockSize:0}] Nov 25 11:36:31 crc kubenswrapper[4706]: I1125 11:36:31.833635 4706 manager.go:217] Machine: {Timestamp:2025-11-25 11:36:31.830584219 +0000 UTC m=+0.745141620 CPUVendorID:AuthenticAMD NumCores:12 NumPhysicalCores:1 NumSockets:12 CpuFrequency:2800000 MemoryCapacity:33654132736 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:21801e6708c44f15b81395eb736a7cec SystemUUID:7dac62ec-3979-4862-b1af-b63212907795 BootID:30198dc8-e58c-4847-a541-041da1924c5c Filesystems:[{Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:16827064320 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:6730829824 
Type:vfs Inodes:819200 HasInodes:true} {Device:/dev/vda4 DeviceMajor:252 DeviceMinor:4 Capacity:85292941312 Type:vfs Inodes:41679680 HasInodes:true} {Device:/tmp DeviceMajor:0 DeviceMinor:30 Capacity:16827068416 Type:vfs Inodes:1048576 HasInodes:true} {Device:/dev/vda3 DeviceMajor:252 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true} {Device:/run/user/1000 DeviceMajor:0 DeviceMinor:42 Capacity:3365412864 Type:vfs Inodes:821634 HasInodes:true} {Device:/var/lib/etcd DeviceMajor:0 DeviceMinor:43 Capacity:1073741824 Type:vfs Inodes:4108170 HasInodes:true}] DiskMap:map[252:0:{Name:vda Major:252 Minor:0 Size:214748364800 Scheduler:none}] NetworkDevices:[{Name:br-ex MacAddress:fa:16:3e:57:3e:b1 Speed:0 Mtu:1500} {Name:br-int MacAddress:d6:39:55:2e:22:71 Speed:0 Mtu:1400} {Name:ens3 MacAddress:fa:16:3e:57:3e:b1 Speed:-1 Mtu:1500} {Name:ens7 MacAddress:fa:16:3e:f1:18:af Speed:-1 Mtu:1500} {Name:ens7.20 MacAddress:52:54:00:87:48:ff Speed:-1 Mtu:1496} {Name:ens7.21 MacAddress:52:54:00:83:5d:1d Speed:-1 Mtu:1496} {Name:ens7.22 MacAddress:52:54:00:72:78:50 Speed:-1 Mtu:1496} {Name:eth10 MacAddress:06:1c:71:1b:37:87 Speed:0 Mtu:1500} {Name:ovn-k8s-mp0 MacAddress:0a:58:0a:d9:00:02 Speed:0 Mtu:1400} {Name:ovs-system MacAddress:6a:74:60:3d:7d:73 Speed:0 Mtu:1500}] Topology:[{Id:0 Memory:33654132736 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:0 Size:16777216 Type:Unified Level:3}] SocketID:0 BookID: DrawerID:} {Id:0 Threads:[1] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 Type:Instruction Level:1} {Id:1 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:1 Size:16777216 Type:Unified Level:3}] SocketID:1 BookID: DrawerID:} {Id:0 Threads:[10] Caches:[{Id:10 Size:32768 Type:Data Level:1} {Id:10 Size:32768 Type:Instruction Level:1} {Id:10 Size:524288 
Type:Unified Level:2}] UncoreCaches:[{Id:10 Size:16777216 Type:Unified Level:3}] SocketID:10 BookID: DrawerID:} {Id:0 Threads:[11] Caches:[{Id:11 Size:32768 Type:Data Level:1} {Id:11 Size:32768 Type:Instruction Level:1} {Id:11 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:11 Size:16777216 Type:Unified Level:3}] SocketID:11 BookID: DrawerID:} {Id:0 Threads:[2] Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 Size:32768 Type:Instruction Level:1} {Id:2 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:2 Size:16777216 Type:Unified Level:3}] SocketID:2 BookID: DrawerID:} {Id:0 Threads:[3] Caches:[{Id:3 Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:3 Size:16777216 Type:Unified Level:3}] SocketID:3 BookID: DrawerID:} {Id:0 Threads:[4] Caches:[{Id:4 Size:32768 Type:Data Level:1} {Id:4 Size:32768 Type:Instruction Level:1} {Id:4 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:4 Size:16777216 Type:Unified Level:3}] SocketID:4 BookID: DrawerID:} {Id:0 Threads:[5] Caches:[{Id:5 Size:32768 Type:Data Level:1} {Id:5 Size:32768 Type:Instruction Level:1} {Id:5 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:5 Size:16777216 Type:Unified Level:3}] SocketID:5 BookID: DrawerID:} {Id:0 Threads:[6] Caches:[{Id:6 Size:32768 Type:Data Level:1} {Id:6 Size:32768 Type:Instruction Level:1} {Id:6 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:6 Size:16777216 Type:Unified Level:3}] SocketID:6 BookID: DrawerID:} {Id:0 Threads:[7] Caches:[{Id:7 Size:32768 Type:Data Level:1} {Id:7 Size:32768 Type:Instruction Level:1} {Id:7 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:7 Size:16777216 Type:Unified Level:3}] SocketID:7 BookID: DrawerID:} {Id:0 Threads:[8] Caches:[{Id:8 Size:32768 Type:Data Level:1} {Id:8 Size:32768 Type:Instruction Level:1} {Id:8 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:8 Size:16777216 Type:Unified Level:3}] SocketID:8 BookID: DrawerID:} {Id:0 Threads:[9] 
Caches:[{Id:9 Size:32768 Type:Data Level:1} {Id:9 Size:32768 Type:Instruction Level:1} {Id:9 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:9 Size:16777216 Type:Unified Level:3}] SocketID:9 BookID: DrawerID:}] Caches:[] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None}
Nov 25 11:36:31 crc kubenswrapper[4706]: I1125 11:36:31.833921 4706 manager_no_libpfm.go:29] cAdvisor is build without cgo and/or libpfm support. Perf event counters are not available.
Nov 25 11:36:31 crc kubenswrapper[4706]: I1125 11:36:31.834112 4706 manager.go:233] Version: {KernelVersion:5.14.0-427.50.2.el9_4.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 418.94.202502100215-0 DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:}
Nov 25 11:36:31 crc kubenswrapper[4706]: I1125 11:36:31.836618 4706 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority"
Nov 25 11:36:31 crc kubenswrapper[4706]: I1125 11:36:31.836821 4706 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Nov 25 11:36:31 crc kubenswrapper[4706]: I1125 11:36:31.836866 4706 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"crc","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"200m","ephemeral-storage":"350Mi","memory":"350Mi"},"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Nov 25 11:36:31 crc kubenswrapper[4706]: I1125 11:36:31.837137 4706 topology_manager.go:138] "Creating topology manager with none policy"
Nov 25 11:36:31 crc kubenswrapper[4706]: I1125 11:36:31.837152 4706 container_manager_linux.go:303] "Creating device plugin manager"
Nov 25 11:36:31 crc kubenswrapper[4706]: I1125 11:36:31.837570 4706 manager.go:142] "Creating Device Plugin manager" path="/var/lib/kubelet/device-plugins/kubelet.sock"
Nov 25 11:36:31 crc kubenswrapper[4706]: I1125 11:36:31.837607 4706 server.go:66] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock"
Nov 25 11:36:31 crc kubenswrapper[4706]: I1125 11:36:31.837858 4706 state_mem.go:36] "Initialized new in-memory state store"
Nov 25 11:36:31 crc kubenswrapper[4706]: I1125 11:36:31.837959 4706 server.go:1245] "Using root directory" path="/var/lib/kubelet"
Nov 25 11:36:31 crc kubenswrapper[4706]: I1125 11:36:31.843495 4706 kubelet.go:418] "Attempting to sync node with API server"
Nov 25 11:36:31 crc kubenswrapper[4706]: I1125 11:36:31.843529 4706 kubelet.go:313] "Adding static pod path" path="/etc/kubernetes/manifests"
Nov 25 11:36:31 crc kubenswrapper[4706]: I1125 11:36:31.843581 4706 file.go:69] "Watching path" path="/etc/kubernetes/manifests"
Nov 25 11:36:31 crc kubenswrapper[4706]: I1125 11:36:31.843601 4706 kubelet.go:324] "Adding apiserver pod source"
Nov 25 11:36:31 crc kubenswrapper[4706]: I1125 11:36:31.843639 4706 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Nov 25 11:36:31 crc kubenswrapper[4706]: W1125 11:36:31.853069 4706 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.13:6443: connect: connection refused
Nov 25 11:36:31 crc kubenswrapper[4706]: E1125 11:36:31.853193 4706 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.13:6443: connect: connection refused" logger="UnhandledError"
Nov 25 11:36:31 crc kubenswrapper[4706]: I1125 11:36:31.853672 4706 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="cri-o" version="1.31.5-4.rhaos4.18.gitdad78d5.el9" apiVersion="v1"
Nov 25 11:36:31 crc kubenswrapper[4706]: W1125 11:36:31.854709 4706 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.13:6443: connect: connection refused
Nov 25 11:36:31 crc kubenswrapper[4706]: E1125 11:36:31.854845 4706 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.13:6443: connect: connection refused" logger="UnhandledError"
Nov 25 11:36:31 crc kubenswrapper[4706]: I1125 11:36:31.856246 4706 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-server-current.pem".
Nov 25 11:36:31 crc kubenswrapper[4706]: I1125 11:36:31.857888 4706 kubelet.go:854] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Nov 25 11:36:31 crc kubenswrapper[4706]: I1125 11:36:31.859571 4706 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume"
Nov 25 11:36:31 crc kubenswrapper[4706]: I1125 11:36:31.859602 4706 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir"
Nov 25 11:36:31 crc kubenswrapper[4706]: I1125 11:36:31.859643 4706 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/git-repo"
Nov 25 11:36:31 crc kubenswrapper[4706]: I1125 11:36:31.859652 4706 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/host-path"
Nov 25 11:36:31 crc kubenswrapper[4706]: I1125 11:36:31.859668 4706 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/nfs"
Nov 25 11:36:31 crc kubenswrapper[4706]: I1125 11:36:31.859679 4706 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/secret"
Nov 25 11:36:31 crc kubenswrapper[4706]: I1125 11:36:31.859688 4706 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/iscsi"
Nov 25 11:36:31 crc kubenswrapper[4706]: I1125 11:36:31.859700 4706 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/downward-api"
Nov 25 11:36:31 crc kubenswrapper[4706]: I1125 11:36:31.859712 4706 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/fc"
Nov 25 11:36:31 crc kubenswrapper[4706]: I1125 11:36:31.859723 4706 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/configmap"
Nov 25 11:36:31 crc kubenswrapper[4706]: I1125 11:36:31.859743 4706 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/projected"
Nov 25 11:36:31 crc kubenswrapper[4706]: I1125 11:36:31.859751 4706 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/local-volume"
Nov 25 11:36:31 crc kubenswrapper[4706]: I1125 11:36:31.862251 4706 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/csi"
Nov 25 11:36:31 crc kubenswrapper[4706]: I1125 11:36:31.862874 4706 server.go:1280] "Started kubelet"
Nov 25 11:36:31 crc kubenswrapper[4706]: I1125 11:36:31.863952 4706 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Nov 25 11:36:31 crc kubenswrapper[4706]: I1125 11:36:31.863964 4706 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Nov 25 11:36:31 crc kubenswrapper[4706]: I1125 11:36:31.864881 4706 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.13:6443: connect: connection refused
Nov 25 11:36:31 crc kubenswrapper[4706]: I1125 11:36:31.865264 4706 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Nov 25 11:36:31 crc systemd[1]: Started Kubernetes Kubelet.
Nov 25 11:36:31 crc kubenswrapper[4706]: I1125 11:36:31.865325 4706 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate rotation is enabled
Nov 25 11:36:31 crc kubenswrapper[4706]: I1125 11:36:31.866505 4706 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Nov 25 11:36:31 crc kubenswrapper[4706]: I1125 11:36:31.866665 4706 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-12 16:43:10.706927705 +0000 UTC
Nov 25 11:36:31 crc kubenswrapper[4706]: I1125 11:36:31.866712 4706 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 1157h6m38.840218849s for next certificate rotation
Nov 25 11:36:31 crc kubenswrapper[4706]: I1125 11:36:31.866837 4706 volume_manager.go:287] "The desired_state_of_world populator starts"
Nov 25 11:36:31 crc kubenswrapper[4706]: E1125 11:36:31.866887 4706 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Nov 25 11:36:31 crc kubenswrapper[4706]: I1125 11:36:31.866921 4706 volume_manager.go:289] "Starting Kubelet Volume Manager"
Nov 25 11:36:31 crc kubenswrapper[4706]: I1125 11:36:31.866948 4706 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
Nov 25 11:36:31 crc kubenswrapper[4706]: I1125 11:36:31.868024 4706 factory.go:55] Registering systemd factory
Nov 25 11:36:31 crc kubenswrapper[4706]: I1125 11:36:31.868056 4706 factory.go:221] Registration of the systemd container factory successfully
Nov 25 11:36:31 crc kubenswrapper[4706]: W1125 11:36:31.868001 4706 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.13:6443: connect: connection refused
Nov 25 11:36:31 crc kubenswrapper[4706]: E1125 11:36:31.868384 4706 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.13:6443: connect: connection refused" logger="UnhandledError"
Nov 25 11:36:31 crc kubenswrapper[4706]: E1125 11:36:31.868408 4706 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.13:6443: connect: connection refused" interval="200ms"
Nov 25 11:36:31 crc kubenswrapper[4706]: I1125 11:36:31.868450 4706 factory.go:153] Registering CRI-O factory
Nov 25 11:36:31 crc kubenswrapper[4706]: I1125 11:36:31.868677 4706 factory.go:221] Registration of the crio container factory successfully
Nov 25 11:36:31 crc kubenswrapper[4706]: I1125 11:36:31.868864 4706 server.go:460] "Adding debug handlers to kubelet server"
Nov 25 11:36:31 crc kubenswrapper[4706]: I1125 11:36:31.869095 4706 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory
Nov 25 11:36:31 crc kubenswrapper[4706]: I1125 11:36:31.869234 4706 factory.go:103] Registering Raw factory
Nov 25 11:36:31 crc kubenswrapper[4706]: I1125 11:36:31.869403 4706 manager.go:1196] Started watching for new ooms in manager
Nov 25 11:36:31 crc kubenswrapper[4706]: I1125 11:36:31.870200 4706 manager.go:319] Starting recovery of all containers
Nov 25 11:36:31 crc kubenswrapper[4706]: E1125 11:36:31.869497 4706 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.102.83.13:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.187b3cdb5ab294bc default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-11-25 11:36:31.86283846 +0000 UTC m=+0.777395841,LastTimestamp:2025-11-25 11:36:31.86283846 +0000 UTC m=+0.777395841,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Nov 25 11:36:31 crc kubenswrapper[4706]: I1125 11:36:31.879825 4706 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" seLinuxMountContext=""
Nov 25 11:36:31 crc kubenswrapper[4706]: I1125 11:36:31.879900 4706 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" seLinuxMountContext=""
Nov 25 11:36:31 crc kubenswrapper[4706]: I1125 11:36:31.879921 4706 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" seLinuxMountContext=""
Nov 25 11:36:31 crc kubenswrapper[4706]: I1125 11:36:31.879934 4706 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" seLinuxMountContext=""
Nov 25 11:36:31 crc kubenswrapper[4706]: I1125 11:36:31.879947 4706 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" volumeName="kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" seLinuxMountContext=""
Nov 25 11:36:31 crc kubenswrapper[4706]: I1125 11:36:31.879961 4706 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" seLinuxMountContext=""
Nov 25 11:36:31 crc kubenswrapper[4706]: I1125 11:36:31.879973 4706 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" seLinuxMountContext=""
Nov 25 11:36:31 crc kubenswrapper[4706]: I1125 11:36:31.879986 4706 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" seLinuxMountContext=""
Nov 25 11:36:31 crc kubenswrapper[4706]: I1125 11:36:31.880003 4706 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" seLinuxMountContext=""
Nov 25 11:36:31 crc kubenswrapper[4706]: I1125 11:36:31.880017 4706 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" seLinuxMountContext=""
Nov 25 11:36:31 crc kubenswrapper[4706]: I1125 11:36:31.880029 4706 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" seLinuxMountContext=""
Nov 25 11:36:31 crc kubenswrapper[4706]: I1125 11:36:31.880042 4706 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" seLinuxMountContext=""
Nov 25 11:36:31 crc kubenswrapper[4706]: I1125 11:36:31.880057 4706 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" seLinuxMountContext=""
Nov 25 11:36:31 crc kubenswrapper[4706]: I1125 11:36:31.880074 4706 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" seLinuxMountContext=""
Nov 25 11:36:31 crc kubenswrapper[4706]: I1125 11:36:31.880088 4706 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" seLinuxMountContext=""
Nov 25 11:36:31 crc kubenswrapper[4706]: I1125 11:36:31.880099 4706 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" seLinuxMountContext=""
Nov 25 11:36:31 crc kubenswrapper[4706]: I1125 11:36:31.880111 4706 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" seLinuxMountContext=""
Nov 25 11:36:31 crc kubenswrapper[4706]: I1125 11:36:31.880127 4706 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm" seLinuxMountContext=""
Nov 25 11:36:31 crc kubenswrapper[4706]: I1125 11:36:31.880142 4706 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" seLinuxMountContext=""
Nov 25 11:36:31 crc kubenswrapper[4706]: I1125 11:36:31.880156 4706 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" seLinuxMountContext=""
Nov 25 11:36:31 crc kubenswrapper[4706]: I1125 11:36:31.880169 4706 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" seLinuxMountContext=""
Nov 25 11:36:31 crc kubenswrapper[4706]: I1125 11:36:31.880182 4706 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" seLinuxMountContext=""
Nov 25 11:36:31 crc kubenswrapper[4706]: I1125 11:36:31.880198 4706 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" seLinuxMountContext=""
Nov 25 11:36:31 crc kubenswrapper[4706]: I1125 11:36:31.880216 4706 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" seLinuxMountContext=""
Nov 25 11:36:31 crc kubenswrapper[4706]: I1125 11:36:31.880231 4706 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" seLinuxMountContext=""
Nov 25 11:36:31 crc kubenswrapper[4706]: I1125 11:36:31.880248 4706 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" seLinuxMountContext=""
Nov 25 11:36:31 crc kubenswrapper[4706]: I1125 11:36:31.880264 4706 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" seLinuxMountContext=""
Nov 25 11:36:31 crc kubenswrapper[4706]: I1125 11:36:31.880282 4706 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" seLinuxMountContext=""
Nov 25 11:36:31 crc kubenswrapper[4706]: I1125 11:36:31.880315 4706 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" seLinuxMountContext=""
Nov 25 11:36:31 crc kubenswrapper[4706]: I1125 11:36:31.880343 4706 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" seLinuxMountContext=""
Nov 25 11:36:31 crc kubenswrapper[4706]: I1125 11:36:31.880363 4706 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" seLinuxMountContext=""
Nov 25 11:36:31 crc kubenswrapper[4706]: I1125 11:36:31.880378 4706 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" seLinuxMountContext=""
Nov 25 11:36:31 crc kubenswrapper[4706]: I1125 11:36:31.880393 4706 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" seLinuxMountContext=""
Nov 25 11:36:31 crc kubenswrapper[4706]: I1125 11:36:31.880414 4706 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" seLinuxMountContext=""
Nov 25 11:36:31 crc kubenswrapper[4706]: I1125 11:36:31.880434 4706 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" seLinuxMountContext=""
Nov 25 11:36:31 crc kubenswrapper[4706]: I1125 11:36:31.880456 4706 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" seLinuxMountContext=""
Nov 25 11:36:31 crc kubenswrapper[4706]: I1125 11:36:31.880472 4706 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" seLinuxMountContext=""
Nov 25 11:36:31 crc kubenswrapper[4706]: I1125 11:36:31.880488 4706 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" seLinuxMountContext=""
Nov 25 11:36:31 crc kubenswrapper[4706]: I1125 11:36:31.880502 4706 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" seLinuxMountContext=""
Nov 25 11:36:31 crc kubenswrapper[4706]: I1125 11:36:31.880516 4706 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" seLinuxMountContext=""
Nov 25 11:36:31 crc kubenswrapper[4706]: I1125 11:36:31.880537 4706 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" seLinuxMountContext=""
Nov 25 11:36:31 crc kubenswrapper[4706]: I1125 11:36:31.880553 4706 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" seLinuxMountContext=""
Nov 25 11:36:31 crc kubenswrapper[4706]: I1125 11:36:31.880568 4706 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5" seLinuxMountContext=""
Nov 25 11:36:31 crc kubenswrapper[4706]: I1125 11:36:31.880581 4706 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" seLinuxMountContext=""
Nov 25 11:36:31 crc kubenswrapper[4706]: I1125 11:36:31.880594 4706 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" seLinuxMountContext=""
Nov 25 11:36:31 crc kubenswrapper[4706]: I1125 11:36:31.880608 4706 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" seLinuxMountContext=""
Nov 25 11:36:31 crc kubenswrapper[4706]: I1125 11:36:31.880623 4706 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" seLinuxMountContext=""
Nov 25 11:36:31 crc kubenswrapper[4706]: I1125 11:36:31.880637 4706 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" seLinuxMountContext=""
Nov 25 11:36:31 crc kubenswrapper[4706]: I1125 11:36:31.880650 4706 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" seLinuxMountContext=""
Nov 25 11:36:31 crc kubenswrapper[4706]: I1125 11:36:31.880667 4706 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" seLinuxMountContext=""
Nov 25 11:36:31 crc kubenswrapper[4706]: I1125 11:36:31.880683 4706 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" seLinuxMountContext=""
Nov 25 11:36:31 crc kubenswrapper[4706]: I1125 11:36:31.880698 4706 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" seLinuxMountContext=""
Nov 25 11:36:31 crc kubenswrapper[4706]: I1125 11:36:31.880719 4706 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" seLinuxMountContext=""
Nov 25 11:36:31 crc kubenswrapper[4706]: I1125 11:36:31.880735 4706 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" seLinuxMountContext=""
Nov 25 11:36:31 crc kubenswrapper[4706]: I1125 11:36:31.880752 4706 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="44663579-783b-4372-86d6-acf235a62d72" volumeName="kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" seLinuxMountContext=""
Nov 25 11:36:31 crc kubenswrapper[4706]: I1125 11:36:31.880768 4706 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" seLinuxMountContext=""
Nov 25 11:36:31 crc kubenswrapper[4706]: I1125 11:36:31.880842 4706 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" seLinuxMountContext=""
Nov 25 11:36:31 crc kubenswrapper[4706]: I1125 11:36:31.880895 4706 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" seLinuxMountContext=""
Nov 25 11:36:31 crc kubenswrapper[4706]: I1125 11:36:31.880912 4706 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" seLinuxMountContext=""
Nov 25 11:36:31 crc kubenswrapper[4706]: I1125 11:36:31.880927 4706 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" seLinuxMountContext=""
Nov 25 11:36:31 crc kubenswrapper[4706]: I1125 11:36:31.880943 4706 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" seLinuxMountContext=""
Nov 25 11:36:31 crc kubenswrapper[4706]: I1125 11:36:31.880957 4706 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" volumeName="kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" seLinuxMountContext=""
Nov 25 11:36:31 crc kubenswrapper[4706]: I1125 11:36:31.880973 4706 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" seLinuxMountContext=""
Nov 25 11:36:31 crc kubenswrapper[4706]: I1125 11:36:31.880987 4706 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" seLinuxMountContext=""
Nov 25 11:36:31 crc kubenswrapper[4706]: I1125 11:36:31.881001 4706 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls" seLinuxMountContext=""
Nov 25 11:36:31 crc kubenswrapper[4706]: I1125 11:36:31.881015 4706 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" seLinuxMountContext=""
Nov 25 11:36:31 crc kubenswrapper[4706]: I1125 11:36:31.881031 4706 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" seLinuxMountContext=""
Nov 25 11:36:31 crc kubenswrapper[4706]: I1125 11:36:31.881046 4706 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" seLinuxMountContext=""
Nov 25 11:36:31 crc kubenswrapper[4706]: I1125 11:36:31.881059 4706 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" seLinuxMountContext=""
Nov 25 11:36:31 crc kubenswrapper[4706]: I1125 11:36:31.881073 4706 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" seLinuxMountContext=""
Nov 25 11:36:31 crc kubenswrapper[4706]: I1125 11:36:31.881087 4706 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides" seLinuxMountContext=""
Nov 25 11:36:31 crc kubenswrapper[4706]: I1125 11:36:31.881100 4706 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" seLinuxMountContext=""
Nov 25 11:36:31 crc kubenswrapper[4706]: I1125 11:36:31.881117 4706 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" seLinuxMountContext=""
Nov 25 11:36:31 crc kubenswrapper[4706]: I1125 11:36:31.881132 4706 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3b6479f0-333b-4a96-9adf-2099afdc2447" volumeName="kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr" seLinuxMountContext=""
Nov 25 11:36:31 crc kubenswrapper[4706]: I1125 11:36:31.881147 4706 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" seLinuxMountContext=""
Nov 25 11:36:31 crc kubenswrapper[4706]: I1125 11:36:31.881161 4706 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" seLinuxMountContext=""
Nov 25 11:36:31 crc kubenswrapper[4706]: I1125 11:36:31.881207 4706 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" seLinuxMountContext=""
Nov 25 11:36:31 crc kubenswrapper[4706]: I1125 11:36:31.881227 4706 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" seLinuxMountContext=""
Nov 25 11:36:31 crc kubenswrapper[4706]: I1125 11:36:31.881244 4706 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" seLinuxMountContext=""
Nov 25 11:36:31 crc kubenswrapper[4706]: I1125 11:36:31.881258 4706 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" seLinuxMountContext=""
Nov 25 11:36:31 crc kubenswrapper[4706]: I1125 11:36:31.881273 4706 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" seLinuxMountContext=""
Nov 25 11:36:31 crc kubenswrapper[4706]: I1125 11:36:31.881290 4706 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" seLinuxMountContext=""
Nov 25 11:36:31 crc kubenswrapper[4706]: I1125 
11:36:31.881321 4706 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" seLinuxMountContext="" Nov 25 11:36:31 crc kubenswrapper[4706]: I1125 11:36:31.881335 4706 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" seLinuxMountContext="" Nov 25 11:36:31 crc kubenswrapper[4706]: I1125 11:36:31.881348 4706 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" seLinuxMountContext="" Nov 25 11:36:31 crc kubenswrapper[4706]: I1125 11:36:31.881364 4706 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" seLinuxMountContext="" Nov 25 11:36:31 crc kubenswrapper[4706]: I1125 11:36:31.881377 4706 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" seLinuxMountContext="" Nov 25 11:36:31 crc kubenswrapper[4706]: I1125 11:36:31.881390 4706 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" seLinuxMountContext="" Nov 25 11:36:31 crc kubenswrapper[4706]: I1125 11:36:31.881411 4706 reconstruct.go:130] "Volume is marked as uncertain and added into the 
actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" seLinuxMountContext="" Nov 25 11:36:31 crc kubenswrapper[4706]: I1125 11:36:31.881425 4706 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" seLinuxMountContext="" Nov 25 11:36:31 crc kubenswrapper[4706]: I1125 11:36:31.881439 4706 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" seLinuxMountContext="" Nov 25 11:36:31 crc kubenswrapper[4706]: I1125 11:36:31.881462 4706 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" seLinuxMountContext="" Nov 25 11:36:31 crc kubenswrapper[4706]: I1125 11:36:31.881476 4706 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" seLinuxMountContext="" Nov 25 11:36:31 crc kubenswrapper[4706]: I1125 11:36:31.881530 4706 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" seLinuxMountContext="" Nov 25 11:36:31 crc kubenswrapper[4706]: I1125 11:36:31.881548 4706 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" 
volumeName="kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" seLinuxMountContext="" Nov 25 11:36:31 crc kubenswrapper[4706]: I1125 11:36:31.881562 4706 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" seLinuxMountContext="" Nov 25 11:36:31 crc kubenswrapper[4706]: I1125 11:36:31.881577 4706 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" seLinuxMountContext="" Nov 25 11:36:31 crc kubenswrapper[4706]: I1125 11:36:31.881592 4706 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" seLinuxMountContext="" Nov 25 11:36:31 crc kubenswrapper[4706]: I1125 11:36:31.881608 4706 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" seLinuxMountContext="" Nov 25 11:36:31 crc kubenswrapper[4706]: I1125 11:36:31.881661 4706 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" seLinuxMountContext="" Nov 25 11:36:31 crc kubenswrapper[4706]: I1125 11:36:31.881678 4706 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" 
volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" seLinuxMountContext="" Nov 25 11:36:31 crc kubenswrapper[4706]: I1125 11:36:31.881717 4706 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" seLinuxMountContext="" Nov 25 11:36:31 crc kubenswrapper[4706]: I1125 11:36:31.881731 4706 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" seLinuxMountContext="" Nov 25 11:36:31 crc kubenswrapper[4706]: I1125 11:36:31.881746 4706 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" seLinuxMountContext="" Nov 25 11:36:31 crc kubenswrapper[4706]: I1125 11:36:31.881773 4706 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" seLinuxMountContext="" Nov 25 11:36:31 crc kubenswrapper[4706]: I1125 11:36:31.881793 4706 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" seLinuxMountContext="" Nov 25 11:36:31 crc kubenswrapper[4706]: I1125 11:36:31.881808 4706 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" 
volumeName="kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" seLinuxMountContext="" Nov 25 11:36:31 crc kubenswrapper[4706]: I1125 11:36:31.881826 4706 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" seLinuxMountContext="" Nov 25 11:36:31 crc kubenswrapper[4706]: I1125 11:36:31.881842 4706 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" seLinuxMountContext="" Nov 25 11:36:31 crc kubenswrapper[4706]: I1125 11:36:31.881858 4706 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" seLinuxMountContext="" Nov 25 11:36:31 crc kubenswrapper[4706]: I1125 11:36:31.881875 4706 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" seLinuxMountContext="" Nov 25 11:36:31 crc kubenswrapper[4706]: I1125 11:36:31.881889 4706 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" seLinuxMountContext="" Nov 25 11:36:31 crc kubenswrapper[4706]: I1125 11:36:31.881901 4706 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" 
seLinuxMountContext="" Nov 25 11:36:31 crc kubenswrapper[4706]: I1125 11:36:31.881914 4706 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" seLinuxMountContext="" Nov 25 11:36:31 crc kubenswrapper[4706]: I1125 11:36:31.881926 4706 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" seLinuxMountContext="" Nov 25 11:36:31 crc kubenswrapper[4706]: I1125 11:36:31.881937 4706 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf" seLinuxMountContext="" Nov 25 11:36:31 crc kubenswrapper[4706]: I1125 11:36:31.881947 4706 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" seLinuxMountContext="" Nov 25 11:36:31 crc kubenswrapper[4706]: I1125 11:36:31.881957 4706 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" seLinuxMountContext="" Nov 25 11:36:31 crc kubenswrapper[4706]: I1125 11:36:31.881969 4706 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" seLinuxMountContext="" Nov 25 11:36:31 crc kubenswrapper[4706]: I1125 
11:36:31.881980 4706 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" seLinuxMountContext="" Nov 25 11:36:31 crc kubenswrapper[4706]: I1125 11:36:31.881993 4706 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" seLinuxMountContext="" Nov 25 11:36:31 crc kubenswrapper[4706]: I1125 11:36:31.882005 4706 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" seLinuxMountContext="" Nov 25 11:36:31 crc kubenswrapper[4706]: I1125 11:36:31.882017 4706 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" seLinuxMountContext="" Nov 25 11:36:31 crc kubenswrapper[4706]: I1125 11:36:31.882028 4706 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" seLinuxMountContext="" Nov 25 11:36:31 crc kubenswrapper[4706]: I1125 11:36:31.882042 4706 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert" seLinuxMountContext="" Nov 25 11:36:31 crc kubenswrapper[4706]: I1125 11:36:31.882052 4706 reconstruct.go:130] "Volume is marked as uncertain and 
added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" seLinuxMountContext="" Nov 25 11:36:31 crc kubenswrapper[4706]: I1125 11:36:31.882064 4706 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" seLinuxMountContext="" Nov 25 11:36:31 crc kubenswrapper[4706]: I1125 11:36:31.882079 4706 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" seLinuxMountContext="" Nov 25 11:36:31 crc kubenswrapper[4706]: I1125 11:36:31.882094 4706 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" seLinuxMountContext="" Nov 25 11:36:31 crc kubenswrapper[4706]: I1125 11:36:31.882112 4706 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" seLinuxMountContext="" Nov 25 11:36:31 crc kubenswrapper[4706]: I1125 11:36:31.882127 4706 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" seLinuxMountContext="" Nov 25 11:36:31 crc kubenswrapper[4706]: I1125 11:36:31.882141 4706 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" seLinuxMountContext="" Nov 25 11:36:31 crc kubenswrapper[4706]: I1125 11:36:31.882155 4706 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" seLinuxMountContext="" Nov 25 11:36:31 crc kubenswrapper[4706]: I1125 11:36:31.882170 4706 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" seLinuxMountContext="" Nov 25 11:36:31 crc kubenswrapper[4706]: I1125 11:36:31.882183 4706 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" seLinuxMountContext="" Nov 25 11:36:31 crc kubenswrapper[4706]: I1125 11:36:31.882199 4706 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" seLinuxMountContext="" Nov 25 11:36:31 crc kubenswrapper[4706]: I1125 11:36:31.882220 4706 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" seLinuxMountContext="" Nov 25 11:36:31 crc kubenswrapper[4706]: I1125 11:36:31.882237 4706 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" 
volumeName="kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" seLinuxMountContext="" Nov 25 11:36:31 crc kubenswrapper[4706]: I1125 11:36:31.882253 4706 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" seLinuxMountContext="" Nov 25 11:36:31 crc kubenswrapper[4706]: I1125 11:36:31.882269 4706 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" seLinuxMountContext="" Nov 25 11:36:31 crc kubenswrapper[4706]: I1125 11:36:31.882284 4706 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" seLinuxMountContext="" Nov 25 11:36:31 crc kubenswrapper[4706]: I1125 11:36:31.882318 4706 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" seLinuxMountContext="" Nov 25 11:36:31 crc kubenswrapper[4706]: I1125 11:36:31.882335 4706 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" seLinuxMountContext="" Nov 25 11:36:31 crc kubenswrapper[4706]: I1125 11:36:31.882349 4706 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" 
volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" seLinuxMountContext="" Nov 25 11:36:31 crc kubenswrapper[4706]: I1125 11:36:31.882364 4706 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" seLinuxMountContext="" Nov 25 11:36:31 crc kubenswrapper[4706]: I1125 11:36:31.882379 4706 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" seLinuxMountContext="" Nov 25 11:36:31 crc kubenswrapper[4706]: I1125 11:36:31.882393 4706 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" seLinuxMountContext="" Nov 25 11:36:31 crc kubenswrapper[4706]: I1125 11:36:31.882408 4706 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script" seLinuxMountContext="" Nov 25 11:36:31 crc kubenswrapper[4706]: I1125 11:36:31.882421 4706 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" seLinuxMountContext="" Nov 25 11:36:31 crc kubenswrapper[4706]: I1125 11:36:31.882436 4706 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" 
volumeName="kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" seLinuxMountContext="" Nov 25 11:36:31 crc kubenswrapper[4706]: I1125 11:36:31.882451 4706 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" seLinuxMountContext="" Nov 25 11:36:31 crc kubenswrapper[4706]: I1125 11:36:31.882465 4706 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" seLinuxMountContext="" Nov 25 11:36:31 crc kubenswrapper[4706]: I1125 11:36:31.882481 4706 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" seLinuxMountContext="" Nov 25 11:36:31 crc kubenswrapper[4706]: I1125 11:36:31.882494 4706 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" seLinuxMountContext="" Nov 25 11:36:31 crc kubenswrapper[4706]: I1125 11:36:31.882507 4706 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" seLinuxMountContext="" Nov 25 11:36:31 crc kubenswrapper[4706]: I1125 11:36:31.882520 4706 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" 
volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" seLinuxMountContext="" Nov 25 11:36:31 crc kubenswrapper[4706]: I1125 11:36:31.882535 4706 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" seLinuxMountContext="" Nov 25 11:36:31 crc kubenswrapper[4706]: I1125 11:36:31.882548 4706 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" seLinuxMountContext="" Nov 25 11:36:31 crc kubenswrapper[4706]: I1125 11:36:31.882563 4706 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" seLinuxMountContext="" Nov 25 11:36:31 crc kubenswrapper[4706]: I1125 11:36:31.882576 4706 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" seLinuxMountContext="" Nov 25 11:36:31 crc kubenswrapper[4706]: I1125 11:36:31.882592 4706 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" seLinuxMountContext="" Nov 25 11:36:31 crc kubenswrapper[4706]: I1125 11:36:31.882605 4706 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" 
seLinuxMountContext="" Nov 25 11:36:31 crc kubenswrapper[4706]: I1125 11:36:31.882618 4706 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" seLinuxMountContext="" Nov 25 11:36:31 crc kubenswrapper[4706]: I1125 11:36:31.882632 4706 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" seLinuxMountContext="" Nov 25 11:36:31 crc kubenswrapper[4706]: I1125 11:36:31.882645 4706 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" seLinuxMountContext="" Nov 25 11:36:31 crc kubenswrapper[4706]: I1125 11:36:31.882660 4706 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" seLinuxMountContext="" Nov 25 11:36:31 crc kubenswrapper[4706]: I1125 11:36:31.882673 4706 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49ef4625-1d3a-4a9f-b595-c2433d32326d" volumeName="kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" seLinuxMountContext="" Nov 25 11:36:31 crc kubenswrapper[4706]: I1125 11:36:31.882687 4706 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" seLinuxMountContext="" Nov 25 11:36:31 crc kubenswrapper[4706]: I1125 11:36:31.882707 
4706 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" seLinuxMountContext="" Nov 25 11:36:31 crc kubenswrapper[4706]: I1125 11:36:31.882726 4706 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" seLinuxMountContext="" Nov 25 11:36:31 crc kubenswrapper[4706]: I1125 11:36:31.882742 4706 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" seLinuxMountContext="" Nov 25 11:36:31 crc kubenswrapper[4706]: I1125 11:36:31.882756 4706 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" seLinuxMountContext="" Nov 25 11:36:31 crc kubenswrapper[4706]: I1125 11:36:31.882770 4706 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" seLinuxMountContext="" Nov 25 11:36:31 crc kubenswrapper[4706]: I1125 11:36:31.882786 4706 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d751cbb-f2e2-430d-9754-c882a5e924a5" volumeName="kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl" seLinuxMountContext="" Nov 25 11:36:31 crc kubenswrapper[4706]: I1125 11:36:31.882799 4706 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" seLinuxMountContext="" Nov 25 11:36:31 crc kubenswrapper[4706]: I1125 11:36:31.882815 4706 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" seLinuxMountContext="" Nov 25 11:36:31 crc kubenswrapper[4706]: I1125 11:36:31.882829 4706 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" seLinuxMountContext="" Nov 25 11:36:31 crc kubenswrapper[4706]: I1125 11:36:31.882844 4706 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" seLinuxMountContext="" Nov 25 11:36:31 crc kubenswrapper[4706]: I1125 11:36:31.882857 4706 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" seLinuxMountContext="" Nov 25 11:36:31 crc kubenswrapper[4706]: I1125 11:36:31.882870 4706 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" seLinuxMountContext="" Nov 25 11:36:31 crc kubenswrapper[4706]: I1125 11:36:31.882884 4706 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" 
volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" seLinuxMountContext="" Nov 25 11:36:31 crc kubenswrapper[4706]: I1125 11:36:31.882899 4706 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" seLinuxMountContext="" Nov 25 11:36:31 crc kubenswrapper[4706]: I1125 11:36:31.882934 4706 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" seLinuxMountContext="" Nov 25 11:36:31 crc kubenswrapper[4706]: I1125 11:36:31.882948 4706 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" seLinuxMountContext="" Nov 25 11:36:31 crc kubenswrapper[4706]: I1125 11:36:31.882963 4706 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" seLinuxMountContext="" Nov 25 11:36:31 crc kubenswrapper[4706]: I1125 11:36:31.882977 4706 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" seLinuxMountContext="" Nov 25 11:36:31 crc kubenswrapper[4706]: I1125 11:36:31.882990 4706 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" 
volumeName="kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" seLinuxMountContext="" Nov 25 11:36:31 crc kubenswrapper[4706]: I1125 11:36:31.883006 4706 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" seLinuxMountContext="" Nov 25 11:36:31 crc kubenswrapper[4706]: I1125 11:36:31.883020 4706 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" seLinuxMountContext="" Nov 25 11:36:31 crc kubenswrapper[4706]: I1125 11:36:31.883035 4706 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf" seLinuxMountContext="" Nov 25 11:36:31 crc kubenswrapper[4706]: I1125 11:36:31.883050 4706 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert" seLinuxMountContext="" Nov 25 11:36:31 crc kubenswrapper[4706]: I1125 11:36:31.883064 4706 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" seLinuxMountContext="" Nov 25 11:36:31 crc kubenswrapper[4706]: I1125 11:36:31.884829 4706 reconstruct.go:144] "Volume is marked device as uncertain and added into the actual state" 
volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" deviceMountPath="/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount" Nov 25 11:36:31 crc kubenswrapper[4706]: I1125 11:36:31.884868 4706 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" seLinuxMountContext="" Nov 25 11:36:31 crc kubenswrapper[4706]: I1125 11:36:31.884885 4706 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" seLinuxMountContext="" Nov 25 11:36:31 crc kubenswrapper[4706]: I1125 11:36:31.884901 4706 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" seLinuxMountContext="" Nov 25 11:36:31 crc kubenswrapper[4706]: I1125 11:36:31.884917 4706 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb" seLinuxMountContext="" Nov 25 11:36:31 crc kubenswrapper[4706]: I1125 11:36:31.884934 4706 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" seLinuxMountContext="" Nov 25 11:36:31 crc kubenswrapper[4706]: I1125 11:36:31.884952 4706 reconstruct.go:130] 
"Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" seLinuxMountContext="" Nov 25 11:36:31 crc kubenswrapper[4706]: I1125 11:36:31.884966 4706 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" seLinuxMountContext="" Nov 25 11:36:31 crc kubenswrapper[4706]: I1125 11:36:31.884982 4706 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" seLinuxMountContext="" Nov 25 11:36:31 crc kubenswrapper[4706]: I1125 11:36:31.884997 4706 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" seLinuxMountContext="" Nov 25 11:36:31 crc kubenswrapper[4706]: I1125 11:36:31.885012 4706 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" seLinuxMountContext="" Nov 25 11:36:31 crc kubenswrapper[4706]: I1125 11:36:31.885028 4706 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" seLinuxMountContext="" Nov 25 11:36:31 crc kubenswrapper[4706]: I1125 11:36:31.885042 4706 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" seLinuxMountContext="" Nov 25 11:36:31 crc kubenswrapper[4706]: I1125 11:36:31.885058 4706 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" seLinuxMountContext="" Nov 25 11:36:31 crc kubenswrapper[4706]: I1125 11:36:31.885072 4706 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" seLinuxMountContext="" Nov 25 11:36:31 crc kubenswrapper[4706]: I1125 11:36:31.885087 4706 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" seLinuxMountContext="" Nov 25 11:36:31 crc kubenswrapper[4706]: I1125 11:36:31.885099 4706 reconstruct.go:97] "Volume reconstruction finished" Nov 25 11:36:31 crc kubenswrapper[4706]: I1125 11:36:31.885109 4706 reconciler.go:26] "Reconciler: start to sync state" Nov 25 11:36:31 crc kubenswrapper[4706]: I1125 11:36:31.890776 4706 manager.go:324] Recovery completed Nov 25 11:36:31 crc kubenswrapper[4706]: I1125 11:36:31.899194 4706 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 25 11:36:31 crc kubenswrapper[4706]: I1125 11:36:31.901744 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:36:31 crc kubenswrapper[4706]: I1125 11:36:31.901790 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:36:31 crc 
kubenswrapper[4706]: I1125 11:36:31.901802 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:36:31 crc kubenswrapper[4706]: I1125 11:36:31.902783 4706 cpu_manager.go:225] "Starting CPU manager" policy="none" Nov 25 11:36:31 crc kubenswrapper[4706]: I1125 11:36:31.902808 4706 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s" Nov 25 11:36:31 crc kubenswrapper[4706]: I1125 11:36:31.902835 4706 state_mem.go:36] "Initialized new in-memory state store" Nov 25 11:36:31 crc kubenswrapper[4706]: I1125 11:36:31.918676 4706 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Nov 25 11:36:31 crc kubenswrapper[4706]: I1125 11:36:31.920801 4706 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Nov 25 11:36:31 crc kubenswrapper[4706]: I1125 11:36:31.920880 4706 status_manager.go:217] "Starting to sync pod status with apiserver" Nov 25 11:36:31 crc kubenswrapper[4706]: I1125 11:36:31.920931 4706 kubelet.go:2335] "Starting kubelet main sync loop" Nov 25 11:36:31 crc kubenswrapper[4706]: E1125 11:36:31.920999 4706 kubelet.go:2359] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Nov 25 11:36:31 crc kubenswrapper[4706]: W1125 11:36:31.921774 4706 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.13:6443: connect: connection refused Nov 25 11:36:31 crc kubenswrapper[4706]: E1125 11:36:31.921843 4706 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.13:6443: connect: connection 
refused" logger="UnhandledError" Nov 25 11:36:31 crc kubenswrapper[4706]: I1125 11:36:31.929402 4706 policy_none.go:49] "None policy: Start" Nov 25 11:36:31 crc kubenswrapper[4706]: I1125 11:36:31.930672 4706 memory_manager.go:170] "Starting memorymanager" policy="None" Nov 25 11:36:31 crc kubenswrapper[4706]: I1125 11:36:31.930711 4706 state_mem.go:35] "Initializing new in-memory state store" Nov 25 11:36:31 crc kubenswrapper[4706]: E1125 11:36:31.967772 4706 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Nov 25 11:36:31 crc kubenswrapper[4706]: I1125 11:36:31.984722 4706 manager.go:334] "Starting Device Plugin manager" Nov 25 11:36:31 crc kubenswrapper[4706]: I1125 11:36:31.985045 4706 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Nov 25 11:36:31 crc kubenswrapper[4706]: I1125 11:36:31.985074 4706 server.go:79] "Starting device plugin registration server" Nov 25 11:36:31 crc kubenswrapper[4706]: I1125 11:36:31.985551 4706 eviction_manager.go:189] "Eviction manager: starting control loop" Nov 25 11:36:31 crc kubenswrapper[4706]: I1125 11:36:31.985568 4706 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Nov 25 11:36:31 crc kubenswrapper[4706]: I1125 11:36:31.985717 4706 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry" Nov 25 11:36:31 crc kubenswrapper[4706]: I1125 11:36:31.985860 4706 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts" Nov 25 11:36:31 crc kubenswrapper[4706]: I1125 11:36:31.985868 4706 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Nov 25 11:36:31 crc kubenswrapper[4706]: E1125 11:36:31.992727 4706 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Nov 25 11:36:32 crc kubenswrapper[4706]: I1125 
11:36:32.021147 4706 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc","openshift-kube-controller-manager/kube-controller-manager-crc","openshift-kube-scheduler/openshift-kube-scheduler-crc","openshift-machine-config-operator/kube-rbac-proxy-crio-crc","openshift-etcd/etcd-crc"] Nov 25 11:36:32 crc kubenswrapper[4706]: I1125 11:36:32.021312 4706 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 25 11:36:32 crc kubenswrapper[4706]: I1125 11:36:32.022583 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:36:32 crc kubenswrapper[4706]: I1125 11:36:32.022615 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:36:32 crc kubenswrapper[4706]: I1125 11:36:32.022625 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:36:32 crc kubenswrapper[4706]: I1125 11:36:32.022849 4706 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 25 11:36:32 crc kubenswrapper[4706]: I1125 11:36:32.023619 4706 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 25 11:36:32 crc kubenswrapper[4706]: I1125 11:36:32.023681 4706 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 25 11:36:32 crc kubenswrapper[4706]: I1125 11:36:32.023935 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:36:32 crc kubenswrapper[4706]: I1125 11:36:32.023967 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:36:32 crc kubenswrapper[4706]: I1125 11:36:32.023982 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:36:32 crc kubenswrapper[4706]: I1125 11:36:32.024219 4706 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 25 11:36:32 crc kubenswrapper[4706]: I1125 11:36:32.024488 4706 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 25 11:36:32 crc kubenswrapper[4706]: I1125 11:36:32.024718 4706 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 25 11:36:32 crc kubenswrapper[4706]: I1125 11:36:32.025530 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:36:32 crc kubenswrapper[4706]: I1125 11:36:32.025619 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:36:32 crc kubenswrapper[4706]: I1125 11:36:32.025648 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:36:32 crc kubenswrapper[4706]: I1125 11:36:32.026059 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:36:32 crc kubenswrapper[4706]: I1125 11:36:32.026062 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:36:32 crc kubenswrapper[4706]: I1125 11:36:32.026136 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:36:32 crc kubenswrapper[4706]: I1125 11:36:32.026152 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:36:32 crc kubenswrapper[4706]: I1125 11:36:32.026096 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:36:32 crc kubenswrapper[4706]: I1125 11:36:32.026197 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:36:32 crc kubenswrapper[4706]: I1125 11:36:32.026243 4706 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 25 11:36:32 crc 
kubenswrapper[4706]: I1125 11:36:32.026488 4706 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Nov 25 11:36:32 crc kubenswrapper[4706]: I1125 11:36:32.026520 4706 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 25 11:36:32 crc kubenswrapper[4706]: I1125 11:36:32.027066 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:36:32 crc kubenswrapper[4706]: I1125 11:36:32.027110 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:36:32 crc kubenswrapper[4706]: I1125 11:36:32.027122 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:36:32 crc kubenswrapper[4706]: I1125 11:36:32.027200 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:36:32 crc kubenswrapper[4706]: I1125 11:36:32.027211 4706 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 25 11:36:32 crc kubenswrapper[4706]: I1125 11:36:32.027219 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:36:32 crc kubenswrapper[4706]: I1125 11:36:32.027228 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:36:32 crc kubenswrapper[4706]: I1125 11:36:32.027404 4706 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Nov 25 11:36:32 crc kubenswrapper[4706]: I1125 11:36:32.027455 4706 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 25 11:36:32 crc kubenswrapper[4706]: I1125 11:36:32.028910 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:36:32 crc kubenswrapper[4706]: I1125 11:36:32.028940 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:36:32 crc kubenswrapper[4706]: I1125 11:36:32.028954 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:36:32 crc kubenswrapper[4706]: I1125 11:36:32.028910 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:36:32 crc kubenswrapper[4706]: I1125 11:36:32.029083 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:36:32 crc kubenswrapper[4706]: I1125 11:36:32.029098 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:36:32 crc kubenswrapper[4706]: I1125 11:36:32.029397 4706 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd/etcd-crc" Nov 25 11:36:32 crc kubenswrapper[4706]: I1125 11:36:32.029425 4706 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 25 11:36:32 crc kubenswrapper[4706]: I1125 11:36:32.030007 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:36:32 crc kubenswrapper[4706]: I1125 11:36:32.030027 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:36:32 crc kubenswrapper[4706]: I1125 11:36:32.030042 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:36:32 crc kubenswrapper[4706]: E1125 11:36:32.069652 4706 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.13:6443: connect: connection refused" interval="400ms" Nov 25 11:36:32 crc kubenswrapper[4706]: I1125 11:36:32.086111 4706 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 25 11:36:32 crc kubenswrapper[4706]: I1125 11:36:32.087407 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 25 11:36:32 crc kubenswrapper[4706]: I1125 11:36:32.087452 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 25 11:36:32 crc kubenswrapper[4706]: I1125 
11:36:32.087477 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Nov 25 11:36:32 crc kubenswrapper[4706]: I1125 11:36:32.087501 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 25 11:36:32 crc kubenswrapper[4706]: I1125 11:36:32.087526 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 25 11:36:32 crc kubenswrapper[4706]: I1125 11:36:32.087615 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Nov 25 11:36:32 crc kubenswrapper[4706]: I1125 11:36:32.087670 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" 
Nov 25 11:36:32 crc kubenswrapper[4706]: I1125 11:36:32.087717 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 25 11:36:32 crc kubenswrapper[4706]: I1125 11:36:32.087743 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 25 11:36:32 crc kubenswrapper[4706]: I1125 11:36:32.087767 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 25 11:36:32 crc kubenswrapper[4706]: I1125 11:36:32.087787 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 25 11:36:32 crc kubenswrapper[4706]: I1125 11:36:32.087932 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 25 11:36:32 crc kubenswrapper[4706]: I1125 11:36:32.088104 4706 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Nov 25 11:36:32 crc kubenswrapper[4706]: I1125 11:36:32.088168 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 25 11:36:32 crc kubenswrapper[4706]: I1125 11:36:32.088189 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 25 11:36:32 crc kubenswrapper[4706]: I1125 11:36:32.088465 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:36:32 crc kubenswrapper[4706]: I1125 11:36:32.088504 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:36:32 crc kubenswrapper[4706]: I1125 11:36:32.088516 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:36:32 crc kubenswrapper[4706]: I1125 11:36:32.088544 4706 kubelet_node_status.go:76] "Attempting to register node" node="crc" Nov 25 11:36:32 crc kubenswrapper[4706]: E1125 11:36:32.089168 4706 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.13:6443: connect: connection refused" node="crc" Nov 25 11:36:32 crc 
kubenswrapper[4706]: I1125 11:36:32.189883 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Nov 25 11:36:32 crc kubenswrapper[4706]: I1125 11:36:32.189946 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 25 11:36:32 crc kubenswrapper[4706]: I1125 11:36:32.189979 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 25 11:36:32 crc kubenswrapper[4706]: I1125 11:36:32.190014 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 25 11:36:32 crc kubenswrapper[4706]: I1125 11:36:32.190032 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Nov 25 11:36:32 crc kubenswrapper[4706]: I1125 11:36:32.190044 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 25 11:36:32 crc kubenswrapper[4706]: I1125 11:36:32.190081 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Nov 25 11:36:32 crc kubenswrapper[4706]: I1125 11:36:32.190092 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 25 11:36:32 crc kubenswrapper[4706]: I1125 11:36:32.190153 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 25 11:36:32 crc kubenswrapper[4706]: I1125 11:36:32.190109 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 25 11:36:32 crc kubenswrapper[4706]: I1125 11:36:32.190170 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: 
\"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 25 11:36:32 crc kubenswrapper[4706]: I1125 11:36:32.190214 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 25 11:36:32 crc kubenswrapper[4706]: I1125 11:36:32.190226 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 25 11:36:32 crc kubenswrapper[4706]: I1125 11:36:32.190205 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 25 11:36:32 crc kubenswrapper[4706]: I1125 11:36:32.190166 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 25 11:36:32 crc kubenswrapper[4706]: I1125 11:36:32.190204 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Nov 25 11:36:32 crc kubenswrapper[4706]: I1125 11:36:32.190332 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 25 11:36:32 crc kubenswrapper[4706]: I1125 11:36:32.190363 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Nov 25 11:36:32 crc kubenswrapper[4706]: I1125 11:36:32.190393 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 25 11:36:32 crc kubenswrapper[4706]: I1125 11:36:32.190416 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 25 11:36:32 crc kubenswrapper[4706]: I1125 11:36:32.190437 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 25 11:36:32 crc kubenswrapper[4706]: I1125 11:36:32.190442 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 25 
11:36:32 crc kubenswrapper[4706]: I1125 11:36:32.190390 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 25 11:36:32 crc kubenswrapper[4706]: I1125 11:36:32.190466 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 25 11:36:32 crc kubenswrapper[4706]: I1125 11:36:32.190497 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 25 11:36:32 crc kubenswrapper[4706]: I1125 11:36:32.190506 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Nov 25 11:36:32 crc kubenswrapper[4706]: I1125 11:36:32.190519 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 25 11:36:32 crc kubenswrapper[4706]: I1125 11:36:32.190530 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod 
\"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 25 11:36:32 crc kubenswrapper[4706]: I1125 11:36:32.190552 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Nov 25 11:36:32 crc kubenswrapper[4706]: I1125 11:36:32.190600 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Nov 25 11:36:32 crc kubenswrapper[4706]: I1125 11:36:32.290051 4706 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 25 11:36:32 crc kubenswrapper[4706]: I1125 11:36:32.291932 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:36:32 crc kubenswrapper[4706]: I1125 11:36:32.291977 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:36:32 crc kubenswrapper[4706]: I1125 11:36:32.291989 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:36:32 crc kubenswrapper[4706]: I1125 11:36:32.292019 4706 kubelet_node_status.go:76] "Attempting to register node" node="crc" Nov 25 11:36:32 crc kubenswrapper[4706]: E1125 11:36:32.292430 4706 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.13:6443: connect: connection refused" node="crc" Nov 25 11:36:32 crc kubenswrapper[4706]: I1125 
11:36:32.354050 4706 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 25 11:36:32 crc kubenswrapper[4706]: I1125 11:36:32.370192 4706 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 25 11:36:32 crc kubenswrapper[4706]: I1125 11:36:32.391911 4706 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Nov 25 11:36:32 crc kubenswrapper[4706]: I1125 11:36:32.398060 4706 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Nov 25 11:36:32 crc kubenswrapper[4706]: I1125 11:36:32.402374 4706 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc" Nov 25 11:36:32 crc kubenswrapper[4706]: W1125 11:36:32.411166 4706 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf4b27818a5e8e43d0dc095d08835c792.slice/crio-b5dfe8e7cabeb8d0f453313a413ee234a7a2b606760bbd2163a0ff66446cf149 WatchSource:0}: Error finding container b5dfe8e7cabeb8d0f453313a413ee234a7a2b606760bbd2163a0ff66446cf149: Status 404 returned error can't find the container with id b5dfe8e7cabeb8d0f453313a413ee234a7a2b606760bbd2163a0ff66446cf149 Nov 25 11:36:32 crc kubenswrapper[4706]: W1125 11:36:32.412377 4706 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf614b9022728cf315e60c057852e563e.slice/crio-7442baa3914d39fd2df161eed8c62b08cf23112c43ea1c12c7ab95519fd28ece WatchSource:0}: Error finding container 7442baa3914d39fd2df161eed8c62b08cf23112c43ea1c12c7ab95519fd28ece: Status 404 returned error can't find the container with id 7442baa3914d39fd2df161eed8c62b08cf23112c43ea1c12c7ab95519fd28ece Nov 25 
11:36:32 crc kubenswrapper[4706]: W1125 11:36:32.421945 4706 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3dcd261975c3d6b9a6ad6367fd4facd3.slice/crio-4e9eb6c6ea8dfc9a375fc24a23ad274eaabee95c1ce97a98f6813e16f140e8fe WatchSource:0}: Error finding container 4e9eb6c6ea8dfc9a375fc24a23ad274eaabee95c1ce97a98f6813e16f140e8fe: Status 404 returned error can't find the container with id 4e9eb6c6ea8dfc9a375fc24a23ad274eaabee95c1ce97a98f6813e16f140e8fe Nov 25 11:36:32 crc kubenswrapper[4706]: W1125 11:36:32.424807 4706 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd1b160f5dda77d281dd8e69ec8d817f9.slice/crio-59099d971cde30263e3a76e743eebc6e4f0716246aeaf039f54767e9a15e1551 WatchSource:0}: Error finding container 59099d971cde30263e3a76e743eebc6e4f0716246aeaf039f54767e9a15e1551: Status 404 returned error can't find the container with id 59099d971cde30263e3a76e743eebc6e4f0716246aeaf039f54767e9a15e1551 Nov 25 11:36:32 crc kubenswrapper[4706]: W1125 11:36:32.428672 4706 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2139d3e2895fc6797b9c76a1b4c9886d.slice/crio-b21fe123f23aef6431a985eeda7dbc37b2f74799ea267448e3fa24764d395afc WatchSource:0}: Error finding container b21fe123f23aef6431a985eeda7dbc37b2f74799ea267448e3fa24764d395afc: Status 404 returned error can't find the container with id b21fe123f23aef6431a985eeda7dbc37b2f74799ea267448e3fa24764d395afc Nov 25 11:36:32 crc kubenswrapper[4706]: E1125 11:36:32.471325 4706 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.13:6443: connect: connection refused" interval="800ms" Nov 25 11:36:32 crc kubenswrapper[4706]: W1125 11:36:32.687419 4706 
reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.13:6443: connect: connection refused Nov 25 11:36:32 crc kubenswrapper[4706]: E1125 11:36:32.687544 4706 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.13:6443: connect: connection refused" logger="UnhandledError" Nov 25 11:36:32 crc kubenswrapper[4706]: I1125 11:36:32.692912 4706 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 25 11:36:32 crc kubenswrapper[4706]: I1125 11:36:32.694243 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:36:32 crc kubenswrapper[4706]: I1125 11:36:32.694280 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:36:32 crc kubenswrapper[4706]: I1125 11:36:32.694290 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:36:32 crc kubenswrapper[4706]: I1125 11:36:32.694339 4706 kubelet_node_status.go:76] "Attempting to register node" node="crc" Nov 25 11:36:32 crc kubenswrapper[4706]: E1125 11:36:32.694832 4706 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.13:6443: connect: connection refused" node="crc" Nov 25 11:36:32 crc kubenswrapper[4706]: I1125 11:36:32.866417 4706 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 
38.102.83.13:6443: connect: connection refused Nov 25 11:36:32 crc kubenswrapper[4706]: I1125 11:36:32.925338 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"7442baa3914d39fd2df161eed8c62b08cf23112c43ea1c12c7ab95519fd28ece"} Nov 25 11:36:32 crc kubenswrapper[4706]: I1125 11:36:32.926325 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"b5dfe8e7cabeb8d0f453313a413ee234a7a2b606760bbd2163a0ff66446cf149"} Nov 25 11:36:32 crc kubenswrapper[4706]: I1125 11:36:32.927098 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"b21fe123f23aef6431a985eeda7dbc37b2f74799ea267448e3fa24764d395afc"} Nov 25 11:36:32 crc kubenswrapper[4706]: I1125 11:36:32.928065 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"59099d971cde30263e3a76e743eebc6e4f0716246aeaf039f54767e9a15e1551"} Nov 25 11:36:32 crc kubenswrapper[4706]: I1125 11:36:32.929026 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"4e9eb6c6ea8dfc9a375fc24a23ad274eaabee95c1ce97a98f6813e16f140e8fe"} Nov 25 11:36:32 crc kubenswrapper[4706]: W1125 11:36:32.933817 4706 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.13:6443: connect: connection refused Nov 25 
11:36:32 crc kubenswrapper[4706]: E1125 11:36:32.933900 4706 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.13:6443: connect: connection refused" logger="UnhandledError" Nov 25 11:36:33 crc kubenswrapper[4706]: W1125 11:36:33.179939 4706 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.13:6443: connect: connection refused Nov 25 11:36:33 crc kubenswrapper[4706]: E1125 11:36:33.180051 4706 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.13:6443: connect: connection refused" logger="UnhandledError" Nov 25 11:36:33 crc kubenswrapper[4706]: W1125 11:36:33.219903 4706 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.13:6443: connect: connection refused Nov 25 11:36:33 crc kubenswrapper[4706]: E1125 11:36:33.220012 4706 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.13:6443: connect: connection refused" logger="UnhandledError" Nov 25 11:36:33 crc kubenswrapper[4706]: E1125 11:36:33.272915 4706 controller.go:145] "Failed to ensure lease exists, will retry" err="Get 
\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.13:6443: connect: connection refused" interval="1.6s" Nov 25 11:36:33 crc kubenswrapper[4706]: I1125 11:36:33.495207 4706 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 25 11:36:33 crc kubenswrapper[4706]: I1125 11:36:33.496741 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:36:33 crc kubenswrapper[4706]: I1125 11:36:33.496812 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:36:33 crc kubenswrapper[4706]: I1125 11:36:33.496823 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:36:33 crc kubenswrapper[4706]: I1125 11:36:33.496861 4706 kubelet_node_status.go:76] "Attempting to register node" node="crc" Nov 25 11:36:33 crc kubenswrapper[4706]: E1125 11:36:33.497516 4706 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.13:6443: connect: connection refused" node="crc" Nov 25 11:36:33 crc kubenswrapper[4706]: I1125 11:36:33.866396 4706 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.13:6443: connect: connection refused Nov 25 11:36:33 crc kubenswrapper[4706]: I1125 11:36:33.933788 4706 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="adfea6c3d1d62c9ce2656cae203bccf32ab19165305db3731e3db92915dd7d4c" exitCode=0 Nov 25 11:36:33 crc kubenswrapper[4706]: I1125 11:36:33.933974 4706 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 25 
11:36:33 crc kubenswrapper[4706]: I1125 11:36:33.934262 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"adfea6c3d1d62c9ce2656cae203bccf32ab19165305db3731e3db92915dd7d4c"} Nov 25 11:36:33 crc kubenswrapper[4706]: I1125 11:36:33.935197 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:36:33 crc kubenswrapper[4706]: I1125 11:36:33.935286 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:36:33 crc kubenswrapper[4706]: I1125 11:36:33.935315 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:36:33 crc kubenswrapper[4706]: I1125 11:36:33.936705 4706 generic.go:334] "Generic (PLEG): container finished" podID="d1b160f5dda77d281dd8e69ec8d817f9" containerID="3a03748c4ae77a0195537510fbf39f425fb59b820b719972a26c1cbaa4e1faa0" exitCode=0 Nov 25 11:36:33 crc kubenswrapper[4706]: I1125 11:36:33.936777 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerDied","Data":"3a03748c4ae77a0195537510fbf39f425fb59b820b719972a26c1cbaa4e1faa0"} Nov 25 11:36:33 crc kubenswrapper[4706]: I1125 11:36:33.936792 4706 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 25 11:36:33 crc kubenswrapper[4706]: I1125 11:36:33.938235 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:36:33 crc kubenswrapper[4706]: I1125 11:36:33.938260 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:36:33 crc kubenswrapper[4706]: I1125 11:36:33.938270 4706 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:36:33 crc kubenswrapper[4706]: I1125 11:36:33.939506 4706 generic.go:334] "Generic (PLEG): container finished" podID="3dcd261975c3d6b9a6ad6367fd4facd3" containerID="0299d89c1a2ea9c2a4bb46691aecd2d86618d3620e7406e1af57e1c03ce50b94" exitCode=0 Nov 25 11:36:33 crc kubenswrapper[4706]: I1125 11:36:33.939590 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerDied","Data":"0299d89c1a2ea9c2a4bb46691aecd2d86618d3620e7406e1af57e1c03ce50b94"} Nov 25 11:36:33 crc kubenswrapper[4706]: I1125 11:36:33.939645 4706 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 25 11:36:33 crc kubenswrapper[4706]: I1125 11:36:33.940727 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:36:33 crc kubenswrapper[4706]: I1125 11:36:33.940773 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:36:33 crc kubenswrapper[4706]: I1125 11:36:33.940789 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:36:33 crc kubenswrapper[4706]: I1125 11:36:33.942536 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"a4d0ce4e175dd8da8d15b26e60ced87ee11dc8079ce730cfbdce1b3f4f08b1d2"} Nov 25 11:36:33 crc kubenswrapper[4706]: I1125 11:36:33.942574 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"ab8621c83015577b9039ac2ba9ce46f8b29f66d77da31a02d179132d923741bd"} Nov 25 
11:36:33 crc kubenswrapper[4706]: I1125 11:36:33.942588 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"71b496da1a81efbb50a84766e610a6b03e032a4e2cb5a71191395ffb85f6b1f5"} Nov 25 11:36:33 crc kubenswrapper[4706]: I1125 11:36:33.942600 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"83b1d9c60793e3e0b5943d7cccd50656df78c4655b84e12c8dd1ba7d99a7990d"} Nov 25 11:36:33 crc kubenswrapper[4706]: I1125 11:36:33.942756 4706 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 25 11:36:33 crc kubenswrapper[4706]: I1125 11:36:33.944498 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:36:33 crc kubenswrapper[4706]: I1125 11:36:33.944539 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:36:33 crc kubenswrapper[4706]: I1125 11:36:33.944551 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"ea87e7399f4267877f5e967eb62bd500b646e88ee8ee20a71c3bf7c0941f02a8"} Nov 25 11:36:33 crc kubenswrapper[4706]: I1125 11:36:33.944558 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:36:33 crc kubenswrapper[4706]: I1125 11:36:33.944532 4706 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="ea87e7399f4267877f5e967eb62bd500b646e88ee8ee20a71c3bf7c0941f02a8" exitCode=0 Nov 25 11:36:33 crc kubenswrapper[4706]: I1125 11:36:33.944884 4706 
kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 25 11:36:33 crc kubenswrapper[4706]: I1125 11:36:33.949113 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:36:33 crc kubenswrapper[4706]: I1125 11:36:33.949167 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:36:33 crc kubenswrapper[4706]: I1125 11:36:33.949185 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:36:33 crc kubenswrapper[4706]: I1125 11:36:33.952588 4706 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 25 11:36:33 crc kubenswrapper[4706]: I1125 11:36:33.953759 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:36:33 crc kubenswrapper[4706]: I1125 11:36:33.953829 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:36:33 crc kubenswrapper[4706]: I1125 11:36:33.953840 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:36:34 crc kubenswrapper[4706]: I1125 11:36:34.866267 4706 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.13:6443: connect: connection refused Nov 25 11:36:34 crc kubenswrapper[4706]: E1125 11:36:34.875105 4706 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.13:6443: connect: connection refused" interval="3.2s" Nov 25 11:36:34 crc kubenswrapper[4706]: I1125 11:36:34.948506 4706 
generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="29e8fd847ab683471a692fda5b7c7d6105db11c5aecf09bb23080c58cd97c06a" exitCode=0 Nov 25 11:36:34 crc kubenswrapper[4706]: I1125 11:36:34.948587 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"29e8fd847ab683471a692fda5b7c7d6105db11c5aecf09bb23080c58cd97c06a"} Nov 25 11:36:34 crc kubenswrapper[4706]: I1125 11:36:34.948750 4706 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 25 11:36:34 crc kubenswrapper[4706]: I1125 11:36:34.950047 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:36:34 crc kubenswrapper[4706]: I1125 11:36:34.950080 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:36:34 crc kubenswrapper[4706]: I1125 11:36:34.950091 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:36:34 crc kubenswrapper[4706]: I1125 11:36:34.953706 4706 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 25 11:36:34 crc kubenswrapper[4706]: I1125 11:36:34.953681 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"44f97c784f83c5f2d1cfce3f39f43a832fa8da73add257ae9c39f001bbfe3999"} Nov 25 11:36:34 crc kubenswrapper[4706]: I1125 11:36:34.955352 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:36:34 crc kubenswrapper[4706]: I1125 11:36:34.955387 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" 
Nov 25 11:36:34 crc kubenswrapper[4706]: I1125 11:36:34.955399 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:36:34 crc kubenswrapper[4706]: I1125 11:36:34.960107 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"78068d04cf52a463ca3595227c44918d360266c71afc97c1792e48b004bebe42"} Nov 25 11:36:34 crc kubenswrapper[4706]: I1125 11:36:34.960182 4706 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 25 11:36:34 crc kubenswrapper[4706]: I1125 11:36:34.960195 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"7224a1c52df964a792e6197a4f97313b139ffbd6d65820d93e36561e817ddc20"} Nov 25 11:36:34 crc kubenswrapper[4706]: I1125 11:36:34.960216 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"b50a8135a692a512f05f3a902977e8b7a505d8346fb6e96c26ffc58d075e902c"} Nov 25 11:36:34 crc kubenswrapper[4706]: I1125 11:36:34.961345 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:36:34 crc kubenswrapper[4706]: I1125 11:36:34.961384 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:36:34 crc kubenswrapper[4706]: I1125 11:36:34.961397 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:36:34 crc kubenswrapper[4706]: I1125 11:36:34.964681 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" 
event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"fe85a38abd8df52ad0fbd3dd6b048b8c42390b6064d3601996727dadb3fcbe69"} Nov 25 11:36:34 crc kubenswrapper[4706]: I1125 11:36:34.964727 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"24c326f147def477e6dd794576cbdc9aed69f799cc18984f475496748b05eb32"} Nov 25 11:36:34 crc kubenswrapper[4706]: I1125 11:36:34.964736 4706 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 25 11:36:34 crc kubenswrapper[4706]: I1125 11:36:34.964738 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"c65af8b438f57256d8c22cb34f68922d628338e384ca97d694b0dbf2d41a5e27"} Nov 25 11:36:34 crc kubenswrapper[4706]: I1125 11:36:34.964835 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"86001c3abc077d36ed1fa0c37bb6163896fb9cde28b58affd2f67fb8a024165b"} Nov 25 11:36:34 crc kubenswrapper[4706]: I1125 11:36:34.965520 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:36:34 crc kubenswrapper[4706]: I1125 11:36:34.965570 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:36:34 crc kubenswrapper[4706]: I1125 11:36:34.965580 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:36:35 crc kubenswrapper[4706]: W1125 11:36:35.068852 4706 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get 
"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.13:6443: connect: connection refused Nov 25 11:36:35 crc kubenswrapper[4706]: E1125 11:36:35.068974 4706 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.13:6443: connect: connection refused" logger="UnhandledError" Nov 25 11:36:35 crc kubenswrapper[4706]: I1125 11:36:35.098073 4706 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 25 11:36:35 crc kubenswrapper[4706]: I1125 11:36:35.099613 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:36:35 crc kubenswrapper[4706]: I1125 11:36:35.099684 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:36:35 crc kubenswrapper[4706]: I1125 11:36:35.099714 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:36:35 crc kubenswrapper[4706]: I1125 11:36:35.099751 4706 kubelet_node_status.go:76] "Attempting to register node" node="crc" Nov 25 11:36:35 crc kubenswrapper[4706]: E1125 11:36:35.100748 4706 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.13:6443: connect: connection refused" node="crc" Nov 25 11:36:35 crc kubenswrapper[4706]: W1125 11:36:35.132073 4706 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.13:6443: connect: connection refused Nov 25 11:36:35 crc 
kubenswrapper[4706]: E1125 11:36:35.132169 4706 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.13:6443: connect: connection refused" logger="UnhandledError" Nov 25 11:36:35 crc kubenswrapper[4706]: I1125 11:36:35.970116 4706 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="a4b1f85fd0291c239854093c0b26f0802291cbe4c4bab384fc69a5d21165343a" exitCode=0 Nov 25 11:36:35 crc kubenswrapper[4706]: I1125 11:36:35.970239 4706 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 25 11:36:35 crc kubenswrapper[4706]: I1125 11:36:35.970221 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"a4b1f85fd0291c239854093c0b26f0802291cbe4c4bab384fc69a5d21165343a"} Nov 25 11:36:35 crc kubenswrapper[4706]: I1125 11:36:35.972250 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:36:35 crc kubenswrapper[4706]: I1125 11:36:35.972387 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:36:35 crc kubenswrapper[4706]: I1125 11:36:35.972478 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:36:35 crc kubenswrapper[4706]: I1125 11:36:35.975620 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"333951d9a31cf3e7c1e98d27f636e2425f87cd082a8a5acae66533a76f5ad206"} Nov 25 11:36:35 crc kubenswrapper[4706]: I1125 11:36:35.975687 4706 
kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 25 11:36:35 crc kubenswrapper[4706]: I1125 11:36:35.975817 4706 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Nov 25 11:36:35 crc kubenswrapper[4706]: I1125 11:36:35.975864 4706 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 25 11:36:35 crc kubenswrapper[4706]: I1125 11:36:35.976009 4706 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 25 11:36:35 crc kubenswrapper[4706]: I1125 11:36:35.976785 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:36:35 crc kubenswrapper[4706]: I1125 11:36:35.976887 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:36:35 crc kubenswrapper[4706]: I1125 11:36:35.976959 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:36:35 crc kubenswrapper[4706]: I1125 11:36:35.977767 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:36:35 crc kubenswrapper[4706]: I1125 11:36:35.977869 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:36:35 crc kubenswrapper[4706]: I1125 11:36:35.977954 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:36:35 crc kubenswrapper[4706]: I1125 11:36:35.977791 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:36:35 crc kubenswrapper[4706]: I1125 11:36:35.978103 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 
25 11:36:35 crc kubenswrapper[4706]: I1125 11:36:35.978121 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:36:36 crc kubenswrapper[4706]: I1125 11:36:36.984889 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"634b7b0df29329562f6ead9641186eee129945efc5a2d784ff6474d213b2baea"} Nov 25 11:36:36 crc kubenswrapper[4706]: I1125 11:36:36.984981 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"cfaf9f13d49eb5c52817b0d082263791cc1dca82a23282452f1393dd693ca27a"} Nov 25 11:36:36 crc kubenswrapper[4706]: I1125 11:36:36.985006 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"1e336808761e1c6c5eaa04fd06cbb4d0c0384a2cbd3dfd4c1b3a877e7e0f0c82"} Nov 25 11:36:36 crc kubenswrapper[4706]: I1125 11:36:36.985024 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"4b63b9c87fed8e56acef62af3c5b75cf637a058ada9dd8ef5afc317e99e12162"} Nov 25 11:36:36 crc kubenswrapper[4706]: I1125 11:36:36.984981 4706 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 25 11:36:36 crc kubenswrapper[4706]: I1125 11:36:36.985074 4706 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 25 11:36:36 crc kubenswrapper[4706]: I1125 11:36:36.985074 4706 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 25 11:36:36 crc kubenswrapper[4706]: I1125 11:36:36.986264 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 
25 11:36:36 crc kubenswrapper[4706]: I1125 11:36:36.986324 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:36:36 crc kubenswrapper[4706]: I1125 11:36:36.986338 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:36:36 crc kubenswrapper[4706]: I1125 11:36:36.987076 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:36:36 crc kubenswrapper[4706]: I1125 11:36:36.987196 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:36:36 crc kubenswrapper[4706]: I1125 11:36:36.987317 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:36:37 crc kubenswrapper[4706]: I1125 11:36:37.027983 4706 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 25 11:36:37 crc kubenswrapper[4706]: I1125 11:36:37.990549 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"3b3642576d5ecf314b809b90f8a76244e5ea54178f78729eb6521b09b7daa9c6"} Nov 25 11:36:37 crc kubenswrapper[4706]: I1125 11:36:37.990591 4706 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 25 11:36:37 crc kubenswrapper[4706]: I1125 11:36:37.990614 4706 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 25 11:36:37 crc kubenswrapper[4706]: I1125 11:36:37.990623 4706 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 25 11:36:37 crc kubenswrapper[4706]: I1125 11:36:37.991405 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:36:37 crc 
kubenswrapper[4706]: I1125 11:36:37.991432 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:36:37 crc kubenswrapper[4706]: I1125 11:36:37.991453 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:36:37 crc kubenswrapper[4706]: I1125 11:36:37.991405 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:36:37 crc kubenswrapper[4706]: I1125 11:36:37.991540 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:36:37 crc kubenswrapper[4706]: I1125 11:36:37.991558 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:36:38 crc kubenswrapper[4706]: I1125 11:36:38.301225 4706 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 25 11:36:38 crc kubenswrapper[4706]: I1125 11:36:38.302964 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:36:38 crc kubenswrapper[4706]: I1125 11:36:38.303014 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:36:38 crc kubenswrapper[4706]: I1125 11:36:38.303030 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:36:38 crc kubenswrapper[4706]: I1125 11:36:38.303063 4706 kubelet_node_status.go:76] "Attempting to register node" node="crc" Nov 25 11:36:38 crc kubenswrapper[4706]: I1125 11:36:38.610369 4706 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 25 11:36:38 crc kubenswrapper[4706]: I1125 11:36:38.992824 4706 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 25 
11:36:38 crc kubenswrapper[4706]: I1125 11:36:38.992880 4706 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 25 11:36:38 crc kubenswrapper[4706]: I1125 11:36:38.992885 4706 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 25 11:36:38 crc kubenswrapper[4706]: I1125 11:36:38.993850 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:36:38 crc kubenswrapper[4706]: I1125 11:36:38.993896 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:36:38 crc kubenswrapper[4706]: I1125 11:36:38.993907 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:36:38 crc kubenswrapper[4706]: I1125 11:36:38.993850 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:36:38 crc kubenswrapper[4706]: I1125 11:36:38.993984 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:36:38 crc kubenswrapper[4706]: I1125 11:36:38.993994 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:36:39 crc kubenswrapper[4706]: I1125 11:36:39.116260 4706 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 25 11:36:39 crc kubenswrapper[4706]: I1125 11:36:39.116511 4706 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 25 11:36:39 crc kubenswrapper[4706]: I1125 11:36:39.118045 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:36:39 crc kubenswrapper[4706]: I1125 11:36:39.118112 4706 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:36:39 crc kubenswrapper[4706]: I1125 11:36:39.118139 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:36:39 crc kubenswrapper[4706]: I1125 11:36:39.124970 4706 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 25 11:36:39 crc kubenswrapper[4706]: I1125 11:36:39.353439 4706 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-etcd/etcd-crc" Nov 25 11:36:39 crc kubenswrapper[4706]: I1125 11:36:39.906257 4706 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 25 11:36:39 crc kubenswrapper[4706]: I1125 11:36:39.996405 4706 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 25 11:36:39 crc kubenswrapper[4706]: I1125 11:36:39.996440 4706 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 25 11:36:39 crc kubenswrapper[4706]: I1125 11:36:39.996483 4706 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 25 11:36:39 crc kubenswrapper[4706]: I1125 11:36:39.998439 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:36:39 crc kubenswrapper[4706]: I1125 11:36:39.998477 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:36:39 crc kubenswrapper[4706]: I1125 11:36:39.998489 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:36:39 crc kubenswrapper[4706]: I1125 11:36:39.998804 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:36:39 crc 
kubenswrapper[4706]: I1125 11:36:39.998875 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:36:39 crc kubenswrapper[4706]: I1125 11:36:39.998894 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:36:39 crc kubenswrapper[4706]: I1125 11:36:39.999490 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:36:39 crc kubenswrapper[4706]: I1125 11:36:39.999722 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:36:39 crc kubenswrapper[4706]: I1125 11:36:39.999778 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:36:40 crc kubenswrapper[4706]: I1125 11:36:40.015489 4706 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 25 11:36:40 crc kubenswrapper[4706]: I1125 11:36:40.254759 4706 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 25 11:36:40 crc kubenswrapper[4706]: I1125 11:36:40.998609 4706 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 25 11:36:41 crc kubenswrapper[4706]: I1125 11:36:41.000697 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:36:41 crc kubenswrapper[4706]: I1125 11:36:41.000745 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:36:41 crc kubenswrapper[4706]: I1125 11:36:41.000765 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:36:41 crc kubenswrapper[4706]: I1125 11:36:41.446890 4706 
kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 25 11:36:41 crc kubenswrapper[4706]: E1125 11:36:41.992966 4706 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Nov 25 11:36:42 crc kubenswrapper[4706]: I1125 11:36:42.000725 4706 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 25 11:36:42 crc kubenswrapper[4706]: I1125 11:36:42.001707 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:36:42 crc kubenswrapper[4706]: I1125 11:36:42.001738 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:36:42 crc kubenswrapper[4706]: I1125 11:36:42.001751 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:36:42 crc kubenswrapper[4706]: I1125 11:36:42.507061 4706 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-etcd/etcd-crc" Nov 25 11:36:42 crc kubenswrapper[4706]: I1125 11:36:42.507468 4706 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 25 11:36:42 crc kubenswrapper[4706]: I1125 11:36:42.509144 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:36:42 crc kubenswrapper[4706]: I1125 11:36:42.509205 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:36:42 crc kubenswrapper[4706]: I1125 11:36:42.509228 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:36:43 crc kubenswrapper[4706]: I1125 11:36:43.003339 4706 kubelet_node_status.go:401] "Setting node annotation to enable 
volume controller attach/detach" Nov 25 11:36:43 crc kubenswrapper[4706]: I1125 11:36:43.005372 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:36:43 crc kubenswrapper[4706]: I1125 11:36:43.005437 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:36:43 crc kubenswrapper[4706]: I1125 11:36:43.005461 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:36:43 crc kubenswrapper[4706]: I1125 11:36:43.007681 4706 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 25 11:36:44 crc kubenswrapper[4706]: I1125 11:36:44.005002 4706 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 25 11:36:44 crc kubenswrapper[4706]: I1125 11:36:44.006448 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:36:44 crc kubenswrapper[4706]: I1125 11:36:44.006494 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:36:44 crc kubenswrapper[4706]: I1125 11:36:44.006504 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:36:44 crc kubenswrapper[4706]: I1125 11:36:44.448100 4706 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Nov 25 11:36:44 crc kubenswrapper[4706]: I1125 11:36:44.448203 4706 prober.go:107] "Probe failed" probeType="Startup" 
pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Nov 25 11:36:45 crc kubenswrapper[4706]: W1125 11:36:45.427072 4706 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": net/http: TLS handshake timeout Nov 25 11:36:45 crc kubenswrapper[4706]: I1125 11:36:45.427198 4706 trace.go:236] Trace[1241560675]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (25-Nov-2025 11:36:35.425) (total time: 10001ms): Nov 25 11:36:45 crc kubenswrapper[4706]: Trace[1241560675]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (11:36:45.427) Nov 25 11:36:45 crc kubenswrapper[4706]: Trace[1241560675]: [10.001982079s] [10.001982079s] END Nov 25 11:36:45 crc kubenswrapper[4706]: E1125 11:36:45.427226 4706 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" Nov 25 11:36:45 crc kubenswrapper[4706]: W1125 11:36:45.864374 4706 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": net/http: TLS handshake timeout Nov 25 11:36:45 crc kubenswrapper[4706]: I1125 11:36:45.864488 4706 trace.go:236] Trace[1730338953]: "Reflector 
ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (25-Nov-2025 11:36:35.862) (total time: 10001ms): Nov 25 11:36:45 crc kubenswrapper[4706]: Trace[1730338953]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (11:36:45.864) Nov 25 11:36:45 crc kubenswrapper[4706]: Trace[1730338953]: [10.001534709s] [10.001534709s] END Nov 25 11:36:45 crc kubenswrapper[4706]: E1125 11:36:45.864517 4706 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" Nov 25 11:36:45 crc kubenswrapper[4706]: I1125 11:36:45.866455 4706 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": net/http: TLS handshake timeout Nov 25 11:36:46 crc kubenswrapper[4706]: I1125 11:36:46.261781 4706 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Nov 25 11:36:46 crc kubenswrapper[4706]: I1125 11:36:46.261858 4706 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Nov 25 11:36:46 crc kubenswrapper[4706]: I1125 11:36:46.265281 4706 patch_prober.go:28] interesting 
pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Nov 25 11:36:46 crc kubenswrapper[4706]: I1125 11:36:46.265358 4706 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Nov 25 11:36:48 crc kubenswrapper[4706]: I1125 11:36:48.619949 4706 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 25 11:36:48 crc kubenswrapper[4706]: I1125 11:36:48.620264 4706 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 25 11:36:48 crc kubenswrapper[4706]: I1125 11:36:48.622260 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:36:48 crc kubenswrapper[4706]: I1125 11:36:48.622343 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:36:48 crc kubenswrapper[4706]: I1125 11:36:48.622362 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:36:48 crc kubenswrapper[4706]: I1125 11:36:48.627361 4706 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 25 11:36:49 crc kubenswrapper[4706]: I1125 11:36:49.021876 4706 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 25 11:36:49 crc kubenswrapper[4706]: I1125 11:36:49.022927 4706 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:36:49 crc kubenswrapper[4706]: I1125 11:36:49.022989 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:36:49 crc kubenswrapper[4706]: I1125 11:36:49.023006 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:36:50 crc kubenswrapper[4706]: I1125 11:36:50.440050 4706 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Nov 25 11:36:51 crc kubenswrapper[4706]: E1125 11:36:51.258156 4706 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": context deadline exceeded" interval="6.4s" Nov 25 11:36:51 crc kubenswrapper[4706]: I1125 11:36:51.260933 4706 trace.go:236] Trace[630643735]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (25-Nov-2025 11:36:40.339) (total time: 10921ms): Nov 25 11:36:51 crc kubenswrapper[4706]: Trace[630643735]: ---"Objects listed" error: 10921ms (11:36:51.260) Nov 25 11:36:51 crc kubenswrapper[4706]: Trace[630643735]: [10.921433235s] [10.921433235s] END Nov 25 11:36:51 crc kubenswrapper[4706]: I1125 11:36:51.260991 4706 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Nov 25 11:36:51 crc kubenswrapper[4706]: I1125 11:36:51.261031 4706 trace.go:236] Trace[2147243886]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (25-Nov-2025 11:36:38.954) (total time: 12306ms): Nov 25 11:36:51 crc kubenswrapper[4706]: Trace[2147243886]: ---"Objects listed" error: 12306ms (11:36:51.260) Nov 25 11:36:51 crc kubenswrapper[4706]: Trace[2147243886]: [12.306545998s] [12.306545998s] END Nov 25 11:36:51 crc kubenswrapper[4706]: I1125 11:36:51.261065 4706 reflector.go:368] 
Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Nov 25 11:36:51 crc kubenswrapper[4706]: I1125 11:36:51.262028 4706 reconstruct.go:205] "DevicePaths of reconstructed volumes updated" Nov 25 11:36:51 crc kubenswrapper[4706]: E1125 11:36:51.263347 4706 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes \"crc\" is forbidden: autoscaling.openshift.io/ManagedNode infra config cache not synchronized" node="crc" Nov 25 11:36:51 crc kubenswrapper[4706]: I1125 11:36:51.310658 4706 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": EOF" start-of-body= Nov 25 11:36:51 crc kubenswrapper[4706]: I1125 11:36:51.310733 4706 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": EOF" Nov 25 11:36:51 crc kubenswrapper[4706]: I1125 11:36:51.311059 4706 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" start-of-body= Nov 25 11:36:51 crc kubenswrapper[4706]: I1125 11:36:51.311093 4706 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" Nov 25 11:36:51 crc kubenswrapper[4706]: I1125 11:36:51.311226 4706 patch_prober.go:28] interesting pod/kube-apiserver-crc 
container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" start-of-body= Nov 25 11:36:51 crc kubenswrapper[4706]: I1125 11:36:51.311255 4706 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" Nov 25 11:36:51 crc kubenswrapper[4706]: I1125 11:36:51.451112 4706 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 25 11:36:51 crc kubenswrapper[4706]: I1125 11:36:51.455562 4706 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 25 11:36:51 crc kubenswrapper[4706]: I1125 11:36:51.857724 4706 apiserver.go:52] "Watching apiserver" Nov 25 11:36:51 crc kubenswrapper[4706]: I1125 11:36:51.862277 4706 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Nov 25 11:36:51 crc kubenswrapper[4706]: I1125 11:36:51.862638 4706 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-network-diagnostics/network-check-target-xd92c","openshift-network-node-identity/network-node-identity-vrzqb","openshift-network-operator/iptables-alerter-4ln5h","openshift-network-operator/network-operator-58b4c7f79c-55gtf","openshift-kube-controller-manager/kube-controller-manager-crc","openshift-network-console/networking-console-plugin-85b44fc459-gdk6g","openshift-network-diagnostics/network-check-source-55646444c4-trplf"] Nov 25 11:36:51 crc kubenswrapper[4706]: I1125 11:36:51.863076 4706 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Nov 25 11:36:51 crc kubenswrapper[4706]: I1125 11:36:51.863228 4706 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 11:36:51 crc kubenswrapper[4706]: I1125 11:36:51.863350 4706 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 11:36:51 crc kubenswrapper[4706]: E1125 11:36:51.863629 4706 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 25 11:36:51 crc kubenswrapper[4706]: E1125 11:36:51.863646 4706 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 25 11:36:51 crc kubenswrapper[4706]: I1125 11:36:51.864312 4706 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 11:36:51 crc kubenswrapper[4706]: E1125 11:36:51.864377 4706 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 25 11:36:51 crc kubenswrapper[4706]: I1125 11:36:51.864953 4706 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Nov 25 11:36:51 crc kubenswrapper[4706]: I1125 11:36:51.866535 4706 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Nov 25 11:36:51 crc kubenswrapper[4706]: I1125 11:36:51.867117 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Nov 25 11:36:51 crc kubenswrapper[4706]: I1125 11:36:51.868016 4706 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Nov 25 11:36:51 crc kubenswrapper[4706]: I1125 11:36:51.868190 4706 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Nov 25 11:36:51 crc kubenswrapper[4706]: I1125 11:36:51.868210 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Nov 25 11:36:51 crc kubenswrapper[4706]: I1125 11:36:51.868779 4706 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Nov 25 11:36:51 crc kubenswrapper[4706]: I1125 11:36:51.871775 4706 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Nov 25 11:36:51 crc kubenswrapper[4706]: I1125 11:36:51.872063 4706 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Nov 25 11:36:51 crc kubenswrapper[4706]: I1125 11:36:51.872185 4706 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-network-node-identity"/"ovnkube-identity-cm" Nov 25 11:36:51 crc kubenswrapper[4706]: I1125 11:36:51.872323 4706 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Nov 25 11:36:51 crc kubenswrapper[4706]: I1125 11:36:51.872356 4706 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Nov 25 11:36:51 crc kubenswrapper[4706]: I1125 11:36:51.896521 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 25 11:36:51 crc kubenswrapper[4706]: I1125 11:36:51.911034 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:51Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 25 11:36:51 crc kubenswrapper[4706]: I1125 11:36:51.919699 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:51Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 25 11:36:51 crc kubenswrapper[4706]: I1125 11:36:51.930746 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 25 11:36:51 crc kubenswrapper[4706]: I1125 11:36:51.940859 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:51Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 25 11:36:51 crc kubenswrapper[4706]: I1125 11:36:51.954111 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 25 11:36:51 crc kubenswrapper[4706]: I1125 11:36:51.965096 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:51Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 25 11:36:51 crc kubenswrapper[4706]: I1125 11:36:51.967362 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 25 11:36:51 crc kubenswrapper[4706]: I1125 11:36:51.967431 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Nov 25 11:36:51 crc kubenswrapper[4706]: I1125 11:36:51.967456 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Nov 25 11:36:51 crc kubenswrapper[4706]: I1125 11:36:51.967482 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Nov 25 11:36:51 crc kubenswrapper[4706]: I1125 11:36:51.967736 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 11:36:51 crc kubenswrapper[4706]: I1125 11:36:51.967769 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Nov 25 11:36:51 crc kubenswrapper[4706]: I1125 11:36:51.967790 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" 
(UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Nov 25 11:36:51 crc kubenswrapper[4706]: I1125 11:36:51.967813 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 25 11:36:51 crc kubenswrapper[4706]: I1125 11:36:51.967850 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" (OuterVolumeSpecName: "machine-approver-tls") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "machine-approver-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 11:36:51 crc kubenswrapper[4706]: I1125 11:36:51.967858 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-router-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 11:36:51 crc kubenswrapper[4706]: I1125 11:36:51.967832 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 25 11:36:51 crc kubenswrapper[4706]: I1125 11:36:51.967938 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Nov 25 11:36:51 crc kubenswrapper[4706]: E1125 11:36:51.967970 4706 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 11:36:52.467935793 +0000 UTC m=+21.382493174 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 11:36:51 crc kubenswrapper[4706]: I1125 11:36:51.968011 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "tmpfs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 11:36:51 crc kubenswrapper[4706]: I1125 11:36:51.968017 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Nov 25 11:36:51 crc kubenswrapper[4706]: I1125 11:36:51.968069 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" (OuterVolumeSpecName: "kube-api-access-bf2bz") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "kube-api-access-bf2bz". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 11:36:51 crc kubenswrapper[4706]: I1125 11:36:51.968105 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Nov 25 11:36:51 crc kubenswrapper[4706]: I1125 11:36:51.968132 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Nov 25 11:36:51 crc kubenswrapper[4706]: I1125 11:36:51.968151 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Nov 25 11:36:51 crc 
kubenswrapper[4706]: I1125 11:36:51.968170 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Nov 25 11:36:51 crc kubenswrapper[4706]: I1125 11:36:51.968183 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 11:36:51 crc kubenswrapper[4706]: I1125 11:36:51.968188 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Nov 25 11:36:51 crc kubenswrapper[4706]: I1125 11:36:51.968228 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Nov 25 11:36:51 crc kubenswrapper[4706]: I1125 11:36:51.968255 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 25 11:36:51 crc kubenswrapper[4706]: I1125 11:36:51.968278 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 25 11:36:51 crc kubenswrapper[4706]: I1125 11:36:51.968283 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" (OuterVolumeSpecName: "kube-api-access-v47cf") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "kube-api-access-v47cf". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 11:36:51 crc kubenswrapper[4706]: I1125 11:36:51.968319 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Nov 25 11:36:51 crc kubenswrapper[4706]: I1125 11:36:51.968343 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" (OuterVolumeSpecName: "machine-api-operator-tls") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "machine-api-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 11:36:51 crc kubenswrapper[4706]: I1125 11:36:51.968418 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 11:36:51 crc kubenswrapper[4706]: I1125 11:36:51.968564 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" (OuterVolumeSpecName: "ovn-control-plane-metrics-cert") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovn-control-plane-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 11:36:51 crc kubenswrapper[4706]: I1125 11:36:51.968579 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 11:36:51 crc kubenswrapper[4706]: I1125 11:36:51.968633 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Nov 25 11:36:51 crc kubenswrapper[4706]: I1125 11:36:51.968663 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Nov 25 11:36:51 crc kubenswrapper[4706]: I1125 11:36:51.968685 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: 
\"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Nov 25 11:36:51 crc kubenswrapper[4706]: I1125 11:36:51.968706 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Nov 25 11:36:51 crc kubenswrapper[4706]: I1125 11:36:51.968728 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Nov 25 11:36:51 crc kubenswrapper[4706]: I1125 11:36:51.968807 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 11:36:51 crc kubenswrapper[4706]: I1125 11:36:51.968839 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-error". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 11:36:51 crc kubenswrapper[4706]: I1125 11:36:51.969027 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" (OuterVolumeSpecName: "utilities") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 11:36:51 crc kubenswrapper[4706]: I1125 11:36:51.969064 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 11:36:51 crc kubenswrapper[4706]: I1125 11:36:51.969125 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 11:36:51 crc kubenswrapper[4706]: I1125 11:36:51.969144 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" (OuterVolumeSpecName: "kube-api-access-pj782") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "kube-api-access-pj782". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 11:36:51 crc kubenswrapper[4706]: I1125 11:36:51.969219 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" (OuterVolumeSpecName: "config") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 11:36:51 crc kubenswrapper[4706]: I1125 11:36:51.969285 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 11:36:51 crc kubenswrapper[4706]: I1125 11:36:51.969346 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 25 11:36:51 crc kubenswrapper[4706]: I1125 11:36:51.969374 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Nov 25 11:36:51 crc kubenswrapper[4706]: I1125 11:36:51.969399 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") pod 
\"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Nov 25 11:36:51 crc kubenswrapper[4706]: I1125 11:36:51.969424 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Nov 25 11:36:51 crc kubenswrapper[4706]: I1125 11:36:51.969474 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 25 11:36:51 crc kubenswrapper[4706]: I1125 11:36:51.969499 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Nov 25 11:36:51 crc kubenswrapper[4706]: I1125 11:36:51.969522 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Nov 25 11:36:51 crc kubenswrapper[4706]: I1125 11:36:51.969548 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Nov 25 11:36:51 crc kubenswrapper[4706]: I1125 11:36:51.969573 
4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 11:36:51 crc kubenswrapper[4706]: I1125 11:36:51.969597 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Nov 25 11:36:51 crc kubenswrapper[4706]: I1125 11:36:51.969624 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Nov 25 11:36:51 crc kubenswrapper[4706]: I1125 11:36:51.969648 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Nov 25 11:36:51 crc kubenswrapper[4706]: I1125 11:36:51.969672 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Nov 25 11:36:51 crc kubenswrapper[4706]: I1125 11:36:51.969698 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") pod 
\"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Nov 25 11:36:51 crc kubenswrapper[4706]: I1125 11:36:51.969722 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Nov 25 11:36:51 crc kubenswrapper[4706]: I1125 11:36:51.969774 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Nov 25 11:36:51 crc kubenswrapper[4706]: I1125 11:36:51.969798 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Nov 25 11:36:51 crc kubenswrapper[4706]: I1125 11:36:51.969822 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 25 11:36:51 crc kubenswrapper[4706]: I1125 11:36:51.969845 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Nov 25 11:36:51 crc kubenswrapper[4706]: I1125 11:36:51.969867 
4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Nov 25 11:36:51 crc kubenswrapper[4706]: I1125 11:36:51.969397 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" (OuterVolumeSpecName: "kube-api-access-jkwtn") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "kube-api-access-jkwtn". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 11:36:51 crc kubenswrapper[4706]: I1125 11:36:51.969888 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Nov 25 11:36:51 crc kubenswrapper[4706]: I1125 11:36:51.969940 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Nov 25 11:36:51 crc kubenswrapper[4706]: I1125 11:36:51.969974 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 11:36:51 crc kubenswrapper[4706]: I1125 11:36:51.969999 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: 
\"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Nov 25 11:36:51 crc kubenswrapper[4706]: I1125 11:36:51.970926 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Nov 25 11:36:51 crc kubenswrapper[4706]: I1125 11:36:51.970946 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Nov 25 11:36:51 crc kubenswrapper[4706]: I1125 11:36:51.969623 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 11:36:51 crc kubenswrapper[4706]: I1125 11:36:51.969791 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" (OuterVolumeSpecName: "kube-api-access-rnphk") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "kube-api-access-rnphk". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 11:36:51 crc kubenswrapper[4706]: I1125 11:36:51.969825 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 11:36:51 crc kubenswrapper[4706]: I1125 11:36:51.969889 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" (OuterVolumeSpecName: "kube-api-access-mnrrd") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "kube-api-access-mnrrd". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 11:36:51 crc kubenswrapper[4706]: I1125 11:36:51.969999 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 11:36:51 crc kubenswrapper[4706]: I1125 11:36:51.970095 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" (OuterVolumeSpecName: "available-featuregates") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "available-featuregates". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 11:36:51 crc kubenswrapper[4706]: I1125 11:36:51.970193 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" (OuterVolumeSpecName: "kube-api-access-htfz6") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "kube-api-access-htfz6". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 11:36:51 crc kubenswrapper[4706]: I1125 11:36:51.970379 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 11:36:51 crc kubenswrapper[4706]: I1125 11:36:51.970378 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 11:36:51 crc kubenswrapper[4706]: I1125 11:36:51.970404 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "env-overrides". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 11:36:51 crc kubenswrapper[4706]: I1125 11:36:51.970577 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 11:36:51 crc kubenswrapper[4706]: I1125 11:36:51.970613 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 11:36:51 crc kubenswrapper[4706]: I1125 11:36:51.970869 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 11:36:51 crc kubenswrapper[4706]: I1125 11:36:51.970979 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" (OuterVolumeSpecName: "kube-api-access-zgdk5") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "kube-api-access-zgdk5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 11:36:51 crc kubenswrapper[4706]: I1125 11:36:51.971243 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 11:36:51 crc kubenswrapper[4706]: I1125 11:36:51.971108 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" (OuterVolumeSpecName: "kube-api-access-2w9zh") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "kube-api-access-2w9zh". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 11:36:51 crc kubenswrapper[4706]: I1125 11:36:51.971273 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" (OuterVolumeSpecName: "kube-api-access-qs4fp") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "kube-api-access-qs4fp". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 11:36:51 crc kubenswrapper[4706]: I1125 11:36:51.971134 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-cliconfig". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 11:36:51 crc kubenswrapper[4706]: I1125 11:36:51.971281 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 11:36:51 crc kubenswrapper[4706]: I1125 11:36:51.971484 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" (OuterVolumeSpecName: "kube-api-access-6ccd8") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "kube-api-access-6ccd8". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 11:36:51 crc kubenswrapper[4706]: I1125 11:36:51.971647 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 11:36:51 crc kubenswrapper[4706]: I1125 11:36:51.971651 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-sysctl-allowlist". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 11:36:51 crc kubenswrapper[4706]: I1125 11:36:51.971012 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Nov 25 11:36:51 crc kubenswrapper[4706]: I1125 11:36:51.971966 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Nov 25 11:36:51 crc kubenswrapper[4706]: I1125 11:36:51.972182 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Nov 25 11:36:51 crc kubenswrapper[4706]: I1125 11:36:51.972207 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Nov 25 11:36:51 crc kubenswrapper[4706]: I1125 11:36:51.972224 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 11:36:51 crc kubenswrapper[4706]: I1125 11:36:51.972239 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Nov 25 11:36:51 crc kubenswrapper[4706]: I1125 11:36:51.972260 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Nov 25 11:36:51 crc kubenswrapper[4706]: I1125 11:36:51.972279 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Nov 25 11:36:51 crc kubenswrapper[4706]: I1125 11:36:51.972308 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Nov 25 11:36:51 crc kubenswrapper[4706]: I1125 11:36:51.972324 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Nov 25 11:36:51 crc kubenswrapper[4706]: I1125 11:36:51.972342 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Nov 25 
11:36:51 crc kubenswrapper[4706]: I1125 11:36:51.972356 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Nov 25 11:36:51 crc kubenswrapper[4706]: I1125 11:36:51.972372 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Nov 25 11:36:51 crc kubenswrapper[4706]: I1125 11:36:51.972387 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Nov 25 11:36:51 crc kubenswrapper[4706]: I1125 11:36:51.972405 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Nov 25 11:36:51 crc kubenswrapper[4706]: I1125 11:36:51.972423 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Nov 25 11:36:51 crc kubenswrapper[4706]: I1125 11:36:51.972437 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") pod 
\"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Nov 25 11:36:51 crc kubenswrapper[4706]: I1125 11:36:51.972453 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Nov 25 11:36:51 crc kubenswrapper[4706]: I1125 11:36:51.972574 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 11:36:51 crc kubenswrapper[4706]: I1125 11:36:51.972591 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Nov 25 11:36:51 crc kubenswrapper[4706]: I1125 11:36:51.972608 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Nov 25 11:36:51 crc kubenswrapper[4706]: I1125 11:36:51.972624 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Nov 25 11:36:51 crc kubenswrapper[4706]: I1125 11:36:51.972640 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started 
for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Nov 25 11:36:51 crc kubenswrapper[4706]: I1125 11:36:51.972657 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Nov 25 11:36:51 crc kubenswrapper[4706]: I1125 11:36:51.973111 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" (OuterVolumeSpecName: "kube-api-access-w9rds") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "kube-api-access-w9rds". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 11:36:51 crc kubenswrapper[4706]: I1125 11:36:51.973136 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" (OuterVolumeSpecName: "control-plane-machine-set-operator-tls") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "control-plane-machine-set-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 11:36:51 crc kubenswrapper[4706]: I1125 11:36:51.973454 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-login". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 11:36:51 crc kubenswrapper[4706]: I1125 11:36:51.973495 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 11:36:51 crc kubenswrapper[4706]: I1125 11:36:51.973524 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Nov 25 11:36:51 crc kubenswrapper[4706]: I1125 11:36:51.973561 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Nov 25 11:36:51 crc kubenswrapper[4706]: I1125 11:36:51.973596 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Nov 25 11:36:51 crc kubenswrapper[4706]: I1125 11:36:51.973629 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Nov 25 11:36:51 crc kubenswrapper[4706]: 
I1125 11:36:51.973655 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Nov 25 11:36:51 crc kubenswrapper[4706]: I1125 11:36:51.973682 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Nov 25 11:36:51 crc kubenswrapper[4706]: I1125 11:36:51.973710 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Nov 25 11:36:51 crc kubenswrapper[4706]: I1125 11:36:51.973738 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Nov 25 11:36:51 crc kubenswrapper[4706]: I1125 11:36:51.973762 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Nov 25 11:36:51 crc kubenswrapper[4706]: I1125 11:36:51.973787 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") pod 
\"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Nov 25 11:36:51 crc kubenswrapper[4706]: I1125 11:36:51.973812 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Nov 25 11:36:51 crc kubenswrapper[4706]: I1125 11:36:51.973836 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Nov 25 11:36:51 crc kubenswrapper[4706]: I1125 11:36:51.973867 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Nov 25 11:36:51 crc kubenswrapper[4706]: I1125 11:36:51.973938 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Nov 25 11:36:51 crc kubenswrapper[4706]: I1125 11:36:51.973957 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" (OuterVolumeSpecName: "config") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 11:36:51 crc kubenswrapper[4706]: I1125 11:36:51.973969 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Nov 25 11:36:51 crc kubenswrapper[4706]: I1125 11:36:51.974071 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Nov 25 11:36:51 crc kubenswrapper[4706]: I1125 11:36:51.974122 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Nov 25 11:36:51 crc kubenswrapper[4706]: I1125 11:36:51.974164 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Nov 25 11:36:51 crc kubenswrapper[4706]: I1125 11:36:51.974204 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Nov 25 11:36:51 crc kubenswrapper[4706]: I1125 11:36:51.974247 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 11:36:51 crc kubenswrapper[4706]: I1125 11:36:51.974290 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Nov 25 11:36:51 crc kubenswrapper[4706]: I1125 11:36:51.974344 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Nov 25 11:36:51 crc kubenswrapper[4706]: I1125 11:36:51.974393 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Nov 25 11:36:51 crc kubenswrapper[4706]: I1125 11:36:51.974437 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Nov 25 11:36:51 crc kubenswrapper[4706]: I1125 11:36:51.974478 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" 
(UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Nov 25 11:36:51 crc kubenswrapper[4706]: I1125 11:36:51.974510 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Nov 25 11:36:51 crc kubenswrapper[4706]: I1125 11:36:51.974548 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Nov 25 11:36:51 crc kubenswrapper[4706]: I1125 11:36:51.974586 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Nov 25 11:36:51 crc kubenswrapper[4706]: I1125 11:36:51.974617 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Nov 25 11:36:51 crc kubenswrapper[4706]: I1125 11:36:51.974645 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Nov 25 11:36:51 crc kubenswrapper[4706]: I1125 11:36:51.974670 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: 
\"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Nov 25 11:36:51 crc kubenswrapper[4706]: I1125 11:36:51.974690 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Nov 25 11:36:51 crc kubenswrapper[4706]: I1125 11:36:51.974728 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Nov 25 11:36:51 crc kubenswrapper[4706]: I1125 11:36:51.974753 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Nov 25 11:36:51 crc kubenswrapper[4706]: I1125 11:36:51.974776 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Nov 25 11:36:51 crc kubenswrapper[4706]: I1125 11:36:51.974802 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Nov 25 11:36:51 crc 
kubenswrapper[4706]: I1125 11:36:51.974856 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Nov 25 11:36:51 crc kubenswrapper[4706]: I1125 11:36:51.974881 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Nov 25 11:36:51 crc kubenswrapper[4706]: I1125 11:36:51.974913 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Nov 25 11:36:51 crc kubenswrapper[4706]: I1125 11:36:51.974947 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Nov 25 11:36:51 crc kubenswrapper[4706]: I1125 11:36:51.974976 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Nov 25 11:36:51 crc kubenswrapper[4706]: I1125 11:36:51.975006 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: 
\"6509e943-70c6-444c-bc41-48a544e36fbd\") " Nov 25 11:36:51 crc kubenswrapper[4706]: I1125 11:36:51.975043 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Nov 25 11:36:51 crc kubenswrapper[4706]: I1125 11:36:51.975091 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Nov 25 11:36:51 crc kubenswrapper[4706]: I1125 11:36:51.975122 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Nov 25 11:36:51 crc kubenswrapper[4706]: I1125 11:36:51.975150 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Nov 25 11:36:51 crc kubenswrapper[4706]: I1125 11:36:51.975165 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "bound-sa-token". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 11:36:51 crc kubenswrapper[4706]: I1125 11:36:51.975178 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 25 11:36:51 crc kubenswrapper[4706]: I1125 11:36:51.975311 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Nov 25 11:36:51 crc kubenswrapper[4706]: I1125 11:36:51.975391 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 11:36:51 crc kubenswrapper[4706]: I1125 11:36:51.975399 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Nov 25 11:36:51 crc kubenswrapper[4706]: I1125 11:36:51.975599 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Nov 25 11:36:51 crc kubenswrapper[4706]: I1125 11:36:51.975651 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") pod \"44663579-783b-4372-86d6-acf235a62d72\" (UID: \"44663579-783b-4372-86d6-acf235a62d72\") " Nov 25 11:36:51 crc kubenswrapper[4706]: I1125 11:36:51.975681 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Nov 25 11:36:51 crc kubenswrapper[4706]: I1125 11:36:51.975725 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Nov 25 11:36:51 crc kubenswrapper[4706]: I1125 11:36:51.975746 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Nov 25 11:36:51 crc kubenswrapper[4706]: I1125 11:36:51.975767 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Nov 25 11:36:51 crc kubenswrapper[4706]: I1125 11:36:51.975850 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 11:36:51 crc kubenswrapper[4706]: I1125 11:36:51.975897 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Nov 25 11:36:51 crc kubenswrapper[4706]: I1125 11:36:51.975921 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Nov 25 11:36:51 crc kubenswrapper[4706]: I1125 11:36:51.976052 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") pod 
\"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Nov 25 11:36:51 crc kubenswrapper[4706]: I1125 11:36:51.976110 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Nov 25 11:36:51 crc kubenswrapper[4706]: I1125 11:36:51.976118 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 11:36:51 crc kubenswrapper[4706]: I1125 11:36:51.976134 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Nov 25 11:36:51 crc kubenswrapper[4706]: I1125 11:36:51.976172 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 25 11:36:51 crc kubenswrapper[4706]: I1125 11:36:51.976196 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Nov 25 11:36:51 crc 
kubenswrapper[4706]: I1125 11:36:51.976219 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") pod \"49ef4625-1d3a-4a9f-b595-c2433d32326d\" (UID: \"49ef4625-1d3a-4a9f-b595-c2433d32326d\") " Nov 25 11:36:51 crc kubenswrapper[4706]: I1125 11:36:51.976239 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Nov 25 11:36:51 crc kubenswrapper[4706]: I1125 11:36:51.976347 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 11:36:51 crc kubenswrapper[4706]: I1125 11:36:51.976434 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Nov 25 11:36:51 crc kubenswrapper[4706]: I1125 11:36:51.976475 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Nov 25 11:36:51 crc kubenswrapper[4706]: I1125 11:36:51.976513 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Nov 25 11:36:51 crc kubenswrapper[4706]: I1125 11:36:51.976540 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Nov 25 11:36:51 crc kubenswrapper[4706]: I1125 11:36:51.976560 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Nov 25 11:36:51 crc kubenswrapper[4706]: I1125 11:36:51.976586 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Nov 25 11:36:51 crc kubenswrapper[4706]: I1125 11:36:51.976657 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" (OuterVolumeSpecName: "utilities") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 11:36:51 crc kubenswrapper[4706]: I1125 11:36:51.976739 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Nov 25 11:36:51 crc kubenswrapper[4706]: I1125 11:36:51.976753 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" (OuterVolumeSpecName: "config") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 11:36:51 crc kubenswrapper[4706]: I1125 11:36:51.976810 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Nov 25 11:36:51 crc kubenswrapper[4706]: I1125 11:36:51.976970 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" (OuterVolumeSpecName: "kube-api-access-mg5zb") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "kube-api-access-mg5zb". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 11:36:51 crc kubenswrapper[4706]: I1125 11:36:51.976852 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Nov 25 11:36:51 crc kubenswrapper[4706]: I1125 11:36:51.977278 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Nov 25 11:36:51 crc kubenswrapper[4706]: I1125 11:36:51.977343 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Nov 25 11:36:51 crc kubenswrapper[4706]: I1125 11:36:51.977370 
4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 11:36:51 crc kubenswrapper[4706]: I1125 11:36:51.977429 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Nov 25 11:36:51 crc kubenswrapper[4706]: I1125 11:36:51.977453 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Nov 25 11:36:51 crc kubenswrapper[4706]: I1125 11:36:51.977533 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Nov 25 11:36:51 crc kubenswrapper[4706]: I1125 11:36:51.977588 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Nov 25 11:36:51 crc kubenswrapper[4706]: I1125 11:36:51.977612 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lzf88\" (UniqueName: 
\"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Nov 25 11:36:51 crc kubenswrapper[4706]: I1125 11:36:51.977658 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Nov 25 11:36:51 crc kubenswrapper[4706]: I1125 11:36:51.977684 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Nov 25 11:36:51 crc kubenswrapper[4706]: I1125 11:36:51.977726 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Nov 25 11:36:51 crc kubenswrapper[4706]: I1125 11:36:51.977733 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "installation-pull-secrets". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 11:36:51 crc kubenswrapper[4706]: I1125 11:36:51.977768 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 11:36:51 crc kubenswrapper[4706]: I1125 11:36:51.977772 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" (OuterVolumeSpecName: "images") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 11:36:51 crc kubenswrapper[4706]: I1125 11:36:51.977853 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" (OuterVolumeSpecName: "service-ca") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 11:36:51 crc kubenswrapper[4706]: I1125 11:36:51.977761 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Nov 25 11:36:51 crc kubenswrapper[4706]: I1125 11:36:51.978055 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Nov 25 11:36:51 crc kubenswrapper[4706]: I1125 11:36:51.978095 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "auth-proxy-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 11:36:51 crc kubenswrapper[4706]: I1125 11:36:51.978120 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Nov 25 11:36:51 crc kubenswrapper[4706]: I1125 11:36:51.978151 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Nov 25 11:36:51 crc kubenswrapper[4706]: I1125 11:36:51.978173 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Nov 25 11:36:51 crc kubenswrapper[4706]: I1125 11:36:51.978202 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Nov 25 11:36:51 crc kubenswrapper[4706]: I1125 11:36:51.979629 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Nov 25 11:36:51 crc kubenswrapper[4706]: I1125 11:36:51.978457 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 11:36:51 crc kubenswrapper[4706]: I1125 11:36:51.978733 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" (OuterVolumeSpecName: "kube-api-access-pcxfs") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "kube-api-access-pcxfs". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 11:36:51 crc kubenswrapper[4706]: I1125 11:36:51.981226 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" (OuterVolumeSpecName: "config") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 11:36:51 crc kubenswrapper[4706]: I1125 11:36:51.981238 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" (OuterVolumeSpecName: "kube-api-access-gf66m") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "kube-api-access-gf66m". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 11:36:51 crc kubenswrapper[4706]: I1125 11:36:51.978749 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" (OuterVolumeSpecName: "config") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). 
InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 11:36:51 crc kubenswrapper[4706]: I1125 11:36:51.978899 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" (OuterVolumeSpecName: "config") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 11:36:51 crc kubenswrapper[4706]: I1125 11:36:51.979072 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 11:36:51 crc kubenswrapper[4706]: I1125 11:36:51.979151 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 11:36:51 crc kubenswrapper[4706]: I1125 11:36:51.979199 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" (OuterVolumeSpecName: "certs") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 11:36:51 crc kubenswrapper[4706]: I1125 11:36:51.979244 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 11:36:51 crc kubenswrapper[4706]: I1125 11:36:51.979570 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" (OuterVolumeSpecName: "kube-api-access-4d4hj") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "kube-api-access-4d4hj". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 11:36:51 crc kubenswrapper[4706]: I1125 11:36:51.979727 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" (OuterVolumeSpecName: "kube-api-access-fqsjt") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "kube-api-access-fqsjt". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 11:36:51 crc kubenswrapper[4706]: I1125 11:36:51.979808 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" (OuterVolumeSpecName: "image-registry-operator-tls") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "image-registry-operator-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 11:36:51 crc kubenswrapper[4706]: I1125 11:36:51.979880 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" (OuterVolumeSpecName: "client-ca") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 11:36:51 crc kubenswrapper[4706]: I1125 11:36:51.980223 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 11:36:51 crc kubenswrapper[4706]: I1125 11:36:51.980618 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 11:36:51 crc kubenswrapper[4706]: I1125 11:36:51.980752 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" (OuterVolumeSpecName: "config") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 11:36:51 crc kubenswrapper[4706]: I1125 11:36:51.980813 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 11:36:51 crc kubenswrapper[4706]: I1125 11:36:51.980818 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" (OuterVolumeSpecName: "kube-api-access-d6qdx") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "kube-api-access-d6qdx". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 11:36:51 crc kubenswrapper[4706]: I1125 11:36:51.981544 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" (OuterVolumeSpecName: "node-bootstrap-token") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "node-bootstrap-token". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 11:36:51 crc kubenswrapper[4706]: I1125 11:36:51.981553 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" (OuterVolumeSpecName: "multus-daemon-config") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "multus-daemon-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 11:36:51 crc kubenswrapper[4706]: I1125 11:36:51.981888 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" (OuterVolumeSpecName: "config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 11:36:51 crc kubenswrapper[4706]: I1125 11:36:51.981898 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 11:36:51 crc kubenswrapper[4706]: I1125 11:36:51.982097 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" (OuterVolumeSpecName: "kube-api-access-6g6sz") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "kube-api-access-6g6sz". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 11:36:51 crc kubenswrapper[4706]: I1125 11:36:51.982113 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" (OuterVolumeSpecName: "kube-api-access-7c4vf") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "kube-api-access-7c4vf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 11:36:51 crc kubenswrapper[4706]: I1125 11:36:51.982274 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 11:36:51 crc kubenswrapper[4706]: I1125 11:36:51.982376 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Nov 25 11:36:51 crc kubenswrapper[4706]: I1125 11:36:51.982409 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Nov 25 11:36:51 crc kubenswrapper[4706]: I1125 11:36:51.982438 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" (OuterVolumeSpecName: "kube-api-access-w4xd4") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "kube-api-access-w4xd4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 11:36:51 crc kubenswrapper[4706]: I1125 11:36:51.982446 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") pod \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\" (UID: \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\") " Nov 25 11:36:51 crc kubenswrapper[4706]: I1125 11:36:51.982458 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" (OuterVolumeSpecName: "mcd-auth-proxy-config") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "mcd-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 11:36:51 crc kubenswrapper[4706]: I1125 11:36:51.982481 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 11:36:51 crc kubenswrapper[4706]: I1125 11:36:51.982552 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 11:36:51 crc kubenswrapper[4706]: I1125 11:36:51.982564 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Nov 25 11:36:51 crc kubenswrapper[4706]: I1125 11:36:51.982596 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Nov 25 11:36:51 crc kubenswrapper[4706]: I1125 11:36:51.982625 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Nov 25 11:36:51 crc kubenswrapper[4706]: I1125 11:36:51.982663 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Nov 25 11:36:51 crc kubenswrapper[4706]: I1125 11:36:51.982685 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 11:36:51 crc kubenswrapper[4706]: I1125 11:36:51.982695 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Nov 25 11:36:51 crc kubenswrapper[4706]: I1125 11:36:51.982733 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" (OuterVolumeSpecName: "kube-api-access-wxkg8") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "kube-api-access-wxkg8". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 11:36:51 crc kubenswrapper[4706]: I1125 11:36:51.982745 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") pod \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\" (UID: \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\") " Nov 25 11:36:51 crc kubenswrapper[4706]: I1125 11:36:51.982778 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 25 11:36:51 crc kubenswrapper[4706]: I1125 11:36:51.982805 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Nov 25 11:36:51 crc 
kubenswrapper[4706]: I1125 11:36:51.982880 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Nov 25 11:36:51 crc kubenswrapper[4706]: I1125 11:36:51.982906 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Nov 25 11:36:51 crc kubenswrapper[4706]: I1125 11:36:51.982941 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Nov 25 11:36:51 crc kubenswrapper[4706]: I1125 11:36:51.982948 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-client". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 11:36:51 crc kubenswrapper[4706]: I1125 11:36:51.982967 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Nov 25 11:36:51 crc kubenswrapper[4706]: I1125 11:36:51.982997 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 25 11:36:51 crc kubenswrapper[4706]: I1125 11:36:51.983024 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Nov 25 11:36:51 crc kubenswrapper[4706]: I1125 11:36:51.983044 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Nov 25 11:36:51 crc kubenswrapper[4706]: I1125 11:36:51.983045 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" (OuterVolumeSpecName: "kube-api-access-2d4wz") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "kube-api-access-2d4wz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 11:36:51 crc kubenswrapper[4706]: I1125 11:36:51.983056 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 11:36:51 crc kubenswrapper[4706]: I1125 11:36:51.979845 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" (OuterVolumeSpecName: "kube-api-access-xcgwh") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "kube-api-access-xcgwh". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 11:36:51 crc kubenswrapper[4706]: I1125 11:36:51.983075 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 25 11:36:51 crc kubenswrapper[4706]: I1125 11:36:51.983136 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Nov 25 11:36:51 crc kubenswrapper[4706]: I1125 11:36:51.983171 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") pod 
\"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Nov 25 11:36:51 crc kubenswrapper[4706]: I1125 11:36:51.983260 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Nov 25 11:36:51 crc kubenswrapper[4706]: I1125 11:36:51.983311 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Nov 25 11:36:51 crc kubenswrapper[4706]: I1125 11:36:51.983339 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Nov 25 11:36:51 crc kubenswrapper[4706]: I1125 11:36:51.983375 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Nov 25 11:36:51 crc kubenswrapper[4706]: I1125 11:36:51.983409 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Nov 25 11:36:51 crc kubenswrapper[4706]: I1125 11:36:51.983447 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Nov 25 11:36:51 crc kubenswrapper[4706]: I1125 11:36:51.983473 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Nov 25 11:36:51 crc kubenswrapper[4706]: I1125 11:36:51.980825 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" (OuterVolumeSpecName: "kube-api-access-279lb") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "kube-api-access-279lb". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 11:36:51 crc kubenswrapper[4706]: I1125 11:36:51.983136 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" (OuterVolumeSpecName: "etcd-service-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 11:36:51 crc kubenswrapper[4706]: I1125 11:36:51.983150 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" (OuterVolumeSpecName: "kube-api-access-vt5rc") pod "44663579-783b-4372-86d6-acf235a62d72" (UID: "44663579-783b-4372-86d6-acf235a62d72"). InnerVolumeSpecName "kube-api-access-vt5rc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 11:36:51 crc kubenswrapper[4706]: I1125 11:36:51.983158 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" (OuterVolumeSpecName: "config") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 11:36:51 crc kubenswrapper[4706]: I1125 11:36:51.983170 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" (OuterVolumeSpecName: "kube-api-access-x7zkh") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "kube-api-access-x7zkh". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 11:36:51 crc kubenswrapper[4706]: I1125 11:36:51.983375 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 11:36:51 crc kubenswrapper[4706]: I1125 11:36:51.983391 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 11:36:51 crc kubenswrapper[4706]: I1125 11:36:51.983481 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 11:36:51 crc kubenswrapper[4706]: I1125 11:36:51.983520 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" (OuterVolumeSpecName: "config") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 11:36:51 crc kubenswrapper[4706]: I1125 11:36:51.983534 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" (OuterVolumeSpecName: "kube-api-access-s4n52") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "kube-api-access-s4n52". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 11:36:51 crc kubenswrapper[4706]: I1125 11:36:51.983551 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" (OuterVolumeSpecName: "kube-api-access-jhbk2") pod "bd23aa5c-e532-4e53-bccf-e79f130c5ae8" (UID: "bd23aa5c-e532-4e53-bccf-e79f130c5ae8"). InnerVolumeSpecName "kube-api-access-jhbk2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 11:36:51 crc kubenswrapper[4706]: I1125 11:36:51.984401 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 11:36:51 crc kubenswrapper[4706]: I1125 11:36:51.983778 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 11:36:51 crc kubenswrapper[4706]: I1125 11:36:51.980181 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" (OuterVolumeSpecName: "mcc-auth-proxy-config") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "mcc-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 11:36:51 crc kubenswrapper[4706]: I1125 11:36:51.983991 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" (OuterVolumeSpecName: "kube-api-access-x4zgh") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "kube-api-access-x4zgh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 11:36:51 crc kubenswrapper[4706]: I1125 11:36:51.983994 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" (OuterVolumeSpecName: "stats-auth") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "stats-auth". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 11:36:51 crc kubenswrapper[4706]: I1125 11:36:51.984292 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 11:36:51 crc kubenswrapper[4706]: I1125 11:36:51.984506 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 11:36:51 crc kubenswrapper[4706]: I1125 11:36:51.984540 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "auth-proxy-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 11:36:51 crc kubenswrapper[4706]: I1125 11:36:51.984616 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" (OuterVolumeSpecName: "kube-api-access-sb6h7") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "kube-api-access-sb6h7". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 11:36:51 crc kubenswrapper[4706]: I1125 11:36:51.984681 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 11:36:51 crc kubenswrapper[4706]: I1125 11:36:51.984700 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Nov 25 11:36:51 crc kubenswrapper[4706]: I1125 11:36:51.984776 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 11:36:51 crc kubenswrapper[4706]: I1125 11:36:51.984811 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: 
\"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Nov 25 11:36:51 crc kubenswrapper[4706]: I1125 11:36:51.984936 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Nov 25 11:36:51 crc kubenswrapper[4706]: I1125 11:36:51.984948 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" (OuterVolumeSpecName: "config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 11:36:51 crc kubenswrapper[4706]: I1125 11:36:51.984999 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Nov 25 11:36:51 crc kubenswrapper[4706]: I1125 11:36:51.985039 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 11:36:51 crc kubenswrapper[4706]: I1125 11:36:51.985137 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" (OuterVolumeSpecName: "audit") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "audit". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 11:36:51 crc kubenswrapper[4706]: I1125 11:36:51.985160 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Nov 25 11:36:51 crc kubenswrapper[4706]: I1125 11:36:51.985200 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Nov 25 11:36:51 crc kubenswrapper[4706]: I1125 11:36:51.985228 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 11:36:51 crc kubenswrapper[4706]: I1125 11:36:51.985261 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" (OuterVolumeSpecName: "image-import-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "image-import-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 11:36:51 crc kubenswrapper[4706]: I1125 11:36:51.985724 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 11:36:51 crc kubenswrapper[4706]: I1125 11:36:51.985682 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" (OuterVolumeSpecName: "kube-api-access-dbsvg") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "kube-api-access-dbsvg". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 11:36:51 crc kubenswrapper[4706]: I1125 11:36:51.985723 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" (OuterVolumeSpecName: "kube-api-access-zkvpv") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "kube-api-access-zkvpv". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 11:36:51 crc kubenswrapper[4706]: I1125 11:36:51.985769 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "encryption-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 11:36:51 crc kubenswrapper[4706]: I1125 11:36:51.986035 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 11:36:51 crc kubenswrapper[4706]: I1125 11:36:51.986124 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 11:36:51 crc kubenswrapper[4706]: I1125 11:36:51.986211 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" (OuterVolumeSpecName: "service-ca") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 11:36:51 crc kubenswrapper[4706]: I1125 11:36:51.986290 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" (OuterVolumeSpecName: "cert") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 11:36:51 crc kubenswrapper[4706]: I1125 11:36:51.986449 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 11:36:51 crc kubenswrapper[4706]: I1125 11:36:51.986556 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" (OuterVolumeSpecName: "apiservice-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "apiservice-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 11:36:51 crc kubenswrapper[4706]: I1125 11:36:51.986581 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 11:36:51 crc kubenswrapper[4706]: I1125 11:36:51.986669 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" (OuterVolumeSpecName: "config") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 11:36:51 crc kubenswrapper[4706]: I1125 11:36:51.986767 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" (OuterVolumeSpecName: "config") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 11:36:51 crc kubenswrapper[4706]: I1125 11:36:51.987063 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 11:36:51 crc kubenswrapper[4706]: I1125 11:36:51.987507 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" (OuterVolumeSpecName: "kube-api-access-x2m85") pod "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" (UID: "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d"). InnerVolumeSpecName "kube-api-access-x2m85". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 11:36:51 crc kubenswrapper[4706]: I1125 11:36:51.987685 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 11:36:51 crc kubenswrapper[4706]: I1125 11:36:51.987873 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" (OuterVolumeSpecName: "kube-api-access-fcqwp") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "kube-api-access-fcqwp". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 11:36:51 crc kubenswrapper[4706]: I1125 11:36:51.988034 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 11:36:51 crc kubenswrapper[4706]: I1125 11:36:51.988065 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" (OuterVolumeSpecName: "config-volume") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 11:36:51 crc kubenswrapper[4706]: I1125 11:36:51.988236 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 11:36:51 crc kubenswrapper[4706]: I1125 11:36:51.988332 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" (OuterVolumeSpecName: "etcd-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 11:36:51 crc kubenswrapper[4706]: I1125 11:36:51.988428 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" (OuterVolumeSpecName: "config") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 11:36:51 crc kubenswrapper[4706]: I1125 11:36:51.988570 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" (OuterVolumeSpecName: "kube-api-access-kfwg7") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "kube-api-access-kfwg7". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 11:36:51 crc kubenswrapper[4706]: I1125 11:36:51.988325 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "proxy-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 11:36:51 crc kubenswrapper[4706]: I1125 11:36:51.988577 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" (OuterVolumeSpecName: "kube-api-access-w7l8j") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "kube-api-access-w7l8j". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 11:36:51 crc kubenswrapper[4706]: I1125 11:36:51.988807 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 11:36:51 crc kubenswrapper[4706]: E1125 11:36:51.988859 4706 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Nov 25 11:36:51 crc kubenswrapper[4706]: I1125 11:36:51.989131 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" (OuterVolumeSpecName: "default-certificate") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "default-certificate". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 11:36:51 crc kubenswrapper[4706]: I1125 11:36:51.989199 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Nov 25 11:36:51 crc kubenswrapper[4706]: E1125 11:36:51.989285 4706 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 25 11:36:51 crc kubenswrapper[4706]: E1125 11:36:51.989421 4706 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-25 11:36:52.489391566 +0000 UTC m=+21.403948947 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Nov 25 11:36:51 crc kubenswrapper[4706]: I1125 11:36:51.989448 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "bound-sa-token". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 11:36:51 crc kubenswrapper[4706]: I1125 11:36:51.989459 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" (OuterVolumeSpecName: "package-server-manager-serving-cert") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "package-server-manager-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 11:36:51 crc kubenswrapper[4706]: I1125 11:36:51.989516 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 11:36:51 crc kubenswrapper[4706]: I1125 11:36:51.989814 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 11:36:51 crc kubenswrapper[4706]: I1125 11:36:51.989863 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 11:36:51 crc kubenswrapper[4706]: I1125 11:36:51.989905 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" (OuterVolumeSpecName: "kube-api-access-nzwt7") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "kube-api-access-nzwt7". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 11:36:51 crc kubenswrapper[4706]: I1125 11:36:51.989914 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" (OuterVolumeSpecName: "kube-api-access-lzf88") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "kube-api-access-lzf88". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 11:36:51 crc kubenswrapper[4706]: I1125 11:36:51.990458 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" (OuterVolumeSpecName: "client-ca") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 11:36:51 crc kubenswrapper[4706]: I1125 11:36:51.990516 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"363ff191-6229-47e9-a7d0-1c72f21e7c61\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://71b496da1a81efbb50a84766e610a6b03e032a4e2cb5a71191395ffb85f6b1f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"
cri-o://83b1d9c60793e3e0b5943d7cccd50656df78c4655b84e12c8dd1ba7d99a7990d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab8621c83015577b9039ac2ba9ce46f8b29f66d77da31a02d179132d923741bd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4d0ce4e175dd8da8d15b26e60ced87ee11dc8079ce730cfbdce1b3f4f08b1d2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-
operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T11:36:32Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 25 11:36:51 crc kubenswrapper[4706]: I1125 11:36:51.990885 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 11:36:51 crc kubenswrapper[4706]: I1125 11:36:51.990990 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 11:36:51 crc kubenswrapper[4706]: E1125 11:36:51.991126 4706 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-25 11:36:52.491106731 +0000 UTC m=+21.405664112 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 25 11:36:51 crc kubenswrapper[4706]: I1125 11:36:51.991415 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" (OuterVolumeSpecName: "kube-api-access-tk88c") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "kube-api-access-tk88c". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 11:36:51 crc kubenswrapper[4706]: I1125 11:36:51.991691 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "metrics-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 11:36:51 crc kubenswrapper[4706]: I1125 11:36:51.993161 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 11:36:51 crc kubenswrapper[4706]: I1125 11:36:51.993425 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 11:36:51 crc kubenswrapper[4706]: I1125 11:36:51.993757 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" (OuterVolumeSpecName: "kube-api-access-249nr") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "kube-api-access-249nr". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 11:36:51 crc kubenswrapper[4706]: I1125 11:36:51.994272 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" (OuterVolumeSpecName: "kube-api-access-ngvvp") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "kube-api-access-ngvvp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 11:36:51 crc kubenswrapper[4706]: I1125 11:36:51.994416 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" (OuterVolumeSpecName: "samples-operator-tls") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "samples-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 11:36:51 crc kubenswrapper[4706]: I1125 11:36:51.994626 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" (OuterVolumeSpecName: "webhook-certs") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "webhook-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 11:36:51 crc kubenswrapper[4706]: I1125 11:36:51.994733 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Nov 25 11:36:51 crc kubenswrapper[4706]: I1125 11:36:51.994884 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "srv-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 11:36:51 crc kubenswrapper[4706]: I1125 11:36:51.994945 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 11:36:51 crc kubenswrapper[4706]: I1125 11:36:51.994953 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 11:36:51 crc kubenswrapper[4706]: I1125 11:36:51.994943 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" (OuterVolumeSpecName: "kube-api-access-9xfj7") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "kube-api-access-9xfj7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 11:36:51 crc kubenswrapper[4706]: I1125 11:36:51.994887 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Nov 25 11:36:51 crc kubenswrapper[4706]: I1125 11:36:51.994985 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Nov 25 11:36:51 crc kubenswrapper[4706]: I1125 11:36:51.995070 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 11:36:51 crc kubenswrapper[4706]: I1125 11:36:51.995105 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Nov 25 11:36:51 crc kubenswrapper[4706]: I1125 11:36:51.995213 4706 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") on node \"crc\" DevicePath \"\"" 
Nov 25 11:36:51 crc kubenswrapper[4706]: I1125 11:36:51.995227 4706 reconciler_common.go:293] "Volume detached for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") on node \"crc\" DevicePath \"\"" Nov 25 11:36:51 crc kubenswrapper[4706]: I1125 11:36:51.995237 4706 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 25 11:36:51 crc kubenswrapper[4706]: I1125 11:36:51.995246 4706 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") on node \"crc\" DevicePath \"\"" Nov 25 11:36:51 crc kubenswrapper[4706]: I1125 11:36:51.995256 4706 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") on node \"crc\" DevicePath \"\"" Nov 25 11:36:51 crc kubenswrapper[4706]: I1125 11:36:51.995267 4706 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Nov 25 11:36:51 crc kubenswrapper[4706]: I1125 11:36:51.995277 4706 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") on node \"crc\" DevicePath \"\"" Nov 25 11:36:51 crc kubenswrapper[4706]: I1125 11:36:51.995288 4706 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") on node \"crc\" DevicePath \"\"" Nov 25 11:36:51 crc kubenswrapper[4706]: I1125 11:36:51.995313 4706 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s4n52\" 
(UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") on node \"crc\" DevicePath \"\"" Nov 25 11:36:51 crc kubenswrapper[4706]: I1125 11:36:51.995324 4706 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") on node \"crc\" DevicePath \"\"" Nov 25 11:36:51 crc kubenswrapper[4706]: I1125 11:36:51.995334 4706 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") on node \"crc\" DevicePath \"\"" Nov 25 11:36:51 crc kubenswrapper[4706]: I1125 11:36:51.995343 4706 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Nov 25 11:36:51 crc kubenswrapper[4706]: I1125 11:36:51.995352 4706 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Nov 25 11:36:51 crc kubenswrapper[4706]: I1125 11:36:51.995361 4706 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") on node \"crc\" DevicePath \"\"" Nov 25 11:36:51 crc kubenswrapper[4706]: I1125 11:36:51.995371 4706 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 25 11:36:51 crc kubenswrapper[4706]: I1125 11:36:51.995382 4706 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7c4vf\" (UniqueName: 
\"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") on node \"crc\" DevicePath \"\"" Nov 25 11:36:51 crc kubenswrapper[4706]: I1125 11:36:51.995392 4706 reconciler_common.go:293] "Volume detached for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Nov 25 11:36:51 crc kubenswrapper[4706]: I1125 11:36:51.995403 4706 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") on node \"crc\" DevicePath \"\"" Nov 25 11:36:51 crc kubenswrapper[4706]: I1125 11:36:51.995413 4706 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") on node \"crc\" DevicePath \"\"" Nov 25 11:36:51 crc kubenswrapper[4706]: I1125 11:36:51.995422 4706 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") on node \"crc\" DevicePath \"\"" Nov 25 11:36:51 crc kubenswrapper[4706]: I1125 11:36:51.995431 4706 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") on node \"crc\" DevicePath \"\"" Nov 25 11:36:51 crc kubenswrapper[4706]: I1125 11:36:51.995440 4706 reconciler_common.go:293] "Volume detached for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") on node \"crc\" DevicePath \"\"" Nov 25 11:36:51 crc kubenswrapper[4706]: I1125 11:36:51.995449 4706 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 11:36:51 crc kubenswrapper[4706]: I1125 11:36:51.995458 4706 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") on node \"crc\" DevicePath \"\"" Nov 25 11:36:51 crc kubenswrapper[4706]: I1125 11:36:51.995468 4706 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") on node \"crc\" DevicePath \"\"" Nov 25 11:36:51 crc kubenswrapper[4706]: I1125 11:36:51.995477 4706 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 11:36:51 crc kubenswrapper[4706]: I1125 11:36:51.995486 4706 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 25 11:36:51 crc kubenswrapper[4706]: I1125 11:36:51.995495 4706 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 25 11:36:51 crc kubenswrapper[4706]: I1125 11:36:51.995504 4706 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 25 11:36:51 crc kubenswrapper[4706]: I1125 11:36:51.995513 4706 reconciler_common.go:293] "Volume detached for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") on node \"crc\" DevicePath \"\"" Nov 25 
11:36:51 crc kubenswrapper[4706]: I1125 11:36:51.995521 4706 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") on node \"crc\" DevicePath \"\"" Nov 25 11:36:51 crc kubenswrapper[4706]: I1125 11:36:51.995529 4706 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") on node \"crc\" DevicePath \"\"" Nov 25 11:36:51 crc kubenswrapper[4706]: I1125 11:36:51.995543 4706 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") on node \"crc\" DevicePath \"\"" Nov 25 11:36:51 crc kubenswrapper[4706]: I1125 11:36:51.995550 4706 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 11:36:51 crc kubenswrapper[4706]: I1125 11:36:51.995290 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" (OuterVolumeSpecName: "kube-api-access-lz9wn") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "kube-api-access-lz9wn". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 11:36:51 crc kubenswrapper[4706]: I1125 11:36:51.995525 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" (OuterVolumeSpecName: "kube-api-access-cfbct") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "kube-api-access-cfbct". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 11:36:51 crc kubenswrapper[4706]: I1125 11:36:51.995559 4706 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") on node \"crc\" DevicePath \"\"" Nov 25 11:36:51 crc kubenswrapper[4706]: I1125 11:36:51.995601 4706 reconciler_common.go:293] "Volume detached for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") on node \"crc\" DevicePath \"\"" Nov 25 11:36:51 crc kubenswrapper[4706]: I1125 11:36:51.995612 4706 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") on node \"crc\" DevicePath \"\"" Nov 25 11:36:51 crc kubenswrapper[4706]: I1125 11:36:51.995621 4706 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Nov 25 11:36:51 crc kubenswrapper[4706]: I1125 11:36:51.995629 4706 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 25 11:36:51 crc kubenswrapper[4706]: I1125 11:36:51.995638 4706 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") on node \"crc\" DevicePath \"\"" Nov 25 11:36:51 crc kubenswrapper[4706]: I1125 11:36:51.995647 4706 reconciler_common.go:293] "Volume detached for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") on node \"crc\" DevicePath \"\"" Nov 25 11:36:51 crc kubenswrapper[4706]: I1125 
11:36:51.995656 4706 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") on node \"crc\" DevicePath \"\"" Nov 25 11:36:51 crc kubenswrapper[4706]: I1125 11:36:51.995682 4706 reconciler_common.go:293] "Volume detached for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") on node \"crc\" DevicePath \"\"" Nov 25 11:36:51 crc kubenswrapper[4706]: I1125 11:36:51.995695 4706 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 25 11:36:51 crc kubenswrapper[4706]: I1125 11:36:51.995707 4706 reconciler_common.go:293] "Volume detached for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 25 11:36:51 crc kubenswrapper[4706]: I1125 11:36:51.995718 4706 reconciler_common.go:293] "Volume detached for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") on node \"crc\" DevicePath \"\"" Nov 25 11:36:51 crc kubenswrapper[4706]: I1125 11:36:51.995730 4706 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") on node \"crc\" DevicePath \"\"" Nov 25 11:36:51 crc kubenswrapper[4706]: I1125 11:36:51.995742 4706 reconciler_common.go:293] "Volume detached for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") on node \"crc\" DevicePath \"\"" Nov 25 11:36:51 crc kubenswrapper[4706]: I1125 11:36:51.995752 4706 reconciler_common.go:293] "Volume detached for volume 
\"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") on node \"crc\" DevicePath \"\"" Nov 25 11:36:51 crc kubenswrapper[4706]: I1125 11:36:51.995764 4706 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") on node \"crc\" DevicePath \"\"" Nov 25 11:36:51 crc kubenswrapper[4706]: I1125 11:36:51.995772 4706 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Nov 25 11:36:51 crc kubenswrapper[4706]: I1125 11:36:51.995780 4706 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") on node \"crc\" DevicePath \"\"" Nov 25 11:36:51 crc kubenswrapper[4706]: I1125 11:36:51.995796 4706 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") on node \"crc\" DevicePath \"\"" Nov 25 11:36:51 crc kubenswrapper[4706]: I1125 11:36:51.995805 4706 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Nov 25 11:36:51 crc kubenswrapper[4706]: I1125 11:36:51.995813 4706 reconciler_common.go:293] "Volume detached for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") on node \"crc\" DevicePath \"\"" Nov 25 11:36:51 crc kubenswrapper[4706]: I1125 11:36:51.995823 4706 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: 
\"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") on node \"crc\" DevicePath \"\"" Nov 25 11:36:51 crc kubenswrapper[4706]: I1125 11:36:51.995831 4706 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") on node \"crc\" DevicePath \"\"" Nov 25 11:36:51 crc kubenswrapper[4706]: I1125 11:36:51.995840 4706 reconciler_common.go:293] "Volume detached for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") on node \"crc\" DevicePath \"\"" Nov 25 11:36:51 crc kubenswrapper[4706]: I1125 11:36:51.995849 4706 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Nov 25 11:36:51 crc kubenswrapper[4706]: I1125 11:36:51.995859 4706 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") on node \"crc\" DevicePath \"\"" Nov 25 11:36:51 crc kubenswrapper[4706]: I1125 11:36:51.995867 4706 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") on node \"crc\" DevicePath \"\"" Nov 25 11:36:51 crc kubenswrapper[4706]: I1125 11:36:51.995875 4706 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") on node \"crc\" DevicePath \"\"" Nov 25 11:36:51 crc kubenswrapper[4706]: I1125 11:36:51.995887 4706 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") on node \"crc\" 
DevicePath \"\"" Nov 25 11:36:51 crc kubenswrapper[4706]: I1125 11:36:51.995895 4706 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") on node \"crc\" DevicePath \"\"" Nov 25 11:36:51 crc kubenswrapper[4706]: I1125 11:36:51.995888 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" (OuterVolumeSpecName: "kube-api-access-pjr6v") pod "49ef4625-1d3a-4a9f-b595-c2433d32326d" (UID: "49ef4625-1d3a-4a9f-b595-c2433d32326d"). InnerVolumeSpecName "kube-api-access-pjr6v". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 11:36:51 crc kubenswrapper[4706]: I1125 11:36:51.995903 4706 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") on node \"crc\" DevicePath \"\"" Nov 25 11:36:51 crc kubenswrapper[4706]: I1125 11:36:51.995985 4706 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") on node \"crc\" DevicePath \"\"" Nov 25 11:36:51 crc kubenswrapper[4706]: I1125 11:36:51.996005 4706 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") on node \"crc\" DevicePath \"\"" Nov 25 11:36:51 crc kubenswrapper[4706]: I1125 11:36:51.996021 4706 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") on node \"crc\" DevicePath \"\"" Nov 25 11:36:51 crc kubenswrapper[4706]: I1125 11:36:51.996035 4706 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jhbk2\" (UniqueName: 
\"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") on node \"crc\" DevicePath \"\"" Nov 25 11:36:51 crc kubenswrapper[4706]: I1125 11:36:51.996050 4706 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 25 11:36:51 crc kubenswrapper[4706]: I1125 11:36:51.996063 4706 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") on node \"crc\" DevicePath \"\"" Nov 25 11:36:51 crc kubenswrapper[4706]: I1125 11:36:51.996164 4706 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") on node \"crc\" DevicePath \"\"" Nov 25 11:36:51 crc kubenswrapper[4706]: I1125 11:36:51.996195 4706 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") on node \"crc\" DevicePath \"\"" Nov 25 11:36:51 crc kubenswrapper[4706]: I1125 11:36:51.996210 4706 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") on node \"crc\" DevicePath \"\"" Nov 25 11:36:51 crc kubenswrapper[4706]: I1125 11:36:51.996226 4706 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") on node \"crc\" DevicePath \"\"" Nov 25 11:36:51 crc kubenswrapper[4706]: I1125 11:36:51.996241 4706 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") on node 
\"crc\" DevicePath \"\"" Nov 25 11:36:51 crc kubenswrapper[4706]: I1125 11:36:51.996254 4706 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") on node \"crc\" DevicePath \"\"" Nov 25 11:36:51 crc kubenswrapper[4706]: I1125 11:36:51.996268 4706 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") on node \"crc\" DevicePath \"\"" Nov 25 11:36:51 crc kubenswrapper[4706]: I1125 11:36:51.996283 4706 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") on node \"crc\" DevicePath \"\"" Nov 25 11:36:51 crc kubenswrapper[4706]: I1125 11:36:51.996319 4706 reconciler_common.go:293] "Volume detached for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") on node \"crc\" DevicePath \"\"" Nov 25 11:36:51 crc kubenswrapper[4706]: I1125 11:36:51.996334 4706 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Nov 25 11:36:51 crc kubenswrapper[4706]: I1125 11:36:51.996375 4706 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 11:36:51 crc kubenswrapper[4706]: I1125 11:36:51.996393 4706 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") on node \"crc\" 
DevicePath \"\"" Nov 25 11:36:51 crc kubenswrapper[4706]: I1125 11:36:51.996408 4706 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") on node \"crc\" DevicePath \"\"" Nov 25 11:36:51 crc kubenswrapper[4706]: I1125 11:36:51.996421 4706 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Nov 25 11:36:51 crc kubenswrapper[4706]: I1125 11:36:51.996460 4706 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") on node \"crc\" DevicePath \"\"" Nov 25 11:36:51 crc kubenswrapper[4706]: I1125 11:36:51.996474 4706 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") on node \"crc\" DevicePath \"\"" Nov 25 11:36:51 crc kubenswrapper[4706]: I1125 11:36:51.996506 4706 reconciler_common.go:293] "Volume detached for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") on node \"crc\" DevicePath \"\"" Nov 25 11:36:51 crc kubenswrapper[4706]: I1125 11:36:51.996521 4706 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 25 11:36:51 crc kubenswrapper[4706]: I1125 11:36:51.996534 4706 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Nov 25 11:36:51 crc kubenswrapper[4706]: I1125 11:36:51.996549 4706 
reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Nov 25 11:36:51 crc kubenswrapper[4706]: I1125 11:36:51.996587 4706 reconciler_common.go:293] "Volume detached for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") on node \"crc\" DevicePath \"\"" Nov 25 11:36:51 crc kubenswrapper[4706]: I1125 11:36:51.998803 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 11:36:51 crc kubenswrapper[4706]: I1125 11:36:51.999102 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" (OuterVolumeSpecName: "config") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 11:36:51 crc kubenswrapper[4706]: I1125 11:36:51.999274 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" (OuterVolumeSpecName: "serviceca") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "serviceca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 11:36:51 crc kubenswrapper[4706]: I1125 11:36:51.999861 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" (OuterVolumeSpecName: "utilities") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 11:36:52 crc kubenswrapper[4706]: I1125 11:36:52.000389 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" (OuterVolumeSpecName: "utilities") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 11:36:52 crc kubenswrapper[4706]: I1125 11:36:52.000436 4706 reconciler_common.go:293] "Volume detached for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") on node \"crc\" DevicePath \"\"" Nov 25 11:36:52 crc kubenswrapper[4706]: I1125 11:36:52.000599 4706 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") on node \"crc\" DevicePath \"\"" Nov 25 11:36:52 crc kubenswrapper[4706]: I1125 11:36:52.000680 4706 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") on node \"crc\" DevicePath \"\"" Nov 25 11:36:52 crc kubenswrapper[4706]: I1125 11:36:52.000754 4706 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") on node 
\"crc\" DevicePath \"\"" Nov 25 11:36:52 crc kubenswrapper[4706]: I1125 11:36:52.000827 4706 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") on node \"crc\" DevicePath \"\"" Nov 25 11:36:52 crc kubenswrapper[4706]: I1125 11:36:52.000895 4706 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 25 11:36:52 crc kubenswrapper[4706]: I1125 11:36:52.001495 4706 reconciler_common.go:293] "Volume detached for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") on node \"crc\" DevicePath \"\"" Nov 25 11:36:52 crc kubenswrapper[4706]: I1125 11:36:52.001522 4706 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") on node \"crc\" DevicePath \"\"" Nov 25 11:36:52 crc kubenswrapper[4706]: I1125 11:36:52.001533 4706 reconciler_common.go:293] "Volume detached for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") on node \"crc\" DevicePath \"\"" Nov 25 11:36:52 crc kubenswrapper[4706]: I1125 11:36:52.001546 4706 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") on node \"crc\" DevicePath \"\"" Nov 25 11:36:52 crc kubenswrapper[4706]: I1125 11:36:52.001557 4706 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") on node \"crc\" DevicePath \"\"" Nov 25 11:36:52 crc kubenswrapper[4706]: I1125 11:36:52.001566 4706 
reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") on node \"crc\" DevicePath \"\"" Nov 25 11:36:52 crc kubenswrapper[4706]: I1125 11:36:52.001575 4706 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Nov 25 11:36:52 crc kubenswrapper[4706]: I1125 11:36:52.001585 4706 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Nov 25 11:36:52 crc kubenswrapper[4706]: I1125 11:36:52.001595 4706 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 25 11:36:52 crc kubenswrapper[4706]: I1125 11:36:52.001605 4706 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") on node \"crc\" DevicePath \"\"" Nov 25 11:36:52 crc kubenswrapper[4706]: I1125 11:36:52.001615 4706 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 11:36:52 crc kubenswrapper[4706]: I1125 11:36:52.001626 4706 reconciler_common.go:293] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") on node \"crc\" DevicePath \"\"" Nov 25 11:36:52 crc kubenswrapper[4706]: I1125 11:36:52.001637 4706 reconciler_common.go:293] "Volume detached for volume 
\"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") on node \"crc\" DevicePath \"\"" Nov 25 11:36:52 crc kubenswrapper[4706]: I1125 11:36:52.001648 4706 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") on node \"crc\" DevicePath \"\"" Nov 25 11:36:52 crc kubenswrapper[4706]: I1125 11:36:52.001658 4706 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Nov 25 11:36:52 crc kubenswrapper[4706]: I1125 11:36:52.001668 4706 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") on node \"crc\" DevicePath \"\"" Nov 25 11:36:52 crc kubenswrapper[4706]: I1125 11:36:52.001678 4706 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") on node \"crc\" DevicePath \"\"" Nov 25 11:36:52 crc kubenswrapper[4706]: I1125 11:36:52.001688 4706 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") on node \"crc\" DevicePath \"\"" Nov 25 11:36:52 crc kubenswrapper[4706]: I1125 11:36:52.001698 4706 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Nov 25 11:36:52 crc kubenswrapper[4706]: I1125 11:36:52.001707 4706 reconciler_common.go:293] "Volume detached 
for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") on node \"crc\" DevicePath \"\"" Nov 25 11:36:52 crc kubenswrapper[4706]: I1125 11:36:52.001717 4706 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Nov 25 11:36:52 crc kubenswrapper[4706]: I1125 11:36:52.001728 4706 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") on node \"crc\" DevicePath \"\"" Nov 25 11:36:52 crc kubenswrapper[4706]: I1125 11:36:52.001769 4706 reconciler_common.go:293] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") on node \"crc\" DevicePath \"\"" Nov 25 11:36:52 crc kubenswrapper[4706]: I1125 11:36:52.001780 4706 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") on node \"crc\" DevicePath \"\"" Nov 25 11:36:52 crc kubenswrapper[4706]: I1125 11:36:52.001791 4706 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Nov 25 11:36:52 crc kubenswrapper[4706]: I1125 11:36:52.001803 4706 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 25 11:36:52 crc kubenswrapper[4706]: I1125 11:36:52.001811 4706 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: 
\"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") on node \"crc\" DevicePath \"\"" Nov 25 11:36:52 crc kubenswrapper[4706]: I1125 11:36:52.001820 4706 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") on node \"crc\" DevicePath \"\"" Nov 25 11:36:52 crc kubenswrapper[4706]: I1125 11:36:52.001829 4706 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") on node \"crc\" DevicePath \"\"" Nov 25 11:36:52 crc kubenswrapper[4706]: I1125 11:36:52.001839 4706 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Nov 25 11:36:52 crc kubenswrapper[4706]: I1125 11:36:52.001849 4706 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Nov 25 11:36:52 crc kubenswrapper[4706]: I1125 11:36:52.001859 4706 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") on node \"crc\" DevicePath \"\"" Nov 25 11:36:52 crc kubenswrapper[4706]: I1125 11:36:52.001869 4706 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") on node \"crc\" DevicePath \"\"" Nov 25 11:36:52 crc kubenswrapper[4706]: I1125 11:36:52.001878 4706 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: 
\"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") on node \"crc\" DevicePath \"\"" Nov 25 11:36:52 crc kubenswrapper[4706]: I1125 11:36:52.001887 4706 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") on node \"crc\" DevicePath \"\"" Nov 25 11:36:52 crc kubenswrapper[4706]: I1125 11:36:52.001898 4706 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") on node \"crc\" DevicePath \"\"" Nov 25 11:36:52 crc kubenswrapper[4706]: I1125 11:36:52.001918 4706 reconciler_common.go:293] "Volume detached for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") on node \"crc\" DevicePath \"\"" Nov 25 11:36:52 crc kubenswrapper[4706]: I1125 11:36:52.001932 4706 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") on node \"crc\" DevicePath \"\"" Nov 25 11:36:52 crc kubenswrapper[4706]: I1125 11:36:52.001945 4706 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") on node \"crc\" DevicePath \"\"" Nov 25 11:36:52 crc kubenswrapper[4706]: I1125 11:36:52.001958 4706 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Nov 25 11:36:52 crc kubenswrapper[4706]: I1125 11:36:52.001970 4706 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") on node 
\"crc\" DevicePath \"\"" Nov 25 11:36:52 crc kubenswrapper[4706]: I1125 11:36:52.001984 4706 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") on node \"crc\" DevicePath \"\"" Nov 25 11:36:52 crc kubenswrapper[4706]: I1125 11:36:52.001997 4706 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") on node \"crc\" DevicePath \"\"" Nov 25 11:36:52 crc kubenswrapper[4706]: I1125 11:36:52.002009 4706 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") on node \"crc\" DevicePath \"\"" Nov 25 11:36:52 crc kubenswrapper[4706]: I1125 11:36:52.002022 4706 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") on node \"crc\" DevicePath \"\"" Nov 25 11:36:52 crc kubenswrapper[4706]: I1125 11:36:52.002033 4706 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") on node \"crc\" DevicePath \"\"" Nov 25 11:36:52 crc kubenswrapper[4706]: I1125 11:36:52.002045 4706 reconciler_common.go:293] "Volume detached for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Nov 25 11:36:52 crc kubenswrapper[4706]: I1125 11:36:52.002057 4706 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") on node \"crc\" DevicePath \"\"" Nov 25 11:36:52 crc kubenswrapper[4706]: I1125 
11:36:52.002068 4706 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 25 11:36:52 crc kubenswrapper[4706]: I1125 11:36:52.002081 4706 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") on node \"crc\" DevicePath \"\"" Nov 25 11:36:52 crc kubenswrapper[4706]: I1125 11:36:52.002567 4706 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") on node \"crc\" DevicePath \"\"" Nov 25 11:36:52 crc kubenswrapper[4706]: I1125 11:36:52.002584 4706 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") on node \"crc\" DevicePath \"\"" Nov 25 11:36:52 crc kubenswrapper[4706]: I1125 11:36:52.002597 4706 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") on node \"crc\" DevicePath \"\"" Nov 25 11:36:52 crc kubenswrapper[4706]: I1125 11:36:52.002611 4706 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") on node \"crc\" DevicePath \"\"" Nov 25 11:36:52 crc kubenswrapper[4706]: I1125 11:36:52.002623 4706 reconciler_common.go:293] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") on node \"crc\" DevicePath \"\"" Nov 25 11:36:52 crc kubenswrapper[4706]: I1125 11:36:52.002636 4706 reconciler_common.go:293] "Volume detached for volume \"node-bootstrap-token\" (UniqueName: 
\"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") on node \"crc\" DevicePath \"\"" Nov 25 11:36:52 crc kubenswrapper[4706]: I1125 11:36:52.002648 4706 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Nov 25 11:36:52 crc kubenswrapper[4706]: I1125 11:36:52.002661 4706 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 25 11:36:52 crc kubenswrapper[4706]: I1125 11:36:52.002672 4706 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") on node \"crc\" DevicePath \"\"" Nov 25 11:36:52 crc kubenswrapper[4706]: I1125 11:36:52.002686 4706 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 25 11:36:52 crc kubenswrapper[4706]: I1125 11:36:52.002697 4706 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") on node \"crc\" DevicePath \"\"" Nov 25 11:36:52 crc kubenswrapper[4706]: I1125 11:36:52.002709 4706 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") on node \"crc\" DevicePath \"\"" Nov 25 11:36:52 crc kubenswrapper[4706]: I1125 11:36:52.002723 4706 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") on node \"crc\" 
DevicePath \"\"" Nov 25 11:36:52 crc kubenswrapper[4706]: I1125 11:36:52.002735 4706 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") on node \"crc\" DevicePath \"\"" Nov 25 11:36:52 crc kubenswrapper[4706]: I1125 11:36:52.002747 4706 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Nov 25 11:36:52 crc kubenswrapper[4706]: I1125 11:36:52.002759 4706 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") on node \"crc\" DevicePath \"\"" Nov 25 11:36:52 crc kubenswrapper[4706]: I1125 11:36:52.002770 4706 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") on node \"crc\" DevicePath \"\"" Nov 25 11:36:52 crc kubenswrapper[4706]: I1125 11:36:52.002782 4706 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") on node \"crc\" DevicePath \"\"" Nov 25 11:36:52 crc kubenswrapper[4706]: I1125 11:36:52.001002 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Nov 25 11:36:52 crc kubenswrapper[4706]: I1125 11:36:52.000960 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: 
\"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Nov 25 11:36:52 crc kubenswrapper[4706]: I1125 11:36:52.000715 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 11:36:52 crc kubenswrapper[4706]: I1125 11:36:52.001461 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 11:36:52 crc kubenswrapper[4706]: E1125 11:36:52.002796 4706 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 25 11:36:52 crc kubenswrapper[4706]: E1125 11:36:52.002915 4706 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 25 11:36:52 crc kubenswrapper[4706]: E1125 11:36:52.002933 4706 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 25 11:36:52 crc kubenswrapper[4706]: E1125 11:36:52.003011 4706 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-11-25 11:36:52.502985666 +0000 UTC m=+21.417543247 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 25 11:36:52 crc kubenswrapper[4706]: I1125 11:36:52.003369 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" (OuterVolumeSpecName: "signing-key") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 11:36:52 crc kubenswrapper[4706]: I1125 11:36:52.003695 4706 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Nov 25 11:36:52 crc kubenswrapper[4706]: I1125 11:36:52.003866 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" (OuterVolumeSpecName: "signing-cabundle") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-cabundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 11:36:52 crc kubenswrapper[4706]: I1125 11:36:52.010030 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" (OuterVolumeSpecName: "kube-api-access-xcphl") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "kube-api-access-xcphl". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 11:36:52 crc kubenswrapper[4706]: I1125 11:36:52.010324 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" (OuterVolumeSpecName: "kube-api-access-8tdtz") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "kube-api-access-8tdtz". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 11:36:52 crc kubenswrapper[4706]: I1125 11:36:52.014796 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Nov 25 11:36:52 crc kubenswrapper[4706]: I1125 11:36:52.016656 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 11:36:52 crc kubenswrapper[4706]: I1125 11:36:52.016640 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" (OuterVolumeSpecName: "images") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "images". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 11:36:52 crc kubenswrapper[4706]: I1125 11:36:52.016744 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" (OuterVolumeSpecName: "kube-api-access-qg5z5") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "kube-api-access-qg5z5". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 11:36:52 crc kubenswrapper[4706]: I1125 11:36:52.016869 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" (OuterVolumeSpecName: "console-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 11:36:52 crc kubenswrapper[4706]: E1125 11:36:52.016896 4706 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 25 11:36:52 crc kubenswrapper[4706]: E1125 11:36:52.016926 4706 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 25 11:36:52 crc kubenswrapper[4706]: E1125 11:36:52.016943 4706 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 25 11:36:52 crc kubenswrapper[4706]: E1125 11:36:52.017032 4706 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-11-25 11:36:52.517006826 +0000 UTC m=+21.431564387 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 25 11:36:52 crc kubenswrapper[4706]: I1125 11:36:52.017137 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:51Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 25 11:36:52 crc kubenswrapper[4706]: I1125 11:36:52.017411 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 11:36:52 crc kubenswrapper[4706]: I1125 11:36:52.017540 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" (OuterVolumeSpecName: "kube-api-access-d4lsv") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "kube-api-access-d4lsv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 11:36:52 crc kubenswrapper[4706]: I1125 11:36:52.017727 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 11:36:52 crc kubenswrapper[4706]: I1125 11:36:52.017977 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 11:36:52 crc kubenswrapper[4706]: I1125 11:36:52.018012 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Nov 25 11:36:52 crc kubenswrapper[4706]: I1125 11:36:52.018491 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Nov 25 11:36:52 crc kubenswrapper[4706]: I1125 11:36:52.019089 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rczfb\" (UniqueName: 
\"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Nov 25 11:36:52 crc kubenswrapper[4706]: I1125 11:36:52.024783 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 11:36:52 crc kubenswrapper[4706]: I1125 11:36:52.026181 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 11:36:52 crc kubenswrapper[4706]: I1125 11:36:52.028711 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Nov 25 11:36:52 crc kubenswrapper[4706]: I1125 11:36:52.030364 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:51Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 25 11:36:52 crc kubenswrapper[4706]: I1125 11:36:52.031682 4706 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Nov 25 11:36:52 crc kubenswrapper[4706]: I1125 11:36:52.032802 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 11:36:52 crc kubenswrapper[4706]: I1125 11:36:52.034446 4706 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="333951d9a31cf3e7c1e98d27f636e2425f87cd082a8a5acae66533a76f5ad206" exitCode=255 Nov 25 11:36:52 crc kubenswrapper[4706]: I1125 11:36:52.034523 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"333951d9a31cf3e7c1e98d27f636e2425f87cd082a8a5acae66533a76f5ad206"} Nov 25 11:36:52 crc kubenswrapper[4706]: I1125 11:36:52.034469 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "ca-trust-extracted". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 11:36:52 crc kubenswrapper[4706]: E1125 11:36:52.040386 4706 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-crc\" already exists" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 25 11:36:52 crc kubenswrapper[4706]: I1125 11:36:52.040949 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"363ff191-6229-47e9-a7d0-1c72f21e7c61\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://71b496da1a81efbb50a84766e610a6b03e032a4e2cb5a71191395ffb85f6b1f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:33Z
\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://83b1d9c60793e3e0b5943d7cccd50656df78c4655b84e12c8dd1ba7d99a7990d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab8621c83015577b9039ac2ba9ce46f8b29f66d77da31a02d179132d923741bd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4d0ce4e175dd8da8d15b26e60ced87ee11dc8079ce730cfbdce1b3f4f08b1d2\\\"
,\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T11:36:32Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 25 11:36:52 crc kubenswrapper[4706]: I1125 11:36:52.044853 4706 scope.go:117] "RemoveContainer" containerID="333951d9a31cf3e7c1e98d27f636e2425f87cd082a8a5acae66533a76f5ad206" Nov 25 11:36:52 crc kubenswrapper[4706]: I1125 11:36:52.045662 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 11:36:52 crc kubenswrapper[4706]: I1125 11:36:52.047254 4706 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Nov 25 11:36:52 crc kubenswrapper[4706]: I1125 11:36:52.053927 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 25 11:36:52 crc kubenswrapper[4706]: I1125 11:36:52.065577 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:51Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 25 11:36:52 crc kubenswrapper[4706]: I1125 11:36:52.078616 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 25 11:36:52 crc kubenswrapper[4706]: I1125 11:36:52.088457 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:51Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 25 11:36:52 crc kubenswrapper[4706]: I1125 11:36:52.099278 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:51Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: 
connect: connection refused" Nov 25 11:36:52 crc kubenswrapper[4706]: I1125 11:36:52.103270 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Nov 25 11:36:52 crc kubenswrapper[4706]: I1125 11:36:52.103471 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Nov 25 11:36:52 crc kubenswrapper[4706]: I1125 11:36:52.103566 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Nov 25 11:36:52 crc kubenswrapper[4706]: I1125 11:36:52.103623 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Nov 25 11:36:52 crc kubenswrapper[4706]: I1125 11:36:52.103742 4706 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") on node \"crc\" DevicePath \"\"" Nov 25 11:36:52 crc kubenswrapper[4706]: I1125 11:36:52.103826 4706 reconciler_common.go:293] "Volume 
detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 25 11:36:52 crc kubenswrapper[4706]: I1125 11:36:52.104102 4706 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") on node \"crc\" DevicePath \"\"" Nov 25 11:36:52 crc kubenswrapper[4706]: I1125 11:36:52.104185 4706 reconciler_common.go:293] "Volume detached for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") on node \"crc\" DevicePath \"\"" Nov 25 11:36:52 crc kubenswrapper[4706]: I1125 11:36:52.104257 4706 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 25 11:36:52 crc kubenswrapper[4706]: I1125 11:36:52.104332 4706 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 11:36:52 crc kubenswrapper[4706]: I1125 11:36:52.104412 4706 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") on node \"crc\" DevicePath \"\"" Nov 25 11:36:52 crc kubenswrapper[4706]: I1125 11:36:52.104503 4706 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") on node \"crc\" DevicePath \"\"" Nov 25 11:36:52 crc kubenswrapper[4706]: I1125 11:36:52.104581 4706 reconciler_common.go:293] "Volume detached for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") on 
node \"crc\" DevicePath \"\"" Nov 25 11:36:52 crc kubenswrapper[4706]: I1125 11:36:52.104656 4706 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 25 11:36:52 crc kubenswrapper[4706]: I1125 11:36:52.104732 4706 reconciler_common.go:293] "Volume detached for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") on node \"crc\" DevicePath \"\"" Nov 25 11:36:52 crc kubenswrapper[4706]: I1125 11:36:52.104806 4706 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") on node \"crc\" DevicePath \"\"" Nov 25 11:36:52 crc kubenswrapper[4706]: I1125 11:36:52.104865 4706 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") on node \"crc\" DevicePath \"\"" Nov 25 11:36:52 crc kubenswrapper[4706]: I1125 11:36:52.104941 4706 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") on node \"crc\" DevicePath \"\"" Nov 25 11:36:52 crc kubenswrapper[4706]: I1125 11:36:52.105012 4706 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") on node \"crc\" DevicePath \"\"" Nov 25 11:36:52 crc kubenswrapper[4706]: I1125 11:36:52.105082 4706 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") on node \"crc\" DevicePath \"\"" Nov 25 11:36:52 crc kubenswrapper[4706]: I1125 11:36:52.105162 4706 
reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 25 11:36:52 crc kubenswrapper[4706]: I1125 11:36:52.105217 4706 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 11:36:52 crc kubenswrapper[4706]: I1125 11:36:52.105316 4706 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 25 11:36:52 crc kubenswrapper[4706]: I1125 11:36:52.105403 4706 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Nov 25 11:36:52 crc kubenswrapper[4706]: I1125 11:36:52.105492 4706 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") on node \"crc\" DevicePath \"\"" Nov 25 11:36:52 crc kubenswrapper[4706]: I1125 11:36:52.105565 4706 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") on node \"crc\" DevicePath \"\"" Nov 25 11:36:52 crc kubenswrapper[4706]: I1125 11:36:52.105639 4706 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") on node \"crc\" DevicePath \"\"" Nov 25 11:36:52 crc kubenswrapper[4706]: I1125 11:36:52.105694 4706 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2d4wz\" (UniqueName: 
\"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") on node \"crc\" DevicePath \"\"" Nov 25 11:36:52 crc kubenswrapper[4706]: I1125 11:36:52.105802 4706 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 25 11:36:52 crc kubenswrapper[4706]: I1125 11:36:52.105931 4706 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") on node \"crc\" DevicePath \"\"" Nov 25 11:36:52 crc kubenswrapper[4706]: I1125 11:36:52.106015 4706 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") on node \"crc\" DevicePath \"\"" Nov 25 11:36:52 crc kubenswrapper[4706]: I1125 11:36:52.106104 4706 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") on node \"crc\" DevicePath \"\"" Nov 25 11:36:52 crc kubenswrapper[4706]: I1125 11:36:52.108372 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:51Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 25 11:36:52 crc kubenswrapper[4706]: I1125 11:36:52.119201 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"363ff191-6229-47e9-a7d0-1c72f21e7c61\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://71b496da1a81efbb50a84766e610a6b03e032a4e2cb5a71191395ffb85f6b1f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://83b1d9c60793e3e0b5943d7cccd50656df78c4655b84e12c8dd1ba7d99a7990d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab8621c83015577b9039ac2ba9ce46f8b29f66d77da31a02d179132d923741bd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4d0ce4e175dd8da8d15b26e60ced87ee11dc8079ce730cfbdce1b3f4f08b1d2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-11-25T11:36:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T11:36:32Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 25 11:36:52 crc kubenswrapper[4706]: I1125 11:36:52.129934 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:51Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 25 11:36:52 crc kubenswrapper[4706]: I1125 11:36:52.153720 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:51Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 25 11:36:52 crc kubenswrapper[4706]: I1125 11:36:52.179901 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:51Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 25 11:36:52 crc kubenswrapper[4706]: I1125 11:36:52.181928 4706 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Nov 25 11:36:52 crc kubenswrapper[4706]: I1125 11:36:52.188651 4706 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Nov 25 11:36:52 crc kubenswrapper[4706]: I1125 11:36:52.196642 4706 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Nov 25 11:36:52 crc kubenswrapper[4706]: I1125 11:36:52.207065 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:51Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 25 11:36:52 crc kubenswrapper[4706]: W1125 11:36:52.219027 4706 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd75a4c96_2883_4a0b_bab2_0fab2b6c0b49.slice/crio-06ac09f345dd4fa3f5d0926206b1f3b8a20d0260d7ff54fcd5cea67b342fe2fa WatchSource:0}: Error finding container 06ac09f345dd4fa3f5d0926206b1f3b8a20d0260d7ff54fcd5cea67b342fe2fa: Status 404 returned error can't find the container with id 06ac09f345dd4fa3f5d0926206b1f3b8a20d0260d7ff54fcd5cea67b342fe2fa Nov 25 11:36:52 crc kubenswrapper[4706]: I1125 11:36:52.223506 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ce0e2e75-834b-46fb-bc84-229e60f904b1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:32Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:32Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://86001c3abc077d36ed1fa0c37bb6163896fb9cde28b58affd2f67fb8a024165b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://24c326f147def477e6dd794576cbdc9aed69f799cc18984f475496748b05eb32\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c65af8b438f57256d8c22cb34f68922d628338e384ca97d694b0dbf2d41a5e27\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://333951d9a31cf3e7c1e98d27f636e2425f87cd082a8a5acae66533a76f5ad206\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://333951d9a31cf3e7c1e98d27f636e2425f87cd082a8a5acae66533a76f5ad206\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-25T11:36:51Z\\\",\\\"message\\\":\\\" shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI1125 11:36:51.292762 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI1125 11:36:51.292767 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI1125 11:36:51.292853 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI1125 11:36:51.292876 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI1125 11:36:51.293041 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-1230105117/tls.crt::/tmp/serving-cert-1230105117/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1764070595\\\\\\\\\\\\\\\" (2025-11-25 11:36:34 +0000 UTC to 2025-12-25 11:36:35 +0000 UTC (now=2025-11-25 11:36:51.29301304 +0000 UTC))\\\\\\\"\\\\nI1125 11:36:51.293171 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1230105117/tls.crt::/tmp/serving-cert-1230105117/tls.key\\\\\\\"\\\\nI1125 11:36:51.293210 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" 
index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1764070605\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1764070605\\\\\\\\\\\\\\\" (2025-11-25 10:36:45 +0000 UTC to 2026-11-25 10:36:45 +0000 UTC (now=2025-11-25 11:36:51.293188774 +0000 UTC))\\\\\\\"\\\\nI1125 11:36:51.293233 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI1125 11:36:51.293259 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI1125 11:36:51.293279 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nF1125 11:36:51.293378 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T11:36:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe85a38abd8df52ad0fbd3dd6b048b8c42390b6064d3601996727dadb3fcbe69\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:34Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea87e7399f4267877f5e967eb62bd500b646e88ee8ee20a71c3bf7c0941f02a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ea87e7399f4267877f5e967eb62bd500b646e88ee8ee20a71c3bf7c0941f02a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T11:36:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T11:36:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T11:36:32Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 25 11:36:52 crc kubenswrapper[4706]: I1125 11:36:52.260616 4706 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Nov 25 11:36:52 crc kubenswrapper[4706]: I1125 11:36:52.511755 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 11:36:52 crc kubenswrapper[4706]: I1125 11:36:52.511848 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: 
\"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 11:36:52 crc kubenswrapper[4706]: I1125 11:36:52.511889 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 11:36:52 crc kubenswrapper[4706]: I1125 11:36:52.511926 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 11:36:52 crc kubenswrapper[4706]: E1125 11:36:52.512057 4706 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Nov 25 11:36:52 crc kubenswrapper[4706]: E1125 11:36:52.512125 4706 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-25 11:36:53.512106759 +0000 UTC m=+22.426664140 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Nov 25 11:36:52 crc kubenswrapper[4706]: E1125 11:36:52.512551 4706 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 11:36:53.512541568 +0000 UTC m=+22.427098949 (durationBeforeRetry 1s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 11:36:52 crc kubenswrapper[4706]: E1125 11:36:52.512635 4706 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 25 11:36:52 crc kubenswrapper[4706]: E1125 11:36:52.512648 4706 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 25 11:36:52 crc kubenswrapper[4706]: E1125 11:36:52.512660 4706 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object 
"openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 25 11:36:52 crc kubenswrapper[4706]: E1125 11:36:52.512684 4706 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-11-25 11:36:53.512677751 +0000 UTC m=+22.427235132 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 25 11:36:52 crc kubenswrapper[4706]: E1125 11:36:52.512724 4706 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 25 11:36:52 crc kubenswrapper[4706]: E1125 11:36:52.512743 4706 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-25 11:36:53.512738012 +0000 UTC m=+22.427295393 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 25 11:36:52 crc kubenswrapper[4706]: I1125 11:36:52.532202 4706 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-etcd/etcd-crc" Nov 25 11:36:52 crc kubenswrapper[4706]: I1125 11:36:52.544486 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:51Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: 
connect: connection refused" Nov 25 11:36:52 crc kubenswrapper[4706]: I1125 11:36:52.546171 4706 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/etcd-crc"] Nov 25 11:36:52 crc kubenswrapper[4706]: I1125 11:36:52.547276 4706 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-crc" Nov 25 11:36:52 crc kubenswrapper[4706]: I1125 11:36:52.555346 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:51Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 25 11:36:52 crc kubenswrapper[4706]: I1125 11:36:52.567322 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:51Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 25 11:36:52 crc kubenswrapper[4706]: I1125 11:36:52.578925 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 25 11:36:52 crc kubenswrapper[4706]: I1125 11:36:52.590471 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:51Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 25 11:36:52 crc kubenswrapper[4706]: I1125 11:36:52.602388 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ce0e2e75-834b-46fb-bc84-229e60f904b1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:32Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:32Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://86001c3abc077d36ed1fa0c37bb6163896fb9cde28b58affd2f67fb8a024165b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://24c326f147def477e6dd794576cbdc9aed69f799cc18984f475496748b05eb32\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c65af8b438f57256d8c22cb34f68922d628338e384ca97d694b0dbf2d41a5e27\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://333951d9a31cf3e7c1e98d27f636e2425f87cd082a8a5acae66533a76f5ad206\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://333951d9a31cf3e7c1e98d27f636e2425f87cd082a8a5acae66533a76f5ad206\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-25T11:36:51Z\\\",\\\"message\\\":\\\" shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI1125 11:36:51.292762 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI1125 11:36:51.292767 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI1125 11:36:51.292853 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI1125 11:36:51.292876 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI1125 11:36:51.293041 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-1230105117/tls.crt::/tmp/serving-cert-1230105117/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1764070595\\\\\\\\\\\\\\\" (2025-11-25 11:36:34 +0000 UTC to 2025-12-25 11:36:35 +0000 UTC (now=2025-11-25 11:36:51.29301304 +0000 UTC))\\\\\\\"\\\\nI1125 11:36:51.293171 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1230105117/tls.crt::/tmp/serving-cert-1230105117/tls.key\\\\\\\"\\\\nI1125 11:36:51.293210 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" 
index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1764070605\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1764070605\\\\\\\\\\\\\\\" (2025-11-25 10:36:45 +0000 UTC to 2026-11-25 10:36:45 +0000 UTC (now=2025-11-25 11:36:51.293188774 +0000 UTC))\\\\\\\"\\\\nI1125 11:36:51.293233 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI1125 11:36:51.293259 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI1125 11:36:51.293279 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nF1125 11:36:51.293378 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T11:36:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe85a38abd8df52ad0fbd3dd6b048b8c42390b6064d3601996727dadb3fcbe69\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:34Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea87e7399f4267877f5e967eb62bd500b646e88ee8ee20a71c3bf7c0941f02a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ea87e7399f4267877f5e967eb62bd500b646e88ee8ee20a71c3bf7c0941f02a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T11:36:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T11:36:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T11:36:32Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 25 11:36:52 crc kubenswrapper[4706]: I1125 11:36:52.612625 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 11:36:52 crc kubenswrapper[4706]: E1125 11:36:52.612878 4706 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 25 11:36:52 crc kubenswrapper[4706]: E1125 11:36:52.612934 4706 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: 
object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 25 11:36:52 crc kubenswrapper[4706]: E1125 11:36:52.612950 4706 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 25 11:36:52 crc kubenswrapper[4706]: E1125 11:36:52.613046 4706 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-11-25 11:36:53.613021261 +0000 UTC m=+22.527578812 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 25 11:36:52 crc kubenswrapper[4706]: I1125 11:36:52.615735 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"363ff191-6229-47e9-a7d0-1c72f21e7c61\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://71b496da1a81efbb50a84766e610a6b03e032a4e2cb5a71191395ffb85f6b1f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://83b1d9c60793e3e0b5943d7cccd50656df78c4655b84e12c8dd1ba7d99a7990d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab8621c83015577b9039ac2ba9ce46f8b29f66d77da31a02d179132d923741bd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4d0ce4e175dd8da8d15b26e60ced87ee11dc8079ce730cfbdce1b3f4f08b1d2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-11-25T11:36:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T11:36:32Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 25 11:36:52 crc kubenswrapper[4706]: I1125 11:36:52.626632 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:51Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 25 11:36:52 crc kubenswrapper[4706]: I1125 11:36:52.637735 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:51Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 25 11:36:52 crc kubenswrapper[4706]: I1125 11:36:52.650439 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:51Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 25 11:36:52 crc kubenswrapper[4706]: I1125 11:36:52.665712 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:51Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 25 11:36:52 crc kubenswrapper[4706]: I1125 11:36:52.678646 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ce0e2e75-834b-46fb-bc84-229e60f904b1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:32Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:32Z\\\",\\\"message\\\":\\\"containers with unready 
status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://86001c3abc077d36ed1fa0c37bb6163896fb9cde28b58affd2f67fb8a024165b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://24c326f147def477e6dd794576cbdc9aed69f799cc18984f475496748b05eb32\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\"
:\\\"cri-o://c65af8b438f57256d8c22cb34f68922d628338e384ca97d694b0dbf2d41a5e27\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://333951d9a31cf3e7c1e98d27f636e2425f87cd082a8a5acae66533a76f5ad206\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://333951d9a31cf3e7c1e98d27f636e2425f87cd082a8a5acae66533a76f5ad206\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-25T11:36:51Z\\\",\\\"message\\\":\\\" shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI1125 11:36:51.292762 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI1125 11:36:51.292767 1 shared_informer.go:313] Waiting for 
caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI1125 11:36:51.292853 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI1125 11:36:51.292876 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI1125 11:36:51.293041 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-1230105117/tls.crt::/tmp/serving-cert-1230105117/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1764070595\\\\\\\\\\\\\\\" (2025-11-25 11:36:34 +0000 UTC to 2025-12-25 11:36:35 +0000 UTC (now=2025-11-25 11:36:51.29301304 +0000 UTC))\\\\\\\"\\\\nI1125 11:36:51.293171 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1230105117/tls.crt::/tmp/serving-cert-1230105117/tls.key\\\\\\\"\\\\nI1125 11:36:51.293210 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1764070605\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1764070605\\\\\\\\\\\\\\\" (2025-11-25 10:36:45 +0000 UTC to 2026-11-25 10:36:45 +0000 UTC (now=2025-11-25 11:36:51.293188774 +0000 UTC))\\\\\\\"\\\\nI1125 11:36:51.293233 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI1125 11:36:51.293259 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI1125 11:36:51.293279 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nF1125 11:36:51.293378 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T11:36:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe85a38abd8df52ad0fbd3dd6b048b8c42390b6064d3601996727dadb3fcbe69\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:34Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea87e7399f4267877f5e967eb62bd500b646e88ee8ee20a71c3bf7c0941f02a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ea87e7399f4267877f5e967eb62bd500b646e88ee8ee20a71c3bf7c0941f02a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T11:36:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T11:36:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\
\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T11:36:32Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 25 11:36:52 crc kubenswrapper[4706]: I1125 11:36:52.690564 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"363ff191-6229-47e9-a7d0-1c72f21e7c61\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://71b496da1a81efbb50a84766e610a6b03e032a4e2cb5a71191395ffb85f6b1f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCoun
t\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://83b1d9c60793e3e0b5943d7cccd50656df78c4655b84e12c8dd1ba7d99a7990d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab8621c83015577b9039ac2ba9ce46f8b29f66d77da31a02d179132d923741bd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]
},{\\\"containerID\\\":\\\"cri-o://a4d0ce4e175dd8da8d15b26e60ced87ee11dc8079ce730cfbdce1b3f4f08b1d2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T11:36:32Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 25 11:36:52 crc kubenswrapper[4706]: I1125 11:36:52.702103 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 25 11:36:52 crc kubenswrapper[4706]: I1125 11:36:52.723796 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"21277b4b-1e5d-4345-ba2a-39957194f021\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e336808761e1c6c5eaa04fd06cbb4d0c0384a2cbd3dfd4c1b3a877e7e0f0c82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cfaf9f13d49eb5c52817b0d082263791cc1dca82a23282452f1393dd693ca27a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://634b7b0df29329562f6ead9641186eee129945efc5a2d784ff6474d213b2baea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b3642576d5ecf314b809b90f8a76244e5ea54178f78729eb6521b09b7daa9c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b63b9c87fed8e56acef62af3c5b75cf637a058ada9dd8ef5afc317e99e12162\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://adfea6c3d1d62c9ce2656cae203bccf32ab19165305db3731e3db92915dd7d4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://adfea6c3d1d62c9ce2656cae203bccf32ab19165305db3731e3db92915dd7d4c\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2025-11-25T11:36:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T11:36:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://29e8fd847ab683471a692fda5b7c7d6105db11c5aecf09bb23080c58cd97c06a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://29e8fd847ab683471a692fda5b7c7d6105db11c5aecf09bb23080c58cd97c06a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T11:36:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T11:36:34Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://a4b1f85fd0291c239854093c0b26f0802291cbe4c4bab384fc69a5d21165343a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a4b1f85fd0291c239854093c0b26f0802291cbe4c4bab384fc69a5d21165343a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T11:36:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T11:36:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T11:36:32Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 25 11:36:52 crc kubenswrapper[4706]: I1125 11:36:52.735080 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:51Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: 
connect: connection refused" Nov 25 11:36:52 crc kubenswrapper[4706]: I1125 11:36:52.748071 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:51Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 25 11:36:53 crc kubenswrapper[4706]: I1125 11:36:53.039443 4706 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Nov 25 11:36:53 crc kubenswrapper[4706]: I1125 11:36:53.041148 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"db08dd21321e0e49c2bcec934b9c4ca65e93ed3eff5d3d110b0137d37ebe255e"} Nov 25 11:36:53 crc kubenswrapper[4706]: I1125 11:36:53.041547 4706 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 25 11:36:53 crc kubenswrapper[4706]: I1125 11:36:53.042045 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" 
event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"06ac09f345dd4fa3f5d0926206b1f3b8a20d0260d7ff54fcd5cea67b342fe2fa"} Nov 25 11:36:53 crc kubenswrapper[4706]: I1125 11:36:53.043261 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"998291d5af3be798ff4e2f00d043f615e086fef44e541071bbaf781983955ce6"} Nov 25 11:36:53 crc kubenswrapper[4706]: I1125 11:36:53.043312 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"b2f7f8de79daa0c55491e9d79c191144d6286e5658c163aa565ad09def569450"} Nov 25 11:36:53 crc kubenswrapper[4706]: I1125 11:36:53.046395 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"23abd4bcc68d2a090882edb55d0e8569032affe5f4ebf05279e18ba3e9f9d8db"} Nov 25 11:36:53 crc kubenswrapper[4706]: I1125 11:36:53.046437 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"a068e34d29a7f39157ffd6e364ce643f5280f5184c13a281043247117d451364"} Nov 25 11:36:53 crc kubenswrapper[4706]: I1125 11:36:53.046450 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"621b1c17beeaa1c38be4c4f7c2565feab5ae065fbebdbe86c3820dd13a527cc2"} Nov 25 11:36:53 crc kubenswrapper[4706]: I1125 11:36:53.072159 4706 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:36:53Z is after 2025-08-24T17:21:41Z" Nov 25 11:36:53 crc kubenswrapper[4706]: I1125 11:36:53.096063 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:51Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:36:53Z is after 2025-08-24T17:21:41Z" Nov 25 11:36:53 crc kubenswrapper[4706]: I1125 11:36:53.111371 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ce0e2e75-834b-46fb-bc84-229e60f904b1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:32Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:32Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://86001c3abc077d36ed1fa0c37bb6163896fb9cde28b58affd2f67fb8a024165b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://24c326f147def477e6dd794576cbdc9aed69f799cc18984f475496748b05eb32\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c65af8b438f57256d8c22cb34f68922d628338e384ca97d694b0dbf2d41a5e27\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://db08dd21321e0e49c2bcec934b9c4ca65e93ed3eff5d3d110b0137d37ebe255e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://333951d9a31cf3e7c1e98d27f636e2425f87cd082a8a5acae66533a76f5ad206\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-25T11:36:51Z\\\",\\\"message\\\":\\\" shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI1125 11:36:51.292762 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI1125 11:36:51.292767 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI1125 11:36:51.292853 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI1125 11:36:51.292876 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI1125 11:36:51.293041 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-1230105117/tls.crt::/tmp/serving-cert-1230105117/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1764070595\\\\\\\\\\\\\\\" (2025-11-25 11:36:34 +0000 UTC to 2025-12-25 11:36:35 +0000 UTC (now=2025-11-25 11:36:51.29301304 +0000 UTC))\\\\\\\"\\\\nI1125 11:36:51.293171 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1230105117/tls.crt::/tmp/serving-cert-1230105117/tls.key\\\\\\\"\\\\nI1125 11:36:51.293210 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" 
certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1764070605\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1764070605\\\\\\\\\\\\\\\" (2025-11-25 10:36:45 +0000 UTC to 2026-11-25 10:36:45 +0000 UTC (now=2025-11-25 11:36:51.293188774 +0000 UTC))\\\\\\\"\\\\nI1125 11:36:51.293233 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI1125 11:36:51.293259 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI1125 11:36:51.293279 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nF1125 11:36:51.293378 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T11:36:34Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe85a38abd8df52ad0fbd3dd6b048b8c42390b6064d3601996727dadb3fcbe69\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:34Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"container
ID\\\":\\\"cri-o://ea87e7399f4267877f5e967eb62bd500b646e88ee8ee20a71c3bf7c0941f02a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ea87e7399f4267877f5e967eb62bd500b646e88ee8ee20a71c3bf7c0941f02a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T11:36:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T11:36:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T11:36:32Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:36:53Z is after 2025-08-24T17:21:41Z" Nov 25 11:36:53 crc kubenswrapper[4706]: I1125 11:36:53.127482 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"363ff191-6229-47e9-a7d0-1c72f21e7c61\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://71b496da1a81efbb50a84766e610a6b03e032a4e2cb5a71191395ffb85f6b1f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://83b1d9c60793e3e0b5943d7cccd50656df78c4655b84e12c8dd1ba7d99a7990d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab8621c83015577b9039ac2ba9ce46f8b29f66d77da31a02d179132d923741bd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4d0ce4e175dd8da8d15b26e60ced87ee11dc8079ce730cfbdce1b3f4f08b1d2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-11-25T11:36:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T11:36:32Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:36:53Z is after 2025-08-24T17:21:41Z" Nov 25 11:36:53 crc kubenswrapper[4706]: I1125 11:36:53.140184 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:51Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:36:53Z is after 2025-08-24T17:21:41Z" Nov 25 11:36:53 crc kubenswrapper[4706]: I1125 11:36:53.154708 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:51Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:36:53Z is after 2025-08-24T17:21:41Z" Nov 25 11:36:53 crc kubenswrapper[4706]: I1125 11:36:53.182021 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"21277b4b-1e5d-4345-ba2a-39957194f021\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e336808761e1c6c5eaa04fd06cbb4d0c0384a2cbd3dfd4c1b3a877e7e0f0c82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cfaf9f13d49eb5c52817b0d082263791cc1dca82a23282452f1393dd693ca27a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://634b7b0df29329562f6ead9641186eee129945efc5a2d784ff6474d213b2baea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b3642576d5ecf314b809b90f8a76244e5ea54178f78729eb6521b09b7daa9c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b63b9c87fed8e56acef62af3c5b75cf637a058ada9dd8ef5afc317e99e12162\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://adfea6c3d1d62c9ce2656cae203bccf32ab19165305db3731e3db92915dd7d4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://adfea6c3d1d62c9ce2656cae203bccf32ab19165305db3731e3db92915dd7d4c\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2025-11-25T11:36:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T11:36:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://29e8fd847ab683471a692fda5b7c7d6105db11c5aecf09bb23080c58cd97c06a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://29e8fd847ab683471a692fda5b7c7d6105db11c5aecf09bb23080c58cd97c06a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T11:36:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T11:36:34Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://a4b1f85fd0291c239854093c0b26f0802291cbe4c4bab384fc69a5d21165343a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a4b1f85fd0291c239854093c0b26f0802291cbe4c4bab384fc69a5d21165343a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T11:36:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T11:36:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T11:36:32Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:36:53Z is after 2025-08-24T17:21:41Z" Nov 25 11:36:53 crc kubenswrapper[4706]: I1125 11:36:53.197680 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:51Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:36:53Z is after 2025-08-24T17:21:41Z" Nov 25 11:36:53 crc kubenswrapper[4706]: I1125 11:36:53.214906 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:51Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:36:53Z is after 2025-08-24T17:21:41Z" Nov 25 11:36:53 crc kubenswrapper[4706]: I1125 11:36:53.238715 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"21277b4b-1e5d-4345-ba2a-39957194f021\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e336808761e1c6c5eaa04fd06cbb4d0c0384a2cbd3dfd4c1b3a877e7e0f0c82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cfaf9f13d49eb5c52817b0d082263791cc1dca82a23282452f1393dd693ca27a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://634b7b0df29329562f6ead9641186eee129945efc5a2d784ff6474d213b2baea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b3642576d5ecf314b809b90f8a76244e5ea54178f78729eb6521b09b7daa9c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b63b9c87fed8e56acef62af3c5b75cf637a058ada9dd8ef5afc317e99e12162\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://adfea6c3d1d62c9ce2656cae203bccf32ab19165305db3731e3db92915dd7d4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://adfea6c3d1d62c9ce2656cae203bccf32ab19165305db3731e3db92915dd7d4c\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2025-11-25T11:36:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T11:36:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://29e8fd847ab683471a692fda5b7c7d6105db11c5aecf09bb23080c58cd97c06a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://29e8fd847ab683471a692fda5b7c7d6105db11c5aecf09bb23080c58cd97c06a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T11:36:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T11:36:34Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://a4b1f85fd0291c239854093c0b26f0802291cbe4c4bab384fc69a5d21165343a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a4b1f85fd0291c239854093c0b26f0802291cbe4c4bab384fc69a5d21165343a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T11:36:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T11:36:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T11:36:32Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:36:53Z is after 2025-08-24T17:21:41Z" Nov 25 11:36:53 crc kubenswrapper[4706]: I1125 11:36:53.253263 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://23abd4bcc68d2a090882edb55d0e8569032affe5f4ebf05279e18ba3e9f9d8db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount
\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a068e34d29a7f39157ffd6e364ce643f5280f5184c13a281043247117d451364\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:36:53Z is after 2025-08-24T17:21:41Z" Nov 25 11:36:53 crc kubenswrapper[4706]: I1125 11:36:53.266558 4706 
status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:51Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:36:53Z is after 2025-08-24T17:21:41Z" Nov 25 11:36:53 crc kubenswrapper[4706]: I1125 11:36:53.288437 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:51Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:36:53Z is after 2025-08-24T17:21:41Z" Nov 25 11:36:53 crc kubenswrapper[4706]: I1125 11:36:53.312294 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:36:53Z is after 2025-08-24T17:21:41Z" Nov 25 11:36:53 crc kubenswrapper[4706]: I1125 11:36:53.341888 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:51Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:36:53Z is after 2025-08-24T17:21:41Z" Nov 25 11:36:53 crc kubenswrapper[4706]: I1125 11:36:53.368853 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ce0e2e75-834b-46fb-bc84-229e60f904b1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:32Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:32Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://86001c3abc077d36ed1fa0c37bb6163896fb9cde28b58affd2f67fb8a024165b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://24c326f147def477e6dd794576cbdc9aed69f799cc18984f475496748b05eb32\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c65af8b438f57256d8c22cb34f68922d628338e384ca97d694b0dbf2d41a5e27\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://db08dd21321e0e49c2bcec934b9c4ca65e93ed3eff5d3d110b0137d37ebe255e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://333951d9a31cf3e7c1e98d27f636e2425f87cd082a8a5acae66533a76f5ad206\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-25T11:36:51Z\\\",\\\"message\\\":\\\" shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI1125 11:36:51.292762 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI1125 11:36:51.292767 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI1125 11:36:51.292853 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI1125 11:36:51.292876 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI1125 11:36:51.293041 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-1230105117/tls.crt::/tmp/serving-cert-1230105117/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1764070595\\\\\\\\\\\\\\\" (2025-11-25 11:36:34 +0000 UTC to 2025-12-25 11:36:35 +0000 UTC (now=2025-11-25 11:36:51.29301304 +0000 UTC))\\\\\\\"\\\\nI1125 11:36:51.293171 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1230105117/tls.crt::/tmp/serving-cert-1230105117/tls.key\\\\\\\"\\\\nI1125 11:36:51.293210 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" 
certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1764070605\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1764070605\\\\\\\\\\\\\\\" (2025-11-25 10:36:45 +0000 UTC to 2026-11-25 10:36:45 +0000 UTC (now=2025-11-25 11:36:51.293188774 +0000 UTC))\\\\\\\"\\\\nI1125 11:36:51.293233 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI1125 11:36:51.293259 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI1125 11:36:51.293279 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nF1125 11:36:51.293378 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T11:36:34Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe85a38abd8df52ad0fbd3dd6b048b8c42390b6064d3601996727dadb3fcbe69\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:34Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"container
ID\\\":\\\"cri-o://ea87e7399f4267877f5e967eb62bd500b646e88ee8ee20a71c3bf7c0941f02a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ea87e7399f4267877f5e967eb62bd500b646e88ee8ee20a71c3bf7c0941f02a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T11:36:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T11:36:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T11:36:32Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:36:53Z is after 2025-08-24T17:21:41Z" Nov 25 11:36:53 crc kubenswrapper[4706]: I1125 11:36:53.392714 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"363ff191-6229-47e9-a7d0-1c72f21e7c61\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://71b496da1a81efbb50a84766e610a6b03e032a4e2cb5a71191395ffb85f6b1f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://83b1d9c60793e3e0b5943d7cccd50656df78c4655b84e12c8dd1ba7d99a7990d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab8621c83015577b9039ac2ba9ce46f8b29f66d77da31a02d179132d923741bd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4d0ce4e175dd8da8d15b26e60ced87ee11dc8079ce730cfbdce1b3f4f08b1d2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-11-25T11:36:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T11:36:32Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:36:53Z is after 2025-08-24T17:21:41Z" Nov 25 11:36:53 crc kubenswrapper[4706]: I1125 11:36:53.411015 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://998291d5af3be798ff4e2f00d043f615e086fef44e541071bbaf781983955ce6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-11-25T11:36:53Z is after 2025-08-24T17:21:41Z" Nov 25 11:36:53 crc kubenswrapper[4706]: I1125 11:36:53.521775 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 11:36:53 crc kubenswrapper[4706]: I1125 11:36:53.521930 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 11:36:53 crc kubenswrapper[4706]: I1125 11:36:53.521976 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 11:36:53 crc kubenswrapper[4706]: I1125 11:36:53.522004 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 11:36:53 crc kubenswrapper[4706]: E1125 11:36:53.522155 4706 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: 
object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 25 11:36:53 crc kubenswrapper[4706]: E1125 11:36:53.522177 4706 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 25 11:36:53 crc kubenswrapper[4706]: E1125 11:36:53.522190 4706 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 25 11:36:53 crc kubenswrapper[4706]: E1125 11:36:53.522245 4706 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-11-25 11:36:55.522226097 +0000 UTC m=+24.436783478 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 25 11:36:53 crc kubenswrapper[4706]: E1125 11:36:53.522678 4706 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Nov 25 11:36:53 crc kubenswrapper[4706]: E1125 11:36:53.522711 4706 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-25 11:36:55.522702267 +0000 UTC m=+24.437259648 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Nov 25 11:36:53 crc kubenswrapper[4706]: E1125 11:36:53.522789 4706 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 25 11:36:53 crc kubenswrapper[4706]: E1125 11:36:53.522802 4706 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 11:36:55.522793888 +0000 UTC m=+24.437351269 (durationBeforeRetry 2s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 11:36:53 crc kubenswrapper[4706]: E1125 11:36:53.522940 4706 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-25 11:36:55.522915401 +0000 UTC m=+24.437472782 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 25 11:36:53 crc kubenswrapper[4706]: I1125 11:36:53.623350 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 11:36:53 crc kubenswrapper[4706]: E1125 11:36:53.623555 4706 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 25 11:36:53 crc kubenswrapper[4706]: E1125 11:36:53.623577 4706 projected.go:288] Couldn't get configMap 
openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 25 11:36:53 crc kubenswrapper[4706]: E1125 11:36:53.623590 4706 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 25 11:36:53 crc kubenswrapper[4706]: E1125 11:36:53.623659 4706 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-11-25 11:36:55.623640959 +0000 UTC m=+24.538198340 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 25 11:36:53 crc kubenswrapper[4706]: I1125 11:36:53.876555 4706 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-daemon-dhfpm"] Nov 25 11:36:53 crc kubenswrapper[4706]: I1125 11:36:53.876945 4706 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/node-resolver-nh9sc"] Nov 25 11:36:53 crc kubenswrapper[4706]: I1125 11:36:53.877084 4706 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-additional-cni-plugins-cjmvf"] Nov 25 11:36:53 crc kubenswrapper[4706]: I1125 11:36:53.877669 4706 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-cjmvf" Nov 25 11:36:53 crc kubenswrapper[4706]: I1125 11:36:53.878368 4706 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-dhfpm" Nov 25 11:36:53 crc kubenswrapper[4706]: I1125 11:36:53.878634 4706 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-nh9sc" Nov 25 11:36:53 crc kubenswrapper[4706]: I1125 11:36:53.879014 4706 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-s47nr"] Nov 25 11:36:53 crc kubenswrapper[4706]: I1125 11:36:53.879203 4706 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-q9rpr"] Nov 25 11:36:53 crc kubenswrapper[4706]: W1125 11:36:53.879812 4706 reflector.go:561] object-"openshift-multus"/"default-cni-sysctl-allowlist": failed to list *v1.ConfigMap: configmaps "default-cni-sysctl-allowlist" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-multus": no relationship found between node 'crc' and this object Nov 25 11:36:53 crc kubenswrapper[4706]: E1125 11:36:53.879852 4706 reflector.go:158] "Unhandled Error" err="object-\"openshift-multus\"/\"default-cni-sysctl-allowlist\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"default-cni-sysctl-allowlist\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-multus\": no relationship found between node 'crc' and this object" logger="UnhandledError" Nov 25 11:36:53 crc kubenswrapper[4706]: I1125 11:36:53.880002 4706 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-q9rpr" Nov 25 11:36:53 crc kubenswrapper[4706]: I1125 11:36:53.880010 4706 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-s47nr" Nov 25 11:36:53 crc kubenswrapper[4706]: W1125 11:36:53.881417 4706 reflector.go:561] object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz": failed to list *v1.Secret: secrets "multus-ancillary-tools-dockercfg-vnmsz" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-multus": no relationship found between node 'crc' and this object Nov 25 11:36:53 crc kubenswrapper[4706]: E1125 11:36:53.881450 4706 reflector.go:158] "Unhandled Error" err="object-\"openshift-multus\"/\"multus-ancillary-tools-dockercfg-vnmsz\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"multus-ancillary-tools-dockercfg-vnmsz\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-multus\": no relationship found between node 'crc' and this object" logger="UnhandledError" Nov 25 11:36:53 crc kubenswrapper[4706]: W1125 11:36:53.881786 4706 reflector.go:561] object-"openshift-dns"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: configmaps "openshift-service-ca.crt" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-dns": no relationship found between node 'crc' and this object Nov 25 11:36:53 crc kubenswrapper[4706]: E1125 11:36:53.881816 4706 reflector.go:158] "Unhandled Error" err="object-\"openshift-dns\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"openshift-service-ca.crt\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-dns\": no relationship found between node 'crc' and this object" logger="UnhandledError" Nov 25 11:36:53 crc kubenswrapper[4706]: W1125 11:36:53.881848 4706 reflector.go:561] object-"openshift-multus"/"cni-copy-resources": failed to list *v1.ConfigMap: 
configmaps "cni-copy-resources" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-multus": no relationship found between node 'crc' and this object Nov 25 11:36:53 crc kubenswrapper[4706]: E1125 11:36:53.881872 4706 reflector.go:158] "Unhandled Error" err="object-\"openshift-multus\"/\"cni-copy-resources\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"cni-copy-resources\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-multus\": no relationship found between node 'crc' and this object" logger="UnhandledError" Nov 25 11:36:53 crc kubenswrapper[4706]: W1125 11:36:53.881938 4706 reflector.go:561] object-"openshift-machine-config-operator"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: configmaps "openshift-service-ca.crt" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-machine-config-operator": no relationship found between node 'crc' and this object Nov 25 11:36:53 crc kubenswrapper[4706]: E1125 11:36:53.881955 4706 reflector.go:158] "Unhandled Error" err="object-\"openshift-machine-config-operator\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"openshift-service-ca.crt\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-machine-config-operator\": no relationship found between node 'crc' and this object" logger="UnhandledError" Nov 25 11:36:53 crc kubenswrapper[4706]: W1125 11:36:53.884051 4706 reflector.go:561] object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq": failed to list *v1.Secret: secrets "machine-config-daemon-dockercfg-r5tcq" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace 
"openshift-machine-config-operator": no relationship found between node 'crc' and this object Nov 25 11:36:53 crc kubenswrapper[4706]: E1125 11:36:53.884083 4706 reflector.go:158] "Unhandled Error" err="object-\"openshift-machine-config-operator\"/\"machine-config-daemon-dockercfg-r5tcq\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"machine-config-daemon-dockercfg-r5tcq\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-machine-config-operator\": no relationship found between node 'crc' and this object" logger="UnhandledError" Nov 25 11:36:53 crc kubenswrapper[4706]: W1125 11:36:53.884181 4706 reflector.go:561] object-"openshift-machine-config-operator"/"proxy-tls": failed to list *v1.Secret: secrets "proxy-tls" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-machine-config-operator": no relationship found between node 'crc' and this object Nov 25 11:36:53 crc kubenswrapper[4706]: E1125 11:36:53.884212 4706 reflector.go:158] "Unhandled Error" err="object-\"openshift-machine-config-operator\"/\"proxy-tls\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"proxy-tls\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-machine-config-operator\": no relationship found between node 'crc' and this object" logger="UnhandledError" Nov 25 11:36:53 crc kubenswrapper[4706]: I1125 11:36:53.886317 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Nov 25 11:36:53 crc kubenswrapper[4706]: W1125 11:36:53.886482 4706 reflector.go:561] object-"openshift-machine-config-operator"/"kube-rbac-proxy": failed to list *v1.ConfigMap: configmaps "kube-rbac-proxy" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace 
"openshift-machine-config-operator": no relationship found between node 'crc' and this object Nov 25 11:36:53 crc kubenswrapper[4706]: E1125 11:36:53.886546 4706 reflector.go:158] "Unhandled Error" err="object-\"openshift-machine-config-operator\"/\"kube-rbac-proxy\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-rbac-proxy\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-machine-config-operator\": no relationship found between node 'crc' and this object" logger="UnhandledError" Nov 25 11:36:53 crc kubenswrapper[4706]: W1125 11:36:53.886578 4706 reflector.go:561] object-"openshift-multus"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-multus": no relationship found between node 'crc' and this object Nov 25 11:36:53 crc kubenswrapper[4706]: E1125 11:36:53.886601 4706 reflector.go:158] "Unhandled Error" err="object-\"openshift-multus\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-multus\": no relationship found between node 'crc' and this object" logger="UnhandledError" Nov 25 11:36:53 crc kubenswrapper[4706]: I1125 11:36:53.886665 4706 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Nov 25 11:36:53 crc kubenswrapper[4706]: I1125 11:36:53.886826 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Nov 25 11:36:53 crc kubenswrapper[4706]: I1125 11:36:53.886866 4706 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-machine-config-operator"/"kube-root-ca.crt" Nov 25 11:36:53 crc kubenswrapper[4706]: I1125 11:36:53.886915 4706 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Nov 25 11:36:53 crc kubenswrapper[4706]: I1125 11:36:53.886990 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Nov 25 11:36:53 crc kubenswrapper[4706]: I1125 11:36:53.886665 4706 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Nov 25 11:36:53 crc kubenswrapper[4706]: I1125 11:36:53.887107 4706 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Nov 25 11:36:53 crc kubenswrapper[4706]: I1125 11:36:53.887161 4706 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Nov 25 11:36:53 crc kubenswrapper[4706]: I1125 11:36:53.887284 4706 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Nov 25 11:36:53 crc kubenswrapper[4706]: I1125 11:36:53.887345 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Nov 25 11:36:53 crc kubenswrapper[4706]: I1125 11:36:53.887420 4706 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Nov 25 11:36:53 crc kubenswrapper[4706]: I1125 11:36:53.893531 4706 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Nov 25 11:36:53 crc kubenswrapper[4706]: I1125 11:36:53.895384 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:51Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:36:53Z is after 2025-08-24T17:21:41Z" Nov 25 11:36:53 crc kubenswrapper[4706]: I1125 11:36:53.915881 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://998291d5af3be798ff4e2f00d043f615e086fef44e541071bbaf781983955ce6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-11-25T11:36:53Z is after 2025-08-24T17:21:41Z" Nov 25 11:36:53 crc kubenswrapper[4706]: I1125 11:36:53.921211 4706 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 11:36:53 crc kubenswrapper[4706]: E1125 11:36:53.921344 4706 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 25 11:36:53 crc kubenswrapper[4706]: I1125 11:36:53.921466 4706 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 11:36:53 crc kubenswrapper[4706]: I1125 11:36:53.921208 4706 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 11:36:53 crc kubenswrapper[4706]: E1125 11:36:53.921704 4706 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 25 11:36:53 crc kubenswrapper[4706]: E1125 11:36:53.921627 4706 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 25 11:36:53 crc kubenswrapper[4706]: I1125 11:36:53.924821 4706 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01ab3dd5-8196-46d0-ad33-122e2ca51def" path="/var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes" Nov 25 11:36:53 crc kubenswrapper[4706]: I1125 11:36:53.925432 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/9912058e-28f5-4cec-9eeb-03e37e0dc5c1-system-cni-dir\") pod \"multus-s47nr\" (UID: \"9912058e-28f5-4cec-9eeb-03e37e0dc5c1\") " pod="openshift-multus/multus-s47nr" Nov 25 11:36:53 crc kubenswrapper[4706]: I1125 11:36:53.925470 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wfqx4\" (UniqueName: \"kubernetes.io/projected/9912058e-28f5-4cec-9eeb-03e37e0dc5c1-kube-api-access-wfqx4\") pod \"multus-s47nr\" (UID: \"9912058e-28f5-4cec-9eeb-03e37e0dc5c1\") " pod="openshift-multus/multus-s47nr" Nov 25 11:36:53 crc kubenswrapper[4706]: I1125 11:36:53.925543 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/f1218bae-4153-4490-8847-ab2d07ca0ab6-node-log\") pod \"ovnkube-node-q9rpr\" (UID: \"f1218bae-4153-4490-8847-ab2d07ca0ab6\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-q9rpr" Nov 25 11:36:53 crc kubenswrapper[4706]: I1125 11:36:53.925572 4706 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" path="/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes" Nov 25 11:36:53 crc kubenswrapper[4706]: I1125 11:36:53.925593 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/150b96fa-570a-4b32-a82a-3275127d5b51-tuning-conf-dir\") pod \"multus-additional-cni-plugins-cjmvf\" (UID: \"150b96fa-570a-4b32-a82a-3275127d5b51\") " pod="openshift-multus/multus-additional-cni-plugins-cjmvf" Nov 25 11:36:53 crc kubenswrapper[4706]: I1125 11:36:53.925616 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/9912058e-28f5-4cec-9eeb-03e37e0dc5c1-hostroot\") pod \"multus-s47nr\" (UID: \"9912058e-28f5-4cec-9eeb-03e37e0dc5c1\") " pod="openshift-multus/multus-s47nr" Nov 25 11:36:53 crc kubenswrapper[4706]: I1125 11:36:53.925648 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/150b96fa-570a-4b32-a82a-3275127d5b51-cnibin\") pod \"multus-additional-cni-plugins-cjmvf\" (UID: \"150b96fa-570a-4b32-a82a-3275127d5b51\") " pod="openshift-multus/multus-additional-cni-plugins-cjmvf" Nov 25 11:36:53 crc kubenswrapper[4706]: I1125 11:36:53.925685 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/f1218bae-4153-4490-8847-ab2d07ca0ab6-host-run-ovn-kubernetes\") pod \"ovnkube-node-q9rpr\" (UID: \"f1218bae-4153-4490-8847-ab2d07ca0ab6\") " pod="openshift-ovn-kubernetes/ovnkube-node-q9rpr" Nov 25 11:36:53 crc kubenswrapper[4706]: I1125 11:36:53.925711 
4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/f1218bae-4153-4490-8847-ab2d07ca0ab6-host-cni-bin\") pod \"ovnkube-node-q9rpr\" (UID: \"f1218bae-4153-4490-8847-ab2d07ca0ab6\") " pod="openshift-ovn-kubernetes/ovnkube-node-q9rpr" Nov 25 11:36:53 crc kubenswrapper[4706]: I1125 11:36:53.925732 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/7813e79d-885d-4cf1-ac27-039e998473b7-hosts-file\") pod \"node-resolver-nh9sc\" (UID: \"7813e79d-885d-4cf1-ac27-039e998473b7\") " pod="openshift-dns/node-resolver-nh9sc" Nov 25 11:36:53 crc kubenswrapper[4706]: I1125 11:36:53.925754 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0930887a-320c-4506-8c9c-f94d6d64516a-mcd-auth-proxy-config\") pod \"machine-config-daemon-dhfpm\" (UID: \"0930887a-320c-4506-8c9c-f94d6d64516a\") " pod="openshift-machine-config-operator/machine-config-daemon-dhfpm" Nov 25 11:36:53 crc kubenswrapper[4706]: I1125 11:36:53.925780 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/9912058e-28f5-4cec-9eeb-03e37e0dc5c1-multus-daemon-config\") pod \"multus-s47nr\" (UID: \"9912058e-28f5-4cec-9eeb-03e37e0dc5c1\") " pod="openshift-multus/multus-s47nr" Nov 25 11:36:53 crc kubenswrapper[4706]: I1125 11:36:53.925800 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/f1218bae-4153-4490-8847-ab2d07ca0ab6-run-systemd\") pod \"ovnkube-node-q9rpr\" (UID: \"f1218bae-4153-4490-8847-ab2d07ca0ab6\") " pod="openshift-ovn-kubernetes/ovnkube-node-q9rpr" Nov 25 11:36:53 crc kubenswrapper[4706]: 
I1125 11:36:53.925820 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/f1218bae-4153-4490-8847-ab2d07ca0ab6-etc-openvswitch\") pod \"ovnkube-node-q9rpr\" (UID: \"f1218bae-4153-4490-8847-ab2d07ca0ab6\") " pod="openshift-ovn-kubernetes/ovnkube-node-q9rpr" Nov 25 11:36:53 crc kubenswrapper[4706]: I1125 11:36:53.925841 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0930887a-320c-4506-8c9c-f94d6d64516a-proxy-tls\") pod \"machine-config-daemon-dhfpm\" (UID: \"0930887a-320c-4506-8c9c-f94d6d64516a\") " pod="openshift-machine-config-operator/machine-config-daemon-dhfpm" Nov 25 11:36:53 crc kubenswrapper[4706]: I1125 11:36:53.925856 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/f1218bae-4153-4490-8847-ab2d07ca0ab6-host-slash\") pod \"ovnkube-node-q9rpr\" (UID: \"f1218bae-4153-4490-8847-ab2d07ca0ab6\") " pod="openshift-ovn-kubernetes/ovnkube-node-q9rpr" Nov 25 11:36:53 crc kubenswrapper[4706]: I1125 11:36:53.925873 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/f1218bae-4153-4490-8847-ab2d07ca0ab6-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-q9rpr\" (UID: \"f1218bae-4153-4490-8847-ab2d07ca0ab6\") " pod="openshift-ovn-kubernetes/ovnkube-node-q9rpr" Nov 25 11:36:53 crc kubenswrapper[4706]: I1125 11:36:53.925896 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/f1218bae-4153-4490-8847-ab2d07ca0ab6-env-overrides\") pod \"ovnkube-node-q9rpr\" (UID: \"f1218bae-4153-4490-8847-ab2d07ca0ab6\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-q9rpr" Nov 25 11:36:53 crc kubenswrapper[4706]: I1125 11:36:53.925935 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/9912058e-28f5-4cec-9eeb-03e37e0dc5c1-multus-cni-dir\") pod \"multus-s47nr\" (UID: \"9912058e-28f5-4cec-9eeb-03e37e0dc5c1\") " pod="openshift-multus/multus-s47nr" Nov 25 11:36:53 crc kubenswrapper[4706]: I1125 11:36:53.926001 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/9912058e-28f5-4cec-9eeb-03e37e0dc5c1-host-var-lib-kubelet\") pod \"multus-s47nr\" (UID: \"9912058e-28f5-4cec-9eeb-03e37e0dc5c1\") " pod="openshift-multus/multus-s47nr" Nov 25 11:36:53 crc kubenswrapper[4706]: I1125 11:36:53.926054 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/9912058e-28f5-4cec-9eeb-03e37e0dc5c1-etc-kubernetes\") pod \"multus-s47nr\" (UID: \"9912058e-28f5-4cec-9eeb-03e37e0dc5c1\") " pod="openshift-multus/multus-s47nr" Nov 25 11:36:53 crc kubenswrapper[4706]: I1125 11:36:53.926088 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/f1218bae-4153-4490-8847-ab2d07ca0ab6-host-kubelet\") pod \"ovnkube-node-q9rpr\" (UID: \"f1218bae-4153-4490-8847-ab2d07ca0ab6\") " pod="openshift-ovn-kubernetes/ovnkube-node-q9rpr" Nov 25 11:36:53 crc kubenswrapper[4706]: I1125 11:36:53.926121 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/9912058e-28f5-4cec-9eeb-03e37e0dc5c1-cni-binary-copy\") pod \"multus-s47nr\" (UID: \"9912058e-28f5-4cec-9eeb-03e37e0dc5c1\") " pod="openshift-multus/multus-s47nr" Nov 
25 11:36:53 crc kubenswrapper[4706]: I1125 11:36:53.926151 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/9912058e-28f5-4cec-9eeb-03e37e0dc5c1-multus-socket-dir-parent\") pod \"multus-s47nr\" (UID: \"9912058e-28f5-4cec-9eeb-03e37e0dc5c1\") " pod="openshift-multus/multus-s47nr" Nov 25 11:36:53 crc kubenswrapper[4706]: I1125 11:36:53.926183 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/9912058e-28f5-4cec-9eeb-03e37e0dc5c1-host-var-lib-cni-multus\") pod \"multus-s47nr\" (UID: \"9912058e-28f5-4cec-9eeb-03e37e0dc5c1\") " pod="openshift-multus/multus-s47nr" Nov 25 11:36:53 crc kubenswrapper[4706]: I1125 11:36:53.926219 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/f1218bae-4153-4490-8847-ab2d07ca0ab6-ovn-node-metrics-cert\") pod \"ovnkube-node-q9rpr\" (UID: \"f1218bae-4153-4490-8847-ab2d07ca0ab6\") " pod="openshift-ovn-kubernetes/ovnkube-node-q9rpr" Nov 25 11:36:53 crc kubenswrapper[4706]: I1125 11:36:53.926261 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/150b96fa-570a-4b32-a82a-3275127d5b51-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-cjmvf\" (UID: \"150b96fa-570a-4b32-a82a-3275127d5b51\") " pod="openshift-multus/multus-additional-cni-plugins-cjmvf" Nov 25 11:36:53 crc kubenswrapper[4706]: I1125 11:36:53.926341 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/f1218bae-4153-4490-8847-ab2d07ca0ab6-host-cni-netd\") pod \"ovnkube-node-q9rpr\" (UID: 
\"f1218bae-4153-4490-8847-ab2d07ca0ab6\") " pod="openshift-ovn-kubernetes/ovnkube-node-q9rpr" Nov 25 11:36:53 crc kubenswrapper[4706]: I1125 11:36:53.926400 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/9912058e-28f5-4cec-9eeb-03e37e0dc5c1-host-run-netns\") pod \"multus-s47nr\" (UID: \"9912058e-28f5-4cec-9eeb-03e37e0dc5c1\") " pod="openshift-multus/multus-s47nr" Nov 25 11:36:53 crc kubenswrapper[4706]: I1125 11:36:53.926430 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/9912058e-28f5-4cec-9eeb-03e37e0dc5c1-multus-conf-dir\") pod \"multus-s47nr\" (UID: \"9912058e-28f5-4cec-9eeb-03e37e0dc5c1\") " pod="openshift-multus/multus-s47nr" Nov 25 11:36:53 crc kubenswrapper[4706]: I1125 11:36:53.926462 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/f1218bae-4153-4490-8847-ab2d07ca0ab6-run-ovn\") pod \"ovnkube-node-q9rpr\" (UID: \"f1218bae-4153-4490-8847-ab2d07ca0ab6\") " pod="openshift-ovn-kubernetes/ovnkube-node-q9rpr" Nov 25 11:36:53 crc kubenswrapper[4706]: I1125 11:36:53.926494 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g9gvf\" (UniqueName: \"kubernetes.io/projected/7813e79d-885d-4cf1-ac27-039e998473b7-kube-api-access-g9gvf\") pod \"node-resolver-nh9sc\" (UID: \"7813e79d-885d-4cf1-ac27-039e998473b7\") " pod="openshift-dns/node-resolver-nh9sc" Nov 25 11:36:53 crc kubenswrapper[4706]: I1125 11:36:53.926527 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/9912058e-28f5-4cec-9eeb-03e37e0dc5c1-cnibin\") pod \"multus-s47nr\" (UID: \"9912058e-28f5-4cec-9eeb-03e37e0dc5c1\") " 
pod="openshift-multus/multus-s47nr" Nov 25 11:36:53 crc kubenswrapper[4706]: I1125 11:36:53.926557 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/9912058e-28f5-4cec-9eeb-03e37e0dc5c1-host-var-lib-cni-bin\") pod \"multus-s47nr\" (UID: \"9912058e-28f5-4cec-9eeb-03e37e0dc5c1\") " pod="openshift-multus/multus-s47nr" Nov 25 11:36:53 crc kubenswrapper[4706]: I1125 11:36:53.926589 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/150b96fa-570a-4b32-a82a-3275127d5b51-system-cni-dir\") pod \"multus-additional-cni-plugins-cjmvf\" (UID: \"150b96fa-570a-4b32-a82a-3275127d5b51\") " pod="openshift-multus/multus-additional-cni-plugins-cjmvf" Nov 25 11:36:53 crc kubenswrapper[4706]: I1125 11:36:53.926636 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/f1218bae-4153-4490-8847-ab2d07ca0ab6-host-run-netns\") pod \"ovnkube-node-q9rpr\" (UID: \"f1218bae-4153-4490-8847-ab2d07ca0ab6\") " pod="openshift-ovn-kubernetes/ovnkube-node-q9rpr" Nov 25 11:36:53 crc kubenswrapper[4706]: I1125 11:36:53.926668 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/f1218bae-4153-4490-8847-ab2d07ca0ab6-run-openvswitch\") pod \"ovnkube-node-q9rpr\" (UID: \"f1218bae-4153-4490-8847-ab2d07ca0ab6\") " pod="openshift-ovn-kubernetes/ovnkube-node-q9rpr" Nov 25 11:36:53 crc kubenswrapper[4706]: I1125 11:36:53.926711 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/0930887a-320c-4506-8c9c-f94d6d64516a-rootfs\") pod \"machine-config-daemon-dhfpm\" (UID: 
\"0930887a-320c-4506-8c9c-f94d6d64516a\") " pod="openshift-machine-config-operator/machine-config-daemon-dhfpm" Nov 25 11:36:53 crc kubenswrapper[4706]: I1125 11:36:53.926747 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/150b96fa-570a-4b32-a82a-3275127d5b51-cni-binary-copy\") pod \"multus-additional-cni-plugins-cjmvf\" (UID: \"150b96fa-570a-4b32-a82a-3275127d5b51\") " pod="openshift-multus/multus-additional-cni-plugins-cjmvf" Nov 25 11:36:53 crc kubenswrapper[4706]: I1125 11:36:53.926781 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/150b96fa-570a-4b32-a82a-3275127d5b51-os-release\") pod \"multus-additional-cni-plugins-cjmvf\" (UID: \"150b96fa-570a-4b32-a82a-3275127d5b51\") " pod="openshift-multus/multus-additional-cni-plugins-cjmvf" Nov 25 11:36:53 crc kubenswrapper[4706]: I1125 11:36:53.926827 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/f1218bae-4153-4490-8847-ab2d07ca0ab6-ovnkube-script-lib\") pod \"ovnkube-node-q9rpr\" (UID: \"f1218bae-4153-4490-8847-ab2d07ca0ab6\") " pod="openshift-ovn-kubernetes/ovnkube-node-q9rpr" Nov 25 11:36:53 crc kubenswrapper[4706]: I1125 11:36:53.926858 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b55sf\" (UniqueName: \"kubernetes.io/projected/f1218bae-4153-4490-8847-ab2d07ca0ab6-kube-api-access-b55sf\") pod \"ovnkube-node-q9rpr\" (UID: \"f1218bae-4153-4490-8847-ab2d07ca0ab6\") " pod="openshift-ovn-kubernetes/ovnkube-node-q9rpr" Nov 25 11:36:53 crc kubenswrapper[4706]: I1125 11:36:53.926888 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g7sgt\" (UniqueName: 
\"kubernetes.io/projected/0930887a-320c-4506-8c9c-f94d6d64516a-kube-api-access-g7sgt\") pod \"machine-config-daemon-dhfpm\" (UID: \"0930887a-320c-4506-8c9c-f94d6d64516a\") " pod="openshift-machine-config-operator/machine-config-daemon-dhfpm" Nov 25 11:36:53 crc kubenswrapper[4706]: I1125 11:36:53.926915 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/9912058e-28f5-4cec-9eeb-03e37e0dc5c1-host-run-multus-certs\") pod \"multus-s47nr\" (UID: \"9912058e-28f5-4cec-9eeb-03e37e0dc5c1\") " pod="openshift-multus/multus-s47nr" Nov 25 11:36:53 crc kubenswrapper[4706]: I1125 11:36:53.926941 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/f1218bae-4153-4490-8847-ab2d07ca0ab6-systemd-units\") pod \"ovnkube-node-q9rpr\" (UID: \"f1218bae-4153-4490-8847-ab2d07ca0ab6\") " pod="openshift-ovn-kubernetes/ovnkube-node-q9rpr" Nov 25 11:36:53 crc kubenswrapper[4706]: I1125 11:36:53.926971 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/f1218bae-4153-4490-8847-ab2d07ca0ab6-log-socket\") pod \"ovnkube-node-q9rpr\" (UID: \"f1218bae-4153-4490-8847-ab2d07ca0ab6\") " pod="openshift-ovn-kubernetes/ovnkube-node-q9rpr" Nov 25 11:36:53 crc kubenswrapper[4706]: I1125 11:36:53.926998 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/f1218bae-4153-4490-8847-ab2d07ca0ab6-ovnkube-config\") pod \"ovnkube-node-q9rpr\" (UID: \"f1218bae-4153-4490-8847-ab2d07ca0ab6\") " pod="openshift-ovn-kubernetes/ovnkube-node-q9rpr" Nov 25 11:36:53 crc kubenswrapper[4706]: I1125 11:36:53.927029 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/9912058e-28f5-4cec-9eeb-03e37e0dc5c1-os-release\") pod \"multus-s47nr\" (UID: \"9912058e-28f5-4cec-9eeb-03e37e0dc5c1\") " pod="openshift-multus/multus-s47nr" Nov 25 11:36:53 crc kubenswrapper[4706]: I1125 11:36:53.927043 4706 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09efc573-dbb6-4249-bd59-9b87aba8dd28" path="/var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes" Nov 25 11:36:53 crc kubenswrapper[4706]: I1125 11:36:53.927058 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/9912058e-28f5-4cec-9eeb-03e37e0dc5c1-host-run-k8s-cni-cncf-io\") pod \"multus-s47nr\" (UID: \"9912058e-28f5-4cec-9eeb-03e37e0dc5c1\") " pod="openshift-multus/multus-s47nr" Nov 25 11:36:53 crc kubenswrapper[4706]: I1125 11:36:53.927110 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/f1218bae-4153-4490-8847-ab2d07ca0ab6-var-lib-openvswitch\") pod \"ovnkube-node-q9rpr\" (UID: \"f1218bae-4153-4490-8847-ab2d07ca0ab6\") " pod="openshift-ovn-kubernetes/ovnkube-node-q9rpr" Nov 25 11:36:53 crc kubenswrapper[4706]: I1125 11:36:53.927142 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d2ml6\" (UniqueName: \"kubernetes.io/projected/150b96fa-570a-4b32-a82a-3275127d5b51-kube-api-access-d2ml6\") pod \"multus-additional-cni-plugins-cjmvf\" (UID: \"150b96fa-570a-4b32-a82a-3275127d5b51\") " pod="openshift-multus/multus-additional-cni-plugins-cjmvf" Nov 25 11:36:53 crc kubenswrapper[4706]: I1125 11:36:53.927653 4706 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b574797-001e-440a-8f4e-c0be86edad0f" path="/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes" Nov 25 11:36:53 crc 
kubenswrapper[4706]: I1125 11:36:53.928646 4706 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b78653f-4ff9-4508-8672-245ed9b561e3" path="/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes" Nov 25 11:36:53 crc kubenswrapper[4706]: I1125 11:36:53.929227 4706 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1386a44e-36a2-460c-96d0-0359d2b6f0f5" path="/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes" Nov 25 11:36:53 crc kubenswrapper[4706]: I1125 11:36:53.930085 4706 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1bf7eb37-55a3-4c65-b768-a94c82151e69" path="/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes" Nov 25 11:36:53 crc kubenswrapper[4706]: I1125 11:36:53.930950 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:51Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:36:53Z is after 2025-08-24T17:21:41Z" Nov 25 11:36:53 crc kubenswrapper[4706]: I1125 11:36:53.931525 4706 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d611f23-29be-4491-8495-bee1670e935f" path="/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes" Nov 25 11:36:53 crc kubenswrapper[4706]: I1125 11:36:53.932459 4706 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20b0d48f-5fd6-431c-a545-e3c800c7b866" path="/var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/volumes" Nov 25 11:36:53 crc kubenswrapper[4706]: I1125 11:36:53.935853 4706 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" path="/var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes" Nov 25 11:36:53 crc kubenswrapper[4706]: I1125 11:36:53.936677 4706 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22c825df-677d-4ca6-82db-3454ed06e783" path="/var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes" Nov 25 11:36:53 crc kubenswrapper[4706]: I1125 11:36:53.937874 4706 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25e176fe-21b4-4974-b1ed-c8b94f112a7f" path="/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes" Nov 25 11:36:53 crc kubenswrapper[4706]: I1125 11:36:53.938696 4706 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" path="/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes" Nov 25 11:36:53 crc kubenswrapper[4706]: I1125 11:36:53.939685 4706 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31d8b7a1-420e-4252-a5b7-eebe8a111292" path="/var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes" Nov 25 11:36:53 crc kubenswrapper[4706]: I1125 11:36:53.940217 4706 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ab1a177-2de0-46d9-b765-d0d0649bb42e" path="/var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/volumes" Nov 25 11:36:53 crc kubenswrapper[4706]: I1125 11:36:53.941106 4706 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" path="/var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes" Nov 25 11:36:53 crc kubenswrapper[4706]: I1125 11:36:53.941691 4706 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="43509403-f426-496e-be36-56cef71462f5" path="/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes" Nov 25 11:36:53 crc kubenswrapper[4706]: I1125 11:36:53.942092 4706 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44663579-783b-4372-86d6-acf235a62d72" path="/var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/volumes" Nov 25 11:36:53 crc kubenswrapper[4706]: I1125 11:36:53.943131 4706 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="496e6271-fb68-4057-954e-a0d97a4afa3f" path="/var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes" Nov 25 11:36:53 crc kubenswrapper[4706]: I1125 11:36:53.943796 4706 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" path="/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes" Nov 25 11:36:53 crc kubenswrapper[4706]: I1125 11:36:53.944351 4706 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49ef4625-1d3a-4a9f-b595-c2433d32326d" path="/var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/volumes" Nov 25 11:36:53 crc kubenswrapper[4706]: I1125 11:36:53.945172 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:51Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:36:53Z is after 2025-08-24T17:21:41Z" Nov 25 11:36:53 crc kubenswrapper[4706]: I1125 11:36:53.945556 4706 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4bb40260-dbaa-4fb0-84df-5e680505d512" path="/var/lib/kubelet/pods/4bb40260-dbaa-4fb0-84df-5e680505d512/volumes" Nov 25 11:36:53 crc kubenswrapper[4706]: I1125 11:36:53.945986 4706 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5225d0e4-402f-4861-b410-819f433b1803" path="/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes" Nov 25 11:36:53 crc kubenswrapper[4706]: I1125 
11:36:53.947003 4706 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5441d097-087c-4d9a-baa8-b210afa90fc9" path="/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes" Nov 25 11:36:53 crc kubenswrapper[4706]: I1125 11:36:53.947474 4706 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="57a731c4-ef35-47a8-b875-bfb08a7f8011" path="/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes" Nov 25 11:36:53 crc kubenswrapper[4706]: I1125 11:36:53.948723 4706 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b88f790-22fa-440e-b583-365168c0b23d" path="/var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/volumes" Nov 25 11:36:53 crc kubenswrapper[4706]: I1125 11:36:53.949407 4706 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5fe579f8-e8a6-4643-bce5-a661393c4dde" path="/var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/volumes" Nov 25 11:36:53 crc kubenswrapper[4706]: I1125 11:36:53.949866 4706 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6402fda4-df10-493c-b4e5-d0569419652d" path="/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes" Nov 25 11:36:53 crc kubenswrapper[4706]: I1125 11:36:53.952087 4706 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6509e943-70c6-444c-bc41-48a544e36fbd" path="/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes" Nov 25 11:36:53 crc kubenswrapper[4706]: I1125 11:36:53.952607 4706 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6731426b-95fe-49ff-bb5f-40441049fde2" path="/var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/volumes" Nov 25 11:36:53 crc kubenswrapper[4706]: I1125 11:36:53.953086 4706 kubelet_volumes.go:152] "Cleaned up orphaned volume subpath from pod" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" 
path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volume-subpaths/run-systemd/ovnkube-controller/6" Nov 25 11:36:53 crc kubenswrapper[4706]: I1125 11:36:53.953190 4706 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volumes" Nov 25 11:36:53 crc kubenswrapper[4706]: I1125 11:36:53.954549 4706 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7539238d-5fe0-46ed-884e-1c3b566537ec" path="/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes" Nov 25 11:36:53 crc kubenswrapper[4706]: I1125 11:36:53.956445 4706 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7583ce53-e0fe-4a16-9e4d-50516596a136" path="/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes" Nov 25 11:36:53 crc kubenswrapper[4706]: I1125 11:36:53.956566 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:51Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:36:53Z is after 2025-08-24T17:21:41Z" Nov 25 11:36:53 crc kubenswrapper[4706]: I1125 11:36:53.956973 4706 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7bb08738-c794-4ee8-9972-3a62ca171029" path="/var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes" Nov 25 11:36:53 crc kubenswrapper[4706]: I1125 11:36:53.958198 4706 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87cf06ed-a83f-41a7-828d-70653580a8cb" 
path="/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes" Nov 25 11:36:53 crc kubenswrapper[4706]: I1125 11:36:53.958905 4706 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" path="/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes" Nov 25 11:36:53 crc kubenswrapper[4706]: I1125 11:36:53.960476 4706 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="925f1c65-6136-48ba-85aa-3a3b50560753" path="/var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes" Nov 25 11:36:53 crc kubenswrapper[4706]: I1125 11:36:53.961286 4706 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" path="/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/volumes" Nov 25 11:36:53 crc kubenswrapper[4706]: I1125 11:36:53.962423 4706 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d4552c7-cd75-42dd-8880-30dd377c49a4" path="/var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes" Nov 25 11:36:53 crc kubenswrapper[4706]: I1125 11:36:53.962960 4706 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" path="/var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/volumes" Nov 25 11:36:53 crc kubenswrapper[4706]: I1125 11:36:53.964030 4706 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a31745f5-9847-4afe-82a5-3161cc66ca93" path="/var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes" Nov 25 11:36:53 crc kubenswrapper[4706]: I1125 11:36:53.964784 4706 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" path="/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes" Nov 25 11:36:53 crc kubenswrapper[4706]: I1125 11:36:53.965832 4706 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6312bbd-5731-4ea0-a20f-81d5a57df44a" 
path="/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/volumes" Nov 25 11:36:53 crc kubenswrapper[4706]: I1125 11:36:53.966323 4706 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" path="/var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes" Nov 25 11:36:53 crc kubenswrapper[4706]: I1125 11:36:53.967267 4706 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" path="/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes" Nov 25 11:36:53 crc kubenswrapper[4706]: I1125 11:36:53.968044 4706 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" path="/var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/volumes" Nov 25 11:36:53 crc kubenswrapper[4706]: I1125 11:36:53.969419 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"363ff191-6229-47e9-a7d0-1c72f21e7c61\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://71b496da1a81efbb50a84766e610a6b03e032a4e2cb5a71191395ffb85f6b1f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://83b1d9c60793e3e0b5943d7cccd50656df78c4655b84e12c8dd1ba7d99a7990d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab8621c83015577b9039ac2ba9ce46f8b29f66d77da31a02d179132d923741bd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4d0ce4e175dd8da8d15b26e60ced87ee11dc8079ce730cfbdce1b3f4f08b1d2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-11-25T11:36:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T11:36:32Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:36:53Z is after 2025-08-24T17:21:41Z" Nov 25 11:36:53 crc kubenswrapper[4706]: I1125 11:36:53.969538 4706 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf126b07-da06-4140-9a57-dfd54fc6b486" path="/var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes" Nov 25 11:36:53 crc kubenswrapper[4706]: I1125 11:36:53.970149 4706 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c03ee662-fb2f-4fc4-a2c1-af487c19d254" path="/var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes" Nov 25 11:36:53 crc kubenswrapper[4706]: I1125 11:36:53.971126 4706 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" path="/var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/volumes" Nov 25 11:36:53 crc kubenswrapper[4706]: I1125 11:36:53.971699 4706 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e7e6199b-1264-4501-8953-767f51328d08" path="/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes" Nov 25 11:36:53 crc kubenswrapper[4706]: I1125 11:36:53.972348 4706 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="efdd0498-1daa-4136-9a4a-3b948c2293fc" path="/var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/volumes" Nov 25 11:36:53 crc kubenswrapper[4706]: I1125 11:36:53.973313 4706 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" path="/var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/volumes" Nov 25 11:36:53 crc kubenswrapper[4706]: I1125 11:36:53.973761 4706 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fda69060-fa79-4696-b1a6-7980f124bf7c" path="/var/lib/kubelet/pods/fda69060-fa79-4696-b1a6-7980f124bf7c/volumes" Nov 25 11:36:53 crc kubenswrapper[4706]: I1125 11:36:53.987792 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"21277b4b-1e5d-4345-ba2a-39957194f021\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e336808761e1c6c5eaa04fd06cbb4d0c0384a2cbd3dfd4c1b3a877e7e0f0c82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"im
ageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cfaf9f13d49eb5c52817b0d082263791cc1dca82a23282452f1393dd693ca27a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://634b7b0df29329562f6ead9641186eee129945efc5a2d784ff6474d213b2baea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\
"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b3642576d5ecf314b809b90f8a76244e5ea54178f78729eb6521b09b7daa9c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b63b9c87fed8e56acef62af3c5b75cf637a058ada9dd8ef5afc317e99e12162\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\
\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://adfea6c3d1d62c9ce2656cae203bccf32ab19165305db3731e3db92915dd7d4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://adfea6c3d1d62c9ce2656cae203bccf32ab19165305db3731e3db92915dd7d4c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T11:36:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T11:36:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://29e8fd847ab683471a692fda5b7c7d6105db11c5aecf09bb23080c58cd97c06a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://29e8fd847ab683471a692fda5b7c7d6105db11c5aecf09bb23080c58cd97c06a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T11:36:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T11:36:34Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://a4b1f85fd0291c239854093c0b26f0802291cbe4c4bab384fc69a5d21165343a\\\",\\\"image\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a4b1f85fd0291c239854093c0b26f0802291cbe4c4bab384fc69a5d21165343a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T11:36:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T11:36:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T11:36:32Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:36:53Z is after 2025-08-24T17:21:41Z" Nov 25 11:36:54 crc kubenswrapper[4706]: I1125 11:36:54.003365 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://23abd4bcc68d2a090882edb55d0e8569032affe5f4ebf05279e18ba3e9f9d8db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a068e34d29a7f39157ffd6e364ce643f5280f5184c13a281043247117d451364\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:36:53Z is after 2025-08-24T17:21:41Z" Nov 25 11:36:54 crc kubenswrapper[4706]: I1125 11:36:54.019529 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-cjmvf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"150b96fa-570a-4b32-a82a-3275127d5b51\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:53Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni 
whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:53Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:53Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2ml6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\
\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2ml6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2ml6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,
\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2ml6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2ml6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2ml6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/ope
nshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2ml6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T11:36:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-cjmvf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:36:54Z is after 2025-08-24T17:21:41Z" Nov 25 11:36:54 crc kubenswrapper[4706]: I1125 11:36:54.028259 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/f1218bae-4153-4490-8847-ab2d07ca0ab6-host-run-ovn-kubernetes\") pod \"ovnkube-node-q9rpr\" (UID: \"f1218bae-4153-4490-8847-ab2d07ca0ab6\") " pod="openshift-ovn-kubernetes/ovnkube-node-q9rpr" Nov 25 11:36:54 crc kubenswrapper[4706]: I1125 11:36:54.028307 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/f1218bae-4153-4490-8847-ab2d07ca0ab6-host-cni-bin\") pod \"ovnkube-node-q9rpr\" (UID: \"f1218bae-4153-4490-8847-ab2d07ca0ab6\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-q9rpr" Nov 25 11:36:54 crc kubenswrapper[4706]: I1125 11:36:54.028330 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/7813e79d-885d-4cf1-ac27-039e998473b7-hosts-file\") pod \"node-resolver-nh9sc\" (UID: \"7813e79d-885d-4cf1-ac27-039e998473b7\") " pod="openshift-dns/node-resolver-nh9sc" Nov 25 11:36:54 crc kubenswrapper[4706]: I1125 11:36:54.028351 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/9912058e-28f5-4cec-9eeb-03e37e0dc5c1-multus-daemon-config\") pod \"multus-s47nr\" (UID: \"9912058e-28f5-4cec-9eeb-03e37e0dc5c1\") " pod="openshift-multus/multus-s47nr" Nov 25 11:36:54 crc kubenswrapper[4706]: I1125 11:36:54.028367 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/f1218bae-4153-4490-8847-ab2d07ca0ab6-run-systemd\") pod \"ovnkube-node-q9rpr\" (UID: \"f1218bae-4153-4490-8847-ab2d07ca0ab6\") " pod="openshift-ovn-kubernetes/ovnkube-node-q9rpr" Nov 25 11:36:54 crc kubenswrapper[4706]: I1125 11:36:54.028382 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/f1218bae-4153-4490-8847-ab2d07ca0ab6-etc-openvswitch\") pod \"ovnkube-node-q9rpr\" (UID: \"f1218bae-4153-4490-8847-ab2d07ca0ab6\") " pod="openshift-ovn-kubernetes/ovnkube-node-q9rpr" Nov 25 11:36:54 crc kubenswrapper[4706]: I1125 11:36:54.028396 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0930887a-320c-4506-8c9c-f94d6d64516a-proxy-tls\") pod \"machine-config-daemon-dhfpm\" (UID: \"0930887a-320c-4506-8c9c-f94d6d64516a\") " pod="openshift-machine-config-operator/machine-config-daemon-dhfpm" Nov 25 11:36:54 crc kubenswrapper[4706]: 
I1125 11:36:54.028399 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/f1218bae-4153-4490-8847-ab2d07ca0ab6-host-run-ovn-kubernetes\") pod \"ovnkube-node-q9rpr\" (UID: \"f1218bae-4153-4490-8847-ab2d07ca0ab6\") " pod="openshift-ovn-kubernetes/ovnkube-node-q9rpr" Nov 25 11:36:54 crc kubenswrapper[4706]: I1125 11:36:54.028441 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/f1218bae-4153-4490-8847-ab2d07ca0ab6-etc-openvswitch\") pod \"ovnkube-node-q9rpr\" (UID: \"f1218bae-4153-4490-8847-ab2d07ca0ab6\") " pod="openshift-ovn-kubernetes/ovnkube-node-q9rpr" Nov 25 11:36:54 crc kubenswrapper[4706]: I1125 11:36:54.028410 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0930887a-320c-4506-8c9c-f94d6d64516a-mcd-auth-proxy-config\") pod \"machine-config-daemon-dhfpm\" (UID: \"0930887a-320c-4506-8c9c-f94d6d64516a\") " pod="openshift-machine-config-operator/machine-config-daemon-dhfpm" Nov 25 11:36:54 crc kubenswrapper[4706]: I1125 11:36:54.028484 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/f1218bae-4153-4490-8847-ab2d07ca0ab6-run-systemd\") pod \"ovnkube-node-q9rpr\" (UID: \"f1218bae-4153-4490-8847-ab2d07ca0ab6\") " pod="openshift-ovn-kubernetes/ovnkube-node-q9rpr" Nov 25 11:36:54 crc kubenswrapper[4706]: I1125 11:36:54.028520 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/f1218bae-4153-4490-8847-ab2d07ca0ab6-host-slash\") pod \"ovnkube-node-q9rpr\" (UID: \"f1218bae-4153-4490-8847-ab2d07ca0ab6\") " pod="openshift-ovn-kubernetes/ovnkube-node-q9rpr" Nov 25 11:36:54 crc kubenswrapper[4706]: I1125 11:36:54.028577 4706 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/9912058e-28f5-4cec-9eeb-03e37e0dc5c1-multus-cni-dir\") pod \"multus-s47nr\" (UID: \"9912058e-28f5-4cec-9eeb-03e37e0dc5c1\") " pod="openshift-multus/multus-s47nr" Nov 25 11:36:54 crc kubenswrapper[4706]: I1125 11:36:54.028603 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/9912058e-28f5-4cec-9eeb-03e37e0dc5c1-host-var-lib-kubelet\") pod \"multus-s47nr\" (UID: \"9912058e-28f5-4cec-9eeb-03e37e0dc5c1\") " pod="openshift-multus/multus-s47nr" Nov 25 11:36:54 crc kubenswrapper[4706]: I1125 11:36:54.028626 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/9912058e-28f5-4cec-9eeb-03e37e0dc5c1-etc-kubernetes\") pod \"multus-s47nr\" (UID: \"9912058e-28f5-4cec-9eeb-03e37e0dc5c1\") " pod="openshift-multus/multus-s47nr" Nov 25 11:36:54 crc kubenswrapper[4706]: I1125 11:36:54.028636 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/9912058e-28f5-4cec-9eeb-03e37e0dc5c1-host-var-lib-kubelet\") pod \"multus-s47nr\" (UID: \"9912058e-28f5-4cec-9eeb-03e37e0dc5c1\") " pod="openshift-multus/multus-s47nr" Nov 25 11:36:54 crc kubenswrapper[4706]: I1125 11:36:54.028604 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/f1218bae-4153-4490-8847-ab2d07ca0ab6-host-slash\") pod \"ovnkube-node-q9rpr\" (UID: \"f1218bae-4153-4490-8847-ab2d07ca0ab6\") " pod="openshift-ovn-kubernetes/ovnkube-node-q9rpr" Nov 25 11:36:54 crc kubenswrapper[4706]: I1125 11:36:54.028650 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/f1218bae-4153-4490-8847-ab2d07ca0ab6-host-kubelet\") 
pod \"ovnkube-node-q9rpr\" (UID: \"f1218bae-4153-4490-8847-ab2d07ca0ab6\") " pod="openshift-ovn-kubernetes/ovnkube-node-q9rpr" Nov 25 11:36:54 crc kubenswrapper[4706]: I1125 11:36:54.028690 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/f1218bae-4153-4490-8847-ab2d07ca0ab6-host-kubelet\") pod \"ovnkube-node-q9rpr\" (UID: \"f1218bae-4153-4490-8847-ab2d07ca0ab6\") " pod="openshift-ovn-kubernetes/ovnkube-node-q9rpr" Nov 25 11:36:54 crc kubenswrapper[4706]: I1125 11:36:54.028567 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/7813e79d-885d-4cf1-ac27-039e998473b7-hosts-file\") pod \"node-resolver-nh9sc\" (UID: \"7813e79d-885d-4cf1-ac27-039e998473b7\") " pod="openshift-dns/node-resolver-nh9sc" Nov 25 11:36:54 crc kubenswrapper[4706]: I1125 11:36:54.028703 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/9912058e-28f5-4cec-9eeb-03e37e0dc5c1-etc-kubernetes\") pod \"multus-s47nr\" (UID: \"9912058e-28f5-4cec-9eeb-03e37e0dc5c1\") " pod="openshift-multus/multus-s47nr" Nov 25 11:36:54 crc kubenswrapper[4706]: I1125 11:36:54.028758 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/f1218bae-4153-4490-8847-ab2d07ca0ab6-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-q9rpr\" (UID: \"f1218bae-4153-4490-8847-ab2d07ca0ab6\") " pod="openshift-ovn-kubernetes/ovnkube-node-q9rpr" Nov 25 11:36:54 crc kubenswrapper[4706]: I1125 11:36:54.028712 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/f1218bae-4153-4490-8847-ab2d07ca0ab6-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-q9rpr\" (UID: 
\"f1218bae-4153-4490-8847-ab2d07ca0ab6\") " pod="openshift-ovn-kubernetes/ovnkube-node-q9rpr" Nov 25 11:36:54 crc kubenswrapper[4706]: I1125 11:36:54.028830 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/f1218bae-4153-4490-8847-ab2d07ca0ab6-env-overrides\") pod \"ovnkube-node-q9rpr\" (UID: \"f1218bae-4153-4490-8847-ab2d07ca0ab6\") " pod="openshift-ovn-kubernetes/ovnkube-node-q9rpr" Nov 25 11:36:54 crc kubenswrapper[4706]: I1125 11:36:54.028852 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/9912058e-28f5-4cec-9eeb-03e37e0dc5c1-multus-cni-dir\") pod \"multus-s47nr\" (UID: \"9912058e-28f5-4cec-9eeb-03e37e0dc5c1\") " pod="openshift-multus/multus-s47nr" Nov 25 11:36:54 crc kubenswrapper[4706]: I1125 11:36:54.028857 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/9912058e-28f5-4cec-9eeb-03e37e0dc5c1-cni-binary-copy\") pod \"multus-s47nr\" (UID: \"9912058e-28f5-4cec-9eeb-03e37e0dc5c1\") " pod="openshift-multus/multus-s47nr" Nov 25 11:36:54 crc kubenswrapper[4706]: I1125 11:36:54.028883 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/9912058e-28f5-4cec-9eeb-03e37e0dc5c1-multus-socket-dir-parent\") pod \"multus-s47nr\" (UID: \"9912058e-28f5-4cec-9eeb-03e37e0dc5c1\") " pod="openshift-multus/multus-s47nr" Nov 25 11:36:54 crc kubenswrapper[4706]: I1125 11:36:54.028907 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/9912058e-28f5-4cec-9eeb-03e37e0dc5c1-host-var-lib-cni-multus\") pod \"multus-s47nr\" (UID: \"9912058e-28f5-4cec-9eeb-03e37e0dc5c1\") " pod="openshift-multus/multus-s47nr" Nov 25 11:36:54 crc 
kubenswrapper[4706]: I1125 11:36:54.028928 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/f1218bae-4153-4490-8847-ab2d07ca0ab6-ovn-node-metrics-cert\") pod \"ovnkube-node-q9rpr\" (UID: \"f1218bae-4153-4490-8847-ab2d07ca0ab6\") " pod="openshift-ovn-kubernetes/ovnkube-node-q9rpr" Nov 25 11:36:54 crc kubenswrapper[4706]: I1125 11:36:54.028958 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/150b96fa-570a-4b32-a82a-3275127d5b51-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-cjmvf\" (UID: \"150b96fa-570a-4b32-a82a-3275127d5b51\") " pod="openshift-multus/multus-additional-cni-plugins-cjmvf" Nov 25 11:36:54 crc kubenswrapper[4706]: I1125 11:36:54.028983 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/f1218bae-4153-4490-8847-ab2d07ca0ab6-host-cni-netd\") pod \"ovnkube-node-q9rpr\" (UID: \"f1218bae-4153-4490-8847-ab2d07ca0ab6\") " pod="openshift-ovn-kubernetes/ovnkube-node-q9rpr" Nov 25 11:36:54 crc kubenswrapper[4706]: I1125 11:36:54.029016 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/9912058e-28f5-4cec-9eeb-03e37e0dc5c1-host-var-lib-cni-multus\") pod \"multus-s47nr\" (UID: \"9912058e-28f5-4cec-9eeb-03e37e0dc5c1\") " pod="openshift-multus/multus-s47nr" Nov 25 11:36:54 crc kubenswrapper[4706]: I1125 11:36:54.029029 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/9912058e-28f5-4cec-9eeb-03e37e0dc5c1-host-run-netns\") pod \"multus-s47nr\" (UID: \"9912058e-28f5-4cec-9eeb-03e37e0dc5c1\") " pod="openshift-multus/multus-s47nr" Nov 25 11:36:54 crc kubenswrapper[4706]: I1125 11:36:54.029043 4706 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/9912058e-28f5-4cec-9eeb-03e37e0dc5c1-multus-daemon-config\") pod \"multus-s47nr\" (UID: \"9912058e-28f5-4cec-9eeb-03e37e0dc5c1\") " pod="openshift-multus/multus-s47nr" Nov 25 11:36:54 crc kubenswrapper[4706]: I1125 11:36:54.029054 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/9912058e-28f5-4cec-9eeb-03e37e0dc5c1-multus-conf-dir\") pod \"multus-s47nr\" (UID: \"9912058e-28f5-4cec-9eeb-03e37e0dc5c1\") " pod="openshift-multus/multus-s47nr" Nov 25 11:36:54 crc kubenswrapper[4706]: I1125 11:36:54.029079 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/f1218bae-4153-4490-8847-ab2d07ca0ab6-run-ovn\") pod \"ovnkube-node-q9rpr\" (UID: \"f1218bae-4153-4490-8847-ab2d07ca0ab6\") " pod="openshift-ovn-kubernetes/ovnkube-node-q9rpr" Nov 25 11:36:54 crc kubenswrapper[4706]: I1125 11:36:54.029107 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/9912058e-28f5-4cec-9eeb-03e37e0dc5c1-host-run-netns\") pod \"multus-s47nr\" (UID: \"9912058e-28f5-4cec-9eeb-03e37e0dc5c1\") " pod="openshift-multus/multus-s47nr" Nov 25 11:36:54 crc kubenswrapper[4706]: I1125 11:36:54.029109 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g9gvf\" (UniqueName: \"kubernetes.io/projected/7813e79d-885d-4cf1-ac27-039e998473b7-kube-api-access-g9gvf\") pod \"node-resolver-nh9sc\" (UID: \"7813e79d-885d-4cf1-ac27-039e998473b7\") " pod="openshift-dns/node-resolver-nh9sc" Nov 25 11:36:54 crc kubenswrapper[4706]: I1125 11:36:54.029182 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: 
\"kubernetes.io/host-path/9912058e-28f5-4cec-9eeb-03e37e0dc5c1-cnibin\") pod \"multus-s47nr\" (UID: \"9912058e-28f5-4cec-9eeb-03e37e0dc5c1\") " pod="openshift-multus/multus-s47nr" Nov 25 11:36:54 crc kubenswrapper[4706]: I1125 11:36:54.029213 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/9912058e-28f5-4cec-9eeb-03e37e0dc5c1-host-var-lib-cni-bin\") pod \"multus-s47nr\" (UID: \"9912058e-28f5-4cec-9eeb-03e37e0dc5c1\") " pod="openshift-multus/multus-s47nr" Nov 25 11:36:54 crc kubenswrapper[4706]: I1125 11:36:54.029264 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/f1218bae-4153-4490-8847-ab2d07ca0ab6-host-run-netns\") pod \"ovnkube-node-q9rpr\" (UID: \"f1218bae-4153-4490-8847-ab2d07ca0ab6\") " pod="openshift-ovn-kubernetes/ovnkube-node-q9rpr" Nov 25 11:36:54 crc kubenswrapper[4706]: I1125 11:36:54.029314 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/f1218bae-4153-4490-8847-ab2d07ca0ab6-run-openvswitch\") pod \"ovnkube-node-q9rpr\" (UID: \"f1218bae-4153-4490-8847-ab2d07ca0ab6\") " pod="openshift-ovn-kubernetes/ovnkube-node-q9rpr" Nov 25 11:36:54 crc kubenswrapper[4706]: I1125 11:36:54.029344 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/f1218bae-4153-4490-8847-ab2d07ca0ab6-env-overrides\") pod \"ovnkube-node-q9rpr\" (UID: \"f1218bae-4153-4490-8847-ab2d07ca0ab6\") " pod="openshift-ovn-kubernetes/ovnkube-node-q9rpr" Nov 25 11:36:54 crc kubenswrapper[4706]: I1125 11:36:54.029350 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/0930887a-320c-4506-8c9c-f94d6d64516a-rootfs\") pod \"machine-config-daemon-dhfpm\" (UID: 
\"0930887a-320c-4506-8c9c-f94d6d64516a\") " pod="openshift-machine-config-operator/machine-config-daemon-dhfpm" Nov 25 11:36:54 crc kubenswrapper[4706]: I1125 11:36:54.029381 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/0930887a-320c-4506-8c9c-f94d6d64516a-rootfs\") pod \"machine-config-daemon-dhfpm\" (UID: \"0930887a-320c-4506-8c9c-f94d6d64516a\") " pod="openshift-machine-config-operator/machine-config-daemon-dhfpm" Nov 25 11:36:54 crc kubenswrapper[4706]: I1125 11:36:54.029404 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/150b96fa-570a-4b32-a82a-3275127d5b51-system-cni-dir\") pod \"multus-additional-cni-plugins-cjmvf\" (UID: \"150b96fa-570a-4b32-a82a-3275127d5b51\") " pod="openshift-multus/multus-additional-cni-plugins-cjmvf" Nov 25 11:36:54 crc kubenswrapper[4706]: I1125 11:36:54.029425 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/150b96fa-570a-4b32-a82a-3275127d5b51-cni-binary-copy\") pod \"multus-additional-cni-plugins-cjmvf\" (UID: \"150b96fa-570a-4b32-a82a-3275127d5b51\") " pod="openshift-multus/multus-additional-cni-plugins-cjmvf" Nov 25 11:36:54 crc kubenswrapper[4706]: I1125 11:36:54.029463 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/9912058e-28f5-4cec-9eeb-03e37e0dc5c1-cnibin\") pod \"multus-s47nr\" (UID: \"9912058e-28f5-4cec-9eeb-03e37e0dc5c1\") " pod="openshift-multus/multus-s47nr" Nov 25 11:36:54 crc kubenswrapper[4706]: I1125 11:36:54.029462 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/f1218bae-4153-4490-8847-ab2d07ca0ab6-ovnkube-script-lib\") pod \"ovnkube-node-q9rpr\" (UID: 
\"f1218bae-4153-4490-8847-ab2d07ca0ab6\") " pod="openshift-ovn-kubernetes/ovnkube-node-q9rpr" Nov 25 11:36:54 crc kubenswrapper[4706]: I1125 11:36:54.029502 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/9912058e-28f5-4cec-9eeb-03e37e0dc5c1-multus-conf-dir\") pod \"multus-s47nr\" (UID: \"9912058e-28f5-4cec-9eeb-03e37e0dc5c1\") " pod="openshift-multus/multus-s47nr" Nov 25 11:36:54 crc kubenswrapper[4706]: I1125 11:36:54.029511 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b55sf\" (UniqueName: \"kubernetes.io/projected/f1218bae-4153-4490-8847-ab2d07ca0ab6-kube-api-access-b55sf\") pod \"ovnkube-node-q9rpr\" (UID: \"f1218bae-4153-4490-8847-ab2d07ca0ab6\") " pod="openshift-ovn-kubernetes/ovnkube-node-q9rpr" Nov 25 11:36:54 crc kubenswrapper[4706]: I1125 11:36:54.029546 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/f1218bae-4153-4490-8847-ab2d07ca0ab6-run-ovn\") pod \"ovnkube-node-q9rpr\" (UID: \"f1218bae-4153-4490-8847-ab2d07ca0ab6\") " pod="openshift-ovn-kubernetes/ovnkube-node-q9rpr" Nov 25 11:36:54 crc kubenswrapper[4706]: I1125 11:36:54.029079 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/9912058e-28f5-4cec-9eeb-03e37e0dc5c1-multus-socket-dir-parent\") pod \"multus-s47nr\" (UID: \"9912058e-28f5-4cec-9eeb-03e37e0dc5c1\") " pod="openshift-multus/multus-s47nr" Nov 25 11:36:54 crc kubenswrapper[4706]: I1125 11:36:54.029687 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/f1218bae-4153-4490-8847-ab2d07ca0ab6-host-cni-bin\") pod \"ovnkube-node-q9rpr\" (UID: \"f1218bae-4153-4490-8847-ab2d07ca0ab6\") " pod="openshift-ovn-kubernetes/ovnkube-node-q9rpr" Nov 25 11:36:54 crc 
kubenswrapper[4706]: I1125 11:36:54.029709 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/150b96fa-570a-4b32-a82a-3275127d5b51-system-cni-dir\") pod \"multus-additional-cni-plugins-cjmvf\" (UID: \"150b96fa-570a-4b32-a82a-3275127d5b51\") " pod="openshift-multus/multus-additional-cni-plugins-cjmvf" Nov 25 11:36:54 crc kubenswrapper[4706]: I1125 11:36:54.029426 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/f1218bae-4153-4490-8847-ab2d07ca0ab6-host-cni-netd\") pod \"ovnkube-node-q9rpr\" (UID: \"f1218bae-4153-4490-8847-ab2d07ca0ab6\") " pod="openshift-ovn-kubernetes/ovnkube-node-q9rpr" Nov 25 11:36:54 crc kubenswrapper[4706]: I1125 11:36:54.029542 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g7sgt\" (UniqueName: \"kubernetes.io/projected/0930887a-320c-4506-8c9c-f94d6d64516a-kube-api-access-g7sgt\") pod \"machine-config-daemon-dhfpm\" (UID: \"0930887a-320c-4506-8c9c-f94d6d64516a\") " pod="openshift-machine-config-operator/machine-config-daemon-dhfpm" Nov 25 11:36:54 crc kubenswrapper[4706]: I1125 11:36:54.029751 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/f1218bae-4153-4490-8847-ab2d07ca0ab6-run-openvswitch\") pod \"ovnkube-node-q9rpr\" (UID: \"f1218bae-4153-4490-8847-ab2d07ca0ab6\") " pod="openshift-ovn-kubernetes/ovnkube-node-q9rpr" Nov 25 11:36:54 crc kubenswrapper[4706]: I1125 11:36:54.029772 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/150b96fa-570a-4b32-a82a-3275127d5b51-os-release\") pod \"multus-additional-cni-plugins-cjmvf\" (UID: \"150b96fa-570a-4b32-a82a-3275127d5b51\") " pod="openshift-multus/multus-additional-cni-plugins-cjmvf" Nov 25 11:36:54 crc 
kubenswrapper[4706]: I1125 11:36:54.029789 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/f1218bae-4153-4490-8847-ab2d07ca0ab6-host-run-netns\") pod \"ovnkube-node-q9rpr\" (UID: \"f1218bae-4153-4490-8847-ab2d07ca0ab6\") " pod="openshift-ovn-kubernetes/ovnkube-node-q9rpr" Nov 25 11:36:54 crc kubenswrapper[4706]: I1125 11:36:54.029805 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/f1218bae-4153-4490-8847-ab2d07ca0ab6-systemd-units\") pod \"ovnkube-node-q9rpr\" (UID: \"f1218bae-4153-4490-8847-ab2d07ca0ab6\") " pod="openshift-ovn-kubernetes/ovnkube-node-q9rpr" Nov 25 11:36:54 crc kubenswrapper[4706]: I1125 11:36:54.029827 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/f1218bae-4153-4490-8847-ab2d07ca0ab6-log-socket\") pod \"ovnkube-node-q9rpr\" (UID: \"f1218bae-4153-4490-8847-ab2d07ca0ab6\") " pod="openshift-ovn-kubernetes/ovnkube-node-q9rpr" Nov 25 11:36:54 crc kubenswrapper[4706]: I1125 11:36:54.029849 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/f1218bae-4153-4490-8847-ab2d07ca0ab6-ovnkube-config\") pod \"ovnkube-node-q9rpr\" (UID: \"f1218bae-4153-4490-8847-ab2d07ca0ab6\") " pod="openshift-ovn-kubernetes/ovnkube-node-q9rpr" Nov 25 11:36:54 crc kubenswrapper[4706]: I1125 11:36:54.029872 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/9912058e-28f5-4cec-9eeb-03e37e0dc5c1-host-run-multus-certs\") pod \"multus-s47nr\" (UID: \"9912058e-28f5-4cec-9eeb-03e37e0dc5c1\") " pod="openshift-multus/multus-s47nr" Nov 25 11:36:54 crc kubenswrapper[4706]: I1125 11:36:54.029895 4706 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/9912058e-28f5-4cec-9eeb-03e37e0dc5c1-os-release\") pod \"multus-s47nr\" (UID: \"9912058e-28f5-4cec-9eeb-03e37e0dc5c1\") " pod="openshift-multus/multus-s47nr" Nov 25 11:36:54 crc kubenswrapper[4706]: I1125 11:36:54.029916 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/9912058e-28f5-4cec-9eeb-03e37e0dc5c1-host-run-k8s-cni-cncf-io\") pod \"multus-s47nr\" (UID: \"9912058e-28f5-4cec-9eeb-03e37e0dc5c1\") " pod="openshift-multus/multus-s47nr" Nov 25 11:36:54 crc kubenswrapper[4706]: I1125 11:36:54.029939 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d2ml6\" (UniqueName: \"kubernetes.io/projected/150b96fa-570a-4b32-a82a-3275127d5b51-kube-api-access-d2ml6\") pod \"multus-additional-cni-plugins-cjmvf\" (UID: \"150b96fa-570a-4b32-a82a-3275127d5b51\") " pod="openshift-multus/multus-additional-cni-plugins-cjmvf" Nov 25 11:36:54 crc kubenswrapper[4706]: I1125 11:36:54.029960 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/f1218bae-4153-4490-8847-ab2d07ca0ab6-ovnkube-script-lib\") pod \"ovnkube-node-q9rpr\" (UID: \"f1218bae-4153-4490-8847-ab2d07ca0ab6\") " pod="openshift-ovn-kubernetes/ovnkube-node-q9rpr" Nov 25 11:36:54 crc kubenswrapper[4706]: I1125 11:36:54.029964 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/f1218bae-4153-4490-8847-ab2d07ca0ab6-var-lib-openvswitch\") pod \"ovnkube-node-q9rpr\" (UID: \"f1218bae-4153-4490-8847-ab2d07ca0ab6\") " pod="openshift-ovn-kubernetes/ovnkube-node-q9rpr" Nov 25 11:36:54 crc kubenswrapper[4706]: I1125 11:36:54.029988 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/f1218bae-4153-4490-8847-ab2d07ca0ab6-var-lib-openvswitch\") pod \"ovnkube-node-q9rpr\" (UID: \"f1218bae-4153-4490-8847-ab2d07ca0ab6\") " pod="openshift-ovn-kubernetes/ovnkube-node-q9rpr" Nov 25 11:36:54 crc kubenswrapper[4706]: I1125 11:36:54.030009 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/9912058e-28f5-4cec-9eeb-03e37e0dc5c1-system-cni-dir\") pod \"multus-s47nr\" (UID: \"9912058e-28f5-4cec-9eeb-03e37e0dc5c1\") " pod="openshift-multus/multus-s47nr" Nov 25 11:36:54 crc kubenswrapper[4706]: I1125 11:36:54.030015 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/f1218bae-4153-4490-8847-ab2d07ca0ab6-systemd-units\") pod \"ovnkube-node-q9rpr\" (UID: \"f1218bae-4153-4490-8847-ab2d07ca0ab6\") " pod="openshift-ovn-kubernetes/ovnkube-node-q9rpr" Nov 25 11:36:54 crc kubenswrapper[4706]: I1125 11:36:54.030032 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wfqx4\" (UniqueName: \"kubernetes.io/projected/9912058e-28f5-4cec-9eeb-03e37e0dc5c1-kube-api-access-wfqx4\") pod \"multus-s47nr\" (UID: \"9912058e-28f5-4cec-9eeb-03e37e0dc5c1\") " pod="openshift-multus/multus-s47nr" Nov 25 11:36:54 crc kubenswrapper[4706]: I1125 11:36:54.030038 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/9912058e-28f5-4cec-9eeb-03e37e0dc5c1-host-run-multus-certs\") pod \"multus-s47nr\" (UID: \"9912058e-28f5-4cec-9eeb-03e37e0dc5c1\") " pod="openshift-multus/multus-s47nr" Nov 25 11:36:54 crc kubenswrapper[4706]: I1125 11:36:54.030055 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/f1218bae-4153-4490-8847-ab2d07ca0ab6-node-log\") pod 
\"ovnkube-node-q9rpr\" (UID: \"f1218bae-4153-4490-8847-ab2d07ca0ab6\") " pod="openshift-ovn-kubernetes/ovnkube-node-q9rpr" Nov 25 11:36:54 crc kubenswrapper[4706]: I1125 11:36:54.030065 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/9912058e-28f5-4cec-9eeb-03e37e0dc5c1-os-release\") pod \"multus-s47nr\" (UID: \"9912058e-28f5-4cec-9eeb-03e37e0dc5c1\") " pod="openshift-multus/multus-s47nr" Nov 25 11:36:54 crc kubenswrapper[4706]: I1125 11:36:54.030044 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/150b96fa-570a-4b32-a82a-3275127d5b51-os-release\") pod \"multus-additional-cni-plugins-cjmvf\" (UID: \"150b96fa-570a-4b32-a82a-3275127d5b51\") " pod="openshift-multus/multus-additional-cni-plugins-cjmvf" Nov 25 11:36:54 crc kubenswrapper[4706]: I1125 11:36:54.030095 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/150b96fa-570a-4b32-a82a-3275127d5b51-tuning-conf-dir\") pod \"multus-additional-cni-plugins-cjmvf\" (UID: \"150b96fa-570a-4b32-a82a-3275127d5b51\") " pod="openshift-multus/multus-additional-cni-plugins-cjmvf" Nov 25 11:36:54 crc kubenswrapper[4706]: I1125 11:36:54.030139 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/9912058e-28f5-4cec-9eeb-03e37e0dc5c1-host-var-lib-cni-bin\") pod \"multus-s47nr\" (UID: \"9912058e-28f5-4cec-9eeb-03e37e0dc5c1\") " pod="openshift-multus/multus-s47nr" Nov 25 11:36:54 crc kubenswrapper[4706]: I1125 11:36:54.030210 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/f1218bae-4153-4490-8847-ab2d07ca0ab6-log-socket\") pod \"ovnkube-node-q9rpr\" (UID: \"f1218bae-4153-4490-8847-ab2d07ca0ab6\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-q9rpr" Nov 25 11:36:54 crc kubenswrapper[4706]: I1125 11:36:54.030222 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/9912058e-28f5-4cec-9eeb-03e37e0dc5c1-system-cni-dir\") pod \"multus-s47nr\" (UID: \"9912058e-28f5-4cec-9eeb-03e37e0dc5c1\") " pod="openshift-multus/multus-s47nr" Nov 25 11:36:54 crc kubenswrapper[4706]: I1125 11:36:54.030232 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/9912058e-28f5-4cec-9eeb-03e37e0dc5c1-host-run-k8s-cni-cncf-io\") pod \"multus-s47nr\" (UID: \"9912058e-28f5-4cec-9eeb-03e37e0dc5c1\") " pod="openshift-multus/multus-s47nr" Nov 25 11:36:54 crc kubenswrapper[4706]: I1125 11:36:54.030244 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/f1218bae-4153-4490-8847-ab2d07ca0ab6-node-log\") pod \"ovnkube-node-q9rpr\" (UID: \"f1218bae-4153-4490-8847-ab2d07ca0ab6\") " pod="openshift-ovn-kubernetes/ovnkube-node-q9rpr" Nov 25 11:36:54 crc kubenswrapper[4706]: I1125 11:36:54.030253 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/9912058e-28f5-4cec-9eeb-03e37e0dc5c1-hostroot\") pod \"multus-s47nr\" (UID: \"9912058e-28f5-4cec-9eeb-03e37e0dc5c1\") " pod="openshift-multus/multus-s47nr" Nov 25 11:36:54 crc kubenswrapper[4706]: I1125 11:36:54.030282 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/150b96fa-570a-4b32-a82a-3275127d5b51-cnibin\") pod \"multus-additional-cni-plugins-cjmvf\" (UID: \"150b96fa-570a-4b32-a82a-3275127d5b51\") " pod="openshift-multus/multus-additional-cni-plugins-cjmvf" Nov 25 11:36:54 crc kubenswrapper[4706]: I1125 11:36:54.030343 4706 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/9912058e-28f5-4cec-9eeb-03e37e0dc5c1-hostroot\") pod \"multus-s47nr\" (UID: \"9912058e-28f5-4cec-9eeb-03e37e0dc5c1\") " pod="openshift-multus/multus-s47nr" Nov 25 11:36:54 crc kubenswrapper[4706]: I1125 11:36:54.030485 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/150b96fa-570a-4b32-a82a-3275127d5b51-cnibin\") pod \"multus-additional-cni-plugins-cjmvf\" (UID: \"150b96fa-570a-4b32-a82a-3275127d5b51\") " pod="openshift-multus/multus-additional-cni-plugins-cjmvf" Nov 25 11:36:54 crc kubenswrapper[4706]: I1125 11:36:54.030878 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/f1218bae-4153-4490-8847-ab2d07ca0ab6-ovnkube-config\") pod \"ovnkube-node-q9rpr\" (UID: \"f1218bae-4153-4490-8847-ab2d07ca0ab6\") " pod="openshift-ovn-kubernetes/ovnkube-node-q9rpr" Nov 25 11:36:54 crc kubenswrapper[4706]: I1125 11:36:54.030937 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/150b96fa-570a-4b32-a82a-3275127d5b51-tuning-conf-dir\") pod \"multus-additional-cni-plugins-cjmvf\" (UID: \"150b96fa-570a-4b32-a82a-3275127d5b51\") " pod="openshift-multus/multus-additional-cni-plugins-cjmvf" Nov 25 11:36:54 crc kubenswrapper[4706]: I1125 11:36:54.038826 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-dhfpm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0930887a-320c-4506-8c9c-f94d6d64516a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:53Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:53Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g7sgt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g7sgt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T11:36:53Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-dhfpm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:36:54Z is after 2025-08-24T17:21:41Z" Nov 25 11:36:54 crc kubenswrapper[4706]: I1125 11:36:54.039045 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/f1218bae-4153-4490-8847-ab2d07ca0ab6-ovn-node-metrics-cert\") pod \"ovnkube-node-q9rpr\" (UID: \"f1218bae-4153-4490-8847-ab2d07ca0ab6\") " pod="openshift-ovn-kubernetes/ovnkube-node-q9rpr" Nov 25 11:36:54 crc kubenswrapper[4706]: I1125 11:36:54.056619 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-nh9sc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7813e79d-885d-4cf1-ac27-039e998473b7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:53Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:53Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g9gvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T11:36:53Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-nh9sc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:36:54Z is after 2025-08-24T17:21:41Z" Nov 25 11:36:54 crc kubenswrapper[4706]: I1125 11:36:54.060917 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b55sf\" (UniqueName: \"kubernetes.io/projected/f1218bae-4153-4490-8847-ab2d07ca0ab6-kube-api-access-b55sf\") pod \"ovnkube-node-q9rpr\" (UID: \"f1218bae-4153-4490-8847-ab2d07ca0ab6\") " pod="openshift-ovn-kubernetes/ovnkube-node-q9rpr" Nov 25 11:36:54 crc kubenswrapper[4706]: I1125 11:36:54.069576 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ce0e2e75-834b-46fb-bc84-229e60f904b1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:32Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:32Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://86001c3abc077d36ed1fa0c37bb6163896fb9cde28b58affd2f67fb8a024165b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://24c326f147def477e6dd794576cbdc9aed69f799cc18984f475496748b05eb32\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c65af8b438f57256d8c22cb34f68922d628338e384ca97d694b0dbf2d41a5e27\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://db08dd21321e0e49c2bcec934b9c4ca65e93ed3eff5d3d110b0137d37ebe255e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://333951d9a31cf3e7c1e98d27f636e2425f87cd082a8a5acae66533a76f5ad206\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-25T11:36:51Z\\\",\\\"message\\\":\\\" shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI1125 11:36:51.292762 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI1125 11:36:51.292767 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI1125 11:36:51.292853 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI1125 11:36:51.292876 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI1125 11:36:51.293041 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-1230105117/tls.crt::/tmp/serving-cert-1230105117/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1764070595\\\\\\\\\\\\\\\" (2025-11-25 11:36:34 +0000 UTC to 2025-12-25 11:36:35 +0000 UTC (now=2025-11-25 11:36:51.29301304 +0000 UTC))\\\\\\\"\\\\nI1125 11:36:51.293171 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1230105117/tls.crt::/tmp/serving-cert-1230105117/tls.key\\\\\\\"\\\\nI1125 11:36:51.293210 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" 
certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1764070605\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1764070605\\\\\\\\\\\\\\\" (2025-11-25 10:36:45 +0000 UTC to 2026-11-25 10:36:45 +0000 UTC (now=2025-11-25 11:36:51.293188774 +0000 UTC))\\\\\\\"\\\\nI1125 11:36:51.293233 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI1125 11:36:51.293259 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI1125 11:36:51.293279 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nF1125 11:36:51.293378 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T11:36:34Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe85a38abd8df52ad0fbd3dd6b048b8c42390b6064d3601996727dadb3fcbe69\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:34Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"container
ID\\\":\\\"cri-o://ea87e7399f4267877f5e967eb62bd500b646e88ee8ee20a71c3bf7c0941f02a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ea87e7399f4267877f5e967eb62bd500b646e88ee8ee20a71c3bf7c0941f02a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T11:36:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T11:36:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T11:36:32Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:36:54Z is after 2025-08-24T17:21:41Z" Nov 25 11:36:54 crc kubenswrapper[4706]: I1125 11:36:54.084217 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ce0e2e75-834b-46fb-bc84-229e60f904b1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:32Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:32Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://86001c3abc077d36ed1fa0c37bb6163896fb9cde28b58affd2f67fb8a024165b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://24c326f147def477e6dd794576cbdc9aed69f799cc18984f475496748b05eb32\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c65af8b438f57256d8c22cb34f68922d628338e384ca97d694b0dbf2d41a5e27\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://db08dd21321e0e49c2bcec934b9c4ca65e93ed3eff5d3d110b0137d37ebe255e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://333951d9a31cf3e7c1e98d27f636e2425f87cd082a8a5acae66533a76f5ad206\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-25T11:36:51Z\\\",\\\"message\\\":\\\" shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI1125 11:36:51.292762 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI1125 11:36:51.292767 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI1125 11:36:51.292853 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI1125 11:36:51.292876 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI1125 11:36:51.293041 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-1230105117/tls.crt::/tmp/serving-cert-1230105117/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1764070595\\\\\\\\\\\\\\\" (2025-11-25 11:36:34 +0000 UTC to 2025-12-25 11:36:35 +0000 UTC (now=2025-11-25 11:36:51.29301304 +0000 UTC))\\\\\\\"\\\\nI1125 11:36:51.293171 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1230105117/tls.crt::/tmp/serving-cert-1230105117/tls.key\\\\\\\"\\\\nI1125 11:36:51.293210 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" 
certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1764070605\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1764070605\\\\\\\\\\\\\\\" (2025-11-25 10:36:45 +0000 UTC to 2026-11-25 10:36:45 +0000 UTC (now=2025-11-25 11:36:51.293188774 +0000 UTC))\\\\\\\"\\\\nI1125 11:36:51.293233 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI1125 11:36:51.293259 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI1125 11:36:51.293279 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nF1125 11:36:51.293378 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T11:36:34Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe85a38abd8df52ad0fbd3dd6b048b8c42390b6064d3601996727dadb3fcbe69\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:34Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"container
ID\\\":\\\"cri-o://ea87e7399f4267877f5e967eb62bd500b646e88ee8ee20a71c3bf7c0941f02a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ea87e7399f4267877f5e967eb62bd500b646e88ee8ee20a71c3bf7c0941f02a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T11:36:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T11:36:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T11:36:32Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:36:54Z is after 2025-08-24T17:21:41Z" Nov 25 11:36:54 crc kubenswrapper[4706]: I1125 11:36:54.096118 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-dhfpm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0930887a-320c-4506-8c9c-f94d6d64516a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:53Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:53Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g7sgt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g7sgt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T11:36:53Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-dhfpm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:36:54Z is after 2025-08-24T17:21:41Z" Nov 25 11:36:54 crc kubenswrapper[4706]: I1125 11:36:54.108710 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-nh9sc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7813e79d-885d-4cf1-ac27-039e998473b7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:53Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:53Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g9gvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T11:36:53Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-nh9sc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:36:54Z is after 2025-08-24T17:21:41Z" Nov 25 11:36:54 crc kubenswrapper[4706]: I1125 11:36:54.125876 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:51Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:36:54Z is after 2025-08-24T17:21:41Z" Nov 25 11:36:54 crc kubenswrapper[4706]: I1125 11:36:54.150381 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-q9rpr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f1218bae-4153-4490-8847-ab2d07ca0ab6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:53Z\\\",\\\"message\\\":\\\"containers with incomplete status: 
[kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:53Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:53Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b55sf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":fals
e,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b55sf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b55sf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/v
ar/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b55sf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b55sf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-op
envswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b55sf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvs
witch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b55sf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b55sf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"las
tState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b55sf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T11:36:53Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-q9rpr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:36:54Z is after 2025-08-24T17:21:41Z" Nov 25 11:36:54 crc kubenswrapper[4706]: I1125 11:36:54.168860 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"363ff191-6229-47e9-a7d0-1c72f21e7c61\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://71b496da1a81efbb50a84766e610a6b03e032a4e2cb5a71191395ffb85f6b1f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://83b1d9c60793e3e0b5943d7cccd50656df78c4655b84e12c8dd1ba7d99a7990d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab8621c83015577b9039ac2ba9ce46f8b29f66d77da31a02d179132d923741bd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4d0ce4e175dd8da8d15b26e60ced87ee11dc8079ce730cfbdce1b3f4f08b1d2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-11-25T11:36:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T11:36:32Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:36:54Z is after 2025-08-24T17:21:41Z" Nov 25 11:36:54 crc kubenswrapper[4706]: I1125 11:36:54.181992 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://998291d5af3be798ff4e2f00d043f615e086fef44e541071bbaf781983955ce6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-11-25T11:36:54Z is after 2025-08-24T17:21:41Z" Nov 25 11:36:54 crc kubenswrapper[4706]: I1125 11:36:54.195508 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:51Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:36:54Z is after 2025-08-24T17:21:41Z" Nov 25 11:36:54 crc kubenswrapper[4706]: I1125 11:36:54.217776 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:51Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:36:54Z is after 2025-08-24T17:21:41Z" Nov 25 11:36:54 crc kubenswrapper[4706]: I1125 11:36:54.229632 4706 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-q9rpr" Nov 25 11:36:54 crc kubenswrapper[4706]: W1125 11:36:54.243583 4706 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf1218bae_4153_4490_8847_ab2d07ca0ab6.slice/crio-d4c2fd5e63390b82da0cc1d6cff993551805081effa000d965be7b08e4c5e95c WatchSource:0}: Error finding container d4c2fd5e63390b82da0cc1d6cff993551805081effa000d965be7b08e4c5e95c: Status 404 returned error can't find the container with id d4c2fd5e63390b82da0cc1d6cff993551805081effa000d965be7b08e4c5e95c Nov 25 11:36:54 crc kubenswrapper[4706]: I1125 11:36:54.244786 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:51Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:36:54Z is after 2025-08-24T17:21:41Z" Nov 25 11:36:54 crc kubenswrapper[4706]: I1125 11:36:54.283374 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-s47nr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9912058e-28f5-4cec-9eeb-03e37e0dc5c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:53Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:53Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wfqx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T11:36:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-s47nr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:36:54Z is after 2025-08-24T17:21:41Z" Nov 25 11:36:54 crc kubenswrapper[4706]: I1125 11:36:54.315994 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"21277b4b-1e5d-4345-ba2a-39957194f021\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e336808761e1c6c5eaa04fd06cbb4d0c0384a2cbd3dfd4c1b3a877e7e0f0c82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6
a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cfaf9f13d49eb5c52817b0d082263791cc1dca82a23282452f1393dd693ca27a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://634b7b0df29329562f6ead9641186eee129945efc5a2d784ff6474d213b2baea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f584
08f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b3642576d5ecf314b809b90f8a76244e5ea54178f78729eb6521b09b7daa9c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b63b9c87fed8e56acef62af3c5b75cf637a058ada9dd8ef5afc317e99e12162\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubern
etes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://adfea6c3d1d62c9ce2656cae203bccf32ab19165305db3731e3db92915dd7d4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://adfea6c3d1d62c9ce2656cae203bccf32ab19165305db3731e3db92915dd7d4c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T11:36:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T11:36:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://29e8fd847ab683471a692fda5b7c7d6105db11c5aecf09bb23080c58cd97c06a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://29e8fd847ab683471a692fda5b7c7d6105db11c5aecf09bb23080c58cd97c06a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T11:36:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T11:36:34Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://a4b1f85fd0291c239854093c0b26f08022
91cbe4c4bab384fc69a5d21165343a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a4b1f85fd0291c239854093c0b26f0802291cbe4c4bab384fc69a5d21165343a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T11:36:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T11:36:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T11:36:32Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:36:54Z is after 2025-08-24T17:21:41Z" Nov 25 11:36:54 crc kubenswrapper[4706]: I1125 11:36:54.329566 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://23abd4bcc68d2a090882edb55d0e8569032affe5f4ebf05279e18ba3e9f9d8db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a068e34d29a7f39157ffd6e364ce643f5280f5184c13a281043247117d451364\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:36:54Z is after 2025-08-24T17:21:41Z" Nov 25 11:36:54 crc kubenswrapper[4706]: I1125 11:36:54.346558 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-cjmvf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"150b96fa-570a-4b32-a82a-3275127d5b51\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:53Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni 
whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:53Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:53Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2ml6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\
\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2ml6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2ml6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,
\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2ml6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2ml6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2ml6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/ope
nshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2ml6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T11:36:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-cjmvf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:36:54Z is after 2025-08-24T17:21:41Z" Nov 25 11:36:54 crc kubenswrapper[4706]: I1125 11:36:54.921584 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Nov 25 11:36:54 crc kubenswrapper[4706]: I1125 11:36:54.931866 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0930887a-320c-4506-8c9c-f94d6d64516a-proxy-tls\") pod \"machine-config-daemon-dhfpm\" (UID: \"0930887a-320c-4506-8c9c-f94d6d64516a\") " pod="openshift-machine-config-operator/machine-config-daemon-dhfpm" Nov 25 11:36:54 crc kubenswrapper[4706]: I1125 11:36:54.952806 4706 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Nov 25 11:36:54 crc 
kubenswrapper[4706]: I1125 11:36:54.959929 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/150b96fa-570a-4b32-a82a-3275127d5b51-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-cjmvf\" (UID: \"150b96fa-570a-4b32-a82a-3275127d5b51\") " pod="openshift-multus/multus-additional-cni-plugins-cjmvf" Nov 25 11:36:54 crc kubenswrapper[4706]: I1125 11:36:54.985418 4706 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Nov 25 11:36:54 crc kubenswrapper[4706]: I1125 11:36:54.989664 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g7sgt\" (UniqueName: \"kubernetes.io/projected/0930887a-320c-4506-8c9c-f94d6d64516a-kube-api-access-g7sgt\") pod \"machine-config-daemon-dhfpm\" (UID: \"0930887a-320c-4506-8c9c-f94d6d64516a\") " pod="openshift-machine-config-operator/machine-config-daemon-dhfpm" Nov 25 11:36:55 crc kubenswrapper[4706]: I1125 11:36:55.017136 4706 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Nov 25 11:36:55 crc kubenswrapper[4706]: I1125 11:36:55.019946 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0930887a-320c-4506-8c9c-f94d6d64516a-mcd-auth-proxy-config\") pod \"machine-config-daemon-dhfpm\" (UID: \"0930887a-320c-4506-8c9c-f94d6d64516a\") " pod="openshift-machine-config-operator/machine-config-daemon-dhfpm" Nov 25 11:36:55 crc kubenswrapper[4706]: E1125 11:36:55.029875 4706 configmap.go:193] Couldn't get configMap openshift-multus/cni-copy-resources: failed to sync configmap cache: timed out waiting for the condition Nov 25 11:36:55 crc kubenswrapper[4706]: E1125 11:36:55.029932 4706 configmap.go:193] Couldn't get configMap openshift-multus/cni-copy-resources: failed to sync configmap 
cache: timed out waiting for the condition Nov 25 11:36:55 crc kubenswrapper[4706]: E1125 11:36:55.029996 4706 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/150b96fa-570a-4b32-a82a-3275127d5b51-cni-binary-copy podName:150b96fa-570a-4b32-a82a-3275127d5b51 nodeName:}" failed. No retries permitted until 2025-11-25 11:36:55.529967929 +0000 UTC m=+24.444525310 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cni-binary-copy" (UniqueName: "kubernetes.io/configmap/150b96fa-570a-4b32-a82a-3275127d5b51-cni-binary-copy") pod "multus-additional-cni-plugins-cjmvf" (UID: "150b96fa-570a-4b32-a82a-3275127d5b51") : failed to sync configmap cache: timed out waiting for the condition Nov 25 11:36:55 crc kubenswrapper[4706]: E1125 11:36:55.030048 4706 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/9912058e-28f5-4cec-9eeb-03e37e0dc5c1-cni-binary-copy podName:9912058e-28f5-4cec-9eeb-03e37e0dc5c1 nodeName:}" failed. No retries permitted until 2025-11-25 11:36:55.5300166 +0000 UTC m=+24.444574161 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "cni-binary-copy" (UniqueName: "kubernetes.io/configmap/9912058e-28f5-4cec-9eeb-03e37e0dc5c1-cni-binary-copy") pod "multus-s47nr" (UID: "9912058e-28f5-4cec-9eeb-03e37e0dc5c1") : failed to sync configmap cache: timed out waiting for the condition Nov 25 11:36:55 crc kubenswrapper[4706]: E1125 11:36:55.046369 4706 projected.go:288] Couldn't get configMap openshift-dns/openshift-service-ca.crt: failed to sync configmap cache: timed out waiting for the condition Nov 25 11:36:55 crc kubenswrapper[4706]: E1125 11:36:55.046456 4706 projected.go:194] Error preparing data for projected volume kube-api-access-g9gvf for pod openshift-dns/node-resolver-nh9sc: failed to sync configmap cache: timed out waiting for the condition Nov 25 11:36:55 crc kubenswrapper[4706]: E1125 11:36:55.046447 4706 projected.go:288] Couldn't get configMap openshift-multus/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Nov 25 11:36:55 crc kubenswrapper[4706]: E1125 11:36:55.046546 4706 projected.go:194] Error preparing data for projected volume kube-api-access-d2ml6 for pod openshift-multus/multus-additional-cni-plugins-cjmvf: failed to sync configmap cache: timed out waiting for the condition Nov 25 11:36:55 crc kubenswrapper[4706]: E1125 11:36:55.046560 4706 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/7813e79d-885d-4cf1-ac27-039e998473b7-kube-api-access-g9gvf podName:7813e79d-885d-4cf1-ac27-039e998473b7 nodeName:}" failed. No retries permitted until 2025-11-25 11:36:55.5465207 +0000 UTC m=+24.461078081 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-g9gvf" (UniqueName: "kubernetes.io/projected/7813e79d-885d-4cf1-ac27-039e998473b7-kube-api-access-g9gvf") pod "node-resolver-nh9sc" (UID: "7813e79d-885d-4cf1-ac27-039e998473b7") : failed to sync configmap cache: timed out waiting for the condition Nov 25 11:36:55 crc kubenswrapper[4706]: E1125 11:36:55.046631 4706 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/150b96fa-570a-4b32-a82a-3275127d5b51-kube-api-access-d2ml6 podName:150b96fa-570a-4b32-a82a-3275127d5b51 nodeName:}" failed. No retries permitted until 2025-11-25 11:36:55.546606142 +0000 UTC m=+24.461163523 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-d2ml6" (UniqueName: "kubernetes.io/projected/150b96fa-570a-4b32-a82a-3275127d5b51-kube-api-access-d2ml6") pod "multus-additional-cni-plugins-cjmvf" (UID: "150b96fa-570a-4b32-a82a-3275127d5b51") : failed to sync configmap cache: timed out waiting for the condition Nov 25 11:36:55 crc kubenswrapper[4706]: I1125 11:36:55.047625 4706 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Nov 25 11:36:55 crc kubenswrapper[4706]: E1125 11:36:55.050040 4706 projected.go:288] Couldn't get configMap openshift-multus/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Nov 25 11:36:55 crc kubenswrapper[4706]: E1125 11:36:55.050078 4706 projected.go:194] Error preparing data for projected volume kube-api-access-wfqx4 for pod openshift-multus/multus-s47nr: failed to sync configmap cache: timed out waiting for the condition Nov 25 11:36:55 crc kubenswrapper[4706]: E1125 11:36:55.050120 4706 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9912058e-28f5-4cec-9eeb-03e37e0dc5c1-kube-api-access-wfqx4 podName:9912058e-28f5-4cec-9eeb-03e37e0dc5c1 nodeName:}" failed. 
No retries permitted until 2025-11-25 11:36:55.550109344 +0000 UTC m=+24.464666725 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-wfqx4" (UniqueName: "kubernetes.io/projected/9912058e-28f5-4cec-9eeb-03e37e0dc5c1-kube-api-access-wfqx4") pod "multus-s47nr" (UID: "9912058e-28f5-4cec-9eeb-03e37e0dc5c1") : failed to sync configmap cache: timed out waiting for the condition Nov 25 11:36:55 crc kubenswrapper[4706]: I1125 11:36:55.051157 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"ad79bed891e80837fc120b01cb2b41a16493f2f5281c83a6bb489cc17c6da995"} Nov 25 11:36:55 crc kubenswrapper[4706]: I1125 11:36:55.052566 4706 generic.go:334] "Generic (PLEG): container finished" podID="f1218bae-4153-4490-8847-ab2d07ca0ab6" containerID="56474d5374e1047078d38c60dcbd00f4495bcc0d651a9e75fa70d64e34b10baa" exitCode=0 Nov 25 11:36:55 crc kubenswrapper[4706]: I1125 11:36:55.052634 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-q9rpr" event={"ID":"f1218bae-4153-4490-8847-ab2d07ca0ab6","Type":"ContainerDied","Data":"56474d5374e1047078d38c60dcbd00f4495bcc0d651a9e75fa70d64e34b10baa"} Nov 25 11:36:55 crc kubenswrapper[4706]: I1125 11:36:55.052686 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-q9rpr" event={"ID":"f1218bae-4153-4490-8847-ab2d07ca0ab6","Type":"ContainerStarted","Data":"d4c2fd5e63390b82da0cc1d6cff993551805081effa000d965be7b08e4c5e95c"} Nov 25 11:36:55 crc kubenswrapper[4706]: I1125 11:36:55.065541 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-s47nr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9912058e-28f5-4cec-9eeb-03e37e0dc5c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:53Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:53Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wfqx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T11:36:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-s47nr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:36:55Z is after 2025-08-24T17:21:41Z" Nov 25 11:36:55 crc kubenswrapper[4706]: I1125 11:36:55.088151 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-q9rpr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f1218bae-4153-4490-8847-ab2d07ca0ab6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:53Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:53Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:53Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b55sf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/v
ar/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b55sf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b55sf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b55sf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},
{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b55sf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b55sf\\\",\\\"readOnly\\\":true,\\\
"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mount
Path\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b55sf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b55sf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name
\\\":\\\"kube-api-access-b55sf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T11:36:53Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-q9rpr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:36:55Z is after 2025-08-24T17:21:41Z" Nov 25 11:36:55 crc kubenswrapper[4706]: I1125 11:36:55.101132 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"363ff191-6229-47e9-a7d0-1c72f21e7c61\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://71b496da1a81efbb50a84766e610a6b03e032a4e2cb5a71191395ffb85f6b1f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/
openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://83b1d9c60793e3e0b5943d7cccd50656df78c4655b84e12c8dd1ba7d99a7990d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab8621c83015577b9039ac2ba9ce46f8b29f66d77da31a02d179132d923741bd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:33Z\\
\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4d0ce4e175dd8da8d15b26e60ced87ee11dc8079ce730cfbdce1b3f4f08b1d2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T11:36:32Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:36:55Z is after 2025-08-24T17:21:41Z" Nov 25 11:36:55 crc kubenswrapper[4706]: I1125 11:36:55.116824 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://998291d5af3be798ff4e2f00d043f615e086fef44e541071bbaf781983955ce6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-11-25T11:36:55Z is after 2025-08-24T17:21:41Z" Nov 25 11:36:55 crc kubenswrapper[4706]: I1125 11:36:55.129963 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:51Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:36:55Z is after 2025-08-24T17:21:41Z" Nov 25 11:36:55 crc kubenswrapper[4706]: I1125 11:36:55.143327 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:51Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:36:55Z is after 2025-08-24T17:21:41Z" Nov 25 11:36:55 crc kubenswrapper[4706]: I1125 11:36:55.155977 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Nov 25 11:36:55 crc kubenswrapper[4706]: I1125 11:36:55.157362 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:51Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:36:55Z is after 2025-08-24T17:21:41Z" Nov 25 11:36:55 crc kubenswrapper[4706]: I1125 11:36:55.159684 4706 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-dhfpm" Nov 25 11:36:55 crc kubenswrapper[4706]: W1125 11:36:55.174261 4706 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0930887a_320c_4506_8c9c_f94d6d64516a.slice/crio-943c9e7225dcab032da60559b0dc3665dba73db71fb2f3e0238f098045af9edb WatchSource:0}: Error finding container 943c9e7225dcab032da60559b0dc3665dba73db71fb2f3e0238f098045af9edb: Status 404 returned error can't find the container with id 943c9e7225dcab032da60559b0dc3665dba73db71fb2f3e0238f098045af9edb Nov 25 11:36:55 crc kubenswrapper[4706]: I1125 11:36:55.181738 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"21277b4b-1e5d-4345-ba2a-39957194f021\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e336808761e1c6c5eaa04fd06cbb4d0c0384a2cbd3dfd4c1b3a877e7e0f0c82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4
.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cfaf9f13d49eb5c52817b0d082263791cc1dca82a23282452f1393dd693ca27a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://634b7b0df29329562f6ead9641186eee129945efc5a2d784ff6474d213b2baea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCou
nt\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b3642576d5ecf314b809b90f8a76244e5ea54178f78729eb6521b09b7daa9c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b63b9c87fed8e56acef62af3c5b75cf637a058ada9dd8ef5afc317e99e12162\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\
\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://adfea6c3d1d62c9ce2656cae203bccf32ab19165305db3731e3db92915dd7d4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://adfea6c3d1d62c9ce2656cae203bccf32ab19165305db3731e3db92915dd7d4c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T11:36:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T11:36:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://29e8fd847ab683471a692fda5b7c7d6105db11c5aecf09bb23080c58cd97c06a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://29e8fd847ab683471a692fda5b7c7d6105db11c5aecf09bb23080c58cd97c06a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T11:36:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T11:36:34Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://a4b1f85fd0291c239854093c0b26f0802291cbe4c4bab384fc69a5d21165343a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07
b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a4b1f85fd0291c239854093c0b26f0802291cbe4c4bab384fc69a5d21165343a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T11:36:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T11:36:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T11:36:32Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:36:55Z is after 2025-08-24T17:21:41Z" Nov 25 11:36:55 crc kubenswrapper[4706]: I1125 11:36:55.195272 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://23abd4bcc68d2a090882edb55d0e8569032affe5f4ebf05279e18ba3e9f9d8db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a068e34d29a7f39157ffd6e364ce643f5280f5184c13a281043247117d451364\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:36:55Z is after 2025-08-24T17:21:41Z" Nov 25 11:36:55 crc kubenswrapper[4706]: I1125 11:36:55.209473 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-cjmvf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"150b96fa-570a-4b32-a82a-3275127d5b51\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:53Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni 
whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:53Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:53Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2ml6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\
\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2ml6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2ml6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,
\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2ml6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2ml6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2ml6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/ope
nshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2ml6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T11:36:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-cjmvf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:36:55Z is after 2025-08-24T17:21:41Z" Nov 25 11:36:55 crc kubenswrapper[4706]: I1125 11:36:55.224150 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ce0e2e75-834b-46fb-bc84-229e60f904b1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:32Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:32Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://86001c3abc077d36ed1fa0c37bb6163896fb9cde28b58affd2f67fb8a024165b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://24c326f147def477e6dd794576cbdc9aed69f799cc18984f475496748b05eb32\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c65af8b438f57256d8c22cb34f68922d628338e384ca97d694b0dbf2d41a5e27\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://db08dd21321e0e49c2bcec934b9c4ca65e93ed3eff5d3d110b0137d37ebe255e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://333951d9a31cf3e7c1e98d27f636e2425f87cd082a8a5acae66533a76f5ad206\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-25T11:36:51Z\\\",\\\"message\\\":\\\" shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI1125 11:36:51.292762 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" 
name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI1125 11:36:51.292767 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI1125 11:36:51.292853 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI1125 11:36:51.292876 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI1125 11:36:51.293041 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-1230105117/tls.crt::/tmp/serving-cert-1230105117/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1764070595\\\\\\\\\\\\\\\" (2025-11-25 11:36:34 +0000 UTC to 2025-12-25 11:36:35 +0000 UTC (now=2025-11-25 11:36:51.29301304 +0000 UTC))\\\\\\\"\\\\nI1125 11:36:51.293171 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1230105117/tls.crt::/tmp/serving-cert-1230105117/tls.key\\\\\\\"\\\\nI1125 11:36:51.293210 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1764070605\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1764070605\\\\\\\\\\\\\\\" (2025-11-25 10:36:45 +0000 UTC to 2026-11-25 10:36:45 +0000 UTC (now=2025-11-25 11:36:51.293188774 +0000 UTC))\\\\\\\"\\\\nI1125 11:36:51.293233 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI1125 11:36:51.293259 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI1125 11:36:51.293279 1 tlsconfig.go:243] \\\\\\\"Starting 
DynamicServingCertificateController\\\\\\\"\\\\nF1125 11:36:51.293378 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T11:36:34Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe85a38abd8df52ad0fbd3dd6b048b8c42390b6064d3601996727dadb3fcbe69\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:34Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea87e7399f4267877f5e967eb62bd500b646e88ee8ee20a71c3bf7c0941f02a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ea87e7399f4267877f5e967eb62bd500b646e88ee8ee20a
71c3bf7c0941f02a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T11:36:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T11:36:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T11:36:32Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:36:55Z is after 2025-08-24T17:21:41Z" Nov 25 11:36:55 crc kubenswrapper[4706]: I1125 11:36:55.235284 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-dhfpm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0930887a-320c-4506-8c9c-f94d6d64516a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:53Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:53Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g7sgt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g7sgt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],
\\\"startTime\\\":\\\"2025-11-25T11:36:53Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-dhfpm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:36:55Z is after 2025-08-24T17:21:41Z" Nov 25 11:36:55 crc kubenswrapper[4706]: I1125 11:36:55.242538 4706 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Nov 25 11:36:55 crc kubenswrapper[4706]: I1125 11:36:55.247212 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-nh9sc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7813e79d-885d-4cf1-ac27-039e998473b7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:53Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:53Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g9gvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T11:36:53Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-nh9sc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:36:55Z is after 2025-08-24T17:21:41Z" Nov 25 11:36:55 crc kubenswrapper[4706]: I1125 11:36:55.248183 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Nov 25 11:36:55 crc kubenswrapper[4706]: I1125 11:36:55.261424 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad79bed891e80837fc120b01cb2b41a16493f2f5281c83a6bb489cc17c6da995\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-11-25T11:36:55Z is after 2025-08-24T17:21:41Z" Nov 25 11:36:55 crc kubenswrapper[4706]: I1125 11:36:55.277851 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-nh9sc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7813e79d-885d-4cf1-ac27-039e998473b7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:53Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:53Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g9gvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T11:36:53Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-nh9sc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:36:55Z is after 2025-08-24T17:21:41Z" Nov 25 11:36:55 crc kubenswrapper[4706]: I1125 11:36:55.292893 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ce0e2e75-834b-46fb-bc84-229e60f904b1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:32Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:32Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://86001c3abc077d36ed1fa0c37bb6163896fb9cde28b58affd2f67fb8a024165b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://24c326f147def477e6dd794576cbdc9aed69f799cc18984f475496748b05eb32\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c65af8b438f57256d8c22cb34f68922d628338e384ca97d694b0dbf2d41a5e27\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://db08dd21321e0e49c2bcec934b9c4ca65e93ed3eff5d3d110b0137d37ebe255e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://333951d9a31cf3e7c1e98d27f636e2425f87cd082a8a5acae66533a76f5ad206\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-25T11:36:51Z\\\",\\\"message\\\":\\\" shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI1125 11:36:51.292762 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI1125 11:36:51.292767 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI1125 11:36:51.292853 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI1125 11:36:51.292876 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI1125 11:36:51.293041 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-1230105117/tls.crt::/tmp/serving-cert-1230105117/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1764070595\\\\\\\\\\\\\\\" (2025-11-25 11:36:34 +0000 UTC to 2025-12-25 11:36:35 +0000 UTC (now=2025-11-25 11:36:51.29301304 +0000 UTC))\\\\\\\"\\\\nI1125 11:36:51.293171 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1230105117/tls.crt::/tmp/serving-cert-1230105117/tls.key\\\\\\\"\\\\nI1125 11:36:51.293210 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" 
certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1764070605\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1764070605\\\\\\\\\\\\\\\" (2025-11-25 10:36:45 +0000 UTC to 2026-11-25 10:36:45 +0000 UTC (now=2025-11-25 11:36:51.293188774 +0000 UTC))\\\\\\\"\\\\nI1125 11:36:51.293233 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI1125 11:36:51.293259 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI1125 11:36:51.293279 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nF1125 11:36:51.293378 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T11:36:34Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe85a38abd8df52ad0fbd3dd6b048b8c42390b6064d3601996727dadb3fcbe69\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:34Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"container
ID\\\":\\\"cri-o://ea87e7399f4267877f5e967eb62bd500b646e88ee8ee20a71c3bf7c0941f02a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ea87e7399f4267877f5e967eb62bd500b646e88ee8ee20a71c3bf7c0941f02a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T11:36:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T11:36:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T11:36:32Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:36:55Z is after 2025-08-24T17:21:41Z" Nov 25 11:36:55 crc kubenswrapper[4706]: I1125 11:36:55.296579 4706 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Nov 25 11:36:55 crc kubenswrapper[4706]: I1125 11:36:55.305550 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-dhfpm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0930887a-320c-4506-8c9c-f94d6d64516a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:53Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:53Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g7sgt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g7sgt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T11:36:53Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-dhfpm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:36:55Z is after 2025-08-24T17:21:41Z" Nov 25 11:36:55 crc kubenswrapper[4706]: I1125 11:36:55.317184 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad79bed891e80837fc120b01cb2b41a16493f2f5281c83a6bb489cc17c6da995\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name
\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:36:55Z is after 2025-08-24T17:21:41Z" Nov 25 11:36:55 crc kubenswrapper[4706]: I1125 11:36:55.334803 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://998291d5af3be798ff4e2f00d043f615e086fef44e541071bbaf781983955ce6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:52Z\\\"}},\\\"volumeMounts\\
\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:36:55Z is after 2025-08-24T17:21:41Z" Nov 25 11:36:55 crc kubenswrapper[4706]: I1125 11:36:55.351376 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:51Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:36:55Z is after 2025-08-24T17:21:41Z" Nov 25 11:36:55 crc kubenswrapper[4706]: I1125 11:36:55.368608 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:36:55Z is after 2025-08-24T17:21:41Z" Nov 25 11:36:55 crc kubenswrapper[4706]: I1125 11:36:55.385818 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:51Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:36:55Z is after 2025-08-24T17:21:41Z" Nov 25 11:36:55 crc kubenswrapper[4706]: I1125 11:36:55.401545 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-s47nr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9912058e-28f5-4cec-9eeb-03e37e0dc5c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:53Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:53Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wfqx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T11:36:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-s47nr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:36:55Z is after 2025-08-24T17:21:41Z" Nov 25 11:36:55 crc kubenswrapper[4706]: I1125 11:36:55.424959 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-q9rpr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f1218bae-4153-4490-8847-ab2d07ca0ab6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:53Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:53Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b55sf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/v
ar/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b55sf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b55sf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b55sf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},
{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b55sf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b55sf\\\",\\\"readOnly\\\":true,\\\
"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mount
Path\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b55sf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b55sf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56474d5374e1047078d38c60dcbd00f4495bcc0d651a9e75fa70d64e34b10baa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\"
:{\\\"containerID\\\":\\\"cri-o://56474d5374e1047078d38c60dcbd00f4495bcc0d651a9e75fa70d64e34b10baa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T11:36:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T11:36:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b55sf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T11:36:53Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-q9rpr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:36:55Z is after 2025-08-24T17:21:41Z" Nov 25 11:36:55 crc kubenswrapper[4706]: I1125 11:36:55.440670 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"363ff191-6229-47e9-a7d0-1c72f21e7c61\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://71b496da1a81efbb50a84766e610a6b03e032a4e2cb5a71191395ffb85f6b1f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://83b1d9c60793e3e0b5943d7cccd50656df78c4655b84e12c8dd1ba7d99a7990d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab8621c83015577b9039ac2ba9ce46f8b29f66d77da31a02d179132d923741bd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4d0ce4e175dd8da8d15b26e60ced87ee11dc8079ce730cfbdce1b3f4f08b1d2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-11-25T11:36:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T11:36:32Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:36:55Z is after 2025-08-24T17:21:41Z" Nov 25 11:36:55 crc kubenswrapper[4706]: I1125 11:36:55.458041 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://23abd4bcc68d2a090882edb55d0e8569032affe5f4ebf05279e18ba3e9f9d8db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a068e34d29a7f39157ffd6e364ce643f5280f5184c13a281043247117d451364\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:36:55Z is after 2025-08-24T17:21:41Z" Nov 25 11:36:55 crc kubenswrapper[4706]: I1125 11:36:55.473335 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-cjmvf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"150b96fa-570a-4b32-a82a-3275127d5b51\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:53Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni 
whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:53Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:53Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2ml6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\
\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2ml6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2ml6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,
\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2ml6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2ml6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2ml6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/ope
nshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2ml6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T11:36:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-cjmvf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:36:55Z is after 2025-08-24T17:21:41Z" Nov 25 11:36:55 crc kubenswrapper[4706]: I1125 11:36:55.494582 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"21277b4b-1e5d-4345-ba2a-39957194f021\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e336808761e1c6c5eaa04fd06cbb4d0c0384a2cbd3dfd4c1b3a877e7e0f0c82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cfaf9f13d49eb5c52817b0d082263791cc1dca82a23282452f1393dd693ca27a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://634b7b0df29329562f6ead9641186eee129945efc5a2d784ff6474d213b2baea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b3642576d5ecf314b809b90f8a76244e5ea54178f78729eb6521b09b7daa9c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b63b9c87fed8e56acef62af3c5b75cf637a058ada9dd8ef5afc317e99e12162\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://adfea6c3d1d62c9ce2656cae203bccf32ab19165305db3731e3db92915dd7d4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://adfea6c3d1d62c9ce2656cae203bccf32ab19165305db3731e3db92915dd7d4c\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2025-11-25T11:36:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T11:36:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://29e8fd847ab683471a692fda5b7c7d6105db11c5aecf09bb23080c58cd97c06a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://29e8fd847ab683471a692fda5b7c7d6105db11c5aecf09bb23080c58cd97c06a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T11:36:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T11:36:34Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://a4b1f85fd0291c239854093c0b26f0802291cbe4c4bab384fc69a5d21165343a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a4b1f85fd0291c239854093c0b26f0802291cbe4c4bab384fc69a5d21165343a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T11:36:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T11:36:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T11:36:32Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:36:55Z is after 2025-08-24T17:21:41Z" Nov 25 11:36:55 crc kubenswrapper[4706]: I1125 11:36:55.544877 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 11:36:55 crc kubenswrapper[4706]: I1125 11:36:55.545005 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 11:36:55 crc kubenswrapper[4706]: I1125 11:36:55.545034 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/9912058e-28f5-4cec-9eeb-03e37e0dc5c1-cni-binary-copy\") pod \"multus-s47nr\" (UID: \"9912058e-28f5-4cec-9eeb-03e37e0dc5c1\") " pod="openshift-multus/multus-s47nr" Nov 25 11:36:55 crc kubenswrapper[4706]: I1125 11:36:55.545072 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: 
\"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 11:36:55 crc kubenswrapper[4706]: E1125 11:36:55.545110 4706 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 11:36:59.545072935 +0000 UTC m=+28.459630376 (durationBeforeRetry 4s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 11:36:55 crc kubenswrapper[4706]: I1125 11:36:55.545170 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 11:36:55 crc kubenswrapper[4706]: E1125 11:36:55.545203 4706 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Nov 25 11:36:55 crc kubenswrapper[4706]: I1125 11:36:55.545238 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: 
\"kubernetes.io/configmap/150b96fa-570a-4b32-a82a-3275127d5b51-cni-binary-copy\") pod \"multus-additional-cni-plugins-cjmvf\" (UID: \"150b96fa-570a-4b32-a82a-3275127d5b51\") " pod="openshift-multus/multus-additional-cni-plugins-cjmvf" Nov 25 11:36:55 crc kubenswrapper[4706]: E1125 11:36:55.545240 4706 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 25 11:36:55 crc kubenswrapper[4706]: E1125 11:36:55.545331 4706 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-25 11:36:59.545288449 +0000 UTC m=+28.459846000 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Nov 25 11:36:55 crc kubenswrapper[4706]: E1125 11:36:55.545356 4706 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 25 11:36:55 crc kubenswrapper[4706]: E1125 11:36:55.545373 4706 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 25 11:36:55 crc kubenswrapper[4706]: E1125 11:36:55.545425 4706 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl 
podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-11-25 11:36:59.545415232 +0000 UTC m=+28.459972803 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 25 11:36:55 crc kubenswrapper[4706]: E1125 11:36:55.545315 4706 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 25 11:36:55 crc kubenswrapper[4706]: E1125 11:36:55.545464 4706 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-25 11:36:59.545456753 +0000 UTC m=+28.460014364 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 25 11:36:55 crc kubenswrapper[4706]: I1125 11:36:55.545955 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/150b96fa-570a-4b32-a82a-3275127d5b51-cni-binary-copy\") pod \"multus-additional-cni-plugins-cjmvf\" (UID: \"150b96fa-570a-4b32-a82a-3275127d5b51\") " pod="openshift-multus/multus-additional-cni-plugins-cjmvf" Nov 25 11:36:55 crc kubenswrapper[4706]: I1125 11:36:55.546060 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/9912058e-28f5-4cec-9eeb-03e37e0dc5c1-cni-binary-copy\") pod \"multus-s47nr\" (UID: \"9912058e-28f5-4cec-9eeb-03e37e0dc5c1\") " pod="openshift-multus/multus-s47nr" Nov 25 11:36:55 crc kubenswrapper[4706]: I1125 11:36:55.646680 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d2ml6\" (UniqueName: \"kubernetes.io/projected/150b96fa-570a-4b32-a82a-3275127d5b51-kube-api-access-d2ml6\") pod \"multus-additional-cni-plugins-cjmvf\" (UID: \"150b96fa-570a-4b32-a82a-3275127d5b51\") " pod="openshift-multus/multus-additional-cni-plugins-cjmvf" Nov 25 11:36:55 crc kubenswrapper[4706]: I1125 11:36:55.646721 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wfqx4\" (UniqueName: \"kubernetes.io/projected/9912058e-28f5-4cec-9eeb-03e37e0dc5c1-kube-api-access-wfqx4\") pod \"multus-s47nr\" (UID: \"9912058e-28f5-4cec-9eeb-03e37e0dc5c1\") " pod="openshift-multus/multus-s47nr" Nov 25 11:36:55 crc kubenswrapper[4706]: I1125 11:36:55.646755 4706 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g9gvf\" (UniqueName: \"kubernetes.io/projected/7813e79d-885d-4cf1-ac27-039e998473b7-kube-api-access-g9gvf\") pod \"node-resolver-nh9sc\" (UID: \"7813e79d-885d-4cf1-ac27-039e998473b7\") " pod="openshift-dns/node-resolver-nh9sc" Nov 25 11:36:55 crc kubenswrapper[4706]: I1125 11:36:55.646790 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 11:36:55 crc kubenswrapper[4706]: E1125 11:36:55.646926 4706 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 25 11:36:55 crc kubenswrapper[4706]: E1125 11:36:55.646943 4706 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 25 11:36:55 crc kubenswrapper[4706]: E1125 11:36:55.646956 4706 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 25 11:36:55 crc kubenswrapper[4706]: E1125 11:36:55.647002 4706 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-11-25 11:36:59.646988657 +0000 UTC m=+28.561546038 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 25 11:36:55 crc kubenswrapper[4706]: I1125 11:36:55.652372 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g9gvf\" (UniqueName: \"kubernetes.io/projected/7813e79d-885d-4cf1-ac27-039e998473b7-kube-api-access-g9gvf\") pod \"node-resolver-nh9sc\" (UID: \"7813e79d-885d-4cf1-ac27-039e998473b7\") " pod="openshift-dns/node-resolver-nh9sc" Nov 25 11:36:55 crc kubenswrapper[4706]: I1125 11:36:55.652372 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d2ml6\" (UniqueName: \"kubernetes.io/projected/150b96fa-570a-4b32-a82a-3275127d5b51-kube-api-access-d2ml6\") pod \"multus-additional-cni-plugins-cjmvf\" (UID: \"150b96fa-570a-4b32-a82a-3275127d5b51\") " pod="openshift-multus/multus-additional-cni-plugins-cjmvf" Nov 25 11:36:55 crc kubenswrapper[4706]: I1125 11:36:55.652459 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wfqx4\" (UniqueName: \"kubernetes.io/projected/9912058e-28f5-4cec-9eeb-03e37e0dc5c1-kube-api-access-wfqx4\") pod \"multus-s47nr\" (UID: \"9912058e-28f5-4cec-9eeb-03e37e0dc5c1\") " pod="openshift-multus/multus-s47nr" Nov 25 11:36:55 crc kubenswrapper[4706]: I1125 11:36:55.692212 4706 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-cjmvf" Nov 25 11:36:55 crc kubenswrapper[4706]: I1125 11:36:55.711908 4706 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/node-resolver-nh9sc" Nov 25 11:36:55 crc kubenswrapper[4706]: I1125 11:36:55.721129 4706 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-s47nr" Nov 25 11:36:55 crc kubenswrapper[4706]: W1125 11:36:55.729380 4706 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7813e79d_885d_4cf1_ac27_039e998473b7.slice/crio-94479f1bfa8de17d61e5ffbdc4eaa2fcee3b25cfa413aa7667d59a37d7f3f9ce WatchSource:0}: Error finding container 94479f1bfa8de17d61e5ffbdc4eaa2fcee3b25cfa413aa7667d59a37d7f3f9ce: Status 404 returned error can't find the container with id 94479f1bfa8de17d61e5ffbdc4eaa2fcee3b25cfa413aa7667d59a37d7f3f9ce Nov 25 11:36:55 crc kubenswrapper[4706]: W1125 11:36:55.735736 4706 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9912058e_28f5_4cec_9eeb_03e37e0dc5c1.slice/crio-60f28fdc4097977cc6a247cc4253724340417e9e454714caadd24226ee7e3c73 WatchSource:0}: Error finding container 60f28fdc4097977cc6a247cc4253724340417e9e454714caadd24226ee7e3c73: Status 404 returned error can't find the container with id 60f28fdc4097977cc6a247cc4253724340417e9e454714caadd24226ee7e3c73 Nov 25 11:36:55 crc kubenswrapper[4706]: I1125 11:36:55.921389 4706 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 11:36:55 crc kubenswrapper[4706]: I1125 11:36:55.921430 4706 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 11:36:55 crc kubenswrapper[4706]: E1125 11:36:55.921531 4706 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 25 11:36:55 crc kubenswrapper[4706]: E1125 11:36:55.921688 4706 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 25 11:36:55 crc kubenswrapper[4706]: I1125 11:36:55.921780 4706 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 11:36:55 crc kubenswrapper[4706]: E1125 11:36:55.921938 4706 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 25 11:36:56 crc kubenswrapper[4706]: I1125 11:36:56.058678 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-dhfpm" event={"ID":"0930887a-320c-4506-8c9c-f94d6d64516a","Type":"ContainerStarted","Data":"736e37ff944f81ac9808ff8a76d36837aeabc76a4c08bbeba3f707616e1f0884"} Nov 25 11:36:56 crc kubenswrapper[4706]: I1125 11:36:56.058733 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-dhfpm" event={"ID":"0930887a-320c-4506-8c9c-f94d6d64516a","Type":"ContainerStarted","Data":"86f4bfd310c27ea3b77c2f58c91e153db5f1794871a3fbeb5711cc119aa81e38"} Nov 25 11:36:56 crc kubenswrapper[4706]: I1125 11:36:56.058757 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-dhfpm" event={"ID":"0930887a-320c-4506-8c9c-f94d6d64516a","Type":"ContainerStarted","Data":"943c9e7225dcab032da60559b0dc3665dba73db71fb2f3e0238f098045af9edb"} Nov 25 11:36:56 crc kubenswrapper[4706]: I1125 11:36:56.061379 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-s47nr" event={"ID":"9912058e-28f5-4cec-9eeb-03e37e0dc5c1","Type":"ContainerStarted","Data":"d03353478b53d9441951702b66365bb3a08ad9c509347472bbb31049851435a4"} Nov 25 11:36:56 crc kubenswrapper[4706]: I1125 11:36:56.061418 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-s47nr" event={"ID":"9912058e-28f5-4cec-9eeb-03e37e0dc5c1","Type":"ContainerStarted","Data":"60f28fdc4097977cc6a247cc4253724340417e9e454714caadd24226ee7e3c73"} Nov 25 11:36:56 crc kubenswrapper[4706]: I1125 11:36:56.063643 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-nh9sc" 
event={"ID":"7813e79d-885d-4cf1-ac27-039e998473b7","Type":"ContainerStarted","Data":"ea634334242536d35bf36e9078539cad4658b161b61e6051d9bb6d8544e71f5b"} Nov 25 11:36:56 crc kubenswrapper[4706]: I1125 11:36:56.063669 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-nh9sc" event={"ID":"7813e79d-885d-4cf1-ac27-039e998473b7","Type":"ContainerStarted","Data":"94479f1bfa8de17d61e5ffbdc4eaa2fcee3b25cfa413aa7667d59a37d7f3f9ce"} Nov 25 11:36:56 crc kubenswrapper[4706]: I1125 11:36:56.065281 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-cjmvf" event={"ID":"150b96fa-570a-4b32-a82a-3275127d5b51","Type":"ContainerStarted","Data":"f9f9981b5f064aa5b007f4b2a2ecdc7f783e1a33e73b9e8b157eccfc54e93ff6"} Nov 25 11:36:56 crc kubenswrapper[4706]: I1125 11:36:56.065346 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-cjmvf" event={"ID":"150b96fa-570a-4b32-a82a-3275127d5b51","Type":"ContainerStarted","Data":"62cd1a0d573a6b22873e77b7fbaee93da9472ecbd303ff87b90c19b3e3aeb2b3"} Nov 25 11:36:56 crc kubenswrapper[4706]: I1125 11:36:56.069275 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-q9rpr" event={"ID":"f1218bae-4153-4490-8847-ab2d07ca0ab6","Type":"ContainerStarted","Data":"ca28080773ed8c026159b2309297e1c8ccd7cf79c4c19e3a62d89bc5a95851fe"} Nov 25 11:36:56 crc kubenswrapper[4706]: I1125 11:36:56.069344 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-q9rpr" event={"ID":"f1218bae-4153-4490-8847-ab2d07ca0ab6","Type":"ContainerStarted","Data":"86d79d5837993b0bfb40c7114fd69f45a9bfd2e956b5b0fe062706e920fecd48"} Nov 25 11:36:56 crc kubenswrapper[4706]: I1125 11:36:56.069364 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-q9rpr" 
event={"ID":"f1218bae-4153-4490-8847-ab2d07ca0ab6","Type":"ContainerStarted","Data":"e92e9ade6889e5400b3c3ddff066aa544d425cf0637b75071678b8c63f8e35f7"} Nov 25 11:36:56 crc kubenswrapper[4706]: I1125 11:36:56.069377 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-q9rpr" event={"ID":"f1218bae-4153-4490-8847-ab2d07ca0ab6","Type":"ContainerStarted","Data":"da5cea02464a703174faaa2a8a7dc6ba3c26bca96be0219f7304d81aba5be54e"} Nov 25 11:36:56 crc kubenswrapper[4706]: I1125 11:36:56.069388 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-q9rpr" event={"ID":"f1218bae-4153-4490-8847-ab2d07ca0ab6","Type":"ContainerStarted","Data":"f7df3bf6c507e0fd5fb0f32a8785d67c96f47255fdc5d2aafb8838260ac334d0"} Nov 25 11:36:56 crc kubenswrapper[4706]: I1125 11:36:56.069400 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-q9rpr" event={"ID":"f1218bae-4153-4490-8847-ab2d07ca0ab6","Type":"ContainerStarted","Data":"96aa7fcebdc88f01d2260f95d255244e28c30d422f954da2222a5b7c17d05b96"} Nov 25 11:36:56 crc kubenswrapper[4706]: I1125 11:36:56.072797 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad79bed891e80837fc120b01cb2b41a16493f2f5281c83a6bb489cc17c6da995\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-11-25T11:36:56Z is after 2025-08-24T17:21:41Z" Nov 25 11:36:56 crc kubenswrapper[4706]: I1125 11:36:56.086094 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"363ff191-6229-47e9-a7d0-1c72f21e7c61\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://71b496da1a81efbb50a84766e610a6b03e032a4e2cb5a71191395ffb85f6b1f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cer
t-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://83b1d9c60793e3e0b5943d7cccd50656df78c4655b84e12c8dd1ba7d99a7990d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab8621c83015577b9039ac2ba9ce46f8b29f66d77da31a02d179132d923741bd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4d0ce4e175dd8da8d15b26e60ced87ee11dc8079ce730cfbdce1b3f4f08b1d2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshif
t-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T11:36:32Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:36:56Z is after 2025-08-24T17:21:41Z" Nov 25 11:36:56 crc kubenswrapper[4706]: I1125 11:36:56.100165 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://998291d5af3be798ff4e2f00d043f615e086fef44e541071bbaf781983955ce6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-11-25T11:36:56Z is after 2025-08-24T17:21:41Z" Nov 25 11:36:56 crc kubenswrapper[4706]: I1125 11:36:56.112425 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:51Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:36:56Z is after 2025-08-24T17:21:41Z" Nov 25 11:36:56 crc kubenswrapper[4706]: I1125 11:36:56.126597 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:51Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:36:56Z is after 2025-08-24T17:21:41Z" Nov 25 11:36:56 crc kubenswrapper[4706]: I1125 11:36:56.138100 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:51Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:36:56Z is after 2025-08-24T17:21:41Z" Nov 25 11:36:56 crc kubenswrapper[4706]: I1125 11:36:56.151177 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-s47nr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9912058e-28f5-4cec-9eeb-03e37e0dc5c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:53Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:53Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wfqx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T11:36:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-s47nr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:36:56Z is after 2025-08-24T17:21:41Z" Nov 25 11:36:56 crc kubenswrapper[4706]: I1125 11:36:56.169232 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-q9rpr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f1218bae-4153-4490-8847-ab2d07ca0ab6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:53Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:53Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b55sf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/v
ar/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b55sf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b55sf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b55sf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},
{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b55sf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b55sf\\\",\\\"readOnly\\\":true,\\\
"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mount
Path\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b55sf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b55sf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56474d5374e1047078d38c60dcbd00f4495bcc0d651a9e75fa70d64e34b10baa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\"
:{\\\"containerID\\\":\\\"cri-o://56474d5374e1047078d38c60dcbd00f4495bcc0d651a9e75fa70d64e34b10baa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T11:36:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T11:36:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b55sf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T11:36:53Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-q9rpr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:36:56Z is after 2025-08-24T17:21:41Z" Nov 25 11:36:56 crc kubenswrapper[4706]: I1125 11:36:56.188350 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"21277b4b-1e5d-4345-ba2a-39957194f021\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e336808761e1c6c5eaa04fd06cbb4d0c0384a2cbd3dfd4c1b3a877e7e0f0c82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cfaf9f13d49eb5c52817b0d082263791cc1dca82a23282452f1393dd693ca27a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://634b7b0df29329562f6ead9641186eee129945efc5a2d784ff6474d213b2baea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b3642576d5ecf314b809b90f8a76244e5ea54178f78729eb6521b09b7daa9c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b63b9c87fed8e56acef62af3c5b75cf637a058ada9dd8ef5afc317e99e12162\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://adfea6c3d1d62c9ce2656cae203bccf32ab19165305db3731e3db92915dd7d4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://adfea6c3d1d62c9ce2656cae203bccf32ab19165305db3731e3db92915dd7d4c\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2025-11-25T11:36:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T11:36:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://29e8fd847ab683471a692fda5b7c7d6105db11c5aecf09bb23080c58cd97c06a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://29e8fd847ab683471a692fda5b7c7d6105db11c5aecf09bb23080c58cd97c06a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T11:36:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T11:36:34Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://a4b1f85fd0291c239854093c0b26f0802291cbe4c4bab384fc69a5d21165343a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a4b1f85fd0291c239854093c0b26f0802291cbe4c4bab384fc69a5d21165343a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T11:36:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T11:36:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T11:36:32Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:36:56Z is after 2025-08-24T17:21:41Z" Nov 25 11:36:56 crc kubenswrapper[4706]: I1125 11:36:56.202060 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://23abd4bcc68d2a090882edb55d0e8569032affe5f4ebf05279e18ba3e9f9d8db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount
\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a068e34d29a7f39157ffd6e364ce643f5280f5184c13a281043247117d451364\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:36:56Z is after 2025-08-24T17:21:41Z" Nov 25 11:36:56 crc kubenswrapper[4706]: I1125 11:36:56.216361 4706 
status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-cjmvf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"150b96fa-570a-4b32-a82a-3275127d5b51\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:53Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:53Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:53Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2ml6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2ml6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\"
:false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2ml6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2ml6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\
\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2ml6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2ml6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/s
erviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2ml6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T11:36:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-cjmvf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:36:56Z is after 2025-08-24T17:21:41Z" Nov 25 11:36:56 crc kubenswrapper[4706]: I1125 11:36:56.230253 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ce0e2e75-834b-46fb-bc84-229e60f904b1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:32Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:32Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://86001c3abc077d36ed1fa0c37bb6163896fb9cde28b58affd2f67fb8a024165b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://24c326f147def477e6dd794576cbdc9aed69f799cc18984f475496748b05eb32\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://c65af8b438f57256d8c22cb34f68922d628338e384ca97d694b0dbf2d41a5e27\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://db08dd21321e0e49c2bcec934b9c4ca65e93ed3eff5d3d110b0137d37ebe255e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://333951d9a31cf3e7c1e98d27f636e2425f87cd082a8a5acae66533a76f5ad206\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-25T11:36:51Z\\\",\\\"message\\\":\\\" shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI1125 11:36:51.292762 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI1125 11:36:51.292767 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI1125 11:36:51.292853 1 
envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI1125 11:36:51.292876 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI1125 11:36:51.293041 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-1230105117/tls.crt::/tmp/serving-cert-1230105117/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1764070595\\\\\\\\\\\\\\\" (2025-11-25 11:36:34 +0000 UTC to 2025-12-25 11:36:35 +0000 UTC (now=2025-11-25 11:36:51.29301304 +0000 UTC))\\\\\\\"\\\\nI1125 11:36:51.293171 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1230105117/tls.crt::/tmp/serving-cert-1230105117/tls.key\\\\\\\"\\\\nI1125 11:36:51.293210 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1764070605\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1764070605\\\\\\\\\\\\\\\" (2025-11-25 10:36:45 +0000 UTC to 2026-11-25 10:36:45 +0000 UTC (now=2025-11-25 11:36:51.293188774 +0000 UTC))\\\\\\\"\\\\nI1125 11:36:51.293233 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI1125 11:36:51.293259 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI1125 11:36:51.293279 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nF1125 11:36:51.293378 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T11:36:34Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe85a38abd8df52ad0fbd3dd6b048b8c42390b6064d3601996727dadb3fcbe69\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:34Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea87e7399f4267877f5e967eb62bd500b646e88ee8ee20a71c3bf7c0941f02a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ea87e7399f4267877f5e967eb62bd500b646e88ee8ee20a71c3bf7c0941f02a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T11:36:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2025-11-25T11:36:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T11:36:32Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:36:56Z is after 2025-08-24T17:21:41Z" Nov 25 11:36:56 crc kubenswrapper[4706]: I1125 11:36:56.242989 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-dhfpm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0930887a-320c-4506-8c9c-f94d6d64516a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://736e37ff944f81ac9808ff8a76d36837aeabc76a4c08bbeba3f707616e1f0884\\\",\\\"image\\\"
:\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g7sgt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://86f4bfd310c27ea3b77c2f58c91e153db5f1794871a3fbeb5711cc119aa81e38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g7sgt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T11:36:53Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-dhfpm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:36:56Z is after 2025-08-24T17:21:41Z" Nov 25 11:36:56 crc kubenswrapper[4706]: I1125 11:36:56.253620 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-nh9sc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7813e79d-885d-4cf1-ac27-039e998473b7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:53Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:53Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g9gvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T11:36:53Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-nh9sc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:36:56Z is after 2025-08-24T17:21:41Z" Nov 25 11:36:56 crc kubenswrapper[4706]: I1125 11:36:56.268064 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ce0e2e75-834b-46fb-bc84-229e60f904b1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:32Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:32Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://86001c3abc077d36ed1fa0c37bb6163896fb9cde28b58affd2f67fb8a024165b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://24c326f147def477e6dd794576cbdc9aed69f799cc18984f475496748b05eb32\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c65af8b438f57256d8c22cb34f68922d628338e384ca97d694b0dbf2d41a5e27\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://db08dd21321e0e49c2bcec934b9c4ca65e93ed3eff5d3d110b0137d37ebe255e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://333951d9a31cf3e7c1e98d27f636e2425f87cd082a8a5acae66533a76f5ad206\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-25T11:36:51Z\\\",\\\"message\\\":\\\" shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI1125 11:36:51.292762 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI1125 11:36:51.292767 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI1125 11:36:51.292853 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI1125 11:36:51.292876 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI1125 11:36:51.293041 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-1230105117/tls.crt::/tmp/serving-cert-1230105117/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1764070595\\\\\\\\\\\\\\\" (2025-11-25 11:36:34 +0000 UTC to 2025-12-25 11:36:35 +0000 UTC (now=2025-11-25 11:36:51.29301304 +0000 UTC))\\\\\\\"\\\\nI1125 11:36:51.293171 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1230105117/tls.crt::/tmp/serving-cert-1230105117/tls.key\\\\\\\"\\\\nI1125 11:36:51.293210 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" 
certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1764070605\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1764070605\\\\\\\\\\\\\\\" (2025-11-25 10:36:45 +0000 UTC to 2026-11-25 10:36:45 +0000 UTC (now=2025-11-25 11:36:51.293188774 +0000 UTC))\\\\\\\"\\\\nI1125 11:36:51.293233 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI1125 11:36:51.293259 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI1125 11:36:51.293279 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nF1125 11:36:51.293378 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T11:36:34Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe85a38abd8df52ad0fbd3dd6b048b8c42390b6064d3601996727dadb3fcbe69\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:34Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"container
ID\\\":\\\"cri-o://ea87e7399f4267877f5e967eb62bd500b646e88ee8ee20a71c3bf7c0941f02a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ea87e7399f4267877f5e967eb62bd500b646e88ee8ee20a71c3bf7c0941f02a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T11:36:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T11:36:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T11:36:32Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:36:56Z is after 2025-08-24T17:21:41Z" Nov 25 11:36:56 crc kubenswrapper[4706]: I1125 11:36:56.279778 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-dhfpm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0930887a-320c-4506-8c9c-f94d6d64516a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://736e37ff944f81ac9808ff8a76d36837aeabc76a4c08bbeba3f707616e1f0884\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g7sgt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://86f4bfd310c27ea3b77c2f58c91e153db5f17948
71a3fbeb5711cc119aa81e38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g7sgt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T11:36:53Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-dhfpm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:36:56Z is after 2025-08-24T17:21:41Z" Nov 25 11:36:56 crc kubenswrapper[4706]: I1125 11:36:56.290550 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-nh9sc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7813e79d-885d-4cf1-ac27-039e998473b7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea634334242536d35bf36e9078539cad4658b161b61e6051d9bb6d8544e71f5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g9gvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T11:36:53Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-nh9sc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:36:56Z is after 2025-08-24T17:21:41Z" Nov 25 11:36:56 crc kubenswrapper[4706]: I1125 11:36:56.300722 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad79bed891e80837fc120b01cb2b41a16493f2f5281c83a6bb489cc17c6da995\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-al
erter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:36:56Z is after 2025-08-24T17:21:41Z" Nov 25 11:36:56 crc kubenswrapper[4706]: I1125 11:36:56.314596 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:51Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:36:56Z is after 2025-08-24T17:21:41Z" Nov 25 11:36:56 crc kubenswrapper[4706]: I1125 11:36:56.328842 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-s47nr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9912058e-28f5-4cec-9eeb-03e37e0dc5c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d03353478b53d9441951702b66365bb3a08ad9c509347472bbb31049851435a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wfqx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T11:36:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-s47nr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:36:56Z is after 2025-08-24T17:21:41Z" Nov 25 11:36:56 crc kubenswrapper[4706]: I1125 11:36:56.348435 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-q9rpr" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f1218bae-4153-4490-8847-ab2d07ca0ab6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:53Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:53Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b55sf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b55sf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b55sf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b55sf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b55sf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b55sf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b55sf\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b55sf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56474d5374e1047078d38c60dcbd00f4495bcc0d651a9e75fa70d64e34b10baa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://56474d5374e1047078d38c60dcbd00f4495bcc0d651a9e75fa70d64e34b10baa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T11:36:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T11:36:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b55sf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T11:36:53Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-q9rpr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:36:56Z is after 2025-08-24T17:21:41Z" Nov 25 11:36:56 crc kubenswrapper[4706]: I1125 11:36:56.369824 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"363ff191-6229-47e9-a7d0-1c72f21e7c61\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://71b496da1a81efbb50a84766e610a6b03e032a4e2cb5a71191395ffb85f6b1f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a7
9379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://83b1d9c60793e3e0b5943d7cccd50656df78c4655b84e12c8dd1ba7d99a7990d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab8621c83015577b9039ac2ba9ce46f8b29f66d77da31a02d179132d923741bd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4d0ce4e175dd8da8d15b26e60ced87ee11dc8079ce730cfbdce1b3f4f08b1d2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T11:36:32Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:36:56Z is after 2025-08-24T17:21:41Z" Nov 25 11:36:56 crc kubenswrapper[4706]: I1125 11:36:56.412445 4706 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://998291d5af3be798ff4e2f00d043f615e086fef44e541071bbaf781983955ce6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call 
webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:36:56Z is after 2025-08-24T17:21:41Z" Nov 25 11:36:56 crc kubenswrapper[4706]: I1125 11:36:56.450260 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:51Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:36:56Z is after 2025-08-24T17:21:41Z" Nov 25 11:36:56 crc kubenswrapper[4706]: I1125 11:36:56.488850 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:51Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:36:56Z is after 2025-08-24T17:21:41Z" Nov 25 11:36:56 crc kubenswrapper[4706]: I1125 11:36:56.520397 4706 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/node-ca-lpc7s"] Nov 25 11:36:56 crc kubenswrapper[4706]: I1125 11:36:56.521044 4706 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/node-ca-lpc7s" Nov 25 11:36:56 crc kubenswrapper[4706]: I1125 11:36:56.534909 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"21277b4b-1e5d-4345-ba2a-39957194f021\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e336808761e1c6c5eaa04fd06cbb4d0c0384a2cbd3dfd4c1b3a877e7e0f0c82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cfaf9f13d49eb5c52817b0d082263791cc1dca82a23282452f1393dd693ca27a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://634b7b0df29329562f6ead9641186eee129945efc5a2d784ff6474d213b2baea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b3642576d5ecf314b809b90f8a76244e5ea54178f78729eb6521b09b7daa9c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"q
uay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b63b9c87fed8e56acef62af3c5b75cf637a058ada9dd8ef5afc317e99e12162\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://adfea6c3d1d62c9ce2656cae203bccf32ab19165305db3731e3db92915dd7d4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name
\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://adfea6c3d1d62c9ce2656cae203bccf32ab19165305db3731e3db92915dd7d4c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T11:36:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T11:36:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://29e8fd847ab683471a692fda5b7c7d6105db11c5aecf09bb23080c58cd97c06a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://29e8fd847ab683471a692fda5b7c7d6105db11c5aecf09bb23080c58cd97c06a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T11:36:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T11:36:34Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://a4b1f85fd0291c239854093c0b26f0802291cbe4c4bab384fc69a5d21165343a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a4b1f85fd0291c239854093c0b26f0802291cbe4c4bab384fc69a5d21165343a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T11:36:35Z\\\",\\\"reason\\\":\\\"Comp
leted\\\",\\\"startedAt\\\":\\\"2025-11-25T11:36:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T11:36:32Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:36:56Z is after 2025-08-24T17:21:41Z" Nov 25 11:36:56 crc kubenswrapper[4706]: I1125 11:36:56.541714 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Nov 25 11:36:56 crc kubenswrapper[4706]: I1125 11:36:56.563363 4706 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Nov 25 11:36:56 crc kubenswrapper[4706]: I1125 11:36:56.580954 4706 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Nov 25 11:36:56 crc kubenswrapper[4706]: I1125 11:36:56.600114 4706 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Nov 25 11:36:56 crc kubenswrapper[4706]: I1125 11:36:56.653012 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://23abd4bcc68d2a090882edb55d0e8569032affe5f4ebf05279e18ba3e9f9d8db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a068e34d29a7f39157ffd6e364ce643f5280f5184c13a281043247117d451364\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:36:56Z is after 2025-08-24T17:21:41Z" Nov 25 11:36:56 crc kubenswrapper[4706]: I1125 11:36:56.655470 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/3ec2e656-a68d-4339-92d5-0c157f7f7783-host\") pod \"node-ca-lpc7s\" (UID: \"3ec2e656-a68d-4339-92d5-0c157f7f7783\") " pod="openshift-image-registry/node-ca-lpc7s" Nov 25 11:36:56 crc kubenswrapper[4706]: I1125 11:36:56.655515 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3ec2e656-a68d-4339-92d5-0c157f7f7783-serviceca\") pod \"node-ca-lpc7s\" (UID: \"3ec2e656-a68d-4339-92d5-0c157f7f7783\") " pod="openshift-image-registry/node-ca-lpc7s" Nov 25 11:36:56 crc kubenswrapper[4706]: I1125 11:36:56.655732 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-w54mf\" (UniqueName: \"kubernetes.io/projected/3ec2e656-a68d-4339-92d5-0c157f7f7783-kube-api-access-w54mf\") pod \"node-ca-lpc7s\" (UID: \"3ec2e656-a68d-4339-92d5-0c157f7f7783\") " pod="openshift-image-registry/node-ca-lpc7s" Nov 25 11:36:56 crc kubenswrapper[4706]: I1125 11:36:56.692668 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-cjmvf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"150b96fa-570a-4b32-a82a-3275127d5b51\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:53Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:53Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:53Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2ml6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f9f9981b5f064aa5b007f4b2a2ecdc7f783e1a33e73b9e8b157eccfc54e93ff6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2ml6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"ima
ge\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2ml6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2ml6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69
b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2ml6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2ml6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volum
eMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2ml6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T11:36:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-cjmvf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:36:56Z is after 2025-08-24T17:21:41Z" Nov 25 11:36:56 crc kubenswrapper[4706]: I1125 11:36:56.733460 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"21277b4b-1e5d-4345-ba2a-39957194f021\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e336808761e1c6c5eaa04fd06cbb4d0c0384a2cbd3dfd4c1b3a877e7e0f0c82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cfaf9f13d49eb5c52817b0d082263791cc1dca82a23282452f1393dd693ca27a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://634b7b0df29329562f6ead9641186eee129945efc5a2d784ff6474d213b2baea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b3642576d5ecf314b809b90f8a76244e5ea54178f78729eb6521b09b7daa9c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b63b9c87fed8e56acef62af3c5b75cf637a058ada9dd8ef5afc317e99e12162\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://adfea6c3d1d62c9ce2656cae203bccf32ab19165305db3731e3db92915dd7d4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://adfea6c3d1d62c9ce2656cae203bccf32ab19165305db3731e3db92915dd7d4c\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2025-11-25T11:36:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T11:36:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://29e8fd847ab683471a692fda5b7c7d6105db11c5aecf09bb23080c58cd97c06a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://29e8fd847ab683471a692fda5b7c7d6105db11c5aecf09bb23080c58cd97c06a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T11:36:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T11:36:34Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://a4b1f85fd0291c239854093c0b26f0802291cbe4c4bab384fc69a5d21165343a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a4b1f85fd0291c239854093c0b26f0802291cbe4c4bab384fc69a5d21165343a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T11:36:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T11:36:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T11:36:32Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:36:56Z is after 2025-08-24T17:21:41Z" Nov 25 11:36:56 crc kubenswrapper[4706]: I1125 11:36:56.756521 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w54mf\" (UniqueName: \"kubernetes.io/projected/3ec2e656-a68d-4339-92d5-0c157f7f7783-kube-api-access-w54mf\") pod \"node-ca-lpc7s\" (UID: \"3ec2e656-a68d-4339-92d5-0c157f7f7783\") " pod="openshift-image-registry/node-ca-lpc7s" Nov 25 11:36:56 crc kubenswrapper[4706]: I1125 11:36:56.756613 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/3ec2e656-a68d-4339-92d5-0c157f7f7783-host\") pod \"node-ca-lpc7s\" (UID: \"3ec2e656-a68d-4339-92d5-0c157f7f7783\") " pod="openshift-image-registry/node-ca-lpc7s" Nov 25 11:36:56 crc kubenswrapper[4706]: I1125 11:36:56.756637 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3ec2e656-a68d-4339-92d5-0c157f7f7783-serviceca\") pod \"node-ca-lpc7s\" (UID: \"3ec2e656-a68d-4339-92d5-0c157f7f7783\") " pod="openshift-image-registry/node-ca-lpc7s" Nov 25 11:36:56 crc kubenswrapper[4706]: I1125 11:36:56.756900 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/3ec2e656-a68d-4339-92d5-0c157f7f7783-host\") pod \"node-ca-lpc7s\" (UID: 
\"3ec2e656-a68d-4339-92d5-0c157f7f7783\") " pod="openshift-image-registry/node-ca-lpc7s" Nov 25 11:36:56 crc kubenswrapper[4706]: I1125 11:36:56.758173 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3ec2e656-a68d-4339-92d5-0c157f7f7783-serviceca\") pod \"node-ca-lpc7s\" (UID: \"3ec2e656-a68d-4339-92d5-0c157f7f7783\") " pod="openshift-image-registry/node-ca-lpc7s" Nov 25 11:36:56 crc kubenswrapper[4706]: I1125 11:36:56.768611 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://23abd4bcc68d2a090882edb55d0e8569032affe5f4ebf05279e18ba3e9f9d8db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"en
v-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a068e34d29a7f39157ffd6e364ce643f5280f5184c13a281043247117d451364\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:36:56Z is after 2025-08-24T17:21:41Z" Nov 25 11:36:56 crc kubenswrapper[4706]: I1125 11:36:56.797155 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w54mf\" (UniqueName: 
\"kubernetes.io/projected/3ec2e656-a68d-4339-92d5-0c157f7f7783-kube-api-access-w54mf\") pod \"node-ca-lpc7s\" (UID: \"3ec2e656-a68d-4339-92d5-0c157f7f7783\") " pod="openshift-image-registry/node-ca-lpc7s" Nov 25 11:36:56 crc kubenswrapper[4706]: I1125 11:36:56.811575 4706 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-lpc7s" Nov 25 11:36:56 crc kubenswrapper[4706]: W1125 11:36:56.830412 4706 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3ec2e656_a68d_4339_92d5_0c157f7f7783.slice/crio-023c5fac597fe305da5823d9c94ae59a58fa1c5d4b2ff6a9fb79fd1195e7cc14 WatchSource:0}: Error finding container 023c5fac597fe305da5823d9c94ae59a58fa1c5d4b2ff6a9fb79fd1195e7cc14: Status 404 returned error can't find the container with id 023c5fac597fe305da5823d9c94ae59a58fa1c5d4b2ff6a9fb79fd1195e7cc14 Nov 25 11:36:56 crc kubenswrapper[4706]: I1125 11:36:56.839004 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-cjmvf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"150b96fa-570a-4b32-a82a-3275127d5b51\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:53Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:53Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:53Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2ml6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f9f9981b5f064aa5b007f4b2a2ecdc7f783e1a33e73b9e8b157eccfc54e93ff6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\
":\\\"2025-11-25T11:36:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2ml6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2ml6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\"
:\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2ml6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2ml6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPa
th\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2ml6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2ml6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T11:36:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-cjmvf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:36:56Z is after 2025-08-24T17:21:41Z" Nov 25 11:36:56 crc kubenswrapper[4706]: I1125 11:36:56.870703 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ce0e2e75-834b-46fb-bc84-229e60f904b1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:32Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:32Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://86001c3abc077d36ed1fa0c37bb6163896fb9cde28b58affd2f67fb8a024165b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://24c326f147def477e6dd794576cbdc9aed69f799cc18984f475496748b05eb32\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c65af8b438f57256d8c22cb34f68922d628338e384ca97d694b0dbf2d41a5e27\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://db08dd21321e0e49c2bcec934b9c4ca65e93ed3eff5d3d110b0137d37ebe255e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://333951d9a31cf3e7c1e98d27f636e2425f87cd082a8a5acae66533a76f5ad206\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-25T11:36:51Z\\\",\\\"message\\\":\\\" shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI1125 11:36:51.292762 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI1125 11:36:51.292767 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI1125 11:36:51.292853 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI1125 11:36:51.292876 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI1125 11:36:51.293041 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-1230105117/tls.crt::/tmp/serving-cert-1230105117/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1764070595\\\\\\\\\\\\\\\" (2025-11-25 11:36:34 +0000 UTC to 2025-12-25 11:36:35 +0000 UTC (now=2025-11-25 11:36:51.29301304 +0000 UTC))\\\\\\\"\\\\nI1125 11:36:51.293171 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1230105117/tls.crt::/tmp/serving-cert-1230105117/tls.key\\\\\\\"\\\\nI1125 11:36:51.293210 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" 
certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1764070605\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1764070605\\\\\\\\\\\\\\\" (2025-11-25 10:36:45 +0000 UTC to 2026-11-25 10:36:45 +0000 UTC (now=2025-11-25 11:36:51.293188774 +0000 UTC))\\\\\\\"\\\\nI1125 11:36:51.293233 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI1125 11:36:51.293259 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI1125 11:36:51.293279 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nF1125 11:36:51.293378 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T11:36:34Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe85a38abd8df52ad0fbd3dd6b048b8c42390b6064d3601996727dadb3fcbe69\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:34Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"container
ID\\\":\\\"cri-o://ea87e7399f4267877f5e967eb62bd500b646e88ee8ee20a71c3bf7c0941f02a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ea87e7399f4267877f5e967eb62bd500b646e88ee8ee20a71c3bf7c0941f02a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T11:36:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T11:36:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T11:36:32Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:36:56Z is after 2025-08-24T17:21:41Z" Nov 25 11:36:56 crc kubenswrapper[4706]: I1125 11:36:56.910572 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-dhfpm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0930887a-320c-4506-8c9c-f94d6d64516a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://736e37ff944f81ac9808ff8a76d36837aeabc76a4c08bbeba3f707616e1f0884\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g7sgt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://86f4bfd310c27ea3b77c2f58c91e153db5f17948
71a3fbeb5711cc119aa81e38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g7sgt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T11:36:53Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-dhfpm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:36:56Z is after 2025-08-24T17:21:41Z" Nov 25 11:36:56 crc kubenswrapper[4706]: I1125 11:36:56.946746 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-nh9sc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7813e79d-885d-4cf1-ac27-039e998473b7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea634334242536d35bf36e9078539cad4658b161b61e6051d9bb6d8544e71f5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g9gvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T11:36:53Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-nh9sc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:36:56Z is after 2025-08-24T17:21:41Z" Nov 25 11:36:56 crc kubenswrapper[4706]: I1125 11:36:56.988190 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad79bed891e80837fc120b01cb2b41a16493f2f5281c83a6bb489cc17c6da995\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-al
erter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:36:56Z is after 2025-08-24T17:21:41Z" Nov 25 11:36:57 crc kubenswrapper[4706]: I1125 11:36:57.029248 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-lpc7s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3ec2e656-a68d-4339-92d5-0c157f7f7783\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:56Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w54mf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T11:36:56Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lpc7s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:36:57Z is after 2025-08-24T17:21:41Z" Nov 25 11:36:57 crc kubenswrapper[4706]: I1125 11:36:57.073966 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-q9rpr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f1218bae-4153-4490-8847-ab2d07ca0ab6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:53Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:53Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b55sf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b55sf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b55sf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b55sf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b55sf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b55sf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b55sf\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b55sf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56474d5374e1047078d38c60dcbd00f4495bcc0d651a9e75fa70d64e34b10baa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://56474d5374e1047078d38c60dcbd00f4495bcc0d651a9e75fa70d64e34b10baa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T11:36:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T11:36:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b55sf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T11:36:53Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-q9rpr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:36:57Z is after 2025-08-24T17:21:41Z" Nov 25 11:36:57 crc kubenswrapper[4706]: I1125 11:36:57.075210 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-lpc7s" event={"ID":"3ec2e656-a68d-4339-92d5-0c157f7f7783","Type":"ContainerStarted","Data":"c3a1481dd8cb88b79d8addfbfd40caf18850769e4492c2af316105b7f6779f9b"} Nov 25 11:36:57 crc kubenswrapper[4706]: I1125 11:36:57.075325 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-lpc7s" event={"ID":"3ec2e656-a68d-4339-92d5-0c157f7f7783","Type":"ContainerStarted","Data":"023c5fac597fe305da5823d9c94ae59a58fa1c5d4b2ff6a9fb79fd1195e7cc14"} Nov 25 11:36:57 crc kubenswrapper[4706]: I1125 11:36:57.076871 4706 generic.go:334] "Generic (PLEG): container finished" podID="150b96fa-570a-4b32-a82a-3275127d5b51" containerID="f9f9981b5f064aa5b007f4b2a2ecdc7f783e1a33e73b9e8b157eccfc54e93ff6" exitCode=0 Nov 25 11:36:57 crc kubenswrapper[4706]: I1125 11:36:57.076906 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-cjmvf" event={"ID":"150b96fa-570a-4b32-a82a-3275127d5b51","Type":"ContainerDied","Data":"f9f9981b5f064aa5b007f4b2a2ecdc7f783e1a33e73b9e8b157eccfc54e93ff6"} Nov 25 11:36:57 crc kubenswrapper[4706]: I1125 11:36:57.110526 4706 status_manager.go:875] 
"Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"363ff191-6229-47e9-a7d0-1c72f21e7c61\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://71b496da1a81efbb50a84766e610a6b03e032a4e2cb5a71191395ffb85f6b1f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://83b1d9c60793e3e0b5943d7cccd50656df78c4655b84e12c8dd1ba7d99a7990d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha25
6:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab8621c83015577b9039ac2ba9ce46f8b29f66d77da31a02d179132d923741bd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4d0ce4e175dd8da8d15b26e60ced87ee11dc8079ce730cfbdce1b3f4f08b1d2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-
recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T11:36:32Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:36:57Z is after 2025-08-24T17:21:41Z" Nov 25 11:36:57 crc kubenswrapper[4706]: I1125 11:36:57.150560 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://998291d5af3be798ff4e2f00d043f615e086fef44e541071bbaf781983955ce6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-11-25T11:36:57Z is after 2025-08-24T17:21:41Z" Nov 25 11:36:57 crc kubenswrapper[4706]: I1125 11:36:57.190034 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:51Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:36:57Z is after 2025-08-24T17:21:41Z" Nov 25 11:36:57 crc kubenswrapper[4706]: I1125 11:36:57.232445 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:51Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:36:57Z is after 2025-08-24T17:21:41Z" Nov 25 11:36:57 crc kubenswrapper[4706]: I1125 11:36:57.271049 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:51Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:36:57Z is after 2025-08-24T17:21:41Z" Nov 25 11:36:57 crc kubenswrapper[4706]: I1125 11:36:57.309389 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-s47nr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9912058e-28f5-4cec-9eeb-03e37e0dc5c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d03353478b53d9441951702b66365bb3a08ad9c509347472bbb31049851435a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wfqx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T11:36:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-s47nr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:36:57Z is after 2025-08-24T17:21:41Z" Nov 25 11:36:57 crc kubenswrapper[4706]: I1125 11:36:57.348249 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-lpc7s" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3ec2e656-a68d-4339-92d5-0c157f7f7783\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c3a1481dd8cb88b79d8addfbfd40caf18850769e4492c2af316105b7f6779f9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w54mf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\
\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T11:36:56Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lpc7s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:36:57Z is after 2025-08-24T17:21:41Z" Nov 25 11:36:57 crc kubenswrapper[4706]: I1125 11:36:57.388816 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad79bed891e80837fc120b01cb2b41a16493f2f5281c83a6bb489cc17c6da995\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-
11-25T11:36:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:36:57Z is after 2025-08-24T17:21:41Z" Nov 25 11:36:57 crc kubenswrapper[4706]: I1125 11:36:57.429764 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://998291d5af3be798ff4e2f00d043f615e086fef44e541071bbaf781983955ce6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-11-25T11:36:57Z is after 2025-08-24T17:21:41Z" Nov 25 11:36:57 crc kubenswrapper[4706]: I1125 11:36:57.469959 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:51Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:36:57Z is after 2025-08-24T17:21:41Z" Nov 25 11:36:57 crc kubenswrapper[4706]: I1125 11:36:57.515143 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:51Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:36:57Z is after 2025-08-24T17:21:41Z" Nov 25 11:36:57 crc kubenswrapper[4706]: I1125 11:36:57.553920 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:51Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:36:57Z is after 2025-08-24T17:21:41Z" Nov 25 11:36:57 crc kubenswrapper[4706]: I1125 11:36:57.591426 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-s47nr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9912058e-28f5-4cec-9eeb-03e37e0dc5c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d03353478b53d9441951702b66365bb3a08ad9c509347472bbb31049851435a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wfqx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T11:36:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-s47nr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:36:57Z is after 2025-08-24T17:21:41Z" Nov 25 11:36:57 crc kubenswrapper[4706]: I1125 11:36:57.634123 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-q9rpr" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f1218bae-4153-4490-8847-ab2d07ca0ab6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:53Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:53Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b55sf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b55sf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b55sf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b55sf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b55sf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b55sf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b55sf\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b55sf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56474d5374e1047078d38c60dcbd00f4495bcc0d651a9e75fa70d64e34b10baa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://56474d5374e1047078d38c60dcbd00f4495bcc0d651a9e75fa70d64e34b10baa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T11:36:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T11:36:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b55sf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T11:36:53Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-q9rpr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:36:57Z is after 2025-08-24T17:21:41Z" Nov 25 11:36:57 crc kubenswrapper[4706]: I1125 11:36:57.663549 4706 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 25 11:36:57 crc kubenswrapper[4706]: I1125 11:36:57.665416 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:36:57 crc kubenswrapper[4706]: I1125 11:36:57.665452 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:36:57 crc kubenswrapper[4706]: I1125 11:36:57.665465 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:36:57 crc kubenswrapper[4706]: I1125 11:36:57.665614 4706 kubelet_node_status.go:76] "Attempting to register node" node="crc" Nov 25 11:36:57 crc kubenswrapper[4706]: I1125 11:36:57.670024 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"363ff191-6229-47e9-a7d0-1c72f21e7c61\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://71b496da1a81efbb50a84766e610a6b03e032a4e2cb5a71191395ffb85f6b1f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://83b1d9c60793e3e0b5943d7cccd50656df78c4655b84e12c8dd1ba7d99a7990d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab8621c83015577b9039ac2ba9ce46f8b29f66d77da31a02d179132d923741bd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4d0ce4e175dd8da8d15b26e60ced87ee11dc8079ce730cfbdce1b3f4f08b1d2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-11-25T11:36:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T11:36:32Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:36:57Z is after 2025-08-24T17:21:41Z" Nov 25 11:36:57 crc kubenswrapper[4706]: I1125 11:36:57.720070 4706 kubelet_node_status.go:115] "Node was previously registered" node="crc" Nov 25 11:36:57 crc kubenswrapper[4706]: I1125 11:36:57.720402 4706 kubelet_node_status.go:79] "Successfully registered node" node="crc" Nov 25 11:36:57 crc kubenswrapper[4706]: I1125 11:36:57.721392 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:36:57 crc kubenswrapper[4706]: I1125 11:36:57.721429 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:36:57 crc kubenswrapper[4706]: I1125 11:36:57.721440 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:36:57 crc kubenswrapper[4706]: I1125 11:36:57.721458 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:36:57 crc kubenswrapper[4706]: I1125 11:36:57.721526 4706 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:36:57Z","lastTransitionTime":"2025-11-25T11:36:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 11:36:57 crc kubenswrapper[4706]: E1125 11:36:57.736506 4706 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T11:36:57Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:57Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T11:36:57Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:57Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T11:36:57Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:57Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T11:36:57Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:57Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"
registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb617
3ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"reg
istry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"30198dc8-e58c-4847-a541-041da1924c5c\\\",\\\"systemUUID\\\":\\\"7dac62ec-3979-4862-b1af-b63212907795\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:36:57Z is after 2025-08-24T17:21:41Z" Nov 25 11:36:57 crc kubenswrapper[4706]: I1125 11:36:57.740729 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:36:57 crc kubenswrapper[4706]: I1125 11:36:57.740780 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:36:57 crc kubenswrapper[4706]: I1125 11:36:57.740792 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:36:57 crc kubenswrapper[4706]: I1125 11:36:57.740820 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:36:57 crc kubenswrapper[4706]: I1125 11:36:57.740836 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:36:57Z","lastTransitionTime":"2025-11-25T11:36:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:36:57 crc kubenswrapper[4706]: E1125 11:36:57.754170 4706 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T11:36:57Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:57Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T11:36:57Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:57Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T11:36:57Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:57Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T11:36:57Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:57Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"30198dc8-e58c-4847-a541-041da1924c5c\\\",\\\"systemUUID\\\":\\\"7dac62ec-3979-4862-b1af-b63212907795\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:36:57Z is after 2025-08-24T17:21:41Z" Nov 25 11:36:57 crc kubenswrapper[4706]: I1125 11:36:57.756064 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"21277b4b-1e5d-4345-ba2a-39957194f021\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e336808761e1c6c5eaa04fd06cbb4d0c0384a2cbd3dfd4c1b3a877e7e0f0c82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"
mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cfaf9f13d49eb5c52817b0d082263791cc1dca82a23282452f1393dd693ca27a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://634b7b0df29329562f6ead9641186eee129945efc5a2d784ff6474d213b2baea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"co
ntainerID\\\":\\\"cri-o://3b3642576d5ecf314b809b90f8a76244e5ea54178f78729eb6521b09b7daa9c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b63b9c87fed8e56acef62af3c5b75cf637a058ada9dd8ef5afc317e99e12162\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://adfea6c3d1d62c9ce2656cae203bccf32ab19165305db3731e3db92915dd7d4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0
-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://adfea6c3d1d62c9ce2656cae203bccf32ab19165305db3731e3db92915dd7d4c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T11:36:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T11:36:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://29e8fd847ab683471a692fda5b7c7d6105db11c5aecf09bb23080c58cd97c06a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://29e8fd847ab683471a692fda5b7c7d6105db11c5aecf09bb23080c58cd97c06a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T11:36:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T11:36:34Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://a4b1f85fd0291c239854093c0b26f0802291cbe4c4bab384fc69a5d21165343a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCou
nt\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a4b1f85fd0291c239854093c0b26f0802291cbe4c4bab384fc69a5d21165343a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T11:36:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T11:36:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T11:36:32Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:36:57Z is after 2025-08-24T17:21:41Z" Nov 25 11:36:57 crc kubenswrapper[4706]: I1125 11:36:57.759823 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:36:57 crc kubenswrapper[4706]: I1125 11:36:57.759871 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:36:57 crc kubenswrapper[4706]: I1125 11:36:57.759881 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:36:57 crc kubenswrapper[4706]: I1125 11:36:57.759903 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:36:57 crc kubenswrapper[4706]: I1125 11:36:57.759919 4706 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:36:57Z","lastTransitionTime":"2025-11-25T11:36:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 11:36:57 crc kubenswrapper[4706]: E1125 11:36:57.772986 4706 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T11:36:57Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:57Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T11:36:57Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:57Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T11:36:57Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:57Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T11:36:57Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:57Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"
registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb617
3ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"reg
istry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"30198dc8-e58c-4847-a541-041da1924c5c\\\",\\\"systemUUID\\\":\\\"7dac62ec-3979-4862-b1af-b63212907795\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:36:57Z is after 2025-08-24T17:21:41Z" Nov 25 11:36:57 crc kubenswrapper[4706]: I1125 11:36:57.777413 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:36:57 crc kubenswrapper[4706]: I1125 11:36:57.777458 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:36:57 crc kubenswrapper[4706]: I1125 11:36:57.777467 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:36:57 crc kubenswrapper[4706]: I1125 11:36:57.777485 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:36:57 crc kubenswrapper[4706]: I1125 11:36:57.777495 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:36:57Z","lastTransitionTime":"2025-11-25T11:36:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:36:57 crc kubenswrapper[4706]: E1125 11:36:57.790314 4706 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T11:36:57Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:57Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T11:36:57Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:57Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T11:36:57Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:57Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T11:36:57Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:57Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"30198dc8-e58c-4847-a541-041da1924c5c\\\",\\\"systemUUID\\\":\\\"7dac62ec-3979-4862-b1af-b63212907795\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:36:57Z is after 2025-08-24T17:21:41Z" Nov 25 11:36:57 crc kubenswrapper[4706]: I1125 11:36:57.790576 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://23abd4bcc68d2a090882edb55d0e8569032affe5f4ebf05279e18ba3e9f9d8db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"}
,{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a068e34d29a7f39157ffd6e364ce643f5280f5184c13a281043247117d451364\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:36:57Z is after 2025-08-24T17:21:41Z" Nov 25 11:36:57 crc kubenswrapper[4706]: I1125 11:36:57.794203 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:36:57 crc kubenswrapper[4706]: I1125 11:36:57.794253 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:36:57 crc 
kubenswrapper[4706]: I1125 11:36:57.794269 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:36:57 crc kubenswrapper[4706]: I1125 11:36:57.794290 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:36:57 crc kubenswrapper[4706]: I1125 11:36:57.794321 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:36:57Z","lastTransitionTime":"2025-11-25T11:36:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 11:36:57 crc kubenswrapper[4706]: E1125 11:36:57.806813 4706 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T11:36:57Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:57Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T11:36:57Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:57Z\\\",\\\"message\\\":\\\"kubelet has no disk 
pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T11:36:57Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:57Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T11:36:57Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:57Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2
ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9810067
4616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.
io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a07
2c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa73
83b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"30198dc8-e58c-4847-a541-041da1924c5c\\\",\\\"systemUUID\\\":\\\"7dac62ec-3979-4862-b1af-b63212907795\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:36:57Z is after 2025-08-24T17:21:41Z" Nov 25 11:36:57 crc kubenswrapper[4706]: E1125 11:36:57.806937 4706 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Nov 25 11:36:57 crc kubenswrapper[4706]: I1125 11:36:57.808804 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:36:57 crc kubenswrapper[4706]: I1125 11:36:57.808838 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:36:57 crc kubenswrapper[4706]: I1125 11:36:57.808847 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:36:57 crc kubenswrapper[4706]: I1125 11:36:57.808867 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:36:57 crc kubenswrapper[4706]: I1125 11:36:57.808878 4706 setters.go:603] "Node became not 
ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:36:57Z","lastTransitionTime":"2025-11-25T11:36:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 11:36:57 crc kubenswrapper[4706]: I1125 11:36:57.831347 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-cjmvf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"150b96fa-570a-4b32-a82a-3275127d5b51\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:53Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:53Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:53Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2ml6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f9f9981b5f064aa5b007f4b2a2ecdc7f783e1a33e73b9e8b157eccfc54e93ff6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f9f9981b5f064aa5b007f4b2a2ecdc7f783e1a33e73b9e8b157eccfc54e93ff6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T11:36:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T11:36:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2ml6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2ml6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube
-api-access-d2ml6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2ml6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2ml6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\
\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2ml6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T11:36:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-cjmvf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:36:57Z is after 2025-08-24T17:21:41Z" Nov 25 11:36:57 crc kubenswrapper[4706]: I1125 11:36:57.869905 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-dhfpm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0930887a-320c-4506-8c9c-f94d6d64516a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://736e37ff944f81ac9808ff8a76d36837aeabc76a4c08bbeba3f707616e1f0884\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g7sgt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://86f4bfd310c27ea3b77c2f58c91e153db5f17948
71a3fbeb5711cc119aa81e38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g7sgt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T11:36:53Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-dhfpm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:36:57Z is after 2025-08-24T17:21:41Z" Nov 25 11:36:57 crc kubenswrapper[4706]: I1125 11:36:57.907615 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-nh9sc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7813e79d-885d-4cf1-ac27-039e998473b7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea634334242536d35bf36e9078539cad4658b161b61e6051d9bb6d8544e71f5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g9gvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T11:36:53Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-nh9sc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:36:57Z is after 2025-08-24T17:21:41Z" Nov 25 11:36:57 crc kubenswrapper[4706]: I1125 11:36:57.911785 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:36:57 crc kubenswrapper[4706]: I1125 11:36:57.911824 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:36:57 crc kubenswrapper[4706]: I1125 11:36:57.911838 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:36:57 crc kubenswrapper[4706]: I1125 11:36:57.911854 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:36:57 crc kubenswrapper[4706]: I1125 11:36:57.911864 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:36:57Z","lastTransitionTime":"2025-11-25T11:36:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 11:36:57 crc kubenswrapper[4706]: I1125 11:36:57.921463 4706 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 11:36:57 crc kubenswrapper[4706]: I1125 11:36:57.921466 4706 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 11:36:57 crc kubenswrapper[4706]: E1125 11:36:57.921644 4706 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 25 11:36:57 crc kubenswrapper[4706]: I1125 11:36:57.921487 4706 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 11:36:57 crc kubenswrapper[4706]: E1125 11:36:57.921744 4706 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 25 11:36:57 crc kubenswrapper[4706]: E1125 11:36:57.921904 4706 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 25 11:36:57 crc kubenswrapper[4706]: I1125 11:36:57.953782 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ce0e2e75-834b-46fb-bc84-229e60f904b1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:32Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:32Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://86001c3abc077d36ed1fa0c37bb6163896fb9cde28b58affd2f67fb8a024165b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://24c326f147def477e6dd794576cbdc9aed69f799cc18984f475496748b05eb32\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://c65af8b438f57256d8c22cb34f68922d628338e384ca97d694b0dbf2d41a5e27\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://db08dd21321e0e49c2bcec934b9c4ca65e93ed3eff5d3d110b0137d37ebe255e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://333951d9a31cf3e7c1e98d27f636e2425f87cd082a8a5acae66533a76f5ad206\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-25T11:36:51Z\\\",\\\"message\\\":\\\" shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI1125 11:36:51.292762 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI1125 11:36:51.292767 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI1125 11:36:51.292853 1 
envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI1125 11:36:51.292876 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI1125 11:36:51.293041 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-1230105117/tls.crt::/tmp/serving-cert-1230105117/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1764070595\\\\\\\\\\\\\\\" (2025-11-25 11:36:34 +0000 UTC to 2025-12-25 11:36:35 +0000 UTC (now=2025-11-25 11:36:51.29301304 +0000 UTC))\\\\\\\"\\\\nI1125 11:36:51.293171 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1230105117/tls.crt::/tmp/serving-cert-1230105117/tls.key\\\\\\\"\\\\nI1125 11:36:51.293210 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1764070605\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1764070605\\\\\\\\\\\\\\\" (2025-11-25 10:36:45 +0000 UTC to 2026-11-25 10:36:45 +0000 UTC (now=2025-11-25 11:36:51.293188774 +0000 UTC))\\\\\\\"\\\\nI1125 11:36:51.293233 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI1125 11:36:51.293259 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI1125 11:36:51.293279 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nF1125 11:36:51.293378 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T11:36:34Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe85a38abd8df52ad0fbd3dd6b048b8c42390b6064d3601996727dadb3fcbe69\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:34Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea87e7399f4267877f5e967eb62bd500b646e88ee8ee20a71c3bf7c0941f02a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ea87e7399f4267877f5e967eb62bd500b646e88ee8ee20a71c3bf7c0941f02a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T11:36:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2025-11-25T11:36:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T11:36:32Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:36:57Z is after 2025-08-24T17:21:41Z" Nov 25 11:36:58 crc kubenswrapper[4706]: I1125 11:36:58.015160 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:36:58 crc kubenswrapper[4706]: I1125 11:36:58.015203 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:36:58 crc kubenswrapper[4706]: I1125 11:36:58.015211 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:36:58 crc kubenswrapper[4706]: I1125 11:36:58.015228 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:36:58 crc kubenswrapper[4706]: I1125 11:36:58.015238 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:36:58Z","lastTransitionTime":"2025-11-25T11:36:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:36:58 crc kubenswrapper[4706]: I1125 11:36:58.083516 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-q9rpr" event={"ID":"f1218bae-4153-4490-8847-ab2d07ca0ab6","Type":"ContainerStarted","Data":"62c923d955013808a55d99cb73f4239900fc83a2f53e1e8cceff3e9bc5768188"} Nov 25 11:36:58 crc kubenswrapper[4706]: I1125 11:36:58.085585 4706 generic.go:334] "Generic (PLEG): container finished" podID="150b96fa-570a-4b32-a82a-3275127d5b51" containerID="4e1e9db3e634932b935a1eb04923d02faf743f2831039edeba41d172ea6d8c52" exitCode=0 Nov 25 11:36:58 crc kubenswrapper[4706]: I1125 11:36:58.085647 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-cjmvf" event={"ID":"150b96fa-570a-4b32-a82a-3275127d5b51","Type":"ContainerDied","Data":"4e1e9db3e634932b935a1eb04923d02faf743f2831039edeba41d172ea6d8c52"} Nov 25 11:36:58 crc kubenswrapper[4706]: I1125 11:36:58.100916 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ce0e2e75-834b-46fb-bc84-229e60f904b1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:32Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:32Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://86001c3abc077d36ed1fa0c37bb6163896fb9cde28b58affd2f67fb8a024165b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://24c326f147def477e6dd794576cbdc9aed69f799cc18984f475496748b05eb32\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://c65af8b438f57256d8c22cb34f68922d628338e384ca97d694b0dbf2d41a5e27\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://db08dd21321e0e49c2bcec934b9c4ca65e93ed3eff5d3d110b0137d37ebe255e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://333951d9a31cf3e7c1e98d27f636e2425f87cd082a8a5acae66533a76f5ad206\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-25T11:36:51Z\\\",\\\"message\\\":\\\" shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI1125 11:36:51.292762 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI1125 11:36:51.292767 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI1125 11:36:51.292853 1 
envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI1125 11:36:51.292876 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI1125 11:36:51.293041 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-1230105117/tls.crt::/tmp/serving-cert-1230105117/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1764070595\\\\\\\\\\\\\\\" (2025-11-25 11:36:34 +0000 UTC to 2025-12-25 11:36:35 +0000 UTC (now=2025-11-25 11:36:51.29301304 +0000 UTC))\\\\\\\"\\\\nI1125 11:36:51.293171 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1230105117/tls.crt::/tmp/serving-cert-1230105117/tls.key\\\\\\\"\\\\nI1125 11:36:51.293210 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1764070605\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1764070605\\\\\\\\\\\\\\\" (2025-11-25 10:36:45 +0000 UTC to 2026-11-25 10:36:45 +0000 UTC (now=2025-11-25 11:36:51.293188774 +0000 UTC))\\\\\\\"\\\\nI1125 11:36:51.293233 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI1125 11:36:51.293259 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI1125 11:36:51.293279 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nF1125 11:36:51.293378 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T11:36:34Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe85a38abd8df52ad0fbd3dd6b048b8c42390b6064d3601996727dadb3fcbe69\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:34Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea87e7399f4267877f5e967eb62bd500b646e88ee8ee20a71c3bf7c0941f02a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ea87e7399f4267877f5e967eb62bd500b646e88ee8ee20a71c3bf7c0941f02a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T11:36:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2025-11-25T11:36:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T11:36:32Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:36:58Z is after 2025-08-24T17:21:41Z" Nov 25 11:36:58 crc kubenswrapper[4706]: I1125 11:36:58.113025 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-dhfpm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0930887a-320c-4506-8c9c-f94d6d64516a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://736e37ff944f81ac9808ff8a76d36837aeabc76a4c08bbeba3f707616e1f0884\\\",\\\"image\\\"
:\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g7sgt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://86f4bfd310c27ea3b77c2f58c91e153db5f1794871a3fbeb5711cc119aa81e38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g7sgt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T11:36:53Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-dhfpm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:36:58Z is after 2025-08-24T17:21:41Z" Nov 25 11:36:58 crc kubenswrapper[4706]: I1125 11:36:58.117152 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:36:58 crc kubenswrapper[4706]: I1125 11:36:58.117195 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:36:58 crc kubenswrapper[4706]: I1125 11:36:58.117208 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:36:58 crc kubenswrapper[4706]: I1125 11:36:58.117226 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:36:58 crc kubenswrapper[4706]: I1125 11:36:58.117240 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:36:58Z","lastTransitionTime":"2025-11-25T11:36:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:36:58 crc kubenswrapper[4706]: I1125 11:36:58.125728 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-nh9sc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7813e79d-885d-4cf1-ac27-039e998473b7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea634334242536d35bf36e9078539cad4658b161b61e6051d9bb6d8544e71f5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g9gvf\\\",\\\"re
adOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T11:36:53Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-nh9sc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:36:58Z is after 2025-08-24T17:21:41Z" Nov 25 11:36:58 crc kubenswrapper[4706]: I1125 11:36:58.137792 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad79bed891e80837fc120b01cb2b41a16493f2f5281c83a6bb489cc17c6da995\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\
\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:36:58Z is after 2025-08-24T17:21:41Z" Nov 25 11:36:58 crc kubenswrapper[4706]: I1125 11:36:58.148476 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-lpc7s" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3ec2e656-a68d-4339-92d5-0c157f7f7783\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c3a1481dd8cb88b79d8addfbfd40caf18850769e4492c2af316105b7f6779f9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w54mf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T11:36:56Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lpc7s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:36:58Z is after 2025-08-24T17:21:41Z" Nov 25 11:36:58 crc kubenswrapper[4706]: I1125 11:36:58.188278 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:51Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:36:58Z is after 2025-08-24T17:21:41Z" Nov 25 11:36:58 crc kubenswrapper[4706]: I1125 11:36:58.219802 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:36:58 crc kubenswrapper[4706]: I1125 11:36:58.219828 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:36:58 crc kubenswrapper[4706]: I1125 11:36:58.219837 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:36:58 crc kubenswrapper[4706]: I1125 11:36:58.219854 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:36:58 crc kubenswrapper[4706]: I1125 11:36:58.219867 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:36:58Z","lastTransitionTime":"2025-11-25T11:36:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 11:36:58 crc kubenswrapper[4706]: I1125 11:36:58.229211 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:36:58Z is after 2025-08-24T17:21:41Z" Nov 25 11:36:58 crc kubenswrapper[4706]: I1125 11:36:58.269827 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:51Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:36:58Z is after 2025-08-24T17:21:41Z" Nov 25 11:36:58 crc kubenswrapper[4706]: I1125 11:36:58.310930 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-s47nr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9912058e-28f5-4cec-9eeb-03e37e0dc5c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d03353478b53d9441951702b66365bb3a08ad9c509347472bbb31049851435a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wfqx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T11:36:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-s47nr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:36:58Z is after 2025-08-24T17:21:41Z" Nov 25 11:36:58 crc kubenswrapper[4706]: I1125 11:36:58.322988 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:36:58 crc 
kubenswrapper[4706]: I1125 11:36:58.323420 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:36:58 crc kubenswrapper[4706]: I1125 11:36:58.323431 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:36:58 crc kubenswrapper[4706]: I1125 11:36:58.323451 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:36:58 crc kubenswrapper[4706]: I1125 11:36:58.323463 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:36:58Z","lastTransitionTime":"2025-11-25T11:36:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 11:36:58 crc kubenswrapper[4706]: I1125 11:36:58.379764 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-q9rpr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f1218bae-4153-4490-8847-ab2d07ca0ab6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:53Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller 
ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:53Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b55sf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readO
nly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b55sf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b55sf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-acc
ess-b55sf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b55sf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/servi
ceaccount\\\",\\\"name\\\":\\\"kube-api-access-b55sf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\
\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b55sf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b55sf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56474d5374e1047078d38c60dcbd00f4495bcc0d651a9e75fa70d64e34b10baa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":t
rue,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://56474d5374e1047078d38c60dcbd00f4495bcc0d651a9e75fa70d64e34b10baa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T11:36:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T11:36:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b55sf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T11:36:53Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-q9rpr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:36:58Z is after 2025-08-24T17:21:41Z" Nov 25 11:36:58 crc kubenswrapper[4706]: I1125 11:36:58.398380 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"363ff191-6229-47e9-a7d0-1c72f21e7c61\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://71b496da1a81efbb50a84766e610a6b03e032a4e2cb5a71191395ffb85f6b1f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://83b1d9c60793e3e0b5943d7cccd50656df78c4655b84e12c8dd1ba7d99a7990d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab8621c83015577b9039ac2ba9ce46f8b29f66d77da31a02d179132d923741bd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4d0ce4e175dd8da8d15b26e60ced87ee11dc8079ce730cfbdce1b3f4f08b1d2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-11-25T11:36:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T11:36:32Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:36:58Z is after 2025-08-24T17:21:41Z" Nov 25 11:36:58 crc kubenswrapper[4706]: I1125 11:36:58.426167 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:36:58 crc kubenswrapper[4706]: I1125 11:36:58.426210 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:36:58 crc kubenswrapper[4706]: I1125 11:36:58.426223 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:36:58 crc kubenswrapper[4706]: I1125 11:36:58.426252 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:36:58 crc kubenswrapper[4706]: I1125 11:36:58.426274 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:36:58Z","lastTransitionTime":"2025-11-25T11:36:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns 
error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 11:36:58 crc kubenswrapper[4706]: I1125 11:36:58.433647 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://998291d5af3be798ff4e2f00d043f615e086fef44e541071bbaf781983955ce6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursive
ReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:36:58Z is after 2025-08-24T17:21:41Z" Nov 25 11:36:58 crc kubenswrapper[4706]: I1125 11:36:58.470810 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-cjmvf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"150b96fa-570a-4b32-a82a-3275127d5b51\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:53Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:53Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:53Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2ml6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f9f9981b5f064aa5b007f4b2a2ecdc7f783e1a33e73b9e8b157eccfc54e93ff6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f9f9981b5f064aa5b007f4b2a2ecdc7f783e1a33e73b9e8b157eccfc54e93ff6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T11:36:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T11:36:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2ml6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e1e9db3e634932b935a1eb04923d02faf743f2831039edeba41d172ea6d8c52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e1e9db3e634932b935a1eb04923d02faf743f2831039edeba41d172ea6d8c52\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T11:36:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T11:36:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2ml6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodIn
itializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2ml6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2ml6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-
release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2ml6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2ml6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T11:36:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-cjmvf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:36:58Z is after 2025-08-24T17:21:41Z" Nov 25 11:36:58 crc kubenswrapper[4706]: I1125 11:36:58.517886 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"21277b4b-1e5d-4345-ba2a-39957194f021\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e336808761e1c6c5eaa04fd06cbb4d0c0384a2cbd3dfd4c1b3a877e7e0f0c82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cfaf9f13d49eb5c52817b0d082263791cc1dca82a23282452f1393dd693ca27a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://634b7b0df29329562f6ead9641186eee129945efc5a2d784ff6474d213b2baea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b3642576d5ecf314b809b90f8a76244e5ea54178f78729eb6521b09b7daa9c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b63b9c87fed8e56acef62af3c5b75cf637a058ada9dd8ef5afc317e99e12162\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://adfea6c3d1d62c9ce2656cae203bccf32ab19165305db3731e3db92915dd7d4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://adfea6c3d1d62c9ce2656cae203bccf32ab19165305db3731e3db92915dd7d4c\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2025-11-25T11:36:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T11:36:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://29e8fd847ab683471a692fda5b7c7d6105db11c5aecf09bb23080c58cd97c06a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://29e8fd847ab683471a692fda5b7c7d6105db11c5aecf09bb23080c58cd97c06a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T11:36:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T11:36:34Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://a4b1f85fd0291c239854093c0b26f0802291cbe4c4bab384fc69a5d21165343a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a4b1f85fd0291c239854093c0b26f0802291cbe4c4bab384fc69a5d21165343a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T11:36:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T11:36:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T11:36:32Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:36:58Z is after 2025-08-24T17:21:41Z" Nov 25 11:36:58 crc kubenswrapper[4706]: I1125 11:36:58.529658 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:36:58 crc kubenswrapper[4706]: I1125 11:36:58.529693 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:36:58 crc kubenswrapper[4706]: I1125 11:36:58.529707 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:36:58 crc kubenswrapper[4706]: I1125 11:36:58.529724 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:36:58 crc kubenswrapper[4706]: I1125 11:36:58.529736 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:36:58Z","lastTransitionTime":"2025-11-25T11:36:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:36:58 crc kubenswrapper[4706]: I1125 11:36:58.548439 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://23abd4bcc68d2a090882edb55d0e8569032affe5f4ebf05279e18ba3e9f9d8db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a068e34d29a7f39157ffd6e364ce643f5280f5184c13a281043247117d451364\\\",\\\
"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:36:58Z is after 2025-08-24T17:21:41Z" Nov 25 11:36:58 crc kubenswrapper[4706]: I1125 11:36:58.632273 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:36:58 crc kubenswrapper[4706]: I1125 11:36:58.632316 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:36:58 crc kubenswrapper[4706]: I1125 11:36:58.632329 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:36:58 crc kubenswrapper[4706]: I1125 11:36:58.632347 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeNotReady" Nov 25 11:36:58 crc kubenswrapper[4706]: I1125 11:36:58.632357 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:36:58Z","lastTransitionTime":"2025-11-25T11:36:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 11:36:58 crc kubenswrapper[4706]: I1125 11:36:58.735052 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:36:58 crc kubenswrapper[4706]: I1125 11:36:58.735116 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:36:58 crc kubenswrapper[4706]: I1125 11:36:58.735131 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:36:58 crc kubenswrapper[4706]: I1125 11:36:58.735153 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:36:58 crc kubenswrapper[4706]: I1125 11:36:58.735164 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:36:58Z","lastTransitionTime":"2025-11-25T11:36:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:36:58 crc kubenswrapper[4706]: I1125 11:36:58.837314 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:36:58 crc kubenswrapper[4706]: I1125 11:36:58.837368 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:36:58 crc kubenswrapper[4706]: I1125 11:36:58.837377 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:36:58 crc kubenswrapper[4706]: I1125 11:36:58.837397 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:36:58 crc kubenswrapper[4706]: I1125 11:36:58.837407 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:36:58Z","lastTransitionTime":"2025-11-25T11:36:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:36:58 crc kubenswrapper[4706]: I1125 11:36:58.940190 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:36:58 crc kubenswrapper[4706]: I1125 11:36:58.940776 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:36:58 crc kubenswrapper[4706]: I1125 11:36:58.940793 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:36:58 crc kubenswrapper[4706]: I1125 11:36:58.940812 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:36:58 crc kubenswrapper[4706]: I1125 11:36:58.940824 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:36:58Z","lastTransitionTime":"2025-11-25T11:36:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:36:59 crc kubenswrapper[4706]: I1125 11:36:59.044330 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:36:59 crc kubenswrapper[4706]: I1125 11:36:59.044926 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:36:59 crc kubenswrapper[4706]: I1125 11:36:59.044938 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:36:59 crc kubenswrapper[4706]: I1125 11:36:59.044957 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:36:59 crc kubenswrapper[4706]: I1125 11:36:59.044982 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:36:59Z","lastTransitionTime":"2025-11-25T11:36:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:36:59 crc kubenswrapper[4706]: I1125 11:36:59.091901 4706 generic.go:334] "Generic (PLEG): container finished" podID="150b96fa-570a-4b32-a82a-3275127d5b51" containerID="0cee50b6983d9c650efbb5959311b6c33c2e0e2ff504fceadc8ff807f368c36e" exitCode=0 Nov 25 11:36:59 crc kubenswrapper[4706]: I1125 11:36:59.091952 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-cjmvf" event={"ID":"150b96fa-570a-4b32-a82a-3275127d5b51","Type":"ContainerDied","Data":"0cee50b6983d9c650efbb5959311b6c33c2e0e2ff504fceadc8ff807f368c36e"} Nov 25 11:36:59 crc kubenswrapper[4706]: I1125 11:36:59.106553 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad79bed891e80837fc120b01cb2b41a16493f2f5281c83a6bb489cc17c6da995\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\
"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:36:59Z is after 2025-08-24T17:21:41Z" Nov 25 11:36:59 crc kubenswrapper[4706]: I1125 11:36:59.121531 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-lpc7s" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3ec2e656-a68d-4339-92d5-0c157f7f7783\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c3a1481dd8cb88b79d8addfbfd40caf18850769e4492c2af316105b7f6779f9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w54mf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T11:36:56Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lpc7s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:36:59Z is after 2025-08-24T17:21:41Z" Nov 25 11:36:59 crc kubenswrapper[4706]: I1125 11:36:59.136565 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:51Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:36:59Z is after 2025-08-24T17:21:41Z" Nov 25 11:36:59 crc kubenswrapper[4706]: I1125 11:36:59.147363 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:36:59 crc kubenswrapper[4706]: I1125 11:36:59.147765 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:36:59 crc kubenswrapper[4706]: I1125 11:36:59.147843 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:36:59 crc kubenswrapper[4706]: I1125 11:36:59.147916 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:36:59 crc kubenswrapper[4706]: I1125 11:36:59.147981 4706 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:36:59Z","lastTransitionTime":"2025-11-25T11:36:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 11:36:59 crc kubenswrapper[4706]: I1125 11:36:59.152076 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-s47nr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9912058e-28f5-4cec-9eeb-03e37e0dc5c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d03353478b53d9441951702b66365bb3a08ad9c509347472bbb31049851435a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\
\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wfqx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T11:
36:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-s47nr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:36:59Z is after 2025-08-24T17:21:41Z" Nov 25 11:36:59 crc kubenswrapper[4706]: I1125 11:36:59.170398 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-q9rpr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f1218bae-4153-4490-8847-ab2d07ca0ab6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:53Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:53Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b55sf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b55sf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b55sf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b55sf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b55sf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b55sf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b55sf\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b55sf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56474d5374e1047078d38c60dcbd00f4495bcc0d651a9e75fa70d64e34b10baa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://56474d5374e1047078d38c60dcbd00f4495bcc0d651a9e75fa70d64e34b10baa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T11:36:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T11:36:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b55sf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T11:36:53Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-q9rpr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:36:59Z is after 2025-08-24T17:21:41Z" Nov 25 11:36:59 crc kubenswrapper[4706]: I1125 11:36:59.183718 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"363ff191-6229-47e9-a7d0-1c72f21e7c61\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://71b496da1a81efbb50a84766e610a6b03e032a4e2cb5a71191395ffb85f6b1f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a7
9379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://83b1d9c60793e3e0b5943d7cccd50656df78c4655b84e12c8dd1ba7d99a7990d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab8621c83015577b9039ac2ba9ce46f8b29f66d77da31a02d179132d923741bd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4d0ce4e175dd8da8d15b26e60ced87ee11dc8079ce730cfbdce1b3f4f08b1d2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T11:36:32Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:36:59Z is after 2025-08-24T17:21:41Z" Nov 25 11:36:59 crc kubenswrapper[4706]: I1125 11:36:59.197551 4706 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://998291d5af3be798ff4e2f00d043f615e086fef44e541071bbaf781983955ce6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call 
webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:36:59Z is after 2025-08-24T17:21:41Z" Nov 25 11:36:59 crc kubenswrapper[4706]: I1125 11:36:59.209791 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:51Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:36:59Z is after 2025-08-24T17:21:41Z" Nov 25 11:36:59 crc kubenswrapper[4706]: I1125 11:36:59.223049 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:51Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:36:59Z is after 2025-08-24T17:21:41Z" Nov 25 11:36:59 crc kubenswrapper[4706]: I1125 11:36:59.243583 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"21277b4b-1e5d-4345-ba2a-39957194f021\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e336808761e1c6c5eaa04fd06cbb4d0c0384a2cbd3dfd4c1b3a877e7e0f0c82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cfaf9f13d49eb5c52817b0d082263791cc1dca82a23282452f1393dd693ca27a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://634b7b0df29329562f6ead9641186eee129945efc5a2d784ff6474d213b2baea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b3642576d5ecf314b809b90f8a76244e5ea54178f78729eb6521b09b7daa9c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b63b9c87fed8e56acef62af3c5b75cf637a058ada9dd8ef5afc317e99e12162\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://adfea6c3d1d62c9ce2656cae203bccf32ab19165305db3731e3db92915dd7d4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://adfea6c3d1d62c9ce2656cae203bccf32ab19165305db3731e3db92915dd7d4c\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2025-11-25T11:36:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T11:36:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://29e8fd847ab683471a692fda5b7c7d6105db11c5aecf09bb23080c58cd97c06a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://29e8fd847ab683471a692fda5b7c7d6105db11c5aecf09bb23080c58cd97c06a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T11:36:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T11:36:34Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://a4b1f85fd0291c239854093c0b26f0802291cbe4c4bab384fc69a5d21165343a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a4b1f85fd0291c239854093c0b26f0802291cbe4c4bab384fc69a5d21165343a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T11:36:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T11:36:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T11:36:32Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:36:59Z is after 2025-08-24T17:21:41Z" Nov 25 11:36:59 crc kubenswrapper[4706]: I1125 11:36:59.250308 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:36:59 crc kubenswrapper[4706]: I1125 11:36:59.250348 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:36:59 crc kubenswrapper[4706]: I1125 11:36:59.250359 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:36:59 crc kubenswrapper[4706]: I1125 11:36:59.250378 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:36:59 crc kubenswrapper[4706]: I1125 11:36:59.250391 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:36:59Z","lastTransitionTime":"2025-11-25T11:36:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:36:59 crc kubenswrapper[4706]: I1125 11:36:59.257013 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://23abd4bcc68d2a090882edb55d0e8569032affe5f4ebf05279e18ba3e9f9d8db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a068e34d29a7f39157ffd6e364ce643f5280f5184c13a281043247117d451364\\\",\\\
"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:36:59Z is after 2025-08-24T17:21:41Z" Nov 25 11:36:59 crc kubenswrapper[4706]: I1125 11:36:59.271130 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-cjmvf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"150b96fa-570a-4b32-a82a-3275127d5b51\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:53Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:53Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:53Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2ml6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f9f9981b5f064aa5b007f4b2a2ecdc7f783e1a33e73b9e8b157eccfc54e93ff6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f9f9981b5f064aa5b007f4b2a2ecdc7f783e1a33e73b9e8b157eccfc54e93ff6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T11:36:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T11:36:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2ml6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e1e9db3e634932b935a1eb04923d02faf743f2831039edeba41d172ea6d8c52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e1e9db3e634932b935a1eb04923d02faf743f2831039edeba41d172ea6d8c52\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T11:36:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T11:36:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2ml6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0cee50b6983d9c650efbb5959311b6c33c2e0e2ff504fceadc8ff807f368c36e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0cee50b6983d9c650efbb5959311b6c33c2e0e2ff504fceadc8ff807f368c36e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T11:36:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T11:36:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2ml6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2ml6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\
\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2ml6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2ml6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T11:36:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-cjmvf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:36:59Z is after 2025-08-24T17:21:41Z" Nov 25 11:36:59 crc kubenswrapper[4706]: I1125 
11:36:59.287413 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ce0e2e75-834b-46fb-bc84-229e60f904b1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:32Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:32Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://86001c3abc077d36ed1fa0c37bb6163896fb9cde28b58affd2f67fb8a024165b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"reso
urce-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://24c326f147def477e6dd794576cbdc9aed69f799cc18984f475496748b05eb32\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c65af8b438f57256d8c22cb34f68922d628338e384ca97d694b0dbf2d41a5e27\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://db08dd21321e0e49c2bcec934b9c4ca65e93ed3eff5d3d110b0137d37ebe255e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e2775
3fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://333951d9a31cf3e7c1e98d27f636e2425f87cd082a8a5acae66533a76f5ad206\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-25T11:36:51Z\\\",\\\"message\\\":\\\" shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI1125 11:36:51.292762 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI1125 11:36:51.292767 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI1125 11:36:51.292853 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI1125 11:36:51.292876 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI1125 11:36:51.293041 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-1230105117/tls.crt::/tmp/serving-cert-1230105117/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1764070595\\\\\\\\\\\\\\\" (2025-11-25 11:36:34 +0000 UTC to 2025-12-25 11:36:35 +0000 UTC (now=2025-11-25 11:36:51.29301304 +0000 UTC))\\\\\\\"\\\\nI1125 11:36:51.293171 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1230105117/tls.crt::/tmp/serving-cert-1230105117/tls.key\\\\\\\"\\\\nI1125 11:36:51.293210 1 named_certificates.go:53] \\\\\\\"Loaded 
SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1764070605\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1764070605\\\\\\\\\\\\\\\" (2025-11-25 10:36:45 +0000 UTC to 2026-11-25 10:36:45 +0000 UTC (now=2025-11-25 11:36:51.293188774 +0000 UTC))\\\\\\\"\\\\nI1125 11:36:51.293233 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI1125 11:36:51.293259 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI1125 11:36:51.293279 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nF1125 11:36:51.293378 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T11:36:34Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe85a38abd8df52ad0fbd3dd6b048b8c42390b6064d3601996727dadb3fcbe69\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:34Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\"
:\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea87e7399f4267877f5e967eb62bd500b646e88ee8ee20a71c3bf7c0941f02a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ea87e7399f4267877f5e967eb62bd500b646e88ee8ee20a71c3bf7c0941f02a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T11:36:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T11:36:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T11:36:32Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:36:59Z is after 2025-08-24T17:21:41Z" Nov 25 11:36:59 crc kubenswrapper[4706]: I1125 11:36:59.299125 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-dhfpm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0930887a-320c-4506-8c9c-f94d6d64516a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://736e37ff944f81ac9808ff8a76d36837aeabc76a4c08bbeba3f707616e1f0884\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g7sgt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://86f4bfd310c27ea3b77c2f58c91e153db5f17948
71a3fbeb5711cc119aa81e38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g7sgt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T11:36:53Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-dhfpm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:36:59Z is after 2025-08-24T17:21:41Z" Nov 25 11:36:59 crc kubenswrapper[4706]: I1125 11:36:59.309638 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-nh9sc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7813e79d-885d-4cf1-ac27-039e998473b7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea634334242536d35bf36e9078539cad4658b161b61e6051d9bb6d8544e71f5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g9gvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T11:36:53Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-nh9sc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:36:59Z is after 2025-08-24T17:21:41Z" Nov 25 11:36:59 crc kubenswrapper[4706]: I1125 11:36:59.352991 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:36:59 crc kubenswrapper[4706]: I1125 11:36:59.353033 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:36:59 crc kubenswrapper[4706]: I1125 11:36:59.353042 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:36:59 crc kubenswrapper[4706]: I1125 11:36:59.353058 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:36:59 crc kubenswrapper[4706]: I1125 11:36:59.353069 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:36:59Z","lastTransitionTime":"2025-11-25T11:36:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:36:59 crc kubenswrapper[4706]: I1125 11:36:59.456135 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:36:59 crc kubenswrapper[4706]: I1125 11:36:59.456181 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:36:59 crc kubenswrapper[4706]: I1125 11:36:59.456192 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:36:59 crc kubenswrapper[4706]: I1125 11:36:59.456211 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:36:59 crc kubenswrapper[4706]: I1125 11:36:59.456223 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:36:59Z","lastTransitionTime":"2025-11-25T11:36:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:36:59 crc kubenswrapper[4706]: I1125 11:36:59.558815 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:36:59 crc kubenswrapper[4706]: I1125 11:36:59.558909 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:36:59 crc kubenswrapper[4706]: I1125 11:36:59.558926 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:36:59 crc kubenswrapper[4706]: I1125 11:36:59.558948 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:36:59 crc kubenswrapper[4706]: I1125 11:36:59.558962 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:36:59Z","lastTransitionTime":"2025-11-25T11:36:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:36:59 crc kubenswrapper[4706]: I1125 11:36:59.590943 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 11:36:59 crc kubenswrapper[4706]: I1125 11:36:59.591125 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 11:36:59 crc kubenswrapper[4706]: E1125 11:36:59.591235 4706 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Nov 25 11:36:59 crc kubenswrapper[4706]: E1125 11:36:59.591294 4706 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-25 11:37:07.591277323 +0000 UTC m=+36.505834704 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Nov 25 11:36:59 crc kubenswrapper[4706]: I1125 11:36:59.591481 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 11:36:59 crc kubenswrapper[4706]: I1125 11:36:59.591561 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 11:36:59 crc kubenswrapper[4706]: E1125 11:36:59.591673 4706 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 11:37:07.591659591 +0000 UTC m=+36.506217112 (durationBeforeRetry 8s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 11:36:59 crc kubenswrapper[4706]: E1125 11:36:59.591705 4706 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 25 11:36:59 crc kubenswrapper[4706]: E1125 11:36:59.591755 4706 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 25 11:36:59 crc kubenswrapper[4706]: E1125 11:36:59.591757 4706 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 25 11:36:59 crc kubenswrapper[4706]: E1125 11:36:59.591775 4706 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 25 11:36:59 crc kubenswrapper[4706]: E1125 11:36:59.591855 4706 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-25 11:37:07.591832424 +0000 UTC m=+36.506390005 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 25 11:36:59 crc kubenswrapper[4706]: E1125 11:36:59.591877 4706 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-11-25 11:37:07.591868105 +0000 UTC m=+36.506425726 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 25 11:36:59 crc kubenswrapper[4706]: I1125 11:36:59.661818 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:36:59 crc kubenswrapper[4706]: I1125 11:36:59.661862 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:36:59 crc kubenswrapper[4706]: I1125 11:36:59.661871 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:36:59 crc kubenswrapper[4706]: I1125 11:36:59.661888 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:36:59 crc kubenswrapper[4706]: I1125 11:36:59.661902 4706 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:36:59Z","lastTransitionTime":"2025-11-25T11:36:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 11:36:59 crc kubenswrapper[4706]: I1125 11:36:59.693009 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 11:36:59 crc kubenswrapper[4706]: E1125 11:36:59.693175 4706 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 25 11:36:59 crc kubenswrapper[4706]: E1125 11:36:59.693203 4706 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 25 11:36:59 crc kubenswrapper[4706]: E1125 11:36:59.693216 4706 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 25 11:36:59 crc kubenswrapper[4706]: E1125 11:36:59.693280 4706 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. 
No retries permitted until 2025-11-25 11:37:07.693264497 +0000 UTC m=+36.607821878 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 25 11:36:59 crc kubenswrapper[4706]: I1125 11:36:59.764758 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:36:59 crc kubenswrapper[4706]: I1125 11:36:59.764808 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:36:59 crc kubenswrapper[4706]: I1125 11:36:59.764819 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:36:59 crc kubenswrapper[4706]: I1125 11:36:59.764837 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:36:59 crc kubenswrapper[4706]: I1125 11:36:59.764849 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:36:59Z","lastTransitionTime":"2025-11-25T11:36:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:36:59 crc kubenswrapper[4706]: I1125 11:36:59.867747 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:36:59 crc kubenswrapper[4706]: I1125 11:36:59.867797 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:36:59 crc kubenswrapper[4706]: I1125 11:36:59.867808 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:36:59 crc kubenswrapper[4706]: I1125 11:36:59.867826 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:36:59 crc kubenswrapper[4706]: I1125 11:36:59.867838 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:36:59Z","lastTransitionTime":"2025-11-25T11:36:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 11:36:59 crc kubenswrapper[4706]: I1125 11:36:59.922119 4706 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 11:36:59 crc kubenswrapper[4706]: I1125 11:36:59.922120 4706 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 11:36:59 crc kubenswrapper[4706]: E1125 11:36:59.922687 4706 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 25 11:36:59 crc kubenswrapper[4706]: I1125 11:36:59.922142 4706 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 11:36:59 crc kubenswrapper[4706]: E1125 11:36:59.922887 4706 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 25 11:36:59 crc kubenswrapper[4706]: E1125 11:36:59.923035 4706 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 25 11:36:59 crc kubenswrapper[4706]: I1125 11:36:59.973386 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:36:59 crc kubenswrapper[4706]: I1125 11:36:59.973438 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:36:59 crc kubenswrapper[4706]: I1125 11:36:59.973451 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:36:59 crc kubenswrapper[4706]: I1125 11:36:59.973471 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:36:59 crc kubenswrapper[4706]: I1125 11:36:59.973489 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:36:59Z","lastTransitionTime":"2025-11-25T11:36:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:37:00 crc kubenswrapper[4706]: I1125 11:37:00.076363 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:37:00 crc kubenswrapper[4706]: I1125 11:37:00.076408 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:37:00 crc kubenswrapper[4706]: I1125 11:37:00.076424 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:37:00 crc kubenswrapper[4706]: I1125 11:37:00.076442 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:37:00 crc kubenswrapper[4706]: I1125 11:37:00.076452 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:37:00Z","lastTransitionTime":"2025-11-25T11:37:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:37:00 crc kubenswrapper[4706]: I1125 11:37:00.100783 4706 generic.go:334] "Generic (PLEG): container finished" podID="150b96fa-570a-4b32-a82a-3275127d5b51" containerID="29281b46d740a7e527313a667c3896430eb51ba2c50c5e406fb94d8959dbe855" exitCode=0 Nov 25 11:37:00 crc kubenswrapper[4706]: I1125 11:37:00.100874 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-cjmvf" event={"ID":"150b96fa-570a-4b32-a82a-3275127d5b51","Type":"ContainerDied","Data":"29281b46d740a7e527313a667c3896430eb51ba2c50c5e406fb94d8959dbe855"} Nov 25 11:37:00 crc kubenswrapper[4706]: I1125 11:37:00.106269 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-q9rpr" event={"ID":"f1218bae-4153-4490-8847-ab2d07ca0ab6","Type":"ContainerStarted","Data":"b1486d0475f4d248f425b711ee757032370a9bdddb8d33c83ba9db41549d1dd9"} Nov 25 11:37:00 crc kubenswrapper[4706]: I1125 11:37:00.106866 4706 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-q9rpr" Nov 25 11:37:00 crc kubenswrapper[4706]: I1125 11:37:00.125842 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"21277b4b-1e5d-4345-ba2a-39957194f021\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e336808761e1c6c5eaa04fd06cbb4d0c0384a2cbd3dfd4c1b3a877e7e0f0c82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cfaf9f13d49eb5c52817b0d082263791cc1dca82a23282452f1393dd693ca27a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://634b7b0df29329562f6ead9641186eee129945efc5a2d784ff6474d213b2baea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b3642576d5ecf314b809b90f8a76244e5ea54178f78729eb6521b09b7daa9c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b63b9c87fed8e56acef62af3c5b75cf637a058ada9dd8ef5afc317e99e12162\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://adfea6c3d1d62c9ce2656cae203bccf32ab19165305db3731e3db92915dd7d4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://adfea6c3d1d62c9ce2656cae203bccf32ab19165305db3731e3db92915dd7d4c\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2025-11-25T11:36:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T11:36:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://29e8fd847ab683471a692fda5b7c7d6105db11c5aecf09bb23080c58cd97c06a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://29e8fd847ab683471a692fda5b7c7d6105db11c5aecf09bb23080c58cd97c06a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T11:36:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T11:36:34Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://a4b1f85fd0291c239854093c0b26f0802291cbe4c4bab384fc69a5d21165343a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a4b1f85fd0291c239854093c0b26f0802291cbe4c4bab384fc69a5d21165343a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T11:36:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T11:36:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T11:36:32Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:37:00Z is after 2025-08-24T17:21:41Z" Nov 25 11:37:00 crc kubenswrapper[4706]: I1125 11:37:00.138719 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://23abd4bcc68d2a090882edb55d0e8569032affe5f4ebf05279e18ba3e9f9d8db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount
\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a068e34d29a7f39157ffd6e364ce643f5280f5184c13a281043247117d451364\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:37:00Z is after 2025-08-24T17:21:41Z" Nov 25 11:37:00 crc kubenswrapper[4706]: I1125 11:37:00.138792 4706 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-q9rpr" Nov 25 11:37:00 crc kubenswrapper[4706]: I1125 11:37:00.163707 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-cjmvf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"150b96fa-570a-4b32-a82a-3275127d5b51\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:53Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:53Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:53Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2ml6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f9f9981b5f064aa5b007f4b2a2ecdc7f783e1a33e73b9e8b157eccfc54e93ff6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f9f9981b5f064aa5b007f4b2a2ecdc7f783e1a33e73b9e8b157eccfc54e93ff6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T11:36:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T11:36:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2ml6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e1e9db3e634932b935a1eb04923d02faf743f2831039edeba41d172ea6d8c52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e1e9db3e634932b935a1eb04923d02faf743f2831039edeba41d172ea6d8c52\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T11:36:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T11:36:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2ml6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0cee50b6983d9c650efbb5959311b6c33c2e0e2ff504fceadc8ff807f368c36e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0cee50b6983d9c650efbb5959311b6c33c2e0e2ff504fceadc8ff807f368c36e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T11:36:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T11:36:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2ml6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://29281b46d740a7e527313a667c3896430eb51ba2c50c5e406fb94d8959dbe855\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://29281b46d740a7e527313a667c3896430eb51ba2c50c5e406fb94d8959dbe855\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T11:36:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T11:36:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2ml6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2ml6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2ml6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T11:36:53Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-additional-cni-plugins-cjmvf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:37:00Z is after 2025-08-24T17:21:41Z" Nov 25 11:37:00 crc kubenswrapper[4706]: I1125 11:37:00.177138 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ce0e2e75-834b-46fb-bc84-229e60f904b1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:32Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:32Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://86001c3abc077d36ed1fa0c37bb6163896fb9cde28b58affd2f67fb8a024165b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://24c326f147def477e6dd794576cbdc9aed69f799cc18984f475496748b05eb32\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://c65af8b438f57256d8c22cb34f68922d628338e384ca97d694b0dbf2d41a5e27\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://db08dd21321e0e49c2bcec934b9c4ca65e93ed3eff5d3d110b0137d37ebe255e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://333951d9a31cf3e7c1e98d27f636e2425f87cd082a8a5acae66533a76f5ad206\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-25T11:36:51Z\\\",\\\"message\\\":\\\" shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI1125 11:36:51.292762 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI1125 11:36:51.292767 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI1125 11:36:51.292853 1 
envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI1125 11:36:51.292876 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI1125 11:36:51.293041 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-1230105117/tls.crt::/tmp/serving-cert-1230105117/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1764070595\\\\\\\\\\\\\\\" (2025-11-25 11:36:34 +0000 UTC to 2025-12-25 11:36:35 +0000 UTC (now=2025-11-25 11:36:51.29301304 +0000 UTC))\\\\\\\"\\\\nI1125 11:36:51.293171 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1230105117/tls.crt::/tmp/serving-cert-1230105117/tls.key\\\\\\\"\\\\nI1125 11:36:51.293210 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1764070605\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1764070605\\\\\\\\\\\\\\\" (2025-11-25 10:36:45 +0000 UTC to 2026-11-25 10:36:45 +0000 UTC (now=2025-11-25 11:36:51.293188774 +0000 UTC))\\\\\\\"\\\\nI1125 11:36:51.293233 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI1125 11:36:51.293259 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI1125 11:36:51.293279 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nF1125 11:36:51.293378 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T11:36:34Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe85a38abd8df52ad0fbd3dd6b048b8c42390b6064d3601996727dadb3fcbe69\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:34Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea87e7399f4267877f5e967eb62bd500b646e88ee8ee20a71c3bf7c0941f02a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ea87e7399f4267877f5e967eb62bd500b646e88ee8ee20a71c3bf7c0941f02a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T11:36:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2025-11-25T11:36:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T11:36:32Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:37:00Z is after 2025-08-24T17:21:41Z" Nov 25 11:37:00 crc kubenswrapper[4706]: I1125 11:37:00.179032 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:37:00 crc kubenswrapper[4706]: I1125 11:37:00.179060 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:37:00 crc kubenswrapper[4706]: I1125 11:37:00.179068 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:37:00 crc kubenswrapper[4706]: I1125 11:37:00.179083 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:37:00 crc kubenswrapper[4706]: I1125 11:37:00.179093 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:37:00Z","lastTransitionTime":"2025-11-25T11:37:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:37:00 crc kubenswrapper[4706]: I1125 11:37:00.188837 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-dhfpm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0930887a-320c-4506-8c9c-f94d6d64516a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://736e37ff944f81ac9808ff8a76d36837aeabc76a4c08bbeba3f707616e1f0884\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"}
,{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g7sgt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://86f4bfd310c27ea3b77c2f58c91e153db5f1794871a3fbeb5711cc119aa81e38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g7sgt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T11:36:53Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-dhfpm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:37:00Z is after 2025-08-24T17:21:41Z" Nov 25 11:37:00 crc kubenswrapper[4706]: I1125 11:37:00.199624 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-nh9sc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7813e79d-885d-4cf1-ac27-039e998473b7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea634334242536d35bf36e9078539cad4658b161b61e6051d9bb6d8544e71f5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g9gvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T11:36:53Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-nh9sc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:37:00Z is after 2025-08-24T17:21:41Z" Nov 25 11:37:00 crc kubenswrapper[4706]: I1125 11:37:00.213231 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad79bed891e80837fc120b01cb2b41a16493f2f5281c83a6bb489cc17c6da995\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-al
erter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:37:00Z is after 2025-08-24T17:21:41Z" Nov 25 11:37:00 crc kubenswrapper[4706]: I1125 11:37:00.225414 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-lpc7s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3ec2e656-a68d-4339-92d5-0c157f7f7783\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c3a1481dd8cb88b79d8addfbfd40caf18850769e4492c
2af316105b7f6779f9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w54mf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T11:36:56Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lpc7s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:37:00Z is after 2025-08-24T17:21:41Z" Nov 25 11:37:00 crc kubenswrapper[4706]: I1125 11:37:00.239894 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"363ff191-6229-47e9-a7d0-1c72f21e7c61\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://71b496da1a81efbb50a84766e610a6b03e032a4e2cb5a71191395ffb85f6b1f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://83b1d9c60793e3e0b5943d7cccd50656df78c4655b84e12c8dd1ba7d99a7990d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab8621c83015577b9039ac2ba9ce46f8b29f66d77da31a02d179132d923741bd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4d0ce4e175dd8da8d15b26e60ced87ee11dc8079ce730cfbdce1b3f4f08b1d2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-11-25T11:36:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T11:36:32Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:37:00Z is after 2025-08-24T17:21:41Z" Nov 25 11:37:00 crc kubenswrapper[4706]: I1125 11:37:00.255474 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://998291d5af3be798ff4e2f00d043f615e086fef44e541071bbaf781983955ce6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-11-25T11:37:00Z is after 2025-08-24T17:21:41Z" Nov 25 11:37:00 crc kubenswrapper[4706]: I1125 11:37:00.269355 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:51Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:37:00Z is after 2025-08-24T17:21:41Z" Nov 25 11:37:00 crc kubenswrapper[4706]: I1125 11:37:00.280993 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:51Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:37:00Z is after 2025-08-24T17:21:41Z" Nov 25 11:37:00 crc kubenswrapper[4706]: I1125 11:37:00.282555 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:37:00 crc kubenswrapper[4706]: I1125 11:37:00.282589 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:37:00 crc kubenswrapper[4706]: I1125 11:37:00.282598 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:37:00 crc 
kubenswrapper[4706]: I1125 11:37:00.282616 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:37:00 crc kubenswrapper[4706]: I1125 11:37:00.282628 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:37:00Z","lastTransitionTime":"2025-11-25T11:37:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 11:37:00 crc kubenswrapper[4706]: I1125 11:37:00.293035 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:51Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:37:00Z is after 2025-08-24T17:21:41Z" Nov 25 11:37:00 crc kubenswrapper[4706]: I1125 11:37:00.307526 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-s47nr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9912058e-28f5-4cec-9eeb-03e37e0dc5c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d03353478b53d9441951702b66365bb3a08ad9c509347472bbb31049851435a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wfqx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T11:36:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-s47nr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:37:00Z is after 2025-08-24T17:21:41Z" Nov 25 11:37:00 crc kubenswrapper[4706]: I1125 11:37:00.328782 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-q9rpr" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f1218bae-4153-4490-8847-ab2d07ca0ab6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:53Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:53Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b55sf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b55sf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b55sf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b55sf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b55sf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b55sf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b55sf\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b55sf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56474d5374e1047078d38c60dcbd00f4495bcc0d651a9e75fa70d64e34b10baa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://56474d5374e1047078d38c60dcbd00f4495bcc0d651a9e75fa70d64e34b10baa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T11:36:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T11:36:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b55sf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T11:36:53Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-q9rpr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:37:00Z is after 2025-08-24T17:21:41Z" Nov 25 11:37:00 crc kubenswrapper[4706]: I1125 11:37:00.346413 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-cjmvf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"150b96fa-570a-4b32-a82a-3275127d5b51\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:53Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:53Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:53Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2ml6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f9f9981b5f064aa5b007f4b2a2ecdc7f783e1a33e73b9e8b157eccfc54e93ff6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f9f9981b5f064aa5b007f4b2a2ecdc7f783e1a33e73b9e8b157eccfc54e93ff6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T11:36:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T11:36:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2ml6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e1e9db3e634932b935a1eb04923d02faf743f2831039edeba41d172ea6d8c52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e1e9db3e634932b935a1eb04923d02faf743f2831039edeba41d172ea6d8c52\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T11:36:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T11:36:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2ml6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0cee50b6983d9c650efbb5959311b6c33c2e0
e2ff504fceadc8ff807f368c36e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0cee50b6983d9c650efbb5959311b6c33c2e0e2ff504fceadc8ff807f368c36e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T11:36:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T11:36:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2ml6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://29281b46d740a7e527313a667c3896430eb51ba2c50c5e406fb94d8959dbe855\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://29281b46d740a7e527313a667c3896430eb51ba2c50c5e406fb94d8959dbe855\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T11:36:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-1
1-25T11:36:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2ml6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2ml6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/servi
ceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2ml6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T11:36:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-cjmvf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:37:00Z is after 2025-08-24T17:21:41Z" Nov 25 11:37:00 crc kubenswrapper[4706]: I1125 11:37:00.370576 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"21277b4b-1e5d-4345-ba2a-39957194f021\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e336808761e1c6c5eaa04fd06cbb4d0c0384a2cbd3dfd4c1b3a877e7e0f0c82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshi
ft-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cfaf9f13d49eb5c52817b0d082263791cc1dca82a23282452f1393dd693ca27a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://634b7b0df29329562f6ead9641186eee129945efc5a2d784ff6474d213b2baea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\
":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b3642576d5ecf314b809b90f8a76244e5ea54178f78729eb6521b09b7daa9c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b63b9c87fed8e56acef62af3c5b75cf637a058ada9dd8ef5afc317e99e12162\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib
/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://adfea6c3d1d62c9ce2656cae203bccf32ab19165305db3731e3db92915dd7d4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://adfea6c3d1d62c9ce2656cae203bccf32ab19165305db3731e3db92915dd7d4c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T11:36:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T11:36:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://29e8fd847ab683471a692fda5b7c7d6105db11c5aecf09bb23080c58cd97c06a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://29e8fd847ab683471a692fda5b7c7d6105db11c5aecf09bb23080c58cd97c06a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T11:36:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T11:36:34Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://a4b1f85fd0291c239854093c0b26f0802291cbe4c4bab384fc69a5d21165343a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v
4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a4b1f85fd0291c239854093c0b26f0802291cbe4c4bab384fc69a5d21165343a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T11:36:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T11:36:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T11:36:32Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:37:00Z is after 2025-08-24T17:21:41Z" Nov 25 11:37:00 crc kubenswrapper[4706]: I1125 11:37:00.383667 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://23abd4bcc68d2a090882edb55d0e8569032affe5f4ebf05279e18ba3e9f9d8db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a068e34d29a7f39157ffd6e364ce643f5280f5184c13a281043247117d451364\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:37:00Z is after 2025-08-24T17:21:41Z" Nov 25 11:37:00 crc kubenswrapper[4706]: I1125 11:37:00.384704 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:37:00 crc kubenswrapper[4706]: I1125 11:37:00.384734 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:37:00 crc kubenswrapper[4706]: I1125 11:37:00.384745 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:37:00 crc kubenswrapper[4706]: I1125 11:37:00.384770 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:37:00 crc kubenswrapper[4706]: I1125 11:37:00.384783 4706 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:37:00Z","lastTransitionTime":"2025-11-25T11:37:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 11:37:00 crc kubenswrapper[4706]: I1125 11:37:00.397574 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ce0e2e75-834b-46fb-bc84-229e60f904b1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:32Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:32Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://86001c3abc077d36ed1fa0c37bb6163896fb9cde28b58affd2f67fb8a024165b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://24c326f147def477e6dd794576cbdc9aed69f799cc18984f475496748b05eb32\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://c65af8b438f57256d8c22cb34f68922d628338e384ca97d694b0dbf2d41a5e27\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://db08dd21321e0e49c2bcec934b9c4ca65e93ed3eff5d3d110b0137d37ebe255e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://333951d9a31cf3e7c1e98d27f636e2425f87cd082a8a5acae66533a76f5ad206\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-25T11:36:51Z\\\",\\\"message\\\":\\\" shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI1125 11:36:51.292762 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI1125 11:36:51.292767 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI1125 11:36:51.292853 1 
envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI1125 11:36:51.292876 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI1125 11:36:51.293041 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-1230105117/tls.crt::/tmp/serving-cert-1230105117/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1764070595\\\\\\\\\\\\\\\" (2025-11-25 11:36:34 +0000 UTC to 2025-12-25 11:36:35 +0000 UTC (now=2025-11-25 11:36:51.29301304 +0000 UTC))\\\\\\\"\\\\nI1125 11:36:51.293171 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1230105117/tls.crt::/tmp/serving-cert-1230105117/tls.key\\\\\\\"\\\\nI1125 11:36:51.293210 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1764070605\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1764070605\\\\\\\\\\\\\\\" (2025-11-25 10:36:45 +0000 UTC to 2026-11-25 10:36:45 +0000 UTC (now=2025-11-25 11:36:51.293188774 +0000 UTC))\\\\\\\"\\\\nI1125 11:36:51.293233 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI1125 11:36:51.293259 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI1125 11:36:51.293279 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nF1125 11:36:51.293378 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T11:36:34Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe85a38abd8df52ad0fbd3dd6b048b8c42390b6064d3601996727dadb3fcbe69\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:34Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea87e7399f4267877f5e967eb62bd500b646e88ee8ee20a71c3bf7c0941f02a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ea87e7399f4267877f5e967eb62bd500b646e88ee8ee20a71c3bf7c0941f02a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T11:36:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2025-11-25T11:36:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T11:36:32Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:37:00Z is after 2025-08-24T17:21:41Z" Nov 25 11:37:00 crc kubenswrapper[4706]: I1125 11:37:00.411765 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-dhfpm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0930887a-320c-4506-8c9c-f94d6d64516a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://736e37ff944f81ac9808ff8a76d36837aeabc76a4c08bbeba3f707616e1f0884\\\",\\\"image\\\"
:\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g7sgt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://86f4bfd310c27ea3b77c2f58c91e153db5f1794871a3fbeb5711cc119aa81e38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g7sgt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T11:36:53Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-dhfpm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:37:00Z is after 2025-08-24T17:21:41Z" Nov 25 11:37:00 crc kubenswrapper[4706]: I1125 11:37:00.427132 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-nh9sc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7813e79d-885d-4cf1-ac27-039e998473b7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea634334242536d35bf36e9078539cad4658b161b61e6051d9bb6d8544e71f5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready
\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g9gvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T11:36:53Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-nh9sc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:37:00Z is after 2025-08-24T17:21:41Z" Nov 25 11:37:00 crc kubenswrapper[4706]: I1125 11:37:00.443551 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad79bed891e80837fc120b01cb2b41a16493f2f5281c83a6bb489cc17c6da995\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-11-25T11:37:00Z is after 2025-08-24T17:21:41Z" Nov 25 11:37:00 crc kubenswrapper[4706]: I1125 11:37:00.457465 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-lpc7s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3ec2e656-a68d-4339-92d5-0c157f7f7783\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c3a1481dd8cb88b79d8addfbfd40caf18850769e4492c2af316105b7f6779f9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"hos
t\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w54mf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T11:36:56Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lpc7s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:37:00Z is after 2025-08-24T17:21:41Z" Nov 25 11:37:00 crc kubenswrapper[4706]: I1125 11:37:00.473673 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:51Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:37:00Z is after 2025-08-24T17:21:41Z" Nov 25 11:37:00 crc kubenswrapper[4706]: I1125 11:37:00.487624 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:37:00 crc kubenswrapper[4706]: I1125 11:37:00.487661 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:37:00 crc kubenswrapper[4706]: I1125 11:37:00.487672 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:37:00 crc kubenswrapper[4706]: I1125 
11:37:00.487690 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:37:00 crc kubenswrapper[4706]: I1125 11:37:00.487700 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:37:00Z","lastTransitionTime":"2025-11-25T11:37:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 11:37:00 crc kubenswrapper[4706]: I1125 11:37:00.491131 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:51Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:37:00Z is after 2025-08-24T17:21:41Z" Nov 25 11:37:00 crc kubenswrapper[4706]: I1125 11:37:00.504847 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:51Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:37:00Z is after 2025-08-24T17:21:41Z" Nov 25 11:37:00 crc kubenswrapper[4706]: I1125 11:37:00.521164 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-s47nr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9912058e-28f5-4cec-9eeb-03e37e0dc5c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d03353478b53d9441951702b66365bb3a08ad9c509347472bbb31049851435a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wfqx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T11:36:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-s47nr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:37:00Z is after 2025-08-24T17:21:41Z" Nov 25 11:37:00 crc kubenswrapper[4706]: I1125 11:37:00.541151 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-q9rpr" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f1218bae-4153-4490-8847-ab2d07ca0ab6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:53Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:53Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da5cea02464a703174faaa2a8a7dc6ba3c26bca96be0219f7304d81aba5be54e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b55sf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e92e9ade6889e5400b3c3ddff066aa544d425cf0637b75071678b8c63f8e35f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\
",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b55sf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca28080773ed8c026159b2309297e1c8ccd7cf79c4c19e3a62d89bc5a95851fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b55sf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://86d79d5837993b0bfb40c7114fd69f45a9bfd2e956b5b0fe062706e920fecd48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:55Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b55sf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7df3bf6c507e0fd5fb0f32a8785d67c96f47255fdc5d2aafb8838260ac334d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b55sf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://96aa7fcebdc88f01d2260f95d255244e28c30d422f954da2222a5b7c17d05b96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b55sf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b1486d0475f4d248f425b711ee757032370a9bdddb8d33c83ba9db41549d1dd9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:37:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b55sf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://62c923d955013808a55d99cb73f4239900fc83a2f53e1e8cceff3e9bc5768188\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b55sf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56474d5374e1047078d38c60dcbd00f4495bcc0d651a9e75fa70d64e34b10baa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://56474d5374e1047078d38c60dcbd00f4495bcc0d651a9e75fa70d64e34b10baa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T11:36:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T11:36:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b55sf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T11:36:53Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-q9rpr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:37:00Z is after 2025-08-24T17:21:41Z" Nov 25 11:37:00 crc kubenswrapper[4706]: I1125 11:37:00.554372 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"363ff191-6229-47e9-a7d0-1c72f21e7c61\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://71b496da1a81efbb50a84766e610a6b03e032a4e2cb5a71191395ffb85f6b1f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026
b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://83b1d9c60793e3e0b5943d7cccd50656df78c4655b84e12c8dd1ba7d99a7990d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab8621c83015577b9039ac2ba9ce46f8b29f66d77da31a02d179132d923741bd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"
name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4d0ce4e175dd8da8d15b26e60ced87ee11dc8079ce730cfbdce1b3f4f08b1d2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T11:36:32Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:37:00Z is after 2025-08-24T17:21:41Z" Nov 25 11:37:00 crc kubenswrapper[4706]: I1125 11:37:00.566992 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://998291d5af3be798ff4e2f00d043f615e086fef44e541071bbaf781983955ce6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-11-25T11:37:00Z is after 2025-08-24T17:21:41Z" Nov 25 11:37:00 crc kubenswrapper[4706]: I1125 11:37:00.591247 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:37:00 crc kubenswrapper[4706]: I1125 11:37:00.591335 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:37:00 crc kubenswrapper[4706]: I1125 11:37:00.591346 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:37:00 crc kubenswrapper[4706]: I1125 11:37:00.591369 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:37:00 crc kubenswrapper[4706]: I1125 11:37:00.591385 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:37:00Z","lastTransitionTime":"2025-11-25T11:37:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:37:00 crc kubenswrapper[4706]: I1125 11:37:00.693803 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:37:00 crc kubenswrapper[4706]: I1125 11:37:00.694130 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:37:00 crc kubenswrapper[4706]: I1125 11:37:00.694225 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:37:00 crc kubenswrapper[4706]: I1125 11:37:00.694335 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:37:00 crc kubenswrapper[4706]: I1125 11:37:00.694640 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:37:00Z","lastTransitionTime":"2025-11-25T11:37:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:37:00 crc kubenswrapper[4706]: I1125 11:37:00.797218 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:37:00 crc kubenswrapper[4706]: I1125 11:37:00.797260 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:37:00 crc kubenswrapper[4706]: I1125 11:37:00.797269 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:37:00 crc kubenswrapper[4706]: I1125 11:37:00.797286 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:37:00 crc kubenswrapper[4706]: I1125 11:37:00.797313 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:37:00Z","lastTransitionTime":"2025-11-25T11:37:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:37:00 crc kubenswrapper[4706]: I1125 11:37:00.900846 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:37:00 crc kubenswrapper[4706]: I1125 11:37:00.900883 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:37:00 crc kubenswrapper[4706]: I1125 11:37:00.900893 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:37:00 crc kubenswrapper[4706]: I1125 11:37:00.900910 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:37:00 crc kubenswrapper[4706]: I1125 11:37:00.900919 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:37:00Z","lastTransitionTime":"2025-11-25T11:37:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:37:01 crc kubenswrapper[4706]: I1125 11:37:01.004173 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:37:01 crc kubenswrapper[4706]: I1125 11:37:01.004218 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:37:01 crc kubenswrapper[4706]: I1125 11:37:01.004226 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:37:01 crc kubenswrapper[4706]: I1125 11:37:01.004245 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:37:01 crc kubenswrapper[4706]: I1125 11:37:01.004266 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:37:01Z","lastTransitionTime":"2025-11-25T11:37:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:37:01 crc kubenswrapper[4706]: I1125 11:37:01.106290 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:37:01 crc kubenswrapper[4706]: I1125 11:37:01.106338 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:37:01 crc kubenswrapper[4706]: I1125 11:37:01.106347 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:37:01 crc kubenswrapper[4706]: I1125 11:37:01.106362 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:37:01 crc kubenswrapper[4706]: I1125 11:37:01.106371 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:37:01Z","lastTransitionTime":"2025-11-25T11:37:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:37:01 crc kubenswrapper[4706]: I1125 11:37:01.111842 4706 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 25 11:37:01 crc kubenswrapper[4706]: I1125 11:37:01.111898 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-cjmvf" event={"ID":"150b96fa-570a-4b32-a82a-3275127d5b51","Type":"ContainerStarted","Data":"b0ff2d1408b3b635ada726fc15a15472d3fd7c61e21ffe0379d137fdd543c436"} Nov 25 11:37:01 crc kubenswrapper[4706]: I1125 11:37:01.112447 4706 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-q9rpr" Nov 25 11:37:01 crc kubenswrapper[4706]: I1125 11:37:01.126119 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ce0e2e75-834b-46fb-bc84-229e60f904b1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:32Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:32Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://86001c3abc077d36ed1fa0c37bb6163896fb9cde28b58affd2f67fb8a024165b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://24c326f147def477e6dd794576cbdc9aed69f799cc18984f475496748b05eb32\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://c65af8b438f57256d8c22cb34f68922d628338e384ca97d694b0dbf2d41a5e27\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://db08dd21321e0e49c2bcec934b9c4ca65e93ed3eff5d3d110b0137d37ebe255e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://333951d9a31cf3e7c1e98d27f636e2425f87cd082a8a5acae66533a76f5ad206\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-25T11:36:51Z\\\",\\\"message\\\":\\\" shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI1125 11:36:51.292762 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI1125 11:36:51.292767 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI1125 11:36:51.292853 1 
envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI1125 11:36:51.292876 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI1125 11:36:51.293041 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-1230105117/tls.crt::/tmp/serving-cert-1230105117/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1764070595\\\\\\\\\\\\\\\" (2025-11-25 11:36:34 +0000 UTC to 2025-12-25 11:36:35 +0000 UTC (now=2025-11-25 11:36:51.29301304 +0000 UTC))\\\\\\\"\\\\nI1125 11:36:51.293171 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1230105117/tls.crt::/tmp/serving-cert-1230105117/tls.key\\\\\\\"\\\\nI1125 11:36:51.293210 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1764070605\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1764070605\\\\\\\\\\\\\\\" (2025-11-25 10:36:45 +0000 UTC to 2026-11-25 10:36:45 +0000 UTC (now=2025-11-25 11:36:51.293188774 +0000 UTC))\\\\\\\"\\\\nI1125 11:36:51.293233 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI1125 11:36:51.293259 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI1125 11:36:51.293279 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nF1125 11:36:51.293378 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T11:36:34Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe85a38abd8df52ad0fbd3dd6b048b8c42390b6064d3601996727dadb3fcbe69\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:34Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea87e7399f4267877f5e967eb62bd500b646e88ee8ee20a71c3bf7c0941f02a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ea87e7399f4267877f5e967eb62bd500b646e88ee8ee20a71c3bf7c0941f02a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T11:36:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2025-11-25T11:36:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T11:36:32Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:37:01Z is after 2025-08-24T17:21:41Z" Nov 25 11:37:01 crc kubenswrapper[4706]: I1125 11:37:01.133906 4706 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-q9rpr" Nov 25 11:37:01 crc kubenswrapper[4706]: I1125 11:37:01.138207 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-dhfpm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0930887a-320c-4506-8c9c-f94d6d64516a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://736e37ff944f81ac9808ff8a76d36837aeabc76a4c08bbeba3f707616e1f0884\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g7sgt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://86f4bfd310c27ea3b77c2f58c91e153db5f17948
71a3fbeb5711cc119aa81e38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g7sgt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T11:36:53Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-dhfpm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:37:01Z is after 2025-08-24T17:21:41Z" Nov 25 11:37:01 crc kubenswrapper[4706]: I1125 11:37:01.147132 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-nh9sc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7813e79d-885d-4cf1-ac27-039e998473b7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea634334242536d35bf36e9078539cad4658b161b61e6051d9bb6d8544e71f5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g9gvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T11:36:53Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-nh9sc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:37:01Z is after 2025-08-24T17:21:41Z" Nov 25 11:37:01 crc kubenswrapper[4706]: I1125 11:37:01.156943 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad79bed891e80837fc120b01cb2b41a16493f2f5281c83a6bb489cc17c6da995\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-al
erter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:37:01Z is after 2025-08-24T17:21:41Z" Nov 25 11:37:01 crc kubenswrapper[4706]: I1125 11:37:01.166751 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-lpc7s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3ec2e656-a68d-4339-92d5-0c157f7f7783\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c3a1481dd8cb88b79d8addfbfd40caf18850769e4492c
2af316105b7f6779f9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w54mf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T11:36:56Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lpc7s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:37:01Z is after 2025-08-24T17:21:41Z" Nov 25 11:37:01 crc kubenswrapper[4706]: I1125 11:37:01.179097 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-s47nr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9912058e-28f5-4cec-9eeb-03e37e0dc5c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d03353478b53d9441951702b66365bb3a08ad9c509347472bbb31049851435a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wfqx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T11:36:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-s47nr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:37:01Z is after 2025-08-24T17:21:41Z" Nov 25 11:37:01 crc kubenswrapper[4706]: I1125 11:37:01.196773 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-q9rpr" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f1218bae-4153-4490-8847-ab2d07ca0ab6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:53Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:53Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da5cea02464a703174faaa2a8a7dc6ba3c26bca96be0219f7304d81aba5be54e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b55sf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e92e9ade6889e5400b3c3ddff066aa544d425cf0637b75071678b8c63f8e35f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\
",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b55sf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca28080773ed8c026159b2309297e1c8ccd7cf79c4c19e3a62d89bc5a95851fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b55sf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://86d79d5837993b0bfb40c7114fd69f45a9bfd2e956b5b0fe062706e920fecd48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:55Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b55sf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7df3bf6c507e0fd5fb0f32a8785d67c96f47255fdc5d2aafb8838260ac334d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b55sf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://96aa7fcebdc88f01d2260f95d255244e28c30d422f954da2222a5b7c17d05b96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b55sf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b1486d0475f4d248f425b711ee757032370a9bdddb8d33c83ba9db41549d1dd9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:37:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b55sf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://62c923d955013808a55d99cb73f4239900fc83a2f53e1e8cceff3e9bc5768188\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b55sf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56474d5374e1047078d38c60dcbd00f4495bcc0d651a9e75fa70d64e34b10baa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://56474d5374e1047078d38c60dcbd00f4495bcc0d651a9e75fa70d64e34b10baa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T11:36:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T11:36:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b55sf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T11:36:53Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-q9rpr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:37:01Z is after 2025-08-24T17:21:41Z" Nov 25 11:37:01 crc kubenswrapper[4706]: I1125 11:37:01.209114 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:37:01 crc kubenswrapper[4706]: I1125 11:37:01.210009 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:37:01 crc kubenswrapper[4706]: I1125 11:37:01.210049 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:37:01 crc kubenswrapper[4706]: I1125 11:37:01.210078 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:37:01 crc kubenswrapper[4706]: I1125 11:37:01.210094 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:37:01Z","lastTransitionTime":"2025-11-25T11:37:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:37:01 crc kubenswrapper[4706]: I1125 11:37:01.212788 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"363ff191-6229-47e9-a7d0-1c72f21e7c61\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://71b496da1a81efbb50a84766e610a6b03e032a4e2cb5a71191395ffb85f6b1f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://83b1d9c6079
3e3e0b5943d7cccd50656df78c4655b84e12c8dd1ba7d99a7990d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab8621c83015577b9039ac2ba9ce46f8b29f66d77da31a02d179132d923741bd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4d0ce4e175dd8da8d15b26e60ced87ee11dc8079ce730cfbdce1b3f4f08b1d2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:850
6ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T11:36:32Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:37:01Z is after 2025-08-24T17:21:41Z" Nov 25 11:37:01 crc kubenswrapper[4706]: I1125 11:37:01.225505 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://998291d5af3be798ff4e2f00d043f615e086fef44e541071bbaf781983955ce6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-11-25T11:37:01Z is after 2025-08-24T17:21:41Z" Nov 25 11:37:01 crc kubenswrapper[4706]: I1125 11:37:01.238033 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:51Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:37:01Z is after 2025-08-24T17:21:41Z" Nov 25 11:37:01 crc kubenswrapper[4706]: I1125 11:37:01.255520 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:51Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:37:01Z is after 2025-08-24T17:21:41Z" Nov 25 11:37:01 crc kubenswrapper[4706]: I1125 11:37:01.267589 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:51Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:37:01Z is after 2025-08-24T17:21:41Z" Nov 25 11:37:01 crc kubenswrapper[4706]: I1125 11:37:01.289768 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"21277b4b-1e5d-4345-ba2a-39957194f021\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e336808761e1c6c5eaa04fd06cbb4d0c0384a2cbd3dfd4c1b3a877e7e0f0c82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cfaf9f13d49eb5c52817b0d082263791cc1dca82a23282452f1393dd693ca27a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://634b7b0df29329562f6ead9641186eee129945efc5a2d784ff6474d213b2baea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b3642576d5ecf314b809b90f8a76244e5ea54178f78729eb6521b09b7daa9c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b63b9c87fed8e56acef62af3c5b75cf637a058ada9dd8ef5afc317e99e12162\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://adfea6c3d1d62c9ce2656cae203bccf32ab19165305db3731e3db92915dd7d4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://adfea6c3d1d62c9ce2656cae203bccf32ab19165305db3731e3db92915dd7d4c\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2025-11-25T11:36:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T11:36:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://29e8fd847ab683471a692fda5b7c7d6105db11c5aecf09bb23080c58cd97c06a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://29e8fd847ab683471a692fda5b7c7d6105db11c5aecf09bb23080c58cd97c06a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T11:36:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T11:36:34Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://a4b1f85fd0291c239854093c0b26f0802291cbe4c4bab384fc69a5d21165343a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a4b1f85fd0291c239854093c0b26f0802291cbe4c4bab384fc69a5d21165343a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T11:36:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T11:36:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T11:36:32Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:37:01Z is after 2025-08-24T17:21:41Z" Nov 25 11:37:01 crc kubenswrapper[4706]: I1125 11:37:01.301818 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://23abd4bcc68d2a090882edb55d0e8569032affe5f4ebf05279e18ba3e9f9d8db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount
\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a068e34d29a7f39157ffd6e364ce643f5280f5184c13a281043247117d451364\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:37:01Z is after 2025-08-24T17:21:41Z" Nov 25 11:37:01 crc kubenswrapper[4706]: I1125 11:37:01.313163 4706 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:37:01 crc kubenswrapper[4706]: I1125 11:37:01.313206 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:37:01 crc kubenswrapper[4706]: I1125 11:37:01.313218 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:37:01 crc kubenswrapper[4706]: I1125 11:37:01.313238 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:37:01 crc kubenswrapper[4706]: I1125 11:37:01.313251 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:37:01Z","lastTransitionTime":"2025-11-25T11:37:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:37:01 crc kubenswrapper[4706]: I1125 11:37:01.317744 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-cjmvf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"150b96fa-570a-4b32-a82a-3275127d5b51\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:53Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:53Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:53Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2ml6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f9f9981b5f064aa5b007f4b2a2ecdc7f783e1a33e73b9e8b157eccfc54e93ff6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f9f9981b5f064aa5b007f4b2a2ecdc7f783e1a33e73b9e8b157eccfc54e93ff6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T11:36:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T11:36:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2ml6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e1e9db3e634932b935a1eb04923d02faf743f2831039edeba41d172ea6d8c52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e1e9db3e634932b935a1eb04923d02faf743f2831039edeba41d172ea6d8c52\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T11:36:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T11:36:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2ml6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0cee50b6983d9c650efbb5959311b6c33c2e0e2ff504fceadc8ff807f368c36e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0cee50b6983d9c650efbb5959311b6c33c2e0e2ff504fceadc8ff807f368c36e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T11:36:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T11:36:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2ml6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://29281b46d740a7e527313a667c3896430eb51ba2c50c5e406fb94d8959dbe855\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://29281b46d740a7e527313a667c3896430eb51ba2c50c5e406fb94d8959dbe855\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T11:36:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T11:36:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2ml6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0ff2d1408b3b635ada726fc15a15472d3fd7c61e21ffe0379d137fdd543c436\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:37:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2ml6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2m
l6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T11:36:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-cjmvf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:37:01Z is after 2025-08-24T17:21:41Z" Nov 25 11:37:01 crc kubenswrapper[4706]: I1125 11:37:01.331691 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ce0e2e75-834b-46fb-bc84-229e60f904b1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:32Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:32Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://86001c3abc077d36ed1fa0c37bb6163896fb9cde28b58affd2f67fb8a024165b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://24c326f147def477e6dd794576cbdc9aed69f799cc18984f475496748b05eb32\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://c65af8b438f57256d8c22cb34f68922d628338e384ca97d694b0dbf2d41a5e27\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://db08dd21321e0e49c2bcec934b9c4ca65e93ed3eff5d3d110b0137d37ebe255e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://333951d9a31cf3e7c1e98d27f636e2425f87cd082a8a5acae66533a76f5ad206\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-25T11:36:51Z\\\",\\\"message\\\":\\\" shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI1125 11:36:51.292762 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI1125 11:36:51.292767 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI1125 11:36:51.292853 1 
envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI1125 11:36:51.292876 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI1125 11:36:51.293041 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-1230105117/tls.crt::/tmp/serving-cert-1230105117/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1764070595\\\\\\\\\\\\\\\" (2025-11-25 11:36:34 +0000 UTC to 2025-12-25 11:36:35 +0000 UTC (now=2025-11-25 11:36:51.29301304 +0000 UTC))\\\\\\\"\\\\nI1125 11:36:51.293171 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1230105117/tls.crt::/tmp/serving-cert-1230105117/tls.key\\\\\\\"\\\\nI1125 11:36:51.293210 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1764070605\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1764070605\\\\\\\\\\\\\\\" (2025-11-25 10:36:45 +0000 UTC to 2026-11-25 10:36:45 +0000 UTC (now=2025-11-25 11:36:51.293188774 +0000 UTC))\\\\\\\"\\\\nI1125 11:36:51.293233 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI1125 11:36:51.293259 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI1125 11:36:51.293279 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nF1125 11:36:51.293378 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T11:36:34Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe85a38abd8df52ad0fbd3dd6b048b8c42390b6064d3601996727dadb3fcbe69\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:34Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea87e7399f4267877f5e967eb62bd500b646e88ee8ee20a71c3bf7c0941f02a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ea87e7399f4267877f5e967eb62bd500b646e88ee8ee20a71c3bf7c0941f02a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T11:36:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2025-11-25T11:36:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T11:36:32Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:37:01Z is after 2025-08-24T17:21:41Z" Nov 25 11:37:01 crc kubenswrapper[4706]: I1125 11:37:01.344928 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-dhfpm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0930887a-320c-4506-8c9c-f94d6d64516a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://736e37ff944f81ac9808ff8a76d36837aeabc76a4c08bbeba3f707616e1f0884\\\",\\\"image\\\"
:\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g7sgt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://86f4bfd310c27ea3b77c2f58c91e153db5f1794871a3fbeb5711cc119aa81e38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g7sgt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T11:36:53Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-dhfpm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:37:01Z is after 2025-08-24T17:21:41Z" Nov 25 11:37:01 crc kubenswrapper[4706]: I1125 11:37:01.356556 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-nh9sc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7813e79d-885d-4cf1-ac27-039e998473b7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea634334242536d35bf36e9078539cad4658b161b61e6051d9bb6d8544e71f5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready
\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g9gvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T11:36:53Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-nh9sc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:37:01Z is after 2025-08-24T17:21:41Z" Nov 25 11:37:01 crc kubenswrapper[4706]: I1125 11:37:01.370988 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad79bed891e80837fc120b01cb2b41a16493f2f5281c83a6bb489cc17c6da995\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-11-25T11:37:01Z is after 2025-08-24T17:21:41Z" Nov 25 11:37:01 crc kubenswrapper[4706]: I1125 11:37:01.381722 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-lpc7s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3ec2e656-a68d-4339-92d5-0c157f7f7783\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c3a1481dd8cb88b79d8addfbfd40caf18850769e4492c2af316105b7f6779f9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"hos
t\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w54mf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T11:36:56Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lpc7s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:37:01Z is after 2025-08-24T17:21:41Z" Nov 25 11:37:01 crc kubenswrapper[4706]: I1125 11:37:01.394515 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:51Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:37:01Z is after 2025-08-24T17:21:41Z" Nov 25 11:37:01 crc kubenswrapper[4706]: I1125 11:37:01.408101 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-s47nr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9912058e-28f5-4cec-9eeb-03e37e0dc5c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d03353478b53d9441951702b66365bb3a08ad9c509347472bbb31049851435a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wfqx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T11:36:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-s47nr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:37:01Z is after 2025-08-24T17:21:41Z" Nov 25 11:37:01 crc kubenswrapper[4706]: I1125 11:37:01.415976 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:37:01 crc 
kubenswrapper[4706]: I1125 11:37:01.416003 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:37:01 crc kubenswrapper[4706]: I1125 11:37:01.416011 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:37:01 crc kubenswrapper[4706]: I1125 11:37:01.416026 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:37:01 crc kubenswrapper[4706]: I1125 11:37:01.416035 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:37:01Z","lastTransitionTime":"2025-11-25T11:37:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 11:37:01 crc kubenswrapper[4706]: I1125 11:37:01.456804 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-q9rpr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f1218bae-4153-4490-8847-ab2d07ca0ab6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:53Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:53Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da5cea02464a703174faaa2a8a7dc6ba3c26bca96be0219f7304d81aba5be54e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b55sf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e92e9ade6889e5400b3c3ddff066aa544d425cf0637b75071678b8c63f8e35f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b55sf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca28080773ed8c026159b2309297e1c8ccd7cf79c4c19e3a62d89bc5a95851fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b55sf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://86d79d5837993b0bfb40c7114fd69f45a9bfd2e956b5b0fe062706e920fecd48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b55sf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7df3bf6c507e0fd5fb0f32a8785d67c96f47255fdc5d2aafb8838260ac334d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b55sf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://96aa7fcebdc88f01d2260f95d255244e28c30d422f954da2222a5b7c17d05b96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b55sf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b1486d0475f4d248f425b711ee757032370a9bdddb8d33c83ba9db41549d1dd9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:37:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mou
ntPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b55sf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://62c923d955013808a55d99cb73f4239900fc83a2f53e1e8cceff3e9bc5768188\\\",\
\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b55sf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56474d5374e1047078d38c60dcbd00f4495bcc0d651a9e75fa70d64e34b10baa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://56474d5374e1047078d38c60dcbd00f4495bcc0d651a9e75fa70d64e34b10baa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T11:36:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T11:36:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\
\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b55sf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T11:36:53Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-q9rpr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:37:01Z is after 2025-08-24T17:21:41Z" Nov 25 11:37:01 crc kubenswrapper[4706]: I1125 11:37:01.476840 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"363ff191-6229-47e9-a7d0-1c72f21e7c61\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://71b496da1a81efbb50a84766e610a6b03e032a4e2cb5a71191395ffb85f6b1f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://83b1d9c60793e3e0b5943d7cccd50656df78c4655b84e12c8dd1ba7d99a7990d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab8621c83015577b9039ac2ba9ce46f8b29f66d77da31a02d179132d923741bd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4d0ce4e175dd8da8d15b26e60ced87ee11dc8079ce730cfbdce1b3f4f08b1d2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-11-25T11:36:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T11:36:32Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:37:01Z is after 2025-08-24T17:21:41Z" Nov 25 11:37:01 crc kubenswrapper[4706]: I1125 11:37:01.488890 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://998291d5af3be798ff4e2f00d043f615e086fef44e541071bbaf781983955ce6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-11-25T11:37:01Z is after 2025-08-24T17:21:41Z" Nov 25 11:37:01 crc kubenswrapper[4706]: I1125 11:37:01.501526 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:51Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:37:01Z is after 2025-08-24T17:21:41Z" Nov 25 11:37:01 crc kubenswrapper[4706]: I1125 11:37:01.513581 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:51Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:37:01Z is after 2025-08-24T17:21:41Z" Nov 25 11:37:01 crc kubenswrapper[4706]: I1125 11:37:01.518055 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:37:01 crc kubenswrapper[4706]: I1125 11:37:01.518114 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:37:01 crc kubenswrapper[4706]: I1125 11:37:01.518127 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:37:01 crc 
kubenswrapper[4706]: I1125 11:37:01.518150 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:37:01 crc kubenswrapper[4706]: I1125 11:37:01.518165 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:37:01Z","lastTransitionTime":"2025-11-25T11:37:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 11:37:01 crc kubenswrapper[4706]: I1125 11:37:01.533507 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"21277b4b-1e5d-4345-ba2a-39957194f021\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e336808761e1c6c5eaa04fd06cbb4d0c0384a2cbd3dfd4c1b3a877e7e0f0c82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cfaf9f13d49eb5c52817b0d082263791cc1dca82a23282452f1393dd693ca27a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://634b7b0df29329562f6ead9641186eee129945efc5a2d784ff6474d213b2baea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true
,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b3642576d5ecf314b809b90f8a76244e5ea54178f78729eb6521b09b7daa9c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b63b9c87fed8e56acef62af3c5b75cf637a058ada9dd8ef5afc317e99e12162\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/
\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://adfea6c3d1d62c9ce2656cae203bccf32ab19165305db3731e3db92915dd7d4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://adfea6c3d1d62c9ce2656cae203bccf32ab19165305db3731e3db92915dd7d4c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T11:36:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T11:36:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://29e8fd847ab683471a692fda5b7c7d6105db11c5aecf09bb23080c58cd97c06a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://29e8fd847ab683471a692fda5b7c7d6105db11c5aecf09bb23080c58cd97c06a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T11:36:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T11:36:34Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://a4b1f85fd0291c239854093c0b26f0802291cbe4c4bab384fc69a5d21165343a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a4b1f85fd0291c239854093c0b26f0802291cbe4c4bab384fc69a5d21165343a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T11:36:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T11:36:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T11:36:32Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:37:01Z is after 2025-08-24T17:21:41Z" Nov 25 11:37:01 crc kubenswrapper[4706]: I1125 11:37:01.548121 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://23abd4bcc68d2a090882edb55d0e8569032affe5f4ebf05279e18ba3e9f9d8db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a068e34d29a7f39157ffd6e364ce643f5280f5184c13a281043247117d451364\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:37:01Z is after 2025-08-24T17:21:41Z" Nov 25 11:37:01 crc kubenswrapper[4706]: I1125 11:37:01.569540 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-cjmvf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"150b96fa-570a-4b32-a82a-3275127d5b51\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:53Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:53Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:53Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2ml6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f9f9981b5f064aa5b007f4b2a2ecdc7f783e1a33e73b9e8b157eccfc54e93ff6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"container
ID\\\":\\\"cri-o://f9f9981b5f064aa5b007f4b2a2ecdc7f783e1a33e73b9e8b157eccfc54e93ff6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T11:36:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T11:36:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2ml6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e1e9db3e634932b935a1eb04923d02faf743f2831039edeba41d172ea6d8c52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e1e9db3e634932b935a1eb04923d02faf743f2831039edeba41d172ea6d8c52\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T11:36:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T11:36:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-a
llowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2ml6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0cee50b6983d9c650efbb5959311b6c33c2e0e2ff504fceadc8ff807f368c36e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0cee50b6983d9c650efbb5959311b6c33c2e0e2ff504fceadc8ff807f368c36e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T11:36:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T11:36:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2ml6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://29281b46d740a7e527313a667c3896430eb51ba2c50c5e406fb94d8959dbe855\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":fa
lse,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://29281b46d740a7e527313a667c3896430eb51ba2c50c5e406fb94d8959dbe855\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T11:36:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T11:36:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2ml6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0ff2d1408b3b635ada726fc15a15472d3fd7c61e21ffe0379d137fdd543c436\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:37:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2ml6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc
84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2ml6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T11:36:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-cjmvf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:37:01Z is after 2025-08-24T17:21:41Z" Nov 25 11:37:01 crc kubenswrapper[4706]: I1125 11:37:01.621026 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:37:01 crc kubenswrapper[4706]: I1125 11:37:01.621072 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:37:01 crc kubenswrapper[4706]: I1125 11:37:01.621089 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:37:01 crc kubenswrapper[4706]: I1125 11:37:01.621123 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:37:01 crc kubenswrapper[4706]: I1125 11:37:01.621141 4706 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:37:01Z","lastTransitionTime":"2025-11-25T11:37:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 11:37:01 crc kubenswrapper[4706]: I1125 11:37:01.724125 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:37:01 crc kubenswrapper[4706]: I1125 11:37:01.724173 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:37:01 crc kubenswrapper[4706]: I1125 11:37:01.724185 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:37:01 crc kubenswrapper[4706]: I1125 11:37:01.724203 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:37:01 crc kubenswrapper[4706]: I1125 11:37:01.724215 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:37:01Z","lastTransitionTime":"2025-11-25T11:37:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:37:01 crc kubenswrapper[4706]: I1125 11:37:01.826822 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:37:01 crc kubenswrapper[4706]: I1125 11:37:01.827082 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:37:01 crc kubenswrapper[4706]: I1125 11:37:01.827165 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:37:01 crc kubenswrapper[4706]: I1125 11:37:01.827278 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:37:01 crc kubenswrapper[4706]: I1125 11:37:01.827400 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:37:01Z","lastTransitionTime":"2025-11-25T11:37:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 11:37:01 crc kubenswrapper[4706]: I1125 11:37:01.921232 4706 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 11:37:01 crc kubenswrapper[4706]: I1125 11:37:01.921368 4706 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 11:37:01 crc kubenswrapper[4706]: E1125 11:37:01.921520 4706 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 25 11:37:01 crc kubenswrapper[4706]: I1125 11:37:01.921239 4706 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 11:37:01 crc kubenswrapper[4706]: E1125 11:37:01.921731 4706 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 25 11:37:01 crc kubenswrapper[4706]: E1125 11:37:01.921828 4706 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 25 11:37:01 crc kubenswrapper[4706]: I1125 11:37:01.930382 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:37:01 crc kubenswrapper[4706]: I1125 11:37:01.930431 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:37:01 crc kubenswrapper[4706]: I1125 11:37:01.930448 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:37:01 crc kubenswrapper[4706]: I1125 11:37:01.930554 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:37:01 crc kubenswrapper[4706]: I1125 11:37:01.930597 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:37:01Z","lastTransitionTime":"2025-11-25T11:37:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:37:01 crc kubenswrapper[4706]: I1125 11:37:01.950234 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"21277b4b-1e5d-4345-ba2a-39957194f021\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e336808761e1c6c5eaa04fd06cbb4d0c0384a2cbd3dfd4c1b3a877e7e0f0c82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\
\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cfaf9f13d49eb5c52817b0d082263791cc1dca82a23282452f1393dd693ca27a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://634b7b0df29329562f6ead9641186eee129945efc5a2d784ff6474d213b2baea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b3642576d5ecf314b809b90f8a76244e5ea54178f78729eb6521b09b7daa9c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-
v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b63b9c87fed8e56acef62af3c5b75cf637a058ada9dd8ef5afc317e99e12162\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://adfea6c3d1d62c9ce2656cae203bccf32ab19165305db3731e3db92915dd7d4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":
true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://adfea6c3d1d62c9ce2656cae203bccf32ab19165305db3731e3db92915dd7d4c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T11:36:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T11:36:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://29e8fd847ab683471a692fda5b7c7d6105db11c5aecf09bb23080c58cd97c06a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://29e8fd847ab683471a692fda5b7c7d6105db11c5aecf09bb23080c58cd97c06a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T11:36:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T11:36:34Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://a4b1f85fd0291c239854093c0b26f0802291cbe4c4bab384fc69a5d21165343a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a4b1f85fd0291c239854093c0b26f0802291cbe4c4bab384fc69a5d21165343a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T11:36:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2
025-11-25T11:36:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T11:36:32Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:37:01Z is after 2025-08-24T17:21:41Z" Nov 25 11:37:01 crc kubenswrapper[4706]: I1125 11:37:01.965583 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://23abd4bcc68d2a090882edb55d0e8569032affe5f4ebf05279e18ba3e9f9d8db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a068e34d29a7f39157ffd6e364ce643f5280f5184c13a281043247117d451364\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:37:01Z is after 2025-08-24T17:21:41Z" Nov 25 11:37:01 crc kubenswrapper[4706]: I1125 11:37:01.981371 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-cjmvf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"150b96fa-570a-4b32-a82a-3275127d5b51\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:53Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:53Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:53Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2ml6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f9f9981b5f064aa5b007f4b2a2ecdc7f783e1a33e73b9e8b157eccfc54e93ff6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"container
ID\\\":\\\"cri-o://f9f9981b5f064aa5b007f4b2a2ecdc7f783e1a33e73b9e8b157eccfc54e93ff6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T11:36:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T11:36:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2ml6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e1e9db3e634932b935a1eb04923d02faf743f2831039edeba41d172ea6d8c52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e1e9db3e634932b935a1eb04923d02faf743f2831039edeba41d172ea6d8c52\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T11:36:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T11:36:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-a
llowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2ml6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0cee50b6983d9c650efbb5959311b6c33c2e0e2ff504fceadc8ff807f368c36e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0cee50b6983d9c650efbb5959311b6c33c2e0e2ff504fceadc8ff807f368c36e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T11:36:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T11:36:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2ml6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://29281b46d740a7e527313a667c3896430eb51ba2c50c5e406fb94d8959dbe855\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":fa
lse,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://29281b46d740a7e527313a667c3896430eb51ba2c50c5e406fb94d8959dbe855\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T11:36:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T11:36:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2ml6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0ff2d1408b3b635ada726fc15a15472d3fd7c61e21ffe0379d137fdd543c436\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:37:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2ml6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc
84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2ml6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T11:36:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-cjmvf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:37:01Z is after 2025-08-24T17:21:41Z" Nov 25 11:37:02 crc kubenswrapper[4706]: I1125 11:37:02.003772 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ce0e2e75-834b-46fb-bc84-229e60f904b1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:32Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:32Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://86001c3abc077d36ed1fa0c37bb6163896fb9cde28b58affd2f67fb8a024165b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://24c326f147def477e6dd794576cbdc9aed69f799cc18984f475496748b05eb32\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c65af8b438f57256d8c22cb34f68922d628338e384ca97d694b0dbf2d41a5e27\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://db08dd21321e0e49c2bcec934b9c4ca65e93ed3eff5d3d110b0137d37ebe255e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://333951d9a31cf3e7c1e98d27f636e2425f87cd082a8a5acae66533a76f5ad206\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-25T11:36:51Z\\\",\\\"message\\\":\\\" shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI1125 11:36:51.292762 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" 
name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI1125 11:36:51.292767 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI1125 11:36:51.292853 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI1125 11:36:51.292876 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI1125 11:36:51.293041 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-1230105117/tls.crt::/tmp/serving-cert-1230105117/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1764070595\\\\\\\\\\\\\\\" (2025-11-25 11:36:34 +0000 UTC to 2025-12-25 11:36:35 +0000 UTC (now=2025-11-25 11:36:51.29301304 +0000 UTC))\\\\\\\"\\\\nI1125 11:36:51.293171 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1230105117/tls.crt::/tmp/serving-cert-1230105117/tls.key\\\\\\\"\\\\nI1125 11:36:51.293210 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1764070605\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1764070605\\\\\\\\\\\\\\\" (2025-11-25 10:36:45 +0000 UTC to 2026-11-25 10:36:45 +0000 UTC (now=2025-11-25 11:36:51.293188774 +0000 UTC))\\\\\\\"\\\\nI1125 11:36:51.293233 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI1125 11:36:51.293259 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI1125 11:36:51.293279 1 tlsconfig.go:243] \\\\\\\"Starting 
DynamicServingCertificateController\\\\\\\"\\\\nF1125 11:36:51.293378 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T11:36:34Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe85a38abd8df52ad0fbd3dd6b048b8c42390b6064d3601996727dadb3fcbe69\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:34Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea87e7399f4267877f5e967eb62bd500b646e88ee8ee20a71c3bf7c0941f02a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ea87e7399f4267877f5e967eb62bd500b646e88ee8ee20a
71c3bf7c0941f02a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T11:36:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T11:36:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T11:36:32Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:37:02Z is after 2025-08-24T17:21:41Z" Nov 25 11:37:02 crc kubenswrapper[4706]: I1125 11:37:02.014869 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-dhfpm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0930887a-320c-4506-8c9c-f94d6d64516a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://736e37ff944f81ac9808ff8a76d36837aeabc76a4c08bbeba3f707616e1f0884\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g7sgt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://86f4bfd310c27ea3b77c2f58c91e153db5f17948
71a3fbeb5711cc119aa81e38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g7sgt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T11:36:53Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-dhfpm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:37:02Z is after 2025-08-24T17:21:41Z" Nov 25 11:37:02 crc kubenswrapper[4706]: I1125 11:37:02.023411 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-nh9sc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7813e79d-885d-4cf1-ac27-039e998473b7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea634334242536d35bf36e9078539cad4658b161b61e6051d9bb6d8544e71f5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g9gvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T11:36:53Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-nh9sc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:37:02Z is after 2025-08-24T17:21:41Z" Nov 25 11:37:02 crc kubenswrapper[4706]: I1125 11:37:02.032979 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:37:02 crc kubenswrapper[4706]: I1125 11:37:02.033017 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:37:02 crc kubenswrapper[4706]: I1125 11:37:02.033026 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:37:02 crc kubenswrapper[4706]: I1125 11:37:02.033043 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:37:02 crc kubenswrapper[4706]: I1125 11:37:02.033054 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:37:02Z","lastTransitionTime":"2025-11-25T11:37:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:37:02 crc kubenswrapper[4706]: I1125 11:37:02.037513 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad79bed891e80837fc120b01cb2b41a16493f2f5281c83a6bb489cc17c6da995\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:37:02Z is after 2025-08-24T17:21:41Z" Nov 25 11:37:02 crc kubenswrapper[4706]: I1125 11:37:02.048852 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-lpc7s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3ec2e656-a68d-4339-92d5-0c157f7f7783\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c3a1481dd8cb88b79d8addfbfd40caf18850769e4492c2af316105b7f6779f9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"res
tartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w54mf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T11:36:56Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lpc7s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:37:02Z is after 2025-08-24T17:21:41Z" Nov 25 11:37:02 crc kubenswrapper[4706]: I1125 11:37:02.060796 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:37:02Z is after 2025-08-24T17:21:41Z" Nov 25 11:37:02 crc kubenswrapper[4706]: I1125 11:37:02.072661 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:51Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:37:02Z is after 2025-08-24T17:21:41Z" Nov 25 11:37:02 crc kubenswrapper[4706]: I1125 11:37:02.085441 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-s47nr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9912058e-28f5-4cec-9eeb-03e37e0dc5c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d03353478b53d9441951702b66365bb3a08ad9c509347472bbb31049851435a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wfqx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T11:36:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-s47nr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:37:02Z is after 2025-08-24T17:21:41Z" Nov 25 11:37:02 crc kubenswrapper[4706]: I1125 11:37:02.101924 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-q9rpr" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f1218bae-4153-4490-8847-ab2d07ca0ab6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:53Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:53Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da5cea02464a703174faaa2a8a7dc6ba3c26bca96be0219f7304d81aba5be54e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b55sf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e92e9ade6889e5400b3c3ddff066aa544d425cf0637b75071678b8c63f8e35f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b55sf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca28080773ed8c026159b2309297e1c8ccd7cf79c4c19e3a62d89bc5a95851fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b55sf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://86d79d5837993b0bfb40c7114fd69f45a9bfd2e956b5b0fe062706e920fecd48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:55Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b55sf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7df3bf6c507e0fd5fb0f32a8785d67c96f47255fdc5d2aafb8838260ac334d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b55sf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://96aa7fcebdc88f01d2260f95d255244e28c30d422f954da2222a5b7c17d05b96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b55sf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b1486d0475f4d248f425b711ee757032370a9bdddb8d33c83ba9db41549d1dd9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:37:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b55sf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://62c923d955013808a55d99cb73f4239900fc83a2f53e1e8cceff3e9bc5768188\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b55sf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56474d5374e1047078d38c60dcbd00f4495bcc0d651a9e75fa70d64e34b10baa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://56474d5374e1047078d38c60dcbd00f4495bcc0d651a9e75fa70d64e34b10baa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T11:36:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T11:36:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b55sf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T11:36:53Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-q9rpr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:37:02Z is after 2025-08-24T17:21:41Z" Nov 25 11:37:02 crc kubenswrapper[4706]: I1125 11:37:02.114013 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"363ff191-6229-47e9-a7d0-1c72f21e7c61\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://71b496da1a81efbb50a84766e610a6b03e032a4e2cb5a71191395ffb85f6b1f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026
b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://83b1d9c60793e3e0b5943d7cccd50656df78c4655b84e12c8dd1ba7d99a7990d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab8621c83015577b9039ac2ba9ce46f8b29f66d77da31a02d179132d923741bd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"
name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4d0ce4e175dd8da8d15b26e60ced87ee11dc8079ce730cfbdce1b3f4f08b1d2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T11:36:32Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:37:02Z is after 2025-08-24T17:21:41Z" Nov 25 11:37:02 crc kubenswrapper[4706]: I1125 11:37:02.118535 4706 generic.go:334] "Generic (PLEG): container finished" podID="150b96fa-570a-4b32-a82a-3275127d5b51" containerID="b0ff2d1408b3b635ada726fc15a15472d3fd7c61e21ffe0379d137fdd543c436" exitCode=0 Nov 25 11:37:02 crc kubenswrapper[4706]: I1125 11:37:02.118574 4706 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-cjmvf" event={"ID":"150b96fa-570a-4b32-a82a-3275127d5b51","Type":"ContainerDied","Data":"b0ff2d1408b3b635ada726fc15a15472d3fd7c61e21ffe0379d137fdd543c436"} Nov 25 11:37:02 crc kubenswrapper[4706]: I1125 11:37:02.118694 4706 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 25 11:37:02 crc kubenswrapper[4706]: I1125 11:37:02.127834 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://998291d5af3be798ff4e2f00d043f615e086fef44e541071bbaf781983955ce6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"
recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:37:02Z is after 2025-08-24T17:21:41Z" Nov 25 11:37:02 crc kubenswrapper[4706]: I1125 11:37:02.136359 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:37:02 crc kubenswrapper[4706]: I1125 11:37:02.136395 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:37:02 crc kubenswrapper[4706]: I1125 11:37:02.136404 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:37:02 crc kubenswrapper[4706]: I1125 11:37:02.136423 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:37:02 crc kubenswrapper[4706]: I1125 11:37:02.136434 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:37:02Z","lastTransitionTime":"2025-11-25T11:37:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:37:02 crc kubenswrapper[4706]: I1125 11:37:02.151098 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:51Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:37:02Z is after 2025-08-24T17:21:41Z" Nov 25 11:37:02 crc kubenswrapper[4706]: I1125 11:37:02.188455 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad79bed891e80837fc120b01cb2b41a16493f2f5281c83a6bb489cc17c6da995\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-11-25T11:37:02Z is after 2025-08-24T17:21:41Z" Nov 25 11:37:02 crc kubenswrapper[4706]: I1125 11:37:02.225844 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-lpc7s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3ec2e656-a68d-4339-92d5-0c157f7f7783\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c3a1481dd8cb88b79d8addfbfd40caf18850769e4492c2af316105b7f6779f9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"hos
t\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w54mf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T11:36:56Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lpc7s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:37:02Z is after 2025-08-24T17:21:41Z" Nov 25 11:37:02 crc kubenswrapper[4706]: I1125 11:37:02.243620 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:37:02 crc kubenswrapper[4706]: I1125 11:37:02.243753 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:37:02 crc kubenswrapper[4706]: I1125 11:37:02.243832 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:37:02 crc kubenswrapper[4706]: I1125 11:37:02.243926 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:37:02 crc kubenswrapper[4706]: I1125 11:37:02.244003 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:37:02Z","lastTransitionTime":"2025-11-25T11:37:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:37:02 crc kubenswrapper[4706]: I1125 11:37:02.270449 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://998291d5af3be798ff4e2f00d043f615e086fef44e541071bbaf781983955ce6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:37:02Z is after 2025-08-24T17:21:41Z" Nov 25 11:37:02 crc kubenswrapper[4706]: I1125 11:37:02.310530 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:51Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:37:02Z is after 2025-08-24T17:21:41Z" Nov 25 11:37:02 crc kubenswrapper[4706]: I1125 11:37:02.347108 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:37:02 crc kubenswrapper[4706]: I1125 11:37:02.347147 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:37:02 crc kubenswrapper[4706]: I1125 11:37:02.347155 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:37:02 crc kubenswrapper[4706]: I1125 11:37:02.347171 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:37:02 crc kubenswrapper[4706]: I1125 11:37:02.347180 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:37:02Z","lastTransitionTime":"2025-11-25T11:37:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 11:37:02 crc kubenswrapper[4706]: I1125 11:37:02.350223 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:37:02Z is after 2025-08-24T17:21:41Z" Nov 25 11:37:02 crc kubenswrapper[4706]: I1125 11:37:02.389538 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:51Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:37:02Z is after 2025-08-24T17:21:41Z" Nov 25 11:37:02 crc kubenswrapper[4706]: I1125 11:37:02.433062 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-s47nr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9912058e-28f5-4cec-9eeb-03e37e0dc5c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d03353478b53d9441951702b66365bb3a08ad9c509347472bbb31049851435a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wfqx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T11:36:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-s47nr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:37:02Z is after 2025-08-24T17:21:41Z" Nov 25 11:37:02 crc kubenswrapper[4706]: I1125 11:37:02.449518 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:37:02 crc 
kubenswrapper[4706]: I1125 11:37:02.449561 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:37:02 crc kubenswrapper[4706]: I1125 11:37:02.449574 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:37:02 crc kubenswrapper[4706]: I1125 11:37:02.449594 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:37:02 crc kubenswrapper[4706]: I1125 11:37:02.449607 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:37:02Z","lastTransitionTime":"2025-11-25T11:37:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 11:37:02 crc kubenswrapper[4706]: I1125 11:37:02.478649 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-q9rpr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f1218bae-4153-4490-8847-ab2d07ca0ab6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:53Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:53Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da5cea02464a703174faaa2a8a7dc6ba3c26bca96be0219f7304d81aba5be54e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b55sf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e92e9ade6889e5400b3c3ddff066aa544d425cf0637b75071678b8c63f8e35f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b55sf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca28080773ed8c026159b2309297e1c8ccd7cf79c4c19e3a62d89bc5a95851fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b55sf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://86d79d5837993b0bfb40c7114fd69f45a9bfd2e956b5b0fe062706e920fecd48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b55sf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7df3bf6c507e0fd5fb0f32a8785d67c96f47255fdc5d2aafb8838260ac334d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b55sf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://96aa7fcebdc88f01d2260f95d255244e28c30d422f954da2222a5b7c17d05b96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b55sf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b1486d0475f4d248f425b711ee757032370a9bdddb8d33c83ba9db41549d1dd9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:37:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mou
ntPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b55sf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://62c923d955013808a55d99cb73f4239900fc83a2f53e1e8cceff3e9bc5768188\\\",\
\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b55sf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56474d5374e1047078d38c60dcbd00f4495bcc0d651a9e75fa70d64e34b10baa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://56474d5374e1047078d38c60dcbd00f4495bcc0d651a9e75fa70d64e34b10baa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T11:36:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T11:36:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\
\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b55sf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T11:36:53Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-q9rpr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:37:02Z is after 2025-08-24T17:21:41Z" Nov 25 11:37:02 crc kubenswrapper[4706]: I1125 11:37:02.513624 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"363ff191-6229-47e9-a7d0-1c72f21e7c61\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://71b496da1a81efbb50a84766e610a6b03e032a4e2cb5a71191395ffb85f6b1f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://83b1d9c60793e3e0b5943d7cccd50656df78c4655b84e12c8dd1ba7d99a7990d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab8621c83015577b9039ac2ba9ce46f8b29f66d77da31a02d179132d923741bd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4d0ce4e175dd8da8d15b26e60ced87ee11dc8079ce730cfbdce1b3f4f08b1d2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-11-25T11:36:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T11:36:32Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:37:02Z is after 2025-08-24T17:21:41Z" Nov 25 11:37:02 crc kubenswrapper[4706]: I1125 11:37:02.550027 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://23abd4bcc68d2a090882edb55d0e8569032affe5f4ebf05279e18ba3e9f9d8db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a068e34d29a7f39157ffd6e364ce643f5280f5184c13a281043247117d451364\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:37:02Z is after 2025-08-24T17:21:41Z" Nov 25 11:37:02 crc kubenswrapper[4706]: I1125 11:37:02.552393 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:37:02 crc kubenswrapper[4706]: I1125 11:37:02.552446 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:37:02 crc kubenswrapper[4706]: I1125 11:37:02.552457 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:37:02 crc kubenswrapper[4706]: I1125 11:37:02.552479 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:37:02 crc kubenswrapper[4706]: I1125 11:37:02.552492 4706 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:37:02Z","lastTransitionTime":"2025-11-25T11:37:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 11:37:02 crc kubenswrapper[4706]: I1125 11:37:02.594057 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-cjmvf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"150b96fa-570a-4b32-a82a-3275127d5b51\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:53Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:53Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:53Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2ml6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f9f9981b5f064aa5b007f4b2a2ecdc7f783e1a33e73b9e8b157eccfc54e93ff6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f9f9981b5f064aa5b007f4b2a2ecdc7f783e1a33e73b9e8b157eccfc54e93ff6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T11:36:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T11:36:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2ml6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e1e9db3e634932b935a1eb04923d02faf743f2831039edeba41d172ea6d8c52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e1e9db3e634932b935a1eb04923d02faf743f2831039edeba41d172ea6d8c52\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T11:36:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T11:36:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2ml6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0cee50b6983d9c650efbb5959311b6c33c2e0e2ff504fceadc8ff807f368c36e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0cee50b6983d9c650efbb5959311b6c33c2e0e2ff504fceadc8ff807f368c36e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T11:36:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T11:36:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2ml6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://29281b46d740a7e527313a667c3896430eb51ba2c50c5e406fb94d8959dbe855\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://29281b46d740a7e527313a667c3896430eb51ba2c50c5e406fb94d8959dbe855\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T11:36:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T11:36:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2ml6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0ff2d1408b3b635ada726fc15a15472d3fd7c61e21ffe0379d137fdd543c436\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b0ff2d1408b3b635ada726fc15a15472d3fd7c61e21ffe0379d137fdd543c436\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T11:37:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T11:37:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2ml6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"c
nibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2ml6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T11:36:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-cjmvf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:37:02Z is after 2025-08-24T17:21:41Z" Nov 25 11:37:02 crc kubenswrapper[4706]: I1125 11:37:02.636195 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"21277b4b-1e5d-4345-ba2a-39957194f021\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e336808761e1c6c5eaa04fd06cbb4d0c0384a2cbd3dfd4c1b3a877e7e0f0c82\\\",\\\"image\\\":\\\"quay
.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cfaf9f13d49eb5c52817b0d082263791cc1dca82a23282452f1393dd693ca27a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://634b7b0df29329562f6ead9641186eee129945efc5a2d784ff6474d213b2baea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v
4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b3642576d5ecf314b809b90f8a76244e5ea54178f78729eb6521b09b7daa9c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b63b9c87fed8e56acef62af3c5b75cf637a058ada9dd8ef5afc317e99e12162\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\
"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://adfea6c3d1d62c9ce2656cae203bccf32ab19165305db3731e3db92915dd7d4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://adfea6c3d1d62c9ce2656cae203bccf32ab19165305db3731e3db92915dd7d4c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T11:36:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T11:36:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://29e8fd847ab683471a692fda5b7c7d6105db11c5aecf09bb23080c58cd97c06a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://29e8fd847ab683471a692fda5b7c7d6105db11c5aecf09bb23080c58cd97c06a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T11:36:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T11:36:34Z\\\"}}},{
\\\"containerID\\\":\\\"cri-o://a4b1f85fd0291c239854093c0b26f0802291cbe4c4bab384fc69a5d21165343a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a4b1f85fd0291c239854093c0b26f0802291cbe4c4bab384fc69a5d21165343a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T11:36:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T11:36:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T11:36:32Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:37:02Z is after 2025-08-24T17:21:41Z" Nov 25 11:37:02 crc kubenswrapper[4706]: I1125 11:37:02.655368 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:37:02 crc kubenswrapper[4706]: I1125 11:37:02.655416 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:37:02 crc 
kubenswrapper[4706]: I1125 11:37:02.655425 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:37:02 crc kubenswrapper[4706]: I1125 11:37:02.655442 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:37:02 crc kubenswrapper[4706]: I1125 11:37:02.655453 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:37:02Z","lastTransitionTime":"2025-11-25T11:37:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 11:37:02 crc kubenswrapper[4706]: I1125 11:37:02.667174 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-nh9sc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7813e79d-885d-4cf1-ac27-039e998473b7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea634334242536d35bf36e9078539cad4658b161b61e6051d9bb6d8544e71f5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g9gvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T11:36:53Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-nh9sc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:37:02Z is after 2025-08-24T17:21:41Z" Nov 25 11:37:02 crc kubenswrapper[4706]: I1125 11:37:02.709835 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ce0e2e75-834b-46fb-bc84-229e60f904b1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:32Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:32Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://86001c3abc077d36ed1fa0c37bb6163896fb9cde28b58affd2f67fb8a024165b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://24c326f147def477e6dd794576cbdc9aed69f799cc18984f475496748b05eb32\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://c65af8b438f57256d8c22cb34f68922d628338e384ca97d694b0dbf2d41a5e27\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://db08dd21321e0e49c2bcec934b9c4ca65e93ed3eff5d3d110b0137d37ebe255e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://333951d9a31cf3e7c1e98d27f636e2425f87cd082a8a5acae66533a76f5ad206\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-25T11:36:51Z\\\",\\\"message\\\":\\\" shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI1125 11:36:51.292762 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI1125 11:36:51.292767 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI1125 11:36:51.292853 1 
envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI1125 11:36:51.292876 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI1125 11:36:51.293041 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-1230105117/tls.crt::/tmp/serving-cert-1230105117/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1764070595\\\\\\\\\\\\\\\" (2025-11-25 11:36:34 +0000 UTC to 2025-12-25 11:36:35 +0000 UTC (now=2025-11-25 11:36:51.29301304 +0000 UTC))\\\\\\\"\\\\nI1125 11:36:51.293171 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1230105117/tls.crt::/tmp/serving-cert-1230105117/tls.key\\\\\\\"\\\\nI1125 11:36:51.293210 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1764070605\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1764070605\\\\\\\\\\\\\\\" (2025-11-25 10:36:45 +0000 UTC to 2026-11-25 10:36:45 +0000 UTC (now=2025-11-25 11:36:51.293188774 +0000 UTC))\\\\\\\"\\\\nI1125 11:36:51.293233 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI1125 11:36:51.293259 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI1125 11:36:51.293279 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nF1125 11:36:51.293378 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T11:36:34Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe85a38abd8df52ad0fbd3dd6b048b8c42390b6064d3601996727dadb3fcbe69\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:34Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea87e7399f4267877f5e967eb62bd500b646e88ee8ee20a71c3bf7c0941f02a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ea87e7399f4267877f5e967eb62bd500b646e88ee8ee20a71c3bf7c0941f02a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T11:36:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2025-11-25T11:36:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T11:36:32Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:37:02Z is after 2025-08-24T17:21:41Z" Nov 25 11:37:02 crc kubenswrapper[4706]: I1125 11:37:02.752222 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-dhfpm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0930887a-320c-4506-8c9c-f94d6d64516a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://736e37ff944f81ac9808ff8a76d36837aeabc76a4c08bbeba3f707616e1f0884\\\",\\\"image\\\"
:\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g7sgt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://86f4bfd310c27ea3b77c2f58c91e153db5f1794871a3fbeb5711cc119aa81e38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g7sgt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T11:36:53Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-dhfpm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:37:02Z is after 2025-08-24T17:21:41Z" Nov 25 11:37:02 crc kubenswrapper[4706]: I1125 11:37:02.758176 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:37:02 crc kubenswrapper[4706]: I1125 11:37:02.758258 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:37:02 crc kubenswrapper[4706]: I1125 11:37:02.758269 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:37:02 crc kubenswrapper[4706]: I1125 11:37:02.758288 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:37:02 crc kubenswrapper[4706]: I1125 11:37:02.758324 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:37:02Z","lastTransitionTime":"2025-11-25T11:37:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:37:02 crc kubenswrapper[4706]: I1125 11:37:02.861643 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:37:02 crc kubenswrapper[4706]: I1125 11:37:02.861715 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:37:02 crc kubenswrapper[4706]: I1125 11:37:02.861746 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:37:02 crc kubenswrapper[4706]: I1125 11:37:02.861785 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:37:02 crc kubenswrapper[4706]: I1125 11:37:02.861812 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:37:02Z","lastTransitionTime":"2025-11-25T11:37:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:37:03 crc kubenswrapper[4706]: I1125 11:37:03.035891 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:37:03 crc kubenswrapper[4706]: I1125 11:37:03.035956 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:37:03 crc kubenswrapper[4706]: I1125 11:37:03.035967 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:37:03 crc kubenswrapper[4706]: I1125 11:37:03.035996 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:37:03 crc kubenswrapper[4706]: I1125 11:37:03.036006 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:37:03Z","lastTransitionTime":"2025-11-25T11:37:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:37:03 crc kubenswrapper[4706]: I1125 11:37:03.124488 4706 generic.go:334] "Generic (PLEG): container finished" podID="150b96fa-570a-4b32-a82a-3275127d5b51" containerID="c3b94746fe10e0f9375491a41d10973d2576eb69f0883cef3ef0132efb0e8fc9" exitCode=0 Nov 25 11:37:03 crc kubenswrapper[4706]: I1125 11:37:03.124557 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-cjmvf" event={"ID":"150b96fa-570a-4b32-a82a-3275127d5b51","Type":"ContainerDied","Data":"c3b94746fe10e0f9375491a41d10973d2576eb69f0883cef3ef0132efb0e8fc9"} Nov 25 11:37:03 crc kubenswrapper[4706]: I1125 11:37:03.124666 4706 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 25 11:37:03 crc kubenswrapper[4706]: I1125 11:37:03.138065 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:37:03 crc kubenswrapper[4706]: I1125 11:37:03.138095 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:37:03 crc kubenswrapper[4706]: I1125 11:37:03.138105 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:37:03 crc kubenswrapper[4706]: I1125 11:37:03.138120 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:37:03 crc kubenswrapper[4706]: I1125 11:37:03.138130 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:37:03Z","lastTransitionTime":"2025-11-25T11:37:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:37:03 crc kubenswrapper[4706]: I1125 11:37:03.142398 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad79bed891e80837fc120b01cb2b41a16493f2f5281c83a6bb489cc17c6da995\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:37:03Z is after 2025-08-24T17:21:41Z" Nov 25 11:37:03 crc kubenswrapper[4706]: I1125 11:37:03.155783 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-lpc7s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3ec2e656-a68d-4339-92d5-0c157f7f7783\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c3a1481dd8cb88b79d8addfbfd40caf18850769e4492c2af316105b7f6779f9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"res
tartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w54mf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T11:36:56Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lpc7s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:37:03Z is after 2025-08-24T17:21:41Z" Nov 25 11:37:03 crc kubenswrapper[4706]: I1125 11:37:03.177725 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-q9rpr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f1218bae-4153-4490-8847-ab2d07ca0ab6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:53Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:53Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da5cea02464a703174faaa2a8a7dc6ba3c26bca96be0219f7304d81aba5be54e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b55sf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e92e9ade6889e5400b3c3ddff066aa544d425cf0637b75071678b8c63f8e35f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b55sf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca28080773ed8c026159b2309297e1c8ccd7cf79c4c19e3a62d89bc5a95851fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b55sf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://86d79d5837993b0bfb40c7114fd69f45a9bfd2e956b5b0fe062706e920fecd48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:55Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b55sf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7df3bf6c507e0fd5fb0f32a8785d67c96f47255fdc5d2aafb8838260ac334d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b55sf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://96aa7fcebdc88f01d2260f95d255244e28c30d422f954da2222a5b7c17d05b96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b55sf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b1486d0475f4d248f425b711ee757032370a9bdddb8d33c83ba9db41549d1dd9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:37:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b55sf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://62c923d955013808a55d99cb73f4239900fc83a2f53e1e8cceff3e9bc5768188\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b55sf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56474d5374e1047078d38c60dcbd00f4495bcc0d651a9e75fa70d64e34b10baa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://56474d5374e1047078d38c60dcbd00f4495bcc0d651a9e75fa70d64e34b10baa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T11:36:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T11:36:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b55sf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T11:36:53Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-q9rpr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:37:03Z is after 2025-08-24T17:21:41Z" Nov 25 11:37:03 crc kubenswrapper[4706]: I1125 11:37:03.195394 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"363ff191-6229-47e9-a7d0-1c72f21e7c61\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://71b496da1a81efbb50a84766e610a6b03e032a4e2cb5a71191395ffb85f6b1f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026
b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://83b1d9c60793e3e0b5943d7cccd50656df78c4655b84e12c8dd1ba7d99a7990d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab8621c83015577b9039ac2ba9ce46f8b29f66d77da31a02d179132d923741bd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"
name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4d0ce4e175dd8da8d15b26e60ced87ee11dc8079ce730cfbdce1b3f4f08b1d2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T11:36:32Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:37:03Z is after 2025-08-24T17:21:41Z" Nov 25 11:37:03 crc kubenswrapper[4706]: I1125 11:37:03.211740 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://998291d5af3be798ff4e2f00d043f615e086fef44e541071bbaf781983955ce6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-11-25T11:37:03Z is after 2025-08-24T17:21:41Z" Nov 25 11:37:03 crc kubenswrapper[4706]: I1125 11:37:03.241968 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:37:03 crc kubenswrapper[4706]: I1125 11:37:03.242033 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:37:03 crc kubenswrapper[4706]: I1125 11:37:03.242045 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:37:03 crc kubenswrapper[4706]: I1125 11:37:03.242081 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:37:03 crc kubenswrapper[4706]: I1125 11:37:03.242092 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:37:03Z","lastTransitionTime":"2025-11-25T11:37:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:37:03 crc kubenswrapper[4706]: I1125 11:37:03.267329 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:51Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:37:03Z is after 2025-08-24T17:21:41Z" Nov 25 11:37:03 crc kubenswrapper[4706]: I1125 11:37:03.287707 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:51Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:37:03Z is after 2025-08-24T17:21:41Z" Nov 25 11:37:03 crc kubenswrapper[4706]: I1125 11:37:03.302329 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:51Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:37:03Z is after 2025-08-24T17:21:41Z" Nov 25 11:37:03 crc kubenswrapper[4706]: I1125 11:37:03.316648 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-s47nr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9912058e-28f5-4cec-9eeb-03e37e0dc5c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d03353478b53d9441951702b66365bb3a08ad9c509347472bbb31049851435a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wfqx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T11:36:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-s47nr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:37:03Z is after 2025-08-24T17:21:41Z" Nov 25 11:37:03 crc kubenswrapper[4706]: I1125 11:37:03.339149 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"21277b4b-1e5d-4345-ba2a-39957194f021\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e336808761e1c6c5eaa04fd06cbb4d0c0384a2cbd3dfd4c1b3a877e7e0f0c82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cfaf9f13d49eb5c52817b0d082263791cc1dca82a23282452f1393dd693ca27a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://634b7b0df29329562f6ead9641186eee129945efc5a2d784ff6474d213b2baea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b3642576d5ecf314b809b90f8a76244e5ea54178f78729eb6521b09b7daa9c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b63b9c87fed8e56acef62af3c5b75cf637a058ada9dd8ef5afc317e99e12162\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://adfea6c3d1d62c9ce2656cae203bccf32ab19165305db3731e3db92915dd7d4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://adfea6c3d1d62c9ce2656cae203bccf32ab19165305db3731e3db92915dd7d4c\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2025-11-25T11:36:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T11:36:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://29e8fd847ab683471a692fda5b7c7d6105db11c5aecf09bb23080c58cd97c06a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://29e8fd847ab683471a692fda5b7c7d6105db11c5aecf09bb23080c58cd97c06a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T11:36:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T11:36:34Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://a4b1f85fd0291c239854093c0b26f0802291cbe4c4bab384fc69a5d21165343a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a4b1f85fd0291c239854093c0b26f0802291cbe4c4bab384fc69a5d21165343a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T11:36:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T11:36:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T11:36:32Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:37:03Z is after 2025-08-24T17:21:41Z" Nov 25 11:37:03 crc kubenswrapper[4706]: I1125 11:37:03.344273 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:37:03 crc kubenswrapper[4706]: I1125 11:37:03.344338 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:37:03 crc kubenswrapper[4706]: I1125 11:37:03.344351 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:37:03 crc kubenswrapper[4706]: I1125 11:37:03.344371 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:37:03 crc kubenswrapper[4706]: I1125 11:37:03.344386 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:37:03Z","lastTransitionTime":"2025-11-25T11:37:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:37:03 crc kubenswrapper[4706]: I1125 11:37:03.353013 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://23abd4bcc68d2a090882edb55d0e8569032affe5f4ebf05279e18ba3e9f9d8db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a068e34d29a7f39157ffd6e364ce643f5280f5184c13a281043247117d451364\\\",\\\
"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:37:03Z is after 2025-08-24T17:21:41Z" Nov 25 11:37:03 crc kubenswrapper[4706]: I1125 11:37:03.369936 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-cjmvf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"150b96fa-570a-4b32-a82a-3275127d5b51\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:37:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:53Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:53Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2ml6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f9f9981b5f064aa5b007f4b2a2ecdc7f783e1a33e73b9e8b157eccfc54e93ff6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f9f9981b5f064aa5b007f4b2a2ecdc7f783e1a33e73b9e8b157eccfc54e93ff6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T11:36:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T11:36:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2ml6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e1e9db3e634932b935a1eb04923d02faf743f2831039edeba41d172ea6d8c52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e1e9db3e634932b935a1eb04923d02faf743f2831039edeba41d172ea6d8c52\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T11:36:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T11:36:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2ml6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0cee50b6983d9c650efbb5959311b6c33c2e0e2ff504fceadc8ff807f368c36e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0cee50b6983d9c650efbb5959311b6c33c2e0e2ff504fceadc8ff807f368c36e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T11:36:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T11:36:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2ml6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://29281b46d740a7e527313a667c3896430eb51ba2c50c5e406fb94d8959dbe855\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://29281b46d740a7e527313a667c3896430eb51ba2c50c5e406fb94d8959dbe855\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T11:36:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T11:36:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2ml6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0ff2d1408b3b635ada726fc15a15472d3fd7c61e21ffe0379d137fdd543c436\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b0ff2d1408b3b635ada726fc15a15472d3fd7c61e21ffe0379d137fdd543c436\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T11:37:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T11:37:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2ml6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c3b94746fe10e0f9375491a41d10973d2576eb69f0883cef3ef0132efb0e8fc9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"
ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c3b94746fe10e0f9375491a41d10973d2576eb69f0883cef3ef0132efb0e8fc9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T11:37:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T11:37:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2ml6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T11:36:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-cjmvf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:37:03Z is after 2025-08-24T17:21:41Z" Nov 25 11:37:03 crc kubenswrapper[4706]: I1125 11:37:03.385258 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ce0e2e75-834b-46fb-bc84-229e60f904b1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:32Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:32Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://86001c3abc077d36ed1fa0c37bb6163896fb9cde28b58affd2f67fb8a024165b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://24c326f147def477e6dd794576cbdc9aed69f799cc18984f475496748b05eb32\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c65af8b438f57256d8c22cb34f68922d628338e384ca97d694b0dbf2d41a5e27\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://db08dd21321e0e49c2bcec934b9c4ca65e93ed3eff5d3d110b0137d37ebe255e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://333951d9a31cf3e7c1e98d27f636e2425f87cd082a8a5acae66533a76f5ad206\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-25T11:36:51Z\\\",\\\"message\\\":\\\" shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI1125 11:36:51.292762 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" 
name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI1125 11:36:51.292767 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI1125 11:36:51.292853 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI1125 11:36:51.292876 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI1125 11:36:51.293041 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-1230105117/tls.crt::/tmp/serving-cert-1230105117/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1764070595\\\\\\\\\\\\\\\" (2025-11-25 11:36:34 +0000 UTC to 2025-12-25 11:36:35 +0000 UTC (now=2025-11-25 11:36:51.29301304 +0000 UTC))\\\\\\\"\\\\nI1125 11:36:51.293171 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1230105117/tls.crt::/tmp/serving-cert-1230105117/tls.key\\\\\\\"\\\\nI1125 11:36:51.293210 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1764070605\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1764070605\\\\\\\\\\\\\\\" (2025-11-25 10:36:45 +0000 UTC to 2026-11-25 10:36:45 +0000 UTC (now=2025-11-25 11:36:51.293188774 +0000 UTC))\\\\\\\"\\\\nI1125 11:36:51.293233 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI1125 11:36:51.293259 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI1125 11:36:51.293279 1 tlsconfig.go:243] \\\\\\\"Starting 
DynamicServingCertificateController\\\\\\\"\\\\nF1125 11:36:51.293378 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T11:36:34Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe85a38abd8df52ad0fbd3dd6b048b8c42390b6064d3601996727dadb3fcbe69\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:34Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea87e7399f4267877f5e967eb62bd500b646e88ee8ee20a71c3bf7c0941f02a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ea87e7399f4267877f5e967eb62bd500b646e88ee8ee20a
71c3bf7c0941f02a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T11:36:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T11:36:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T11:36:32Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:37:03Z is after 2025-08-24T17:21:41Z" Nov 25 11:37:03 crc kubenswrapper[4706]: I1125 11:37:03.397966 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-dhfpm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0930887a-320c-4506-8c9c-f94d6d64516a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://736e37ff944f81ac9808ff8a76d36837aeabc76a4c08bbeba3f707616e1f0884\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g7sgt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://86f4bfd310c27ea3b77c2f58c91e153db5f17948
71a3fbeb5711cc119aa81e38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g7sgt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T11:36:53Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-dhfpm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:37:03Z is after 2025-08-24T17:21:41Z" Nov 25 11:37:03 crc kubenswrapper[4706]: I1125 11:37:03.408541 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-nh9sc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7813e79d-885d-4cf1-ac27-039e998473b7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea634334242536d35bf36e9078539cad4658b161b61e6051d9bb6d8544e71f5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g9gvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T11:36:53Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-nh9sc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:37:03Z is after 2025-08-24T17:21:41Z" Nov 25 11:37:03 crc kubenswrapper[4706]: I1125 11:37:03.446744 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:37:03 crc kubenswrapper[4706]: I1125 11:37:03.446798 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:37:03 crc kubenswrapper[4706]: I1125 11:37:03.446811 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:37:03 crc kubenswrapper[4706]: I1125 11:37:03.446830 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:37:03 crc kubenswrapper[4706]: I1125 11:37:03.446842 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:37:03Z","lastTransitionTime":"2025-11-25T11:37:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:37:03 crc kubenswrapper[4706]: I1125 11:37:03.549119 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:37:03 crc kubenswrapper[4706]: I1125 11:37:03.549177 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:37:03 crc kubenswrapper[4706]: I1125 11:37:03.549188 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:37:03 crc kubenswrapper[4706]: I1125 11:37:03.549211 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:37:03 crc kubenswrapper[4706]: I1125 11:37:03.549223 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:37:03Z","lastTransitionTime":"2025-11-25T11:37:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:37:03 crc kubenswrapper[4706]: I1125 11:37:03.652983 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:37:03 crc kubenswrapper[4706]: I1125 11:37:03.653339 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:37:03 crc kubenswrapper[4706]: I1125 11:37:03.653353 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:37:03 crc kubenswrapper[4706]: I1125 11:37:03.653376 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:37:03 crc kubenswrapper[4706]: I1125 11:37:03.653389 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:37:03Z","lastTransitionTime":"2025-11-25T11:37:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:37:03 crc kubenswrapper[4706]: I1125 11:37:03.755622 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:37:03 crc kubenswrapper[4706]: I1125 11:37:03.755665 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:37:03 crc kubenswrapper[4706]: I1125 11:37:03.755675 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:37:03 crc kubenswrapper[4706]: I1125 11:37:03.755694 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:37:03 crc kubenswrapper[4706]: I1125 11:37:03.755705 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:37:03Z","lastTransitionTime":"2025-11-25T11:37:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:37:03 crc kubenswrapper[4706]: I1125 11:37:03.858757 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:37:03 crc kubenswrapper[4706]: I1125 11:37:03.858797 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:37:03 crc kubenswrapper[4706]: I1125 11:37:03.858807 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:37:03 crc kubenswrapper[4706]: I1125 11:37:03.858822 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:37:03 crc kubenswrapper[4706]: I1125 11:37:03.858833 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:37:03Z","lastTransitionTime":"2025-11-25T11:37:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 11:37:03 crc kubenswrapper[4706]: I1125 11:37:03.921608 4706 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 11:37:03 crc kubenswrapper[4706]: I1125 11:37:03.921679 4706 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 11:37:03 crc kubenswrapper[4706]: I1125 11:37:03.921743 4706 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 11:37:03 crc kubenswrapper[4706]: E1125 11:37:03.921792 4706 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 25 11:37:03 crc kubenswrapper[4706]: E1125 11:37:03.921927 4706 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 25 11:37:03 crc kubenswrapper[4706]: E1125 11:37:03.921998 4706 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 25 11:37:03 crc kubenswrapper[4706]: I1125 11:37:03.961809 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:37:03 crc kubenswrapper[4706]: I1125 11:37:03.962123 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:37:03 crc kubenswrapper[4706]: I1125 11:37:03.962253 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:37:03 crc kubenswrapper[4706]: I1125 11:37:03.962377 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:37:03 crc kubenswrapper[4706]: I1125 11:37:03.962477 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:37:03Z","lastTransitionTime":"2025-11-25T11:37:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:37:04 crc kubenswrapper[4706]: I1125 11:37:04.065410 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:37:04 crc kubenswrapper[4706]: I1125 11:37:04.065461 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:37:04 crc kubenswrapper[4706]: I1125 11:37:04.065469 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:37:04 crc kubenswrapper[4706]: I1125 11:37:04.065487 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:37:04 crc kubenswrapper[4706]: I1125 11:37:04.065496 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:37:04Z","lastTransitionTime":"2025-11-25T11:37:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:37:04 crc kubenswrapper[4706]: I1125 11:37:04.129296 4706 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-q9rpr_f1218bae-4153-4490-8847-ab2d07ca0ab6/ovnkube-controller/0.log" Nov 25 11:37:04 crc kubenswrapper[4706]: I1125 11:37:04.132527 4706 generic.go:334] "Generic (PLEG): container finished" podID="f1218bae-4153-4490-8847-ab2d07ca0ab6" containerID="b1486d0475f4d248f425b711ee757032370a9bdddb8d33c83ba9db41549d1dd9" exitCode=1 Nov 25 11:37:04 crc kubenswrapper[4706]: I1125 11:37:04.132615 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-q9rpr" event={"ID":"f1218bae-4153-4490-8847-ab2d07ca0ab6","Type":"ContainerDied","Data":"b1486d0475f4d248f425b711ee757032370a9bdddb8d33c83ba9db41549d1dd9"} Nov 25 11:37:04 crc kubenswrapper[4706]: I1125 11:37:04.133479 4706 scope.go:117] "RemoveContainer" containerID="b1486d0475f4d248f425b711ee757032370a9bdddb8d33c83ba9db41549d1dd9" Nov 25 11:37:04 crc kubenswrapper[4706]: I1125 11:37:04.136790 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-cjmvf" event={"ID":"150b96fa-570a-4b32-a82a-3275127d5b51","Type":"ContainerStarted","Data":"de18c07bf8490d7495947e9a271e3e7273b9ffdcc43afd2a0468394af0ae0b0d"} Nov 25 11:37:04 crc kubenswrapper[4706]: I1125 11:37:04.151887 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad79bed891e80837fc120b01cb2b41a16493f2f5281c83a6bb489cc17c6da995\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-11-25T11:37:04Z is after 2025-08-24T17:21:41Z" Nov 25 11:37:04 crc kubenswrapper[4706]: I1125 11:37:04.162199 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-lpc7s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3ec2e656-a68d-4339-92d5-0c157f7f7783\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c3a1481dd8cb88b79d8addfbfd40caf18850769e4492c2af316105b7f6779f9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"hos
t\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w54mf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T11:36:56Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lpc7s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:37:04Z is after 2025-08-24T17:21:41Z" Nov 25 11:37:04 crc kubenswrapper[4706]: I1125 11:37:04.170259 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:37:04 crc kubenswrapper[4706]: I1125 11:37:04.170333 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:37:04 crc kubenswrapper[4706]: I1125 11:37:04.170342 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:37:04 crc kubenswrapper[4706]: I1125 11:37:04.170361 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:37:04 crc kubenswrapper[4706]: I1125 11:37:04.170371 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:37:04Z","lastTransitionTime":"2025-11-25T11:37:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:37:04 crc kubenswrapper[4706]: I1125 11:37:04.174641 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:51Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:37:04Z is after 2025-08-24T17:21:41Z" Nov 25 11:37:04 crc kubenswrapper[4706]: I1125 11:37:04.189000 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-s47nr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9912058e-28f5-4cec-9eeb-03e37e0dc5c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d03353478b53d9441951702b66365bb3a08ad9c509347472bbb31049851435a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wfqx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T11:36:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-s47nr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:37:04Z is after 2025-08-24T17:21:41Z" Nov 25 11:37:04 crc kubenswrapper[4706]: I1125 11:37:04.209695 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-q9rpr" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f1218bae-4153-4490-8847-ab2d07ca0ab6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:53Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:53Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da5cea02464a703174faaa2a8a7dc6ba3c26bca96be0219f7304d81aba5be54e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b55sf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e92e9ade6889e5400b3c3ddff066aa544d425cf0637b75071678b8c63f8e35f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b55sf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca28080773ed8c026159b2309297e1c8ccd7cf79c4c19e3a62d89bc5a95851fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b55sf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://86d79d5837993b0bfb40c7114fd69f45a9bfd2e956b5b0fe062706e920fecd48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:55Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b55sf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7df3bf6c507e0fd5fb0f32a8785d67c96f47255fdc5d2aafb8838260ac334d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b55sf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://96aa7fcebdc88f01d2260f95d255244e28c30d422f954da2222a5b7c17d05b96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b55sf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b1486d0475f4d248f425b711ee757032370a9bdddb8d33c83ba9db41549d1dd9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b1486d0475f4d248f425b711ee757032370a9bdddb8d33c83ba9db41549d1dd9\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-25T11:37:03Z\\\",\\\"message\\\":\\\"ler/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140\\\\nI1125 11:37:03.877645 5942 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1125 
11:37:03.877722 5942 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1125 11:37:03.877827 5942 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI1125 11:37:03.877916 5942 reflector.go:311] Stopping reflector *v1.NetworkAttachmentDefinition (0s) from github.com/k8snetworkplumbingwg/network-attachment-definition-client/pkg/client/informers/externalversions/factory.go:117\\\\nI1125 11:37:03.878321 5942 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1125 11:37:03.878375 5942 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1125 11:37:03.878382 5942 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1125 11:37:03.878437 5942 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1125 11:37:03.878443 5942 factory.go:656] Stopping watch factory\\\\nI1125 11:37:03.878453 5942 handler.go:208] Removed *v1.Node event handler 2\\\\nI1125 11:37:03.878465 5942 ovnkube.go:599] Stopped 
ovnkube\\\\nI11\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T11:37:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"n
ame\\\":\\\"kube-api-access-b55sf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://62c923d955013808a55d99cb73f4239900fc83a2f53e1e8cceff3e9bc5768188\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b55sf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56474d5374e1047078d38c60dcbd00f4495bcc0d651a9e75fa70d64e34b10baa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://56474d5374e1047078d38c60dcbd00f4495bcc0d651a9e75fa70
d64e34b10baa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T11:36:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T11:36:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b55sf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T11:36:53Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-q9rpr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:37:04Z is after 2025-08-24T17:21:41Z" Nov 25 11:37:04 crc kubenswrapper[4706]: I1125 11:37:04.224349 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"363ff191-6229-47e9-a7d0-1c72f21e7c61\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://71b496da1a81efbb50a84766e610a6b03e032a4e2cb5a71191395ffb85f6b1f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://83b1d9c60793e3e0b5943d7cccd50656df78c4655b84e12c8dd1ba7d99a7990d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab8621c83015577b9039ac2ba9ce46f8b29f66d77da31a02d179132d923741bd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4d0ce4e175dd8da8d15b26e60ced87ee11dc8079ce730cfbdce1b3f4f08b1d2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-11-25T11:36:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T11:36:32Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:37:04Z is after 2025-08-24T17:21:41Z" Nov 25 11:37:04 crc kubenswrapper[4706]: I1125 11:37:04.241110 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://998291d5af3be798ff4e2f00d043f615e086fef44e541071bbaf781983955ce6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-11-25T11:37:04Z is after 2025-08-24T17:21:41Z" Nov 25 11:37:04 crc kubenswrapper[4706]: I1125 11:37:04.260285 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:51Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:37:04Z is after 2025-08-24T17:21:41Z" Nov 25 11:37:04 crc kubenswrapper[4706]: I1125 11:37:04.274862 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:37:04 crc kubenswrapper[4706]: I1125 11:37:04.274907 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:37:04 crc kubenswrapper[4706]: I1125 11:37:04.274918 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:37:04 crc kubenswrapper[4706]: I1125 11:37:04.274937 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:37:04 crc kubenswrapper[4706]: I1125 11:37:04.274947 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:37:04Z","lastTransitionTime":"2025-11-25T11:37:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 11:37:04 crc kubenswrapper[4706]: I1125 11:37:04.275317 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:37:04Z is after 2025-08-24T17:21:41Z" Nov 25 11:37:04 crc kubenswrapper[4706]: I1125 11:37:04.299133 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"21277b4b-1e5d-4345-ba2a-39957194f021\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e336808761e1c6c5eaa04fd06cbb4d0c0384a2cbd3dfd4c1b3a877e7e0f0c82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cfaf9f13d49eb5c52817b0d082263791cc1dca82a23282452f1393dd693ca27a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://634b7b0df29329562f6ead9641186eee129945efc5a2d784ff6474d213b2baea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b3642576d5ecf314b809b90f8a76244e5ea54178f78729eb6521b09b7daa9c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b63b9c87fed8e56acef62af3c5b75cf637a058ada9dd8ef5afc317e99e12162\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://adfea6c3d1d62c9ce2656cae203bccf32ab19165305db3731e3db92915dd7d4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://adfea6c3d1d62c9ce2656cae203bccf32ab19165305db3731e3db92915dd7d4c\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2025-11-25T11:36:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T11:36:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://29e8fd847ab683471a692fda5b7c7d6105db11c5aecf09bb23080c58cd97c06a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://29e8fd847ab683471a692fda5b7c7d6105db11c5aecf09bb23080c58cd97c06a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T11:36:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T11:36:34Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://a4b1f85fd0291c239854093c0b26f0802291cbe4c4bab384fc69a5d21165343a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a4b1f85fd0291c239854093c0b26f0802291cbe4c4bab384fc69a5d21165343a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T11:36:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T11:36:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T11:36:32Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:37:04Z is after 2025-08-24T17:21:41Z" Nov 25 11:37:04 crc kubenswrapper[4706]: I1125 11:37:04.314748 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://23abd4bcc68d2a090882edb55d0e8569032affe5f4ebf05279e18ba3e9f9d8db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount
\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a068e34d29a7f39157ffd6e364ce643f5280f5184c13a281043247117d451364\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:37:04Z is after 2025-08-24T17:21:41Z" Nov 25 11:37:04 crc kubenswrapper[4706]: I1125 11:37:04.330432 4706 
status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-cjmvf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"150b96fa-570a-4b32-a82a-3275127d5b51\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:37:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:53Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:53Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2ml6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f9f9981b5f064aa5b007f4b2a2ecdc7f783e1a33e73b9e8b157eccfc54e93ff6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f9f9981b5f064aa5b007f4b2a2ecdc7f783e1a33e73b9e8b157eccfc54e93ff6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T11:36:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T11:36:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2ml6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e1e9db3e634932b935a1eb04923d02faf743f2831039edeba41d172ea6d8c52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e1e9db3e634932b935a1eb04923d02faf743f2831039edeba41d172ea6d8c52\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T11:36:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T11:36:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2ml6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0cee50b6983d9c650efbb5959311b6c33c2e0e2ff504fceadc8ff807f368c36e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0cee50b6983d9c650efbb5959311b6c33c2e0e2ff504fceadc8ff807f368c36e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T11:36:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T11:36:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2ml6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://29281b46d740a7e527313a667c3896430eb51ba2c50c5e406fb94d8959dbe855\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://29281b46d740a7e527313a667c3896430eb51ba2c50c5e406fb94d8959dbe855\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T11:36:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T11:36:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2ml6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0ff2d1408b3b635ada726fc15a15472d3fd7c61e21ffe0379d137fdd543c436\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b0ff2d1408b3b635ada726fc15a15472d3fd7c61e21ffe0379d137fdd543c436\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T11:37:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T11:37:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2ml6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c3b94746fe10e0f9375491a41d10973d2576eb69f0883cef3ef0132efb0e8fc9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"
ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c3b94746fe10e0f9375491a41d10973d2576eb69f0883cef3ef0132efb0e8fc9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T11:37:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T11:37:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2ml6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T11:36:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-cjmvf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:37:04Z is after 2025-08-24T17:21:41Z" Nov 25 11:37:04 crc kubenswrapper[4706]: I1125 11:37:04.351226 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ce0e2e75-834b-46fb-bc84-229e60f904b1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:32Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:32Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://86001c3abc077d36ed1fa0c37bb6163896fb9cde28b58affd2f67fb8a024165b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://24c326f147def477e6dd794576cbdc9aed69f799cc18984f475496748b05eb32\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c65af8b438f57256d8c22cb34f68922d628338e384ca97d694b0dbf2d41a5e27\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://db08dd21321e0e49c2bcec934b9c4ca65e93ed3eff5d3d110b0137d37ebe255e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://333951d9a31cf3e7c1e98d27f636e2425f87cd082a8a5acae66533a76f5ad206\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-25T11:36:51Z\\\",\\\"message\\\":\\\" shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI1125 11:36:51.292762 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" 
name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI1125 11:36:51.292767 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI1125 11:36:51.292853 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI1125 11:36:51.292876 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI1125 11:36:51.293041 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-1230105117/tls.crt::/tmp/serving-cert-1230105117/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1764070595\\\\\\\\\\\\\\\" (2025-11-25 11:36:34 +0000 UTC to 2025-12-25 11:36:35 +0000 UTC (now=2025-11-25 11:36:51.29301304 +0000 UTC))\\\\\\\"\\\\nI1125 11:36:51.293171 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1230105117/tls.crt::/tmp/serving-cert-1230105117/tls.key\\\\\\\"\\\\nI1125 11:36:51.293210 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1764070605\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1764070605\\\\\\\\\\\\\\\" (2025-11-25 10:36:45 +0000 UTC to 2026-11-25 10:36:45 +0000 UTC (now=2025-11-25 11:36:51.293188774 +0000 UTC))\\\\\\\"\\\\nI1125 11:36:51.293233 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI1125 11:36:51.293259 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI1125 11:36:51.293279 1 tlsconfig.go:243] \\\\\\\"Starting 
DynamicServingCertificateController\\\\\\\"\\\\nF1125 11:36:51.293378 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T11:36:34Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe85a38abd8df52ad0fbd3dd6b048b8c42390b6064d3601996727dadb3fcbe69\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:34Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea87e7399f4267877f5e967eb62bd500b646e88ee8ee20a71c3bf7c0941f02a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ea87e7399f4267877f5e967eb62bd500b646e88ee8ee20a
71c3bf7c0941f02a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T11:36:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T11:36:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T11:36:32Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:37:04Z is after 2025-08-24T17:21:41Z" Nov 25 11:37:04 crc kubenswrapper[4706]: I1125 11:37:04.366889 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-dhfpm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0930887a-320c-4506-8c9c-f94d6d64516a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://736e37ff944f81ac9808ff8a76d36837aeabc76a4c08bbeba3f707616e1f0884\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g7sgt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://86f4bfd310c27ea3b77c2f58c91e153db5f17948
71a3fbeb5711cc119aa81e38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g7sgt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T11:36:53Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-dhfpm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:37:04Z is after 2025-08-24T17:21:41Z" Nov 25 11:37:04 crc kubenswrapper[4706]: I1125 11:37:04.377106 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:37:04 crc kubenswrapper[4706]: I1125 11:37:04.377155 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:37:04 crc kubenswrapper[4706]: I1125 11:37:04.377165 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:37:04 crc 
kubenswrapper[4706]: I1125 11:37:04.377216 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:37:04 crc kubenswrapper[4706]: I1125 11:37:04.377229 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:37:04Z","lastTransitionTime":"2025-11-25T11:37:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 11:37:04 crc kubenswrapper[4706]: I1125 11:37:04.381429 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-nh9sc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7813e79d-885d-4cf1-ac27-039e998473b7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea634334242536d35bf36e9078539cad4658b161b61e6051d9bb6d8544e71f5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7
f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g9gvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T11:36:53Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-nh9sc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:37:04Z is after 2025-08-24T17:21:41Z" Nov 25 11:37:04 crc kubenswrapper[4706]: I1125 11:37:04.402839 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"21277b4b-1e5d-4345-ba2a-39957194f021\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e336808761e1c6c5eaa04fd06cbb4d0c0384a2cbd3dfd4c1b3a877e7e0f0c82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cfaf9f13d49eb5c52817b0d082263791cc1dca82a23282452f1393dd693ca27a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://634b7b0df29329562f6ead9641186eee129945efc5a2d784ff6474d213b2baea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b3642576d5ecf314b809b90f8a76244e5ea54178f78729eb6521b09b7daa9c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b63b9c87fed8e56acef62af3c5b75cf637a058ada9dd8ef5afc317e99e12162\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://adfea6c3d1d62c9ce2656cae203bccf32ab19165305db3731e3db92915dd7d4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://adfea6c3d1d62c9ce2656cae203bccf32ab19165305db3731e3db92915dd7d4c\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2025-11-25T11:36:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T11:36:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://29e8fd847ab683471a692fda5b7c7d6105db11c5aecf09bb23080c58cd97c06a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://29e8fd847ab683471a692fda5b7c7d6105db11c5aecf09bb23080c58cd97c06a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T11:36:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T11:36:34Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://a4b1f85fd0291c239854093c0b26f0802291cbe4c4bab384fc69a5d21165343a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a4b1f85fd0291c239854093c0b26f0802291cbe4c4bab384fc69a5d21165343a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T11:36:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T11:36:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T11:36:32Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:37:04Z is after 2025-08-24T17:21:41Z" Nov 25 11:37:04 crc kubenswrapper[4706]: I1125 11:37:04.416858 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://23abd4bcc68d2a090882edb55d0e8569032affe5f4ebf05279e18ba3e9f9d8db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount
\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a068e34d29a7f39157ffd6e364ce643f5280f5184c13a281043247117d451364\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:37:04Z is after 2025-08-24T17:21:41Z" Nov 25 11:37:04 crc kubenswrapper[4706]: I1125 11:37:04.434156 4706 
status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-cjmvf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"150b96fa-570a-4b32-a82a-3275127d5b51\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:37:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:37:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:37:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de18c07bf8490d7495947e9a271e3e7273b9ffdcc43afd2a0468394af0ae0b0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:37:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2ml6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.16
8.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f9f9981b5f064aa5b007f4b2a2ecdc7f783e1a33e73b9e8b157eccfc54e93ff6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f9f9981b5f064aa5b007f4b2a2ecdc7f783e1a33e73b9e8b157eccfc54e93ff6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T11:36:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T11:36:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2ml6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e1e9db3e634932b935a1eb04923d02faf743f2831039edeba41d172ea6d8c52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e1e9db3e634932b935a1eb04923d02faf743f2831039edeba41d172ea6d8c52\\\",\\\"exit
Code\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T11:36:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T11:36:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2ml6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0cee50b6983d9c650efbb5959311b6c33c2e0e2ff504fceadc8ff807f368c36e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0cee50b6983d9c650efbb5959311b6c33c2e0e2ff504fceadc8ff807f368c36e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T11:36:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T11:36:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"n
ame\\\":\\\"kube-api-access-d2ml6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://29281b46d740a7e527313a667c3896430eb51ba2c50c5e406fb94d8959dbe855\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://29281b46d740a7e527313a667c3896430eb51ba2c50c5e406fb94d8959dbe855\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T11:36:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T11:36:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2ml6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0ff2d1408b3b635ada726fc15a15472d3fd7c61e21ffe0379d137fdd543c436\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b0ff2d1408b3b
635ada726fc15a15472d3fd7c61e21ffe0379d137fdd543c436\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T11:37:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T11:37:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2ml6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c3b94746fe10e0f9375491a41d10973d2576eb69f0883cef3ef0132efb0e8fc9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c3b94746fe10e0f9375491a41d10973d2576eb69f0883cef3ef0132efb0e8fc9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T11:37:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T11:37:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2ml6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\
\\":\\\"2025-11-25T11:36:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-cjmvf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:37:04Z is after 2025-08-24T17:21:41Z" Nov 25 11:37:04 crc kubenswrapper[4706]: I1125 11:37:04.450446 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ce0e2e75-834b-46fb-bc84-229e60f904b1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:32Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:32Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://86001c3abc077d36ed1fa0c37bb6163896fb9cde28b58affd2f67fb8a024165b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://24c326f147def477e6dd794576cbdc9aed69f799cc18984f475496748b05eb32\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://c65af8b438f57256d8c22cb34f68922d628338e384ca97d694b0dbf2d41a5e27\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://db08dd21321e0e49c2bcec934b9c4ca65e93ed3eff5d3d110b0137d37ebe255e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://333951d9a31cf3e7c1e98d27f636e2425f87cd082a8a5acae66533a76f5ad206\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-25T11:36:51Z\\\",\\\"message\\\":\\\" shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI1125 11:36:51.292762 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI1125 11:36:51.292767 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI1125 11:36:51.292853 1 
envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI1125 11:36:51.292876 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI1125 11:36:51.293041 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-1230105117/tls.crt::/tmp/serving-cert-1230105117/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1764070595\\\\\\\\\\\\\\\" (2025-11-25 11:36:34 +0000 UTC to 2025-12-25 11:36:35 +0000 UTC (now=2025-11-25 11:36:51.29301304 +0000 UTC))\\\\\\\"\\\\nI1125 11:36:51.293171 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1230105117/tls.crt::/tmp/serving-cert-1230105117/tls.key\\\\\\\"\\\\nI1125 11:36:51.293210 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1764070605\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1764070605\\\\\\\\\\\\\\\" (2025-11-25 10:36:45 +0000 UTC to 2026-11-25 10:36:45 +0000 UTC (now=2025-11-25 11:36:51.293188774 +0000 UTC))\\\\\\\"\\\\nI1125 11:36:51.293233 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI1125 11:36:51.293259 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI1125 11:36:51.293279 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nF1125 11:36:51.293378 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T11:36:34Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe85a38abd8df52ad0fbd3dd6b048b8c42390b6064d3601996727dadb3fcbe69\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:34Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea87e7399f4267877f5e967eb62bd500b646e88ee8ee20a71c3bf7c0941f02a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ea87e7399f4267877f5e967eb62bd500b646e88ee8ee20a71c3bf7c0941f02a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T11:36:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2025-11-25T11:36:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T11:36:32Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:37:04Z is after 2025-08-24T17:21:41Z" Nov 25 11:37:04 crc kubenswrapper[4706]: I1125 11:37:04.464455 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-dhfpm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0930887a-320c-4506-8c9c-f94d6d64516a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://736e37ff944f81ac9808ff8a76d36837aeabc76a4c08bbeba3f707616e1f0884\\\",\\\"image\\\"
:\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g7sgt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://86f4bfd310c27ea3b77c2f58c91e153db5f1794871a3fbeb5711cc119aa81e38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g7sgt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T11:36:53Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-dhfpm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:37:04Z is after 2025-08-24T17:21:41Z" Nov 25 11:37:04 crc kubenswrapper[4706]: I1125 11:37:04.477411 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-nh9sc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7813e79d-885d-4cf1-ac27-039e998473b7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea634334242536d35bf36e9078539cad4658b161b61e6051d9bb6d8544e71f5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready
\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g9gvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T11:36:53Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-nh9sc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:37:04Z is after 2025-08-24T17:21:41Z" Nov 25 11:37:04 crc kubenswrapper[4706]: I1125 11:37:04.480106 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:37:04 crc kubenswrapper[4706]: I1125 11:37:04.480135 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:37:04 crc kubenswrapper[4706]: I1125 11:37:04.480144 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:37:04 crc kubenswrapper[4706]: I1125 11:37:04.480159 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:37:04 crc kubenswrapper[4706]: I1125 11:37:04.480168 4706 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:37:04Z","lastTransitionTime":"2025-11-25T11:37:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 11:37:04 crc kubenswrapper[4706]: I1125 11:37:04.492208 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad79bed891e80837fc120b01cb2b41a16493f2f5281c83a6bb489cc17c6da995\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\
"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:37:04Z is after 2025-08-24T17:21:41Z" Nov 25 11:37:04 crc kubenswrapper[4706]: I1125 11:37:04.504203 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-lpc7s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3ec2e656-a68d-4339-92d5-0c157f7f7783\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c3a1481dd8cb88b79d8addfbfd40caf18850769e4492c2af316105b7f6779f9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4
.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w54mf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T11:36:56Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lpc7s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:37:04Z is after 2025-08-24T17:21:41Z" Nov 25 11:37:04 crc kubenswrapper[4706]: I1125 11:37:04.520022 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"363ff191-6229-47e9-a7d0-1c72f21e7c61\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://71b496da1a81efbb50a84766e610a6b03e032a4e2cb5a71191395ffb85f6b1f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://83b1d9c60793e3e0b5943d7cccd50656df78c4655b84e12c8dd1ba7d99a7990d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab8621c83015577b9039ac2ba9ce46f8b29f66d77da31a02d179132d923741bd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4d0ce4e175dd8da8d15b26e60ced87ee11dc8079ce730cfbdce1b3f4f08b1d2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-11-25T11:36:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T11:36:32Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:37:04Z is after 2025-08-24T17:21:41Z" Nov 25 11:37:04 crc kubenswrapper[4706]: I1125 11:37:04.557615 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://998291d5af3be798ff4e2f00d043f615e086fef44e541071bbaf781983955ce6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-11-25T11:37:04Z is after 2025-08-24T17:21:41Z" Nov 25 11:37:04 crc kubenswrapper[4706]: I1125 11:37:04.572468 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:51Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:37:04Z is after 2025-08-24T17:21:41Z" Nov 25 11:37:04 crc kubenswrapper[4706]: I1125 11:37:04.582571 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:37:04 crc kubenswrapper[4706]: I1125 11:37:04.582610 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:37:04 crc kubenswrapper[4706]: I1125 11:37:04.582621 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:37:04 crc kubenswrapper[4706]: I1125 11:37:04.582639 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:37:04 crc kubenswrapper[4706]: I1125 11:37:04.582652 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:37:04Z","lastTransitionTime":"2025-11-25T11:37:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 11:37:04 crc kubenswrapper[4706]: I1125 11:37:04.589479 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:37:04Z is after 2025-08-24T17:21:41Z" Nov 25 11:37:04 crc kubenswrapper[4706]: I1125 11:37:04.602791 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:51Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:37:04Z is after 2025-08-24T17:21:41Z" Nov 25 11:37:04 crc kubenswrapper[4706]: I1125 11:37:04.614781 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-s47nr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9912058e-28f5-4cec-9eeb-03e37e0dc5c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d03353478b53d9441951702b66365bb3a08ad9c509347472bbb31049851435a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wfqx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T11:36:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-s47nr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:37:04Z is after 2025-08-24T17:21:41Z" Nov 25 11:37:04 crc kubenswrapper[4706]: I1125 11:37:04.633793 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-q9rpr" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f1218bae-4153-4490-8847-ab2d07ca0ab6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:53Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:53Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da5cea02464a703174faaa2a8a7dc6ba3c26bca96be0219f7304d81aba5be54e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b55sf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e92e9ade6889e5400b3c3ddff066aa544d425cf0637b75071678b8c63f8e35f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b55sf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca28080773ed8c026159b2309297e1c8ccd7cf79c4c19e3a62d89bc5a95851fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b55sf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://86d79d5837993b0bfb40c7114fd69f45a9bfd2e956b5b0fe062706e920fecd48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:55Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b55sf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7df3bf6c507e0fd5fb0f32a8785d67c96f47255fdc5d2aafb8838260ac334d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b55sf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://96aa7fcebdc88f01d2260f95d255244e28c30d422f954da2222a5b7c17d05b96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b55sf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b1486d0475f4d248f425b711ee757032370a9bdddb8d33c83ba9db41549d1dd9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b1486d0475f4d248f425b711ee757032370a9bdddb8d33c83ba9db41549d1dd9\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-25T11:37:03Z\\\",\\\"message\\\":\\\"ler/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140\\\\nI1125 11:37:03.877645 5942 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1125 
11:37:03.877722 5942 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1125 11:37:03.877827 5942 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI1125 11:37:03.877916 5942 reflector.go:311] Stopping reflector *v1.NetworkAttachmentDefinition (0s) from github.com/k8snetworkplumbingwg/network-attachment-definition-client/pkg/client/informers/externalversions/factory.go:117\\\\nI1125 11:37:03.878321 5942 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1125 11:37:03.878375 5942 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1125 11:37:03.878382 5942 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1125 11:37:03.878437 5942 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1125 11:37:03.878443 5942 factory.go:656] Stopping watch factory\\\\nI1125 11:37:03.878453 5942 handler.go:208] Removed *v1.Node event handler 2\\\\nI1125 11:37:03.878465 5942 ovnkube.go:599] Stopped 
ovnkube\\\\nI11\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T11:37:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"n
ame\\\":\\\"kube-api-access-b55sf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://62c923d955013808a55d99cb73f4239900fc83a2f53e1e8cceff3e9bc5768188\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b55sf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56474d5374e1047078d38c60dcbd00f4495bcc0d651a9e75fa70d64e34b10baa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://56474d5374e1047078d38c60dcbd00f4495bcc0d651a9e75fa70
d64e34b10baa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T11:36:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T11:36:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b55sf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T11:36:53Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-q9rpr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:37:04Z is after 2025-08-24T17:21:41Z" Nov 25 11:37:04 crc kubenswrapper[4706]: I1125 11:37:04.685422 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:37:04 crc kubenswrapper[4706]: I1125 11:37:04.685491 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:37:04 crc kubenswrapper[4706]: I1125 11:37:04.685503 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:37:04 crc kubenswrapper[4706]: I1125 11:37:04.685521 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:37:04 crc kubenswrapper[4706]: I1125 11:37:04.685534 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:37:04Z","lastTransitionTime":"2025-11-25T11:37:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 11:37:04 crc kubenswrapper[4706]: I1125 11:37:04.788748 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:37:04 crc kubenswrapper[4706]: I1125 11:37:04.788811 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:37:04 crc kubenswrapper[4706]: I1125 11:37:04.788828 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:37:04 crc kubenswrapper[4706]: I1125 11:37:04.788853 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:37:04 crc kubenswrapper[4706]: I1125 11:37:04.788869 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:37:04Z","lastTransitionTime":"2025-11-25T11:37:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:37:04 crc kubenswrapper[4706]: I1125 11:37:04.891339 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:37:04 crc kubenswrapper[4706]: I1125 11:37:04.891386 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:37:04 crc kubenswrapper[4706]: I1125 11:37:04.891395 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:37:04 crc kubenswrapper[4706]: I1125 11:37:04.891414 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:37:04 crc kubenswrapper[4706]: I1125 11:37:04.891425 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:37:04Z","lastTransitionTime":"2025-11-25T11:37:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:37:04 crc kubenswrapper[4706]: I1125 11:37:04.994109 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:37:04 crc kubenswrapper[4706]: I1125 11:37:04.994467 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:37:04 crc kubenswrapper[4706]: I1125 11:37:04.994565 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:37:04 crc kubenswrapper[4706]: I1125 11:37:04.994650 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:37:04 crc kubenswrapper[4706]: I1125 11:37:04.994709 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:37:04Z","lastTransitionTime":"2025-11-25T11:37:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:37:05 crc kubenswrapper[4706]: I1125 11:37:05.098059 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:37:05 crc kubenswrapper[4706]: I1125 11:37:05.098383 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:37:05 crc kubenswrapper[4706]: I1125 11:37:05.098465 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:37:05 crc kubenswrapper[4706]: I1125 11:37:05.098540 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:37:05 crc kubenswrapper[4706]: I1125 11:37:05.098599 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:37:05Z","lastTransitionTime":"2025-11-25T11:37:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:37:05 crc kubenswrapper[4706]: I1125 11:37:05.141041 4706 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-q9rpr_f1218bae-4153-4490-8847-ab2d07ca0ab6/ovnkube-controller/1.log" Nov 25 11:37:05 crc kubenswrapper[4706]: I1125 11:37:05.141602 4706 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-q9rpr_f1218bae-4153-4490-8847-ab2d07ca0ab6/ovnkube-controller/0.log" Nov 25 11:37:05 crc kubenswrapper[4706]: I1125 11:37:05.143847 4706 generic.go:334] "Generic (PLEG): container finished" podID="f1218bae-4153-4490-8847-ab2d07ca0ab6" containerID="408d84ea146425bb2b2ac6cfb181cd139a8465caa12eb3d4b0e2b738d1f52484" exitCode=1 Nov 25 11:37:05 crc kubenswrapper[4706]: I1125 11:37:05.143902 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-q9rpr" event={"ID":"f1218bae-4153-4490-8847-ab2d07ca0ab6","Type":"ContainerDied","Data":"408d84ea146425bb2b2ac6cfb181cd139a8465caa12eb3d4b0e2b738d1f52484"} Nov 25 11:37:05 crc kubenswrapper[4706]: I1125 11:37:05.143947 4706 scope.go:117] "RemoveContainer" containerID="b1486d0475f4d248f425b711ee757032370a9bdddb8d33c83ba9db41549d1dd9" Nov 25 11:37:05 crc kubenswrapper[4706]: I1125 11:37:05.144760 4706 scope.go:117] "RemoveContainer" containerID="408d84ea146425bb2b2ac6cfb181cd139a8465caa12eb3d4b0e2b738d1f52484" Nov 25 11:37:05 crc kubenswrapper[4706]: E1125 11:37:05.144930 4706 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-q9rpr_openshift-ovn-kubernetes(f1218bae-4153-4490-8847-ab2d07ca0ab6)\"" pod="openshift-ovn-kubernetes/ovnkube-node-q9rpr" podUID="f1218bae-4153-4490-8847-ab2d07ca0ab6" Nov 25 11:37:05 crc kubenswrapper[4706]: I1125 11:37:05.167346 4706 status_manager.go:875] "Failed to update 
status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"21277b4b-1e5d-4345-ba2a-39957194f021\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e336808761e1c6c5eaa04fd06cbb4d0c0384a2cbd3dfd4c1b3a877e7e0f0c82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cfaf9f13d49eb5c52817b0d082263791cc1dca82a23282452f139
3dd693ca27a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://634b7b0df29329562f6ead9641186eee129945efc5a2d784ff6474d213b2baea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b3642576d5ecf314b809b90f8a76244e5ea54178f78729eb6521b09b7daa9c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"re
ady\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b63b9c87fed8e56acef62af3c5b75cf637a058ada9dd8ef5afc317e99e12162\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://adfea6c3d1d62c9ce2656cae203bccf32ab19165305db3731e3db92915dd7d4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://adfea6c3d1d62c9ce2656cae203bc
cf32ab19165305db3731e3db92915dd7d4c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T11:36:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T11:36:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://29e8fd847ab683471a692fda5b7c7d6105db11c5aecf09bb23080c58cd97c06a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://29e8fd847ab683471a692fda5b7c7d6105db11c5aecf09bb23080c58cd97c06a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T11:36:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T11:36:34Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://a4b1f85fd0291c239854093c0b26f0802291cbe4c4bab384fc69a5d21165343a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a4b1f85fd0291c239854093c0b26f0802291cbe4c4bab384fc69a5d21165343a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T11:36:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T11:36:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}
,{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T11:36:32Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:37:05Z is after 2025-08-24T17:21:41Z" Nov 25 11:37:05 crc kubenswrapper[4706]: I1125 11:37:05.179547 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://23abd4bcc68d2a090882edb55d0e8569032affe5f4ebf05279e18ba3e9f9d8db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastStat
e\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a068e34d29a7f39157ffd6e364ce643f5280f5184c13a281043247117d451364\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:37:05Z is after 2025-08-24T17:21:41Z" Nov 
25 11:37:05 crc kubenswrapper[4706]: I1125 11:37:05.193051 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-cjmvf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"150b96fa-570a-4b32-a82a-3275127d5b51\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:37:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:37:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:37:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de18c07bf8490d7495947e9a271e3e7273b9ffdcc43afd2a0468394af0ae0b0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:37:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2ml6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\
":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f9f9981b5f064aa5b007f4b2a2ecdc7f783e1a33e73b9e8b157eccfc54e93ff6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f9f9981b5f064aa5b007f4b2a2ecdc7f783e1a33e73b9e8b157eccfc54e93ff6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T11:36:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T11:36:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2ml6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e1e9db3e634932b935a1eb04923d02faf743f2831039edeba41d172ea6d8c52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e1e9db3e6349
32b935a1eb04923d02faf743f2831039edeba41d172ea6d8c52\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T11:36:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T11:36:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2ml6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0cee50b6983d9c650efbb5959311b6c33c2e0e2ff504fceadc8ff807f368c36e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0cee50b6983d9c650efbb5959311b6c33c2e0e2ff504fceadc8ff807f368c36e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T11:36:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T11:36:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath
\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2ml6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://29281b46d740a7e527313a667c3896430eb51ba2c50c5e406fb94d8959dbe855\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://29281b46d740a7e527313a667c3896430eb51ba2c50c5e406fb94d8959dbe855\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T11:36:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T11:36:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2ml6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0ff2d1408b3b635ada726fc15a15472d3fd7c61e21ffe0379d137fdd543c436\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\
\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b0ff2d1408b3b635ada726fc15a15472d3fd7c61e21ffe0379d137fdd543c436\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T11:37:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T11:37:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2ml6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c3b94746fe10e0f9375491a41d10973d2576eb69f0883cef3ef0132efb0e8fc9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c3b94746fe10e0f9375491a41d10973d2576eb69f0883cef3ef0132efb0e8fc9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T11:37:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T11:37:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2ml6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\
"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T11:36:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-cjmvf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:37:05Z is after 2025-08-24T17:21:41Z" Nov 25 11:37:05 crc kubenswrapper[4706]: I1125 11:37:05.200755 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:37:05 crc kubenswrapper[4706]: I1125 11:37:05.200959 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:37:05 crc kubenswrapper[4706]: I1125 11:37:05.201045 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:37:05 crc kubenswrapper[4706]: I1125 11:37:05.201131 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:37:05 crc kubenswrapper[4706]: I1125 11:37:05.201235 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:37:05Z","lastTransitionTime":"2025-11-25T11:37:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:37:05 crc kubenswrapper[4706]: I1125 11:37:05.207115 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ce0e2e75-834b-46fb-bc84-229e60f904b1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:32Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:32Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://86001c3abc077d36ed1fa0c37bb6163896fb9cde28b58affd2f67fb8a024165b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://24c326f147def477e6dd794576cbdc9aed69f799cc18984f475496748b05eb32\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://c65af8b438f57256d8c22cb34f68922d628338e384ca97d694b0dbf2d41a5e27\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://db08dd21321e0e49c2bcec934b9c4ca65e93ed3eff5d3d110b0137d37ebe255e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://333951d9a31cf3e7c1e98d27f636e2425f87cd082a8a5acae66533a76f5ad206\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-25T11:36:51Z\\\",\\\"message\\\":\\\" shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI1125 11:36:51.292762 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI1125 11:36:51.292767 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI1125 11:36:51.292853 1 
envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI1125 11:36:51.292876 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI1125 11:36:51.293041 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-1230105117/tls.crt::/tmp/serving-cert-1230105117/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1764070595\\\\\\\\\\\\\\\" (2025-11-25 11:36:34 +0000 UTC to 2025-12-25 11:36:35 +0000 UTC (now=2025-11-25 11:36:51.29301304 +0000 UTC))\\\\\\\"\\\\nI1125 11:36:51.293171 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1230105117/tls.crt::/tmp/serving-cert-1230105117/tls.key\\\\\\\"\\\\nI1125 11:36:51.293210 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1764070605\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1764070605\\\\\\\\\\\\\\\" (2025-11-25 10:36:45 +0000 UTC to 2026-11-25 10:36:45 +0000 UTC (now=2025-11-25 11:36:51.293188774 +0000 UTC))\\\\\\\"\\\\nI1125 11:36:51.293233 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI1125 11:36:51.293259 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI1125 11:36:51.293279 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nF1125 11:36:51.293378 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T11:36:34Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe85a38abd8df52ad0fbd3dd6b048b8c42390b6064d3601996727dadb3fcbe69\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:34Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea87e7399f4267877f5e967eb62bd500b646e88ee8ee20a71c3bf7c0941f02a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ea87e7399f4267877f5e967eb62bd500b646e88ee8ee20a71c3bf7c0941f02a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T11:36:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2025-11-25T11:36:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T11:36:32Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:37:05Z is after 2025-08-24T17:21:41Z" Nov 25 11:37:05 crc kubenswrapper[4706]: I1125 11:37:05.218939 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-dhfpm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0930887a-320c-4506-8c9c-f94d6d64516a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://736e37ff944f81ac9808ff8a76d36837aeabc76a4c08bbeba3f707616e1f0884\\\",\\\"image\\\"
:\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g7sgt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://86f4bfd310c27ea3b77c2f58c91e153db5f1794871a3fbeb5711cc119aa81e38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g7sgt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T11:36:53Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-dhfpm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:37:05Z is after 2025-08-24T17:21:41Z" Nov 25 11:37:05 crc kubenswrapper[4706]: I1125 11:37:05.229842 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-nh9sc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7813e79d-885d-4cf1-ac27-039e998473b7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea634334242536d35bf36e9078539cad4658b161b61e6051d9bb6d8544e71f5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready
\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g9gvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T11:36:53Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-nh9sc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:37:05Z is after 2025-08-24T17:21:41Z" Nov 25 11:37:05 crc kubenswrapper[4706]: I1125 11:37:05.242190 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad79bed891e80837fc120b01cb2b41a16493f2f5281c83a6bb489cc17c6da995\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-11-25T11:37:05Z is after 2025-08-24T17:21:41Z" Nov 25 11:37:05 crc kubenswrapper[4706]: I1125 11:37:05.255481 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-lpc7s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3ec2e656-a68d-4339-92d5-0c157f7f7783\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c3a1481dd8cb88b79d8addfbfd40caf18850769e4492c2af316105b7f6779f9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"hos
t\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w54mf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T11:36:56Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lpc7s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:37:05Z is after 2025-08-24T17:21:41Z" Nov 25 11:37:05 crc kubenswrapper[4706]: I1125 11:37:05.276006 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:51Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:37:05Z is after 2025-08-24T17:21:41Z" Nov 25 11:37:05 crc kubenswrapper[4706]: I1125 11:37:05.291767 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-s47nr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9912058e-28f5-4cec-9eeb-03e37e0dc5c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d03353478b53d9441951702b66365bb3a08ad9c509347472bbb31049851435a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wfqx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T11:36:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-s47nr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:37:05Z is after 2025-08-24T17:21:41Z" Nov 25 11:37:05 crc kubenswrapper[4706]: I1125 11:37:05.304464 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:37:05 crc 
kubenswrapper[4706]: I1125 11:37:05.304527 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:37:05 crc kubenswrapper[4706]: I1125 11:37:05.304538 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:37:05 crc kubenswrapper[4706]: I1125 11:37:05.304556 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:37:05 crc kubenswrapper[4706]: I1125 11:37:05.304568 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:37:05Z","lastTransitionTime":"2025-11-25T11:37:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 11:37:05 crc kubenswrapper[4706]: I1125 11:37:05.309998 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-q9rpr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f1218bae-4153-4490-8847-ab2d07ca0ab6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:53Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:53Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da5cea02464a703174faaa2a8a7dc6ba3c26bca96be0219f7304d81aba5be54e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b55sf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e92e9ade6889e5400b3c3ddff066aa544d425cf0637b75071678b8c63f8e35f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b55sf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca28080773ed8c026159b2309297e1c8ccd7cf79c4c19e3a62d89bc5a95851fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b55sf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://86d79d5837993b0bfb40c7114fd69f45a9bfd2e956b5b0fe062706e920fecd48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b55sf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7df3bf6c507e0fd5fb0f32a8785d67c96f47255fdc5d2aafb8838260ac334d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b55sf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://96aa7fcebdc88f01d2260f95d255244e28c30d422f954da2222a5b7c17d05b96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b55sf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://408d84ea146425bb2b2ac6cfb181cd139a8465caa12eb3d4b0e2b738d1f52484\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b1486d0475f4d248f425b711ee757032370a9bdddb8d33c83ba9db41549d1dd9\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-25T11:37:03Z\\\",\\\"message\\\":\\\"ler/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140\\\\nI1125 
11:37:03.877645 5942 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1125 11:37:03.877722 5942 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1125 11:37:03.877827 5942 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI1125 11:37:03.877916 5942 reflector.go:311] Stopping reflector *v1.NetworkAttachmentDefinition (0s) from github.com/k8snetworkplumbingwg/network-attachment-definition-client/pkg/client/informers/externalversions/factory.go:117\\\\nI1125 11:37:03.878321 5942 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1125 11:37:03.878375 5942 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1125 11:37:03.878382 5942 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1125 11:37:03.878437 5942 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1125 11:37:03.878443 5942 factory.go:656] Stopping watch factory\\\\nI1125 11:37:03.878453 5942 handler.go:208] Removed *v1.Node event handler 2\\\\nI1125 11:37:03.878465 5942 ovnkube.go:599] Stopped ovnkube\\\\nI11\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T11:37:00Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://408d84ea146425bb2b2ac6cfb181cd139a8465caa12eb3d4b0e2b738d1f52484\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-25T11:37:05Z\\\",\\\"message\\\":\\\"alse]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.150:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {6ea1fd71-2b40-4361-92ee-3f1ab4ec7414}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e 
Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI1125 11:37:04.999467 6129 obj_retry.go:409] Going to retry *v1.Pod resource setup for 14 objects: [openshift-kube-controller-manager/kube-controller-manager-crc openshift-machine-config-operator/machine-config-daemon-dhfpm openshift-multus/multus-additional-cni-plugins-cjmvf openshift-network-diagnostics/network-check-source-55646444c4-trplf openshift-dns/node-resolver-nh9sc openshift-image-registry/node-ca-lpc7s openshift-multus/multus-s47nr openshift-network-diagnostics/network-check-target-xd92c openshift-ovn-kubernetes/ovnkube-node-q9rpr openshift-etcd/etcd-crc openshift-network-console/networking-console-plugin-85b44fc459-gdk6g openshift-network-node-identity/network-node-identity-vrzqb openshift-kube-apiserver/kube-apiserver-crc openshift-network-operator/iptables-alerter-4ln5h]\\\\nF1125 11:37:04.999486 6129 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller 
ini\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T11:37:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"
kube-api-access-b55sf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://62c923d955013808a55d99cb73f4239900fc83a2f53e1e8cceff3e9bc5768188\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b55sf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56474d5374e1047078d38c60dcbd00f4495bcc0d651a9e75fa70d64e34b10baa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://56474d5374e1047078d38c60dcbd00f4495bcc0d651a9e75fa70d64e34b10baa
\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T11:36:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T11:36:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b55sf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T11:36:53Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-q9rpr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:37:05Z is after 2025-08-24T17:21:41Z" Nov 25 11:37:05 crc kubenswrapper[4706]: I1125 11:37:05.323977 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"363ff191-6229-47e9-a7d0-1c72f21e7c61\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://71b496da1a81efbb50a84766e610a6b03e032a4e2cb5a71191395ffb85f6b1f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://83b1d9c60793e3e0b5943d7cccd50656df78c4655b84e12c8dd1ba7d99a7990d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab8621c83015577b9039ac2ba9ce46f8b29f66d77da31a02d179132d923741bd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4d0ce4e175dd8da8d15b26e60ced87ee11dc8079ce730cfbdce1b3f4f08b1d2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-11-25T11:36:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T11:36:32Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:37:05Z is after 2025-08-24T17:21:41Z" Nov 25 11:37:05 crc kubenswrapper[4706]: I1125 11:37:05.338374 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://998291d5af3be798ff4e2f00d043f615e086fef44e541071bbaf781983955ce6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-11-25T11:37:05Z is after 2025-08-24T17:21:41Z" Nov 25 11:37:05 crc kubenswrapper[4706]: I1125 11:37:05.351271 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:51Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:37:05Z is after 2025-08-24T17:21:41Z" Nov 25 11:37:05 crc kubenswrapper[4706]: I1125 11:37:05.362158 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:51Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:37:05Z is after 2025-08-24T17:21:41Z" Nov 25 11:37:05 crc kubenswrapper[4706]: I1125 11:37:05.409105 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:37:05 crc kubenswrapper[4706]: I1125 11:37:05.409160 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:37:05 crc kubenswrapper[4706]: I1125 11:37:05.409172 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:37:05 crc 
kubenswrapper[4706]: I1125 11:37:05.409198 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:37:05 crc kubenswrapper[4706]: I1125 11:37:05.409210 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:37:05Z","lastTransitionTime":"2025-11-25T11:37:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 11:37:05 crc kubenswrapper[4706]: I1125 11:37:05.511828 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:37:05 crc kubenswrapper[4706]: I1125 11:37:05.511876 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:37:05 crc kubenswrapper[4706]: I1125 11:37:05.511887 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:37:05 crc kubenswrapper[4706]: I1125 11:37:05.511908 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:37:05 crc kubenswrapper[4706]: I1125 11:37:05.511918 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:37:05Z","lastTransitionTime":"2025-11-25T11:37:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:37:05 crc kubenswrapper[4706]: I1125 11:37:05.615049 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:37:05 crc kubenswrapper[4706]: I1125 11:37:05.615096 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:37:05 crc kubenswrapper[4706]: I1125 11:37:05.615105 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:37:05 crc kubenswrapper[4706]: I1125 11:37:05.615122 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:37:05 crc kubenswrapper[4706]: I1125 11:37:05.615134 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:37:05Z","lastTransitionTime":"2025-11-25T11:37:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:37:05 crc kubenswrapper[4706]: I1125 11:37:05.717873 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:37:05 crc kubenswrapper[4706]: I1125 11:37:05.717960 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:37:05 crc kubenswrapper[4706]: I1125 11:37:05.717972 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:37:05 crc kubenswrapper[4706]: I1125 11:37:05.717992 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:37:05 crc kubenswrapper[4706]: I1125 11:37:05.718006 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:37:05Z","lastTransitionTime":"2025-11-25T11:37:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:37:05 crc kubenswrapper[4706]: I1125 11:37:05.821095 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:37:05 crc kubenswrapper[4706]: I1125 11:37:05.821136 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:37:05 crc kubenswrapper[4706]: I1125 11:37:05.821147 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:37:05 crc kubenswrapper[4706]: I1125 11:37:05.821163 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:37:05 crc kubenswrapper[4706]: I1125 11:37:05.821172 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:37:05Z","lastTransitionTime":"2025-11-25T11:37:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 11:37:05 crc kubenswrapper[4706]: I1125 11:37:05.921478 4706 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 11:37:05 crc kubenswrapper[4706]: I1125 11:37:05.921537 4706 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 11:37:05 crc kubenswrapper[4706]: E1125 11:37:05.921672 4706 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 25 11:37:05 crc kubenswrapper[4706]: I1125 11:37:05.921741 4706 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 11:37:05 crc kubenswrapper[4706]: E1125 11:37:05.921843 4706 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 25 11:37:05 crc kubenswrapper[4706]: E1125 11:37:05.922010 4706 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 25 11:37:05 crc kubenswrapper[4706]: I1125 11:37:05.924691 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:37:05 crc kubenswrapper[4706]: I1125 11:37:05.924743 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:37:05 crc kubenswrapper[4706]: I1125 11:37:05.924756 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:37:05 crc kubenswrapper[4706]: I1125 11:37:05.924775 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:37:05 crc kubenswrapper[4706]: I1125 11:37:05.924789 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:37:05Z","lastTransitionTime":"2025-11-25T11:37:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:37:06 crc kubenswrapper[4706]: I1125 11:37:06.027484 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:37:06 crc kubenswrapper[4706]: I1125 11:37:06.027521 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:37:06 crc kubenswrapper[4706]: I1125 11:37:06.027531 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:37:06 crc kubenswrapper[4706]: I1125 11:37:06.027550 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:37:06 crc kubenswrapper[4706]: I1125 11:37:06.027563 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:37:06Z","lastTransitionTime":"2025-11-25T11:37:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:37:06 crc kubenswrapper[4706]: I1125 11:37:06.130471 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:37:06 crc kubenswrapper[4706]: I1125 11:37:06.130839 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:37:06 crc kubenswrapper[4706]: I1125 11:37:06.130932 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:37:06 crc kubenswrapper[4706]: I1125 11:37:06.130996 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:37:06 crc kubenswrapper[4706]: I1125 11:37:06.131053 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:37:06Z","lastTransitionTime":"2025-11-25T11:37:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:37:06 crc kubenswrapper[4706]: I1125 11:37:06.149269 4706 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-q9rpr_f1218bae-4153-4490-8847-ab2d07ca0ab6/ovnkube-controller/1.log" Nov 25 11:37:06 crc kubenswrapper[4706]: I1125 11:37:06.234027 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:37:06 crc kubenswrapper[4706]: I1125 11:37:06.234062 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:37:06 crc kubenswrapper[4706]: I1125 11:37:06.234071 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:37:06 crc kubenswrapper[4706]: I1125 11:37:06.234090 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:37:06 crc kubenswrapper[4706]: I1125 11:37:06.234099 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:37:06Z","lastTransitionTime":"2025-11-25T11:37:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:37:06 crc kubenswrapper[4706]: I1125 11:37:06.336168 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:37:06 crc kubenswrapper[4706]: I1125 11:37:06.336230 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:37:06 crc kubenswrapper[4706]: I1125 11:37:06.336254 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:37:06 crc kubenswrapper[4706]: I1125 11:37:06.336276 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:37:06 crc kubenswrapper[4706]: I1125 11:37:06.336289 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:37:06Z","lastTransitionTime":"2025-11-25T11:37:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:37:06 crc kubenswrapper[4706]: I1125 11:37:06.438922 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:37:06 crc kubenswrapper[4706]: I1125 11:37:06.438973 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:37:06 crc kubenswrapper[4706]: I1125 11:37:06.438985 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:37:06 crc kubenswrapper[4706]: I1125 11:37:06.439005 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:37:06 crc kubenswrapper[4706]: I1125 11:37:06.439018 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:37:06Z","lastTransitionTime":"2025-11-25T11:37:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:37:06 crc kubenswrapper[4706]: I1125 11:37:06.541507 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:37:06 crc kubenswrapper[4706]: I1125 11:37:06.541556 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:37:06 crc kubenswrapper[4706]: I1125 11:37:06.541569 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:37:06 crc kubenswrapper[4706]: I1125 11:37:06.541590 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:37:06 crc kubenswrapper[4706]: I1125 11:37:06.541605 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:37:06Z","lastTransitionTime":"2025-11-25T11:37:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 11:37:06 crc kubenswrapper[4706]: I1125 11:37:06.557554 4706 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-qkkfz"] Nov 25 11:37:06 crc kubenswrapper[4706]: I1125 11:37:06.558110 4706 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-qkkfz" Nov 25 11:37:06 crc kubenswrapper[4706]: I1125 11:37:06.560345 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Nov 25 11:37:06 crc kubenswrapper[4706]: I1125 11:37:06.560560 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Nov 25 11:37:06 crc kubenswrapper[4706]: I1125 11:37:06.574594 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-dhfpm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0930887a-320c-4506-8c9c-f94d6d64516a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://736e37ff944f81ac9808ff8a76d36837aeabc76a4c08bbeba3f707616e1f0884\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66
438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g7sgt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://86f4bfd310c27ea3b77c2f58c91e153db5f1794871a3fbeb5711cc119aa81e38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g7sgt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T11:36:53Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-dhfpm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:37:06Z is after 2025-08-24T17:21:41Z" Nov 25 11:37:06 crc kubenswrapper[4706]: I1125 11:37:06.584964 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-nh9sc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7813e79d-885d-4cf1-ac27-039e998473b7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea634334242536d35bf36e9078539cad4658b161b61e6051d9bb6d8544e71f5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\
\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g9gvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T11:36:53Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-nh9sc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:37:06Z is after 2025-08-24T17:21:41Z" Nov 25 11:37:06 crc kubenswrapper[4706]: I1125 11:37:06.596858 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-qkkfz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dc09de93-57e8-4697-8ce8-70bfc1b693e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:37:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:37:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:37:06Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:37:06Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hmrl8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hmrl8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168
.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T11:37:06Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-qkkfz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:37:06Z is after 2025-08-24T17:21:41Z" Nov 25 11:37:06 crc kubenswrapper[4706]: I1125 11:37:06.612976 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ce0e2e75-834b-46fb-bc84-229e60f904b1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:32Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:32Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://86001c3abc077d36ed1fa0c37bb6163896fb9cde28b58affd2f67fb8a024165b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://24c326f147def477e6dd794576cbdc9aed69f799cc18984f475496748b05eb32\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://c65af8b438f57256d8c22cb34f68922d628338e384ca97d694b0dbf2d41a5e27\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://db08dd21321e0e49c2bcec934b9c4ca65e93ed3eff5d3d110b0137d37ebe255e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://333951d9a31cf3e7c1e98d27f636e2425f87cd082a8a5acae66533a76f5ad206\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-25T11:36:51Z\\\",\\\"message\\\":\\\" shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI1125 11:36:51.292762 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI1125 11:36:51.292767 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI1125 11:36:51.292853 1 
envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI1125 11:36:51.292876 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI1125 11:36:51.293041 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-1230105117/tls.crt::/tmp/serving-cert-1230105117/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1764070595\\\\\\\\\\\\\\\" (2025-11-25 11:36:34 +0000 UTC to 2025-12-25 11:36:35 +0000 UTC (now=2025-11-25 11:36:51.29301304 +0000 UTC))\\\\\\\"\\\\nI1125 11:36:51.293171 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1230105117/tls.crt::/tmp/serving-cert-1230105117/tls.key\\\\\\\"\\\\nI1125 11:36:51.293210 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1764070605\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1764070605\\\\\\\\\\\\\\\" (2025-11-25 10:36:45 +0000 UTC to 2026-11-25 10:36:45 +0000 UTC (now=2025-11-25 11:36:51.293188774 +0000 UTC))\\\\\\\"\\\\nI1125 11:36:51.293233 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI1125 11:36:51.293259 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI1125 11:36:51.293279 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nF1125 11:36:51.293378 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T11:36:34Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe85a38abd8df52ad0fbd3dd6b048b8c42390b6064d3601996727dadb3fcbe69\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:34Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea87e7399f4267877f5e967eb62bd500b646e88ee8ee20a71c3bf7c0941f02a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ea87e7399f4267877f5e967eb62bd500b646e88ee8ee20a71c3bf7c0941f02a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T11:36:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2025-11-25T11:36:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T11:36:32Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:37:06Z is after 2025-08-24T17:21:41Z" Nov 25 11:37:06 crc kubenswrapper[4706]: I1125 11:37:06.626016 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-lpc7s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3ec2e656-a68d-4339-92d5-0c157f7f7783\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c3a1481dd8cb88b79d8addfbfd40caf18850769e4492c2af316105b7f6779f9b\\\",\\\"image\\\":\\\"quay.io/openshift-
release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w54mf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T11:36:56Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lpc7s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:37:06Z is after 2025-08-24T17:21:41Z" Nov 25 11:37:06 crc kubenswrapper[4706]: I1125 11:37:06.641545 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad79bed891e80837fc120b01cb2b41a16493f2f5281c83a6bb489cc17c6da995\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-11-25T11:37:06Z is after 2025-08-24T17:21:41Z" Nov 25 11:37:06 crc kubenswrapper[4706]: I1125 11:37:06.647077 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:37:06 crc kubenswrapper[4706]: I1125 11:37:06.647129 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:37:06 crc kubenswrapper[4706]: I1125 11:37:06.647139 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:37:06 crc kubenswrapper[4706]: I1125 11:37:06.647160 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:37:06 crc kubenswrapper[4706]: I1125 11:37:06.647172 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:37:06Z","lastTransitionTime":"2025-11-25T11:37:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:37:06 crc kubenswrapper[4706]: I1125 11:37:06.663491 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://998291d5af3be798ff4e2f00d043f615e086fef44e541071bbaf781983955ce6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:37:06Z is after 2025-08-24T17:21:41Z" Nov 25 11:37:06 crc kubenswrapper[4706]: I1125 11:37:06.675763 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/dc09de93-57e8-4697-8ce8-70bfc1b693e8-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-qkkfz\" (UID: \"dc09de93-57e8-4697-8ce8-70bfc1b693e8\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-qkkfz" Nov 25 11:37:06 crc kubenswrapper[4706]: I1125 11:37:06.675833 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hmrl8\" (UniqueName: \"kubernetes.io/projected/dc09de93-57e8-4697-8ce8-70bfc1b693e8-kube-api-access-hmrl8\") pod \"ovnkube-control-plane-749d76644c-qkkfz\" (UID: \"dc09de93-57e8-4697-8ce8-70bfc1b693e8\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-qkkfz" Nov 25 11:37:06 crc kubenswrapper[4706]: I1125 11:37:06.675891 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/dc09de93-57e8-4697-8ce8-70bfc1b693e8-env-overrides\") pod \"ovnkube-control-plane-749d76644c-qkkfz\" (UID: \"dc09de93-57e8-4697-8ce8-70bfc1b693e8\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-qkkfz" Nov 25 11:37:06 crc kubenswrapper[4706]: I1125 11:37:06.676104 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: 
\"kubernetes.io/secret/dc09de93-57e8-4697-8ce8-70bfc1b693e8-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-qkkfz\" (UID: \"dc09de93-57e8-4697-8ce8-70bfc1b693e8\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-qkkfz" Nov 25 11:37:06 crc kubenswrapper[4706]: I1125 11:37:06.680123 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:51Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:37:06Z is after 2025-08-24T17:21:41Z" Nov 25 11:37:06 crc kubenswrapper[4706]: I1125 11:37:06.695815 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:51Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:37:06Z is after 2025-08-24T17:21:41Z" Nov 25 11:37:06 crc kubenswrapper[4706]: I1125 11:37:06.710854 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:51Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:37:06Z is after 2025-08-24T17:21:41Z" Nov 25 11:37:06 crc kubenswrapper[4706]: I1125 11:37:06.725178 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-s47nr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9912058e-28f5-4cec-9eeb-03e37e0dc5c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d03353478b53d9441951702b66365bb3a08ad9c509347472bbb31049851435a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wfqx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T11:36:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-s47nr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:37:06Z is after 2025-08-24T17:21:41Z" Nov 25 11:37:06 crc kubenswrapper[4706]: I1125 11:37:06.744672 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-q9rpr" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f1218bae-4153-4490-8847-ab2d07ca0ab6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:53Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:53Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da5cea02464a703174faaa2a8a7dc6ba3c26bca96be0219f7304d81aba5be54e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b55sf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e92e9ade6889e5400b3c3ddff066aa544d425cf0637b75071678b8c63f8e35f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b55sf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca28080773ed8c026159b2309297e1c8ccd7cf79c4c19e3a62d89bc5a95851fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b55sf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://86d79d5837993b0bfb40c7114fd69f45a9bfd2e956b5b0fe062706e920fecd48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:55Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b55sf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7df3bf6c507e0fd5fb0f32a8785d67c96f47255fdc5d2aafb8838260ac334d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b55sf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://96aa7fcebdc88f01d2260f95d255244e28c30d422f954da2222a5b7c17d05b96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b55sf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://408d84ea146425bb2b2ac6cfb181cd139a8465caa12eb3d4b0e2b738d1f52484\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b1486d0475f4d248f425b711ee757032370a9bdddb8d33c83ba9db41549d1dd9\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-25T11:37:03Z\\\",\\\"message\\\":\\\"ler/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140\\\\nI1125 11:37:03.877645 5942 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1125 11:37:03.877722 5942 reflector.go:311] Stopping reflector *v1.Service (0s) from 
k8s.io/client-go/informers/factory.go:160\\\\nI1125 11:37:03.877827 5942 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI1125 11:37:03.877916 5942 reflector.go:311] Stopping reflector *v1.NetworkAttachmentDefinition (0s) from github.com/k8snetworkplumbingwg/network-attachment-definition-client/pkg/client/informers/externalversions/factory.go:117\\\\nI1125 11:37:03.878321 5942 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1125 11:37:03.878375 5942 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1125 11:37:03.878382 5942 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1125 11:37:03.878437 5942 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1125 11:37:03.878443 5942 factory.go:656] Stopping watch factory\\\\nI1125 11:37:03.878453 5942 handler.go:208] Removed *v1.Node event handler 2\\\\nI1125 11:37:03.878465 5942 ovnkube.go:599] Stopped ovnkube\\\\nI11\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T11:37:00Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://408d84ea146425bb2b2ac6cfb181cd139a8465caa12eb3d4b0e2b738d1f52484\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-25T11:37:05Z\\\",\\\"message\\\":\\\"alse]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.150:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {6ea1fd71-2b40-4361-92ee-3f1ab4ec7414}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI1125 11:37:04.999467 6129 obj_retry.go:409] Going to retry *v1.Pod resource setup for 14 objects: [openshift-kube-controller-manager/kube-controller-manager-crc 
openshift-machine-config-operator/machine-config-daemon-dhfpm openshift-multus/multus-additional-cni-plugins-cjmvf openshift-network-diagnostics/network-check-source-55646444c4-trplf openshift-dns/node-resolver-nh9sc openshift-image-registry/node-ca-lpc7s openshift-multus/multus-s47nr openshift-network-diagnostics/network-check-target-xd92c openshift-ovn-kubernetes/ovnkube-node-q9rpr openshift-etcd/etcd-crc openshift-network-console/networking-console-plugin-85b44fc459-gdk6g openshift-network-node-identity/network-node-identity-vrzqb openshift-kube-apiserver/kube-apiserver-crc openshift-network-operator/iptables-alerter-4ln5h]\\\\nF1125 11:37:04.999486 6129 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller ini\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T11:37:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mou
ntPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b55sf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://62c923d955013808a55d99cb73f4239900fc83a2f53e1e8cceff3e9bc5768188\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\
\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b55sf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56474d5374e1047078d38c60dcbd00f4495bcc0d651a9e75fa70d64e34b10baa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://56474d5374e1047078d38c60dcbd00f4495bcc0d651a9e75fa70d64e34b10baa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T11:36:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T11:36:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b55sf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T11:36:53Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-q9rpr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:37:06Z is after 2025-08-24T17:21:41Z" Nov 25 11:37:06 crc kubenswrapper[4706]: I1125 11:37:06.749475 4706 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:37:06 crc kubenswrapper[4706]: I1125 11:37:06.749529 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:37:06 crc kubenswrapper[4706]: I1125 11:37:06.749543 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:37:06 crc kubenswrapper[4706]: I1125 11:37:06.749562 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:37:06 crc kubenswrapper[4706]: I1125 11:37:06.749574 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:37:06Z","lastTransitionTime":"2025-11-25T11:37:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:37:06 crc kubenswrapper[4706]: I1125 11:37:06.759905 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"363ff191-6229-47e9-a7d0-1c72f21e7c61\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://71b496da1a81efbb50a84766e610a6b03e032a4e2cb5a71191395ffb85f6b1f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://83b1d9c6079
3e3e0b5943d7cccd50656df78c4655b84e12c8dd1ba7d99a7990d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab8621c83015577b9039ac2ba9ce46f8b29f66d77da31a02d179132d923741bd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4d0ce4e175dd8da8d15b26e60ced87ee11dc8079ce730cfbdce1b3f4f08b1d2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:850
6ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T11:36:32Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:37:06Z is after 2025-08-24T17:21:41Z" Nov 25 11:37:06 crc kubenswrapper[4706]: I1125 11:37:06.777674 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/dc09de93-57e8-4697-8ce8-70bfc1b693e8-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-qkkfz\" (UID: \"dc09de93-57e8-4697-8ce8-70bfc1b693e8\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-qkkfz" Nov 25 11:37:06 crc kubenswrapper[4706]: I1125 11:37:06.777778 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/dc09de93-57e8-4697-8ce8-70bfc1b693e8-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-qkkfz\" (UID: \"dc09de93-57e8-4697-8ce8-70bfc1b693e8\") " 
pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-qkkfz" Nov 25 11:37:06 crc kubenswrapper[4706]: I1125 11:37:06.777807 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hmrl8\" (UniqueName: \"kubernetes.io/projected/dc09de93-57e8-4697-8ce8-70bfc1b693e8-kube-api-access-hmrl8\") pod \"ovnkube-control-plane-749d76644c-qkkfz\" (UID: \"dc09de93-57e8-4697-8ce8-70bfc1b693e8\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-qkkfz" Nov 25 11:37:06 crc kubenswrapper[4706]: I1125 11:37:06.777837 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/dc09de93-57e8-4697-8ce8-70bfc1b693e8-env-overrides\") pod \"ovnkube-control-plane-749d76644c-qkkfz\" (UID: \"dc09de93-57e8-4697-8ce8-70bfc1b693e8\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-qkkfz" Nov 25 11:37:06 crc kubenswrapper[4706]: I1125 11:37:06.778475 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/dc09de93-57e8-4697-8ce8-70bfc1b693e8-env-overrides\") pod \"ovnkube-control-plane-749d76644c-qkkfz\" (UID: \"dc09de93-57e8-4697-8ce8-70bfc1b693e8\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-qkkfz" Nov 25 11:37:06 crc kubenswrapper[4706]: I1125 11:37:06.778976 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/dc09de93-57e8-4697-8ce8-70bfc1b693e8-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-qkkfz\" (UID: \"dc09de93-57e8-4697-8ce8-70bfc1b693e8\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-qkkfz" Nov 25 11:37:06 crc kubenswrapper[4706]: I1125 11:37:06.782421 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"21277b4b-1e5d-4345-ba2a-39957194f021\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e336808761e1c6c5eaa04fd06cbb4d0c0384a2cbd3dfd4c1b3a877e7e0f0c82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cfaf9f13d49eb5c52817b0d082263791cc1dca82a23282452f1393dd693ca27a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://634b7b0df29329562f6ead9641186eee129945efc5a2d784ff6474d213b2baea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b3642576d5ecf314b809b90f8a76244e5ea54178f78729eb6521b09b7daa9c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b63b9c87fed8e56acef62af3c5b75cf637a058ada9dd8ef5afc317e99e12162\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://adfea6c3d1d62c9ce2656cae203bccf32ab19165305db3731e3db92915dd7d4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://adfea6c3d1d62c9ce2656cae203bccf32ab19165305db3731e3db92915dd7d4c\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2025-11-25T11:36:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T11:36:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://29e8fd847ab683471a692fda5b7c7d6105db11c5aecf09bb23080c58cd97c06a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://29e8fd847ab683471a692fda5b7c7d6105db11c5aecf09bb23080c58cd97c06a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T11:36:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T11:36:34Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://a4b1f85fd0291c239854093c0b26f0802291cbe4c4bab384fc69a5d21165343a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a4b1f85fd0291c239854093c0b26f0802291cbe4c4bab384fc69a5d21165343a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T11:36:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T11:36:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T11:36:32Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:37:06Z is after 2025-08-24T17:21:41Z" Nov 25 11:37:06 crc kubenswrapper[4706]: I1125 11:37:06.785760 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/dc09de93-57e8-4697-8ce8-70bfc1b693e8-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-qkkfz\" (UID: \"dc09de93-57e8-4697-8ce8-70bfc1b693e8\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-qkkfz" Nov 25 11:37:06 crc kubenswrapper[4706]: I1125 11:37:06.797282 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://23abd4bcc68d2a090882edb55d0e8569032affe5f4ebf05279e18ba3e9f9d8db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a068e34d29a7f39157ffd6e364ce643f5280f5184c13a281043247117d451364\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:37:06Z is after 2025-08-24T17:21:41Z" Nov 25 11:37:06 crc kubenswrapper[4706]: I1125 11:37:06.797414 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hmrl8\" (UniqueName: \"kubernetes.io/projected/dc09de93-57e8-4697-8ce8-70bfc1b693e8-kube-api-access-hmrl8\") pod \"ovnkube-control-plane-749d76644c-qkkfz\" (UID: \"dc09de93-57e8-4697-8ce8-70bfc1b693e8\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-qkkfz" Nov 25 11:37:06 crc kubenswrapper[4706]: I1125 11:37:06.813604 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-cjmvf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"150b96fa-570a-4b32-a82a-3275127d5b51\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:37:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:37:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:37:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de18c07bf8490d7495947e9a271e3e7273b9ffdcc43afd2a0468394af0ae0b0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:37:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2ml6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f9f9981b5f064aa5b007f4b2a2ecdc7f783e1a33e73b9e8b157eccfc54e93ff6\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f9f9981b5f064aa5b007f4b2a2ecdc7f783e1a33e73b9e8b157eccfc54e93ff6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T11:36:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T11:36:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2ml6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e1e9db3e634932b935a1eb04923d02faf743f2831039edeba41d172ea6d8c52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e1e9db3e634932b935a1eb04923d02faf743f2831039edeba41d172ea6d8c52\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T11:36:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T11:36:57Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2ml6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0cee50b6983d9c650efbb5959311b6c33c2e0e2ff504fceadc8ff807f368c36e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0cee50b6983d9c650efbb5959311b6c33c2e0e2ff504fceadc8ff807f368c36e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T11:36:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T11:36:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2ml6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://29281
b46d740a7e527313a667c3896430eb51ba2c50c5e406fb94d8959dbe855\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://29281b46d740a7e527313a667c3896430eb51ba2c50c5e406fb94d8959dbe855\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T11:36:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T11:36:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2ml6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0ff2d1408b3b635ada726fc15a15472d3fd7c61e21ffe0379d137fdd543c436\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b0ff2d1408b3b635ada726fc15a15472d3fd7c61e21ffe0379d137fdd543c436\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T11:37:01Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2025-11-25T11:37:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2ml6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c3b94746fe10e0f9375491a41d10973d2576eb69f0883cef3ef0132efb0e8fc9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c3b94746fe10e0f9375491a41d10973d2576eb69f0883cef3ef0132efb0e8fc9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T11:37:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T11:37:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2ml6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T11:36:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-cjmvf\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:37:06Z is after 2025-08-24T17:21:41Z" Nov 25 11:37:06 crc kubenswrapper[4706]: I1125 11:37:06.852447 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:37:06 crc kubenswrapper[4706]: I1125 11:37:06.852505 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:37:06 crc kubenswrapper[4706]: I1125 11:37:06.852518 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:37:06 crc kubenswrapper[4706]: I1125 11:37:06.852540 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:37:06 crc kubenswrapper[4706]: I1125 11:37:06.852556 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:37:06Z","lastTransitionTime":"2025-11-25T11:37:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 11:37:06 crc kubenswrapper[4706]: I1125 11:37:06.869760 4706 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-qkkfz" Nov 25 11:37:06 crc kubenswrapper[4706]: W1125 11:37:06.884543 4706 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddc09de93_57e8_4697_8ce8_70bfc1b693e8.slice/crio-a876e4b78d198f3459b47056454b241e4ada46081cfd4ffdfdedd6a978f2ed2a WatchSource:0}: Error finding container a876e4b78d198f3459b47056454b241e4ada46081cfd4ffdfdedd6a978f2ed2a: Status 404 returned error can't find the container with id a876e4b78d198f3459b47056454b241e4ada46081cfd4ffdfdedd6a978f2ed2a Nov 25 11:37:06 crc kubenswrapper[4706]: I1125 11:37:06.955899 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:37:06 crc kubenswrapper[4706]: I1125 11:37:06.955940 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:37:06 crc kubenswrapper[4706]: I1125 11:37:06.955949 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:37:06 crc kubenswrapper[4706]: I1125 11:37:06.955966 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:37:06 crc kubenswrapper[4706]: I1125 11:37:06.955976 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:37:06Z","lastTransitionTime":"2025-11-25T11:37:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:37:07 crc kubenswrapper[4706]: I1125 11:37:07.058669 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:37:07 crc kubenswrapper[4706]: I1125 11:37:07.058724 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:37:07 crc kubenswrapper[4706]: I1125 11:37:07.058737 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:37:07 crc kubenswrapper[4706]: I1125 11:37:07.058755 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:37:07 crc kubenswrapper[4706]: I1125 11:37:07.058765 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:37:07Z","lastTransitionTime":"2025-11-25T11:37:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:37:07 crc kubenswrapper[4706]: I1125 11:37:07.156624 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-qkkfz" event={"ID":"dc09de93-57e8-4697-8ce8-70bfc1b693e8","Type":"ContainerStarted","Data":"a876e4b78d198f3459b47056454b241e4ada46081cfd4ffdfdedd6a978f2ed2a"} Nov 25 11:37:07 crc kubenswrapper[4706]: I1125 11:37:07.161050 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:37:07 crc kubenswrapper[4706]: I1125 11:37:07.161112 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:37:07 crc kubenswrapper[4706]: I1125 11:37:07.161124 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:37:07 crc kubenswrapper[4706]: I1125 11:37:07.161144 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:37:07 crc kubenswrapper[4706]: I1125 11:37:07.161159 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:37:07Z","lastTransitionTime":"2025-11-25T11:37:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:37:07 crc kubenswrapper[4706]: I1125 11:37:07.264398 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:37:07 crc kubenswrapper[4706]: I1125 11:37:07.264442 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:37:07 crc kubenswrapper[4706]: I1125 11:37:07.264453 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:37:07 crc kubenswrapper[4706]: I1125 11:37:07.264473 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:37:07 crc kubenswrapper[4706]: I1125 11:37:07.264484 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:37:07Z","lastTransitionTime":"2025-11-25T11:37:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:37:07 crc kubenswrapper[4706]: I1125 11:37:07.366951 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:37:07 crc kubenswrapper[4706]: I1125 11:37:07.366998 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:37:07 crc kubenswrapper[4706]: I1125 11:37:07.367012 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:37:07 crc kubenswrapper[4706]: I1125 11:37:07.367031 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:37:07 crc kubenswrapper[4706]: I1125 11:37:07.367049 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:37:07Z","lastTransitionTime":"2025-11-25T11:37:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:37:07 crc kubenswrapper[4706]: I1125 11:37:07.469578 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:37:07 crc kubenswrapper[4706]: I1125 11:37:07.469642 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:37:07 crc kubenswrapper[4706]: I1125 11:37:07.469654 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:37:07 crc kubenswrapper[4706]: I1125 11:37:07.469670 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:37:07 crc kubenswrapper[4706]: I1125 11:37:07.469680 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:37:07Z","lastTransitionTime":"2025-11-25T11:37:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:37:07 crc kubenswrapper[4706]: I1125 11:37:07.571930 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:37:07 crc kubenswrapper[4706]: I1125 11:37:07.571980 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:37:07 crc kubenswrapper[4706]: I1125 11:37:07.571996 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:37:07 crc kubenswrapper[4706]: I1125 11:37:07.572020 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:37:07 crc kubenswrapper[4706]: I1125 11:37:07.572034 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:37:07Z","lastTransitionTime":"2025-11-25T11:37:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 11:37:07 crc kubenswrapper[4706]: I1125 11:37:07.625943 4706 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/network-metrics-daemon-l99rd"] Nov 25 11:37:07 crc kubenswrapper[4706]: I1125 11:37:07.626467 4706 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-l99rd" Nov 25 11:37:07 crc kubenswrapper[4706]: E1125 11:37:07.626553 4706 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-l99rd" podUID="14d69237-a4b7-43ea-ac81-f165eb532669" Nov 25 11:37:07 crc kubenswrapper[4706]: I1125 11:37:07.638339 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:51Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:37:07Z is after 2025-08-24T17:21:41Z" Nov 25 11:37:07 crc kubenswrapper[4706]: I1125 11:37:07.652001 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-s47nr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9912058e-28f5-4cec-9eeb-03e37e0dc5c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d03353478b53d9441951702b66365bb3a08ad9c509347472bbb31049851435a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wfqx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T11:36:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-s47nr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:37:07Z is after 2025-08-24T17:21:41Z" Nov 25 11:37:07 crc kubenswrapper[4706]: I1125 11:37:07.671018 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-q9rpr" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f1218bae-4153-4490-8847-ab2d07ca0ab6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:53Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:53Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da5cea02464a703174faaa2a8a7dc6ba3c26bca96be0219f7304d81aba5be54e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b55sf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e92e9ade6889e5400b3c3ddff066aa544d425cf0637b75071678b8c63f8e35f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b55sf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca28080773ed8c026159b2309297e1c8ccd7cf79c4c19e3a62d89bc5a95851fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b55sf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://86d79d5837993b0bfb40c7114fd69f45a9bfd2e956b5b0fe062706e920fecd48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:55Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b55sf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7df3bf6c507e0fd5fb0f32a8785d67c96f47255fdc5d2aafb8838260ac334d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b55sf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://96aa7fcebdc88f01d2260f95d255244e28c30d422f954da2222a5b7c17d05b96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b55sf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://408d84ea146425bb2b2ac6cfb181cd139a8465caa12eb3d4b0e2b738d1f52484\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b1486d0475f4d248f425b711ee757032370a9bdddb8d33c83ba9db41549d1dd9\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-25T11:37:03Z\\\",\\\"message\\\":\\\"ler/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140\\\\nI1125 11:37:03.877645 5942 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1125 11:37:03.877722 5942 reflector.go:311] Stopping reflector *v1.Service (0s) from 
k8s.io/client-go/informers/factory.go:160\\\\nI1125 11:37:03.877827 5942 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI1125 11:37:03.877916 5942 reflector.go:311] Stopping reflector *v1.NetworkAttachmentDefinition (0s) from github.com/k8snetworkplumbingwg/network-attachment-definition-client/pkg/client/informers/externalversions/factory.go:117\\\\nI1125 11:37:03.878321 5942 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1125 11:37:03.878375 5942 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1125 11:37:03.878382 5942 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1125 11:37:03.878437 5942 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1125 11:37:03.878443 5942 factory.go:656] Stopping watch factory\\\\nI1125 11:37:03.878453 5942 handler.go:208] Removed *v1.Node event handler 2\\\\nI1125 11:37:03.878465 5942 ovnkube.go:599] Stopped ovnkube\\\\nI11\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T11:37:00Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://408d84ea146425bb2b2ac6cfb181cd139a8465caa12eb3d4b0e2b738d1f52484\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-25T11:37:05Z\\\",\\\"message\\\":\\\"alse]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.150:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {6ea1fd71-2b40-4361-92ee-3f1ab4ec7414}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI1125 11:37:04.999467 6129 obj_retry.go:409] Going to retry *v1.Pod resource setup for 14 objects: [openshift-kube-controller-manager/kube-controller-manager-crc 
openshift-machine-config-operator/machine-config-daemon-dhfpm openshift-multus/multus-additional-cni-plugins-cjmvf openshift-network-diagnostics/network-check-source-55646444c4-trplf openshift-dns/node-resolver-nh9sc openshift-image-registry/node-ca-lpc7s openshift-multus/multus-s47nr openshift-network-diagnostics/network-check-target-xd92c openshift-ovn-kubernetes/ovnkube-node-q9rpr openshift-etcd/etcd-crc openshift-network-console/networking-console-plugin-85b44fc459-gdk6g openshift-network-node-identity/network-node-identity-vrzqb openshift-kube-apiserver/kube-apiserver-crc openshift-network-operator/iptables-alerter-4ln5h]\\\\nF1125 11:37:04.999486 6129 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller ini\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T11:37:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mou
ntPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b55sf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://62c923d955013808a55d99cb73f4239900fc83a2f53e1e8cceff3e9bc5768188\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\
\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b55sf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56474d5374e1047078d38c60dcbd00f4495bcc0d651a9e75fa70d64e34b10baa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://56474d5374e1047078d38c60dcbd00f4495bcc0d651a9e75fa70d64e34b10baa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T11:36:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T11:36:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b55sf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T11:36:53Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-q9rpr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:37:07Z is after 2025-08-24T17:21:41Z" Nov 25 11:37:07 crc kubenswrapper[4706]: I1125 11:37:07.675114 4706 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:37:07 crc kubenswrapper[4706]: I1125 11:37:07.675160 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:37:07 crc kubenswrapper[4706]: I1125 11:37:07.675172 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:37:07 crc kubenswrapper[4706]: I1125 11:37:07.675194 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:37:07 crc kubenswrapper[4706]: I1125 11:37:07.675209 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:37:07Z","lastTransitionTime":"2025-11-25T11:37:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:37:07 crc kubenswrapper[4706]: I1125 11:37:07.685226 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"363ff191-6229-47e9-a7d0-1c72f21e7c61\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://71b496da1a81efbb50a84766e610a6b03e032a4e2cb5a71191395ffb85f6b1f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://83b1d9c6079
3e3e0b5943d7cccd50656df78c4655b84e12c8dd1ba7d99a7990d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab8621c83015577b9039ac2ba9ce46f8b29f66d77da31a02d179132d923741bd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4d0ce4e175dd8da8d15b26e60ced87ee11dc8079ce730cfbdce1b3f4f08b1d2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:850
6ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T11:36:32Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:37:07Z is after 2025-08-24T17:21:41Z" Nov 25 11:37:07 crc kubenswrapper[4706]: I1125 11:37:07.687423 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 11:37:07 crc kubenswrapper[4706]: I1125 11:37:07.687555 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 11:37:07 crc kubenswrapper[4706]: I1125 11:37:07.687593 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 11:37:07 crc kubenswrapper[4706]: E1125 11:37:07.687640 4706 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 11:37:23.687606329 +0000 UTC m=+52.602163720 (durationBeforeRetry 16s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 11:37:07 crc kubenswrapper[4706]: E1125 11:37:07.687717 4706 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 25 11:37:07 crc kubenswrapper[4706]: I1125 11:37:07.687768 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 11:37:07 crc kubenswrapper[4706]: E1125 11:37:07.687780 4706 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-25 11:37:23.687759393 +0000 UTC m=+52.602316954 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 25 11:37:07 crc kubenswrapper[4706]: E1125 11:37:07.687791 4706 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 25 11:37:07 crc kubenswrapper[4706]: E1125 11:37:07.687857 4706 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 25 11:37:07 crc kubenswrapper[4706]: E1125 11:37:07.687860 4706 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Nov 25 11:37:07 crc kubenswrapper[4706]: E1125 11:37:07.687872 4706 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 25 11:37:07 crc kubenswrapper[4706]: E1125 11:37:07.687920 4706 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-11-25 11:37:23.687906177 +0000 UTC m=+52.602463728 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 25 11:37:07 crc kubenswrapper[4706]: E1125 11:37:07.687938 4706 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-25 11:37:23.687930628 +0000 UTC m=+52.602488229 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Nov 25 11:37:07 crc kubenswrapper[4706]: I1125 11:37:07.700254 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://998291d5af3be798ff4e2f00d043f615e086fef44e541071bbaf781983955ce6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:37:07Z is after 2025-08-24T17:21:41Z" Nov 25 11:37:07 crc kubenswrapper[4706]: I1125 11:37:07.713723 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:51Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:37:07Z is after 2025-08-24T17:21:41Z" Nov 25 11:37:07 crc kubenswrapper[4706]: I1125 11:37:07.728693 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:37:07Z is after 2025-08-24T17:21:41Z" Nov 25 11:37:07 crc kubenswrapper[4706]: I1125 11:37:07.751265 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"21277b4b-1e5d-4345-ba2a-39957194f021\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e336808761e1c6c5eaa04fd06cbb4d0c0384a2cbd3dfd4c1b3a877e7e0f0c82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cfaf9f13d49eb5c52817b0d082263791cc1dca82a23282452f1393dd693ca27a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://634b7b0df29329562f6ead9641186eee129945efc5a2d784ff6474d213b2baea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b3642576d5ecf314b809b90f8a76244e5ea54178f78729eb6521b09b7daa9c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b63b9c87fed8e56acef62af3c5b75cf637a058ada9dd8ef5afc317e99e12162\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://adfea6c3d1d62c9ce2656cae203bccf32ab19165305db3731e3db92915dd7d4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://adfea6c3d1d62c9ce2656cae203bccf32ab19165305db3731e3db92915dd7d4c\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2025-11-25T11:36:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T11:36:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://29e8fd847ab683471a692fda5b7c7d6105db11c5aecf09bb23080c58cd97c06a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://29e8fd847ab683471a692fda5b7c7d6105db11c5aecf09bb23080c58cd97c06a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T11:36:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T11:36:34Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://a4b1f85fd0291c239854093c0b26f0802291cbe4c4bab384fc69a5d21165343a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a4b1f85fd0291c239854093c0b26f0802291cbe4c4bab384fc69a5d21165343a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T11:36:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T11:36:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T11:36:32Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:37:07Z is after 2025-08-24T17:21:41Z" Nov 25 11:37:07 crc kubenswrapper[4706]: I1125 11:37:07.764189 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://23abd4bcc68d2a090882edb55d0e8569032affe5f4ebf05279e18ba3e9f9d8db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount
\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a068e34d29a7f39157ffd6e364ce643f5280f5184c13a281043247117d451364\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:37:07Z is after 2025-08-24T17:21:41Z" Nov 25 11:37:07 crc kubenswrapper[4706]: I1125 11:37:07.778041 4706 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:37:07 crc kubenswrapper[4706]: I1125 11:37:07.778074 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:37:07 crc kubenswrapper[4706]: I1125 11:37:07.778089 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:37:07 crc kubenswrapper[4706]: I1125 11:37:07.778109 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:37:07 crc kubenswrapper[4706]: I1125 11:37:07.778119 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:37:07Z","lastTransitionTime":"2025-11-25T11:37:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:37:07 crc kubenswrapper[4706]: I1125 11:37:07.778834 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-cjmvf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"150b96fa-570a-4b32-a82a-3275127d5b51\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:37:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:37:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:37:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de18c07bf8490d7495947e9a271e3e7273b9ffdcc43afd2a0468394af0ae0b0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:37:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2ml6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f9f9981b5f064aa5b007f4b2a2ecdc7f783e1a33e73b9e8b157eccfc54e93ff6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f9f9981b5f064aa5b007f4b2a2ecdc7f783e1a33e73b9e8b157eccfc54e93ff6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T11:36:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T11:36:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2ml6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e1e9db3e634932b935a1eb04923d02faf743f2831039edeba41d172ea6d8c52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"
containerID\\\":\\\"cri-o://4e1e9db3e634932b935a1eb04923d02faf743f2831039edeba41d172ea6d8c52\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T11:36:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T11:36:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2ml6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0cee50b6983d9c650efbb5959311b6c33c2e0e2ff504fceadc8ff807f368c36e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0cee50b6983d9c650efbb5959311b6c33c2e0e2ff504fceadc8ff807f368c36e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T11:36:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T11:36:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveRead
Only\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2ml6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://29281b46d740a7e527313a667c3896430eb51ba2c50c5e406fb94d8959dbe855\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://29281b46d740a7e527313a667c3896430eb51ba2c50c5e406fb94d8959dbe855\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T11:36:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T11:36:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2ml6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0ff2d1408b3b635ada726fc15a15472d3fd7c61e21ffe0379d137fdd543c436\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\"
:0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b0ff2d1408b3b635ada726fc15a15472d3fd7c61e21ffe0379d137fdd543c436\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T11:37:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T11:37:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2ml6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c3b94746fe10e0f9375491a41d10973d2576eb69f0883cef3ef0132efb0e8fc9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c3b94746fe10e0f9375491a41d10973d2576eb69f0883cef3ef0132efb0e8fc9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T11:37:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T11:37:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2ml6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\"
,\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T11:36:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-cjmvf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:37:07Z is after 2025-08-24T17:21:41Z" Nov 25 11:37:07 crc kubenswrapper[4706]: I1125 11:37:07.788551 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 11:37:07 crc kubenswrapper[4706]: I1125 11:37:07.788617 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mmr9l\" (UniqueName: \"kubernetes.io/projected/14d69237-a4b7-43ea-ac81-f165eb532669-kube-api-access-mmr9l\") pod \"network-metrics-daemon-l99rd\" (UID: \"14d69237-a4b7-43ea-ac81-f165eb532669\") " pod="openshift-multus/network-metrics-daemon-l99rd" Nov 25 11:37:07 crc kubenswrapper[4706]: I1125 11:37:07.788665 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/14d69237-a4b7-43ea-ac81-f165eb532669-metrics-certs\") pod \"network-metrics-daemon-l99rd\" (UID: \"14d69237-a4b7-43ea-ac81-f165eb532669\") " pod="openshift-multus/network-metrics-daemon-l99rd" Nov 25 11:37:07 crc kubenswrapper[4706]: E1125 11:37:07.788796 4706 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object 
"openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 25 11:37:07 crc kubenswrapper[4706]: E1125 11:37:07.788812 4706 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 25 11:37:07 crc kubenswrapper[4706]: E1125 11:37:07.788823 4706 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 25 11:37:07 crc kubenswrapper[4706]: E1125 11:37:07.788862 4706 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-11-25 11:37:23.788848848 +0000 UTC m=+52.703406219 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 25 11:37:07 crc kubenswrapper[4706]: I1125 11:37:07.791256 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-l99rd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"14d69237-a4b7-43ea-ac81-f165eb532669\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:37:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:37:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:37:07Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:37:07Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mmr9l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mmr9l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T11:37:07Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-l99rd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:37:07Z is after 2025-08-24T17:21:41Z" Nov 25 11:37:07 crc 
kubenswrapper[4706]: I1125 11:37:07.805818 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ce0e2e75-834b-46fb-bc84-229e60f904b1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:32Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:32Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://86001c3abc077d36ed1fa0c37bb6163896fb9cde28b58affd2f67fb8a024165b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://24c326f147def477e6dd794576cbdc9aed69f799cc18984f475496748b05eb32\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://c65af8b438f57256d8c22cb34f68922d628338e384ca97d694b0dbf2d41a5e27\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://db08dd21321e0e49c2bcec934b9c4ca65e93ed3eff5d3d110b0137d37ebe255e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://333951d9a31cf3e7c1e98d27f636e2425f87cd082a8a5acae66533a76f5ad206\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-25T11:36:51Z\\\",\\\"message\\\":\\\" shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI1125 11:36:51.292762 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI1125 11:36:51.292767 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI1125 11:36:51.292853 1 
envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI1125 11:36:51.292876 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI1125 11:36:51.293041 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-1230105117/tls.crt::/tmp/serving-cert-1230105117/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1764070595\\\\\\\\\\\\\\\" (2025-11-25 11:36:34 +0000 UTC to 2025-12-25 11:36:35 +0000 UTC (now=2025-11-25 11:36:51.29301304 +0000 UTC))\\\\\\\"\\\\nI1125 11:36:51.293171 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1230105117/tls.crt::/tmp/serving-cert-1230105117/tls.key\\\\\\\"\\\\nI1125 11:36:51.293210 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1764070605\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1764070605\\\\\\\\\\\\\\\" (2025-11-25 10:36:45 +0000 UTC to 2026-11-25 10:36:45 +0000 UTC (now=2025-11-25 11:36:51.293188774 +0000 UTC))\\\\\\\"\\\\nI1125 11:36:51.293233 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI1125 11:36:51.293259 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI1125 11:36:51.293279 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nF1125 11:36:51.293378 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T11:36:34Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe85a38abd8df52ad0fbd3dd6b048b8c42390b6064d3601996727dadb3fcbe69\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:34Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea87e7399f4267877f5e967eb62bd500b646e88ee8ee20a71c3bf7c0941f02a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ea87e7399f4267877f5e967eb62bd500b646e88ee8ee20a71c3bf7c0941f02a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T11:36:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2025-11-25T11:36:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T11:36:32Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:37:07Z is after 2025-08-24T17:21:41Z" Nov 25 11:37:07 crc kubenswrapper[4706]: I1125 11:37:07.815555 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-dhfpm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0930887a-320c-4506-8c9c-f94d6d64516a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://736e37ff944f81ac9808ff8a76d36837aeabc76a4c08bbeba3f707616e1f0884\\\",\\\"image\\\"
:\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g7sgt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://86f4bfd310c27ea3b77c2f58c91e153db5f1794871a3fbeb5711cc119aa81e38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g7sgt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T11:36:53Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-dhfpm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:37:07Z is after 2025-08-24T17:21:41Z" Nov 25 11:37:07 crc kubenswrapper[4706]: I1125 11:37:07.828514 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-nh9sc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7813e79d-885d-4cf1-ac27-039e998473b7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea634334242536d35bf36e9078539cad4658b161b61e6051d9bb6d8544e71f5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready
\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g9gvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T11:36:53Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-nh9sc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:37:07Z is after 2025-08-24T17:21:41Z" Nov 25 11:37:07 crc kubenswrapper[4706]: I1125 11:37:07.843381 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-qkkfz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dc09de93-57e8-4697-8ce8-70bfc1b693e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:37:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:37:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:37:06Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:37:06Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hmrl8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hmrl8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T11:37:06Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-qkkfz\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:37:07Z is after 2025-08-24T17:21:41Z" Nov 25 11:37:07 crc kubenswrapper[4706]: I1125 11:37:07.861395 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad79bed891e80837fc120b01cb2b41a16493f2f5281c83a6bb489cc17c6da995\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},
{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:37:07Z is after 2025-08-24T17:21:41Z" Nov 25 11:37:07 crc kubenswrapper[4706]: I1125 11:37:07.878013 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-lpc7s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3ec2e656-a68d-4339-92d5-0c157f7f7783\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c3a1481dd8cb88b79d8addfbfd40caf18850769e4492c2af316105b7f6779f9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"qu
ay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w54mf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T11:36:56Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lpc7s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:37:07Z is after 2025-08-24T17:21:41Z" Nov 25 11:37:07 crc kubenswrapper[4706]: I1125 11:37:07.881601 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:37:07 crc kubenswrapper[4706]: I1125 11:37:07.881648 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:37:07 crc kubenswrapper[4706]: I1125 11:37:07.881659 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:37:07 crc kubenswrapper[4706]: I1125 11:37:07.881680 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 
11:37:07 crc kubenswrapper[4706]: I1125 11:37:07.881694 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:37:07Z","lastTransitionTime":"2025-11-25T11:37:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 11:37:07 crc kubenswrapper[4706]: I1125 11:37:07.889444 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mmr9l\" (UniqueName: \"kubernetes.io/projected/14d69237-a4b7-43ea-ac81-f165eb532669-kube-api-access-mmr9l\") pod \"network-metrics-daemon-l99rd\" (UID: \"14d69237-a4b7-43ea-ac81-f165eb532669\") " pod="openshift-multus/network-metrics-daemon-l99rd" Nov 25 11:37:07 crc kubenswrapper[4706]: I1125 11:37:07.889516 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/14d69237-a4b7-43ea-ac81-f165eb532669-metrics-certs\") pod \"network-metrics-daemon-l99rd\" (UID: \"14d69237-a4b7-43ea-ac81-f165eb532669\") " pod="openshift-multus/network-metrics-daemon-l99rd" Nov 25 11:37:07 crc kubenswrapper[4706]: E1125 11:37:07.889717 4706 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Nov 25 11:37:07 crc kubenswrapper[4706]: E1125 11:37:07.889796 4706 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/14d69237-a4b7-43ea-ac81-f165eb532669-metrics-certs podName:14d69237-a4b7-43ea-ac81-f165eb532669 nodeName:}" failed. No retries permitted until 2025-11-25 11:37:08.389779288 +0000 UTC m=+37.304336669 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/14d69237-a4b7-43ea-ac81-f165eb532669-metrics-certs") pod "network-metrics-daemon-l99rd" (UID: "14d69237-a4b7-43ea-ac81-f165eb532669") : object "openshift-multus"/"metrics-daemon-secret" not registered Nov 25 11:37:07 crc kubenswrapper[4706]: I1125 11:37:07.912052 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:37:07 crc kubenswrapper[4706]: I1125 11:37:07.912248 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:37:07 crc kubenswrapper[4706]: I1125 11:37:07.912367 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:37:07 crc kubenswrapper[4706]: I1125 11:37:07.912467 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:37:07 crc kubenswrapper[4706]: I1125 11:37:07.913057 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:37:07Z","lastTransitionTime":"2025-11-25T11:37:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 11:37:07 crc kubenswrapper[4706]: I1125 11:37:07.914918 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mmr9l\" (UniqueName: \"kubernetes.io/projected/14d69237-a4b7-43ea-ac81-f165eb532669-kube-api-access-mmr9l\") pod \"network-metrics-daemon-l99rd\" (UID: \"14d69237-a4b7-43ea-ac81-f165eb532669\") " pod="openshift-multus/network-metrics-daemon-l99rd" Nov 25 11:37:07 crc kubenswrapper[4706]: I1125 11:37:07.922675 4706 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 11:37:07 crc kubenswrapper[4706]: I1125 11:37:07.922773 4706 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 11:37:07 crc kubenswrapper[4706]: I1125 11:37:07.922922 4706 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 11:37:07 crc kubenswrapper[4706]: E1125 11:37:07.922999 4706 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 25 11:37:07 crc kubenswrapper[4706]: E1125 11:37:07.923078 4706 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 25 11:37:07 crc kubenswrapper[4706]: E1125 11:37:07.923097 4706 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 25 11:37:07 crc kubenswrapper[4706]: E1125 11:37:07.927578 4706 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T11:37:07Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T11:37:07Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T11:37:07Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T11:37:07Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T11:37:07Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T11:37:07Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T11:37:07Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T11:37:07Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"30198dc8-e58c-4847-a541-041da1924c5c\\\",\\\"systemUUID\\\":\\\"7dac62ec-3979-4862-b1af-b63212907795\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:37:07Z is after 2025-08-24T17:21:41Z" Nov 25 11:37:07 crc kubenswrapper[4706]: I1125 11:37:07.931780 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:37:07 crc kubenswrapper[4706]: I1125 11:37:07.932353 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:37:07 crc kubenswrapper[4706]: I1125 11:37:07.932507 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:37:07 crc kubenswrapper[4706]: I1125 11:37:07.932853 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:37:07 crc kubenswrapper[4706]: I1125 11:37:07.932961 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:37:07Z","lastTransitionTime":"2025-11-25T11:37:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:37:07 crc kubenswrapper[4706]: E1125 11:37:07.947030 4706 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T11:37:07Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T11:37:07Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T11:37:07Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T11:37:07Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T11:37:07Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T11:37:07Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T11:37:07Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T11:37:07Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"30198dc8-e58c-4847-a541-041da1924c5c\\\",\\\"systemUUID\\\":\\\"7dac62ec-3979-4862-b1af-b63212907795\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:37:07Z is after 2025-08-24T17:21:41Z" Nov 25 11:37:07 crc kubenswrapper[4706]: I1125 11:37:07.951094 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:37:07 crc kubenswrapper[4706]: I1125 11:37:07.951132 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:37:07 crc kubenswrapper[4706]: I1125 11:37:07.951146 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:37:07 crc kubenswrapper[4706]: I1125 11:37:07.951170 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:37:07 crc kubenswrapper[4706]: I1125 11:37:07.951184 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:37:07Z","lastTransitionTime":"2025-11-25T11:37:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:37:07 crc kubenswrapper[4706]: E1125 11:37:07.963932 4706 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T11:37:07Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T11:37:07Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T11:37:07Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T11:37:07Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T11:37:07Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T11:37:07Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T11:37:07Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T11:37:07Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"30198dc8-e58c-4847-a541-041da1924c5c\\\",\\\"systemUUID\\\":\\\"7dac62ec-3979-4862-b1af-b63212907795\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:37:07Z is after 2025-08-24T17:21:41Z" Nov 25 11:37:07 crc kubenswrapper[4706]: I1125 11:37:07.968784 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:37:07 crc kubenswrapper[4706]: I1125 11:37:07.969221 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:37:07 crc kubenswrapper[4706]: I1125 11:37:07.969241 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:37:07 crc kubenswrapper[4706]: I1125 11:37:07.969269 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:37:07 crc kubenswrapper[4706]: I1125 11:37:07.969293 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:37:07Z","lastTransitionTime":"2025-11-25T11:37:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:37:07 crc kubenswrapper[4706]: E1125 11:37:07.983974 4706 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T11:37:07Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T11:37:07Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T11:37:07Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T11:37:07Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T11:37:07Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T11:37:07Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T11:37:07Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T11:37:07Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"30198dc8-e58c-4847-a541-041da1924c5c\\\",\\\"systemUUID\\\":\\\"7dac62ec-3979-4862-b1af-b63212907795\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:37:07Z is after 2025-08-24T17:21:41Z" Nov 25 11:37:07 crc kubenswrapper[4706]: I1125 11:37:07.988942 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:37:07 crc kubenswrapper[4706]: I1125 11:37:07.989113 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:37:07 crc kubenswrapper[4706]: I1125 11:37:07.989210 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:37:07 crc kubenswrapper[4706]: I1125 11:37:07.989326 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:37:07 crc kubenswrapper[4706]: I1125 11:37:07.989417 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:37:07Z","lastTransitionTime":"2025-11-25T11:37:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:37:08 crc kubenswrapper[4706]: E1125 11:37:08.001886 4706 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T11:37:07Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T11:37:07Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T11:37:07Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T11:37:07Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T11:37:07Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T11:37:07Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T11:37:07Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T11:37:07Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"30198dc8-e58c-4847-a541-041da1924c5c\\\",\\\"systemUUID\\\":\\\"7dac62ec-3979-4862-b1af-b63212907795\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:37:07Z is after 2025-08-24T17:21:41Z" Nov 25 11:37:08 crc kubenswrapper[4706]: E1125 11:37:08.002406 4706 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Nov 25 11:37:08 crc kubenswrapper[4706]: I1125 11:37:08.005662 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:37:08 crc kubenswrapper[4706]: I1125 11:37:08.005811 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:37:08 crc kubenswrapper[4706]: I1125 11:37:08.005872 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:37:08 crc kubenswrapper[4706]: I1125 11:37:08.005948 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:37:08 crc kubenswrapper[4706]: I1125 11:37:08.006012 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:37:08Z","lastTransitionTime":"2025-11-25T11:37:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:37:08 crc kubenswrapper[4706]: I1125 11:37:08.108090 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:37:08 crc kubenswrapper[4706]: I1125 11:37:08.108132 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:37:08 crc kubenswrapper[4706]: I1125 11:37:08.108141 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:37:08 crc kubenswrapper[4706]: I1125 11:37:08.108162 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:37:08 crc kubenswrapper[4706]: I1125 11:37:08.108174 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:37:08Z","lastTransitionTime":"2025-11-25T11:37:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:37:08 crc kubenswrapper[4706]: I1125 11:37:08.162246 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-qkkfz" event={"ID":"dc09de93-57e8-4697-8ce8-70bfc1b693e8","Type":"ContainerStarted","Data":"39eec3aac772cc9463505277d6b3f7cf2eb7621e4add4f14e53110e3db8c4cdc"} Nov 25 11:37:08 crc kubenswrapper[4706]: I1125 11:37:08.162315 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-qkkfz" event={"ID":"dc09de93-57e8-4697-8ce8-70bfc1b693e8","Type":"ContainerStarted","Data":"6daff2070c60f609fd06be9589e3cd8d304d131f7b9669c7be4b8e9178df8f8b"} Nov 25 11:37:08 crc kubenswrapper[4706]: I1125 11:37:08.175343 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-lpc7s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3ec2e656-a68d-4339-92d5-0c157f7f7783\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c3a1481dd8cb88b79d8addfbfd40caf18850769e4492c2af316105b7f6779f9b\\\",\\\"image\\\":\\\"
quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w54mf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T11:36:56Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lpc7s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:37:08Z is after 2025-08-24T17:21:41Z" Nov 25 11:37:08 crc kubenswrapper[4706]: I1125 11:37:08.187856 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad79bed891e80837fc120b01cb2b41a16493f2f5281c83a6bb489cc17c6da995\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-11-25T11:37:08Z is after 2025-08-24T17:21:41Z" Nov 25 11:37:08 crc kubenswrapper[4706]: I1125 11:37:08.204652 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://998291d5af3be798ff4e2f00d043f615e086fef44e541071bbaf781983955ce6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:37:08Z is after 2025-08-24T17:21:41Z" Nov 25 11:37:08 crc kubenswrapper[4706]: I1125 11:37:08.213913 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:37:08 crc kubenswrapper[4706]: I1125 11:37:08.213961 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:37:08 crc kubenswrapper[4706]: I1125 11:37:08.213971 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:37:08 crc kubenswrapper[4706]: I1125 11:37:08.213990 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:37:08 crc kubenswrapper[4706]: I1125 11:37:08.214001 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:37:08Z","lastTransitionTime":"2025-11-25T11:37:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:37:08 crc kubenswrapper[4706]: I1125 11:37:08.223849 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:51Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:37:08Z is after 2025-08-24T17:21:41Z" Nov 25 11:37:08 crc kubenswrapper[4706]: I1125 11:37:08.236532 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:51Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:37:08Z is after 2025-08-24T17:21:41Z" Nov 25 11:37:08 crc kubenswrapper[4706]: I1125 11:37:08.247800 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:51Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:37:08Z is after 2025-08-24T17:21:41Z" Nov 25 11:37:08 crc kubenswrapper[4706]: I1125 11:37:08.260049 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-s47nr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9912058e-28f5-4cec-9eeb-03e37e0dc5c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d03353478b53d9441951702b66365bb3a08ad9c509347472bbb31049851435a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wfqx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T11:36:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-s47nr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:37:08Z is after 2025-08-24T17:21:41Z" Nov 25 11:37:08 crc kubenswrapper[4706]: I1125 11:37:08.282544 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-q9rpr" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f1218bae-4153-4490-8847-ab2d07ca0ab6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:53Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:53Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da5cea02464a703174faaa2a8a7dc6ba3c26bca96be0219f7304d81aba5be54e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b55sf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e92e9ade6889e5400b3c3ddff066aa544d425cf0637b75071678b8c63f8e35f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b55sf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca28080773ed8c026159b2309297e1c8ccd7cf79c4c19e3a62d89bc5a95851fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b55sf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://86d79d5837993b0bfb40c7114fd69f45a9bfd2e956b5b0fe062706e920fecd48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:55Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b55sf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7df3bf6c507e0fd5fb0f32a8785d67c96f47255fdc5d2aafb8838260ac334d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b55sf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://96aa7fcebdc88f01d2260f95d255244e28c30d422f954da2222a5b7c17d05b96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b55sf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://408d84ea146425bb2b2ac6cfb181cd139a8465caa12eb3d4b0e2b738d1f52484\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b1486d0475f4d248f425b711ee757032370a9bdddb8d33c83ba9db41549d1dd9\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-25T11:37:03Z\\\",\\\"message\\\":\\\"ler/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140\\\\nI1125 11:37:03.877645 5942 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1125 11:37:03.877722 5942 reflector.go:311] Stopping reflector *v1.Service (0s) from 
k8s.io/client-go/informers/factory.go:160\\\\nI1125 11:37:03.877827 5942 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI1125 11:37:03.877916 5942 reflector.go:311] Stopping reflector *v1.NetworkAttachmentDefinition (0s) from github.com/k8snetworkplumbingwg/network-attachment-definition-client/pkg/client/informers/externalversions/factory.go:117\\\\nI1125 11:37:03.878321 5942 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1125 11:37:03.878375 5942 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1125 11:37:03.878382 5942 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1125 11:37:03.878437 5942 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1125 11:37:03.878443 5942 factory.go:656] Stopping watch factory\\\\nI1125 11:37:03.878453 5942 handler.go:208] Removed *v1.Node event handler 2\\\\nI1125 11:37:03.878465 5942 ovnkube.go:599] Stopped ovnkube\\\\nI11\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T11:37:00Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://408d84ea146425bb2b2ac6cfb181cd139a8465caa12eb3d4b0e2b738d1f52484\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-25T11:37:05Z\\\",\\\"message\\\":\\\"alse]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.150:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {6ea1fd71-2b40-4361-92ee-3f1ab4ec7414}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI1125 11:37:04.999467 6129 obj_retry.go:409] Going to retry *v1.Pod resource setup for 14 objects: [openshift-kube-controller-manager/kube-controller-manager-crc 
openshift-machine-config-operator/machine-config-daemon-dhfpm openshift-multus/multus-additional-cni-plugins-cjmvf openshift-network-diagnostics/network-check-source-55646444c4-trplf openshift-dns/node-resolver-nh9sc openshift-image-registry/node-ca-lpc7s openshift-multus/multus-s47nr openshift-network-diagnostics/network-check-target-xd92c openshift-ovn-kubernetes/ovnkube-node-q9rpr openshift-etcd/etcd-crc openshift-network-console/networking-console-plugin-85b44fc459-gdk6g openshift-network-node-identity/network-node-identity-vrzqb openshift-kube-apiserver/kube-apiserver-crc openshift-network-operator/iptables-alerter-4ln5h]\\\\nF1125 11:37:04.999486 6129 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller ini\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T11:37:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mou
ntPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b55sf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://62c923d955013808a55d99cb73f4239900fc83a2f53e1e8cceff3e9bc5768188\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\
\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b55sf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56474d5374e1047078d38c60dcbd00f4495bcc0d651a9e75fa70d64e34b10baa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://56474d5374e1047078d38c60dcbd00f4495bcc0d651a9e75fa70d64e34b10baa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T11:36:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T11:36:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b55sf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T11:36:53Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-q9rpr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:37:08Z is after 2025-08-24T17:21:41Z" Nov 25 11:37:08 crc kubenswrapper[4706]: I1125 11:37:08.295959 4706 status_manager.go:875] 
"Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"363ff191-6229-47e9-a7d0-1c72f21e7c61\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://71b496da1a81efbb50a84766e610a6b03e032a4e2cb5a71191395ffb85f6b1f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://83b1d9c60793e3e0b5943d7cccd50656df78c4655b84e12c8dd1ba7d99a7990d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha25
6:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab8621c83015577b9039ac2ba9ce46f8b29f66d77da31a02d179132d923741bd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4d0ce4e175dd8da8d15b26e60ced87ee11dc8079ce730cfbdce1b3f4f08b1d2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-
recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T11:36:32Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:37:08Z is after 2025-08-24T17:21:41Z" Nov 25 11:37:08 crc kubenswrapper[4706]: I1125 11:37:08.314714 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"21277b4b-1e5d-4345-ba2a-39957194f021\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e336808761e1c6c5eaa04fd06cbb4d0c0384a2cbd3dfd4c1b3a877e7e0f0c82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cfaf9f13d49eb5c52817b0d082263791cc1dca82a23282452f1393dd693ca27a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://634b7b0df29329562f6ead9641186eee129945efc5a2d784ff6474d213b2baea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b3642576d5ecf314b809b90f8a76244e5ea54178f78729eb6521b09b7daa9c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b63b9c87fed8e56acef62af3c5b75cf637a058ada9dd8ef5afc317e99e12162\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://adfea6c3d1d62c9ce2656cae203bccf32ab19165305db3731e3db92915dd7d4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://adfea6c3d1d62c9ce2656cae203bccf32ab19165305db3731e3db92915dd7d4c\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2025-11-25T11:36:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T11:36:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://29e8fd847ab683471a692fda5b7c7d6105db11c5aecf09bb23080c58cd97c06a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://29e8fd847ab683471a692fda5b7c7d6105db11c5aecf09bb23080c58cd97c06a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T11:36:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T11:36:34Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://a4b1f85fd0291c239854093c0b26f0802291cbe4c4bab384fc69a5d21165343a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a4b1f85fd0291c239854093c0b26f0802291cbe4c4bab384fc69a5d21165343a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T11:36:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T11:36:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T11:36:32Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:37:08Z is after 2025-08-24T17:21:41Z" Nov 25 11:37:08 crc kubenswrapper[4706]: I1125 11:37:08.316380 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:37:08 crc kubenswrapper[4706]: I1125 11:37:08.316415 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:37:08 crc kubenswrapper[4706]: I1125 11:37:08.316428 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:37:08 crc kubenswrapper[4706]: I1125 11:37:08.316446 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:37:08 crc kubenswrapper[4706]: I1125 11:37:08.316458 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:37:08Z","lastTransitionTime":"2025-11-25T11:37:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:37:08 crc kubenswrapper[4706]: I1125 11:37:08.327608 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://23abd4bcc68d2a090882edb55d0e8569032affe5f4ebf05279e18ba3e9f9d8db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a068e34d29a7f39157ffd6e364ce643f5280f5184c13a281043247117d451364\\\",\\\
"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:37:08Z is after 2025-08-24T17:21:41Z" Nov 25 11:37:08 crc kubenswrapper[4706]: I1125 11:37:08.343259 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-cjmvf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"150b96fa-570a-4b32-a82a-3275127d5b51\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:37:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:37:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:37:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de18c07bf8490d7495947e9a271e3e7273b9ffdcc43afd2a0468394af0ae0b0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:37:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2ml6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f9f9981b5f064aa5b007f4b2a2ecdc7f783e1a33e73b9e8b157eccfc54e93ff6\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f9f9981b5f064aa5b007f4b2a2ecdc7f783e1a33e73b9e8b157eccfc54e93ff6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T11:36:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T11:36:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2ml6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e1e9db3e634932b935a1eb04923d02faf743f2831039edeba41d172ea6d8c52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e1e9db3e634932b935a1eb04923d02faf743f2831039edeba41d172ea6d8c52\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T11:36:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T11:36:57Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2ml6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0cee50b6983d9c650efbb5959311b6c33c2e0e2ff504fceadc8ff807f368c36e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0cee50b6983d9c650efbb5959311b6c33c2e0e2ff504fceadc8ff807f368c36e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T11:36:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T11:36:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2ml6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://29281
b46d740a7e527313a667c3896430eb51ba2c50c5e406fb94d8959dbe855\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://29281b46d740a7e527313a667c3896430eb51ba2c50c5e406fb94d8959dbe855\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T11:36:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T11:36:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2ml6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0ff2d1408b3b635ada726fc15a15472d3fd7c61e21ffe0379d137fdd543c436\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b0ff2d1408b3b635ada726fc15a15472d3fd7c61e21ffe0379d137fdd543c436\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T11:37:01Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2025-11-25T11:37:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2ml6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c3b94746fe10e0f9375491a41d10973d2576eb69f0883cef3ef0132efb0e8fc9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c3b94746fe10e0f9375491a41d10973d2576eb69f0883cef3ef0132efb0e8fc9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T11:37:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T11:37:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2ml6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T11:36:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-cjmvf\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:37:08Z is after 2025-08-24T17:21:41Z" Nov 25 11:37:08 crc kubenswrapper[4706]: I1125 11:37:08.354266 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-l99rd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"14d69237-a4b7-43ea-ac81-f165eb532669\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:37:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:37:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:37:07Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:37:07Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mmr9l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mmr9l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T11:37:07Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-l99rd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:37:08Z is after 2025-08-24T17:21:41Z" Nov 25 11:37:08 crc 
kubenswrapper[4706]: I1125 11:37:08.367494 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-dhfpm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0930887a-320c-4506-8c9c-f94d6d64516a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://736e37ff944f81ac9808ff8a76d36837aeabc76a4c08bbeba3f707616e1f0884\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/se
rviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g7sgt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://86f4bfd310c27ea3b77c2f58c91e153db5f1794871a3fbeb5711cc119aa81e38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g7sgt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T11:36:53Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-dhfpm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:37:08Z is after 2025-08-24T17:21:41Z" Nov 25 11:37:08 crc kubenswrapper[4706]: I1125 11:37:08.378604 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-nh9sc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7813e79d-885d-4cf1-ac27-039e998473b7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea634334242536d35bf36e9078539cad4658b161b61e6051d9bb6d8544e71f5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g9gvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T11:36:53Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-nh9sc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:37:08Z is after 2025-08-24T17:21:41Z" Nov 25 11:37:08 crc kubenswrapper[4706]: I1125 11:37:08.393449 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-qkkfz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dc09de93-57e8-4697-8ce8-70bfc1b693e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:37:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:37:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:37:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:37:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6daff2070c60f609fd06be9589e3cd8d304d131f7b9669c7be4b8e9178df8f8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d7
93426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:37:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hmrl8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://39eec3aac772cc9463505277d6b3f7cf2eb7621e4add4f14e53110e3db8c4cdc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:37:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hmrl8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T11:37:06Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-qkkfz\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:37:08Z is after 2025-08-24T17:21:41Z" Nov 25 11:37:08 crc kubenswrapper[4706]: I1125 11:37:08.394790 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/14d69237-a4b7-43ea-ac81-f165eb532669-metrics-certs\") pod \"network-metrics-daemon-l99rd\" (UID: \"14d69237-a4b7-43ea-ac81-f165eb532669\") " pod="openshift-multus/network-metrics-daemon-l99rd" Nov 25 11:37:08 crc kubenswrapper[4706]: E1125 11:37:08.394940 4706 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Nov 25 11:37:08 crc kubenswrapper[4706]: E1125 11:37:08.395014 4706 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/14d69237-a4b7-43ea-ac81-f165eb532669-metrics-certs podName:14d69237-a4b7-43ea-ac81-f165eb532669 nodeName:}" failed. No retries permitted until 2025-11-25 11:37:09.394996802 +0000 UTC m=+38.309554183 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/14d69237-a4b7-43ea-ac81-f165eb532669-metrics-certs") pod "network-metrics-daemon-l99rd" (UID: "14d69237-a4b7-43ea-ac81-f165eb532669") : object "openshift-multus"/"metrics-daemon-secret" not registered Nov 25 11:37:08 crc kubenswrapper[4706]: I1125 11:37:08.407748 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ce0e2e75-834b-46fb-bc84-229e60f904b1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:32Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:32Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://86001c3abc077d36ed1fa0c37bb6163896fb9cde28b58affd2f67fb8a024165b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://24c326f147def477e6dd794576cbdc9aed69f799cc18984f475496748b05eb32\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://c65af8b438f57256d8c22cb34f68922d628338e384ca97d694b0dbf2d41a5e27\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://db08dd21321e0e49c2bcec934b9c4ca65e93ed3eff5d3d110b0137d37ebe255e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://333951d9a31cf3e7c1e98d27f636e2425f87cd082a8a5acae66533a76f5ad206\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-25T11:36:51Z\\\",\\\"message\\\":\\\" shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI1125 11:36:51.292762 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI1125 11:36:51.292767 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI1125 11:36:51.292853 1 
envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI1125 11:36:51.292876 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI1125 11:36:51.293041 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-1230105117/tls.crt::/tmp/serving-cert-1230105117/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1764070595\\\\\\\\\\\\\\\" (2025-11-25 11:36:34 +0000 UTC to 2025-12-25 11:36:35 +0000 UTC (now=2025-11-25 11:36:51.29301304 +0000 UTC))\\\\\\\"\\\\nI1125 11:36:51.293171 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1230105117/tls.crt::/tmp/serving-cert-1230105117/tls.key\\\\\\\"\\\\nI1125 11:36:51.293210 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1764070605\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1764070605\\\\\\\\\\\\\\\" (2025-11-25 10:36:45 +0000 UTC to 2026-11-25 10:36:45 +0000 UTC (now=2025-11-25 11:36:51.293188774 +0000 UTC))\\\\\\\"\\\\nI1125 11:36:51.293233 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI1125 11:36:51.293259 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI1125 11:36:51.293279 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nF1125 11:36:51.293378 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T11:36:34Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe85a38abd8df52ad0fbd3dd6b048b8c42390b6064d3601996727dadb3fcbe69\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:34Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea87e7399f4267877f5e967eb62bd500b646e88ee8ee20a71c3bf7c0941f02a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ea87e7399f4267877f5e967eb62bd500b646e88ee8ee20a71c3bf7c0941f02a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T11:36:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2025-11-25T11:36:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T11:36:32Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:37:08Z is after 2025-08-24T17:21:41Z" Nov 25 11:37:08 crc kubenswrapper[4706]: I1125 11:37:08.418665 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:37:08 crc kubenswrapper[4706]: I1125 11:37:08.418702 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:37:08 crc kubenswrapper[4706]: I1125 11:37:08.418735 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:37:08 crc kubenswrapper[4706]: I1125 11:37:08.418756 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:37:08 crc kubenswrapper[4706]: I1125 11:37:08.418767 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:37:08Z","lastTransitionTime":"2025-11-25T11:37:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:37:08 crc kubenswrapper[4706]: I1125 11:37:08.521842 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:37:08 crc kubenswrapper[4706]: I1125 11:37:08.521875 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:37:08 crc kubenswrapper[4706]: I1125 11:37:08.521884 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:37:08 crc kubenswrapper[4706]: I1125 11:37:08.521902 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:37:08 crc kubenswrapper[4706]: I1125 11:37:08.521912 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:37:08Z","lastTransitionTime":"2025-11-25T11:37:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:37:08 crc kubenswrapper[4706]: I1125 11:37:08.624418 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:37:08 crc kubenswrapper[4706]: I1125 11:37:08.624730 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:37:08 crc kubenswrapper[4706]: I1125 11:37:08.624842 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:37:08 crc kubenswrapper[4706]: I1125 11:37:08.624959 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:37:08 crc kubenswrapper[4706]: I1125 11:37:08.625061 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:37:08Z","lastTransitionTime":"2025-11-25T11:37:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:37:08 crc kubenswrapper[4706]: I1125 11:37:08.727948 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:37:08 crc kubenswrapper[4706]: I1125 11:37:08.728234 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:37:08 crc kubenswrapper[4706]: I1125 11:37:08.728383 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:37:08 crc kubenswrapper[4706]: I1125 11:37:08.728500 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:37:08 crc kubenswrapper[4706]: I1125 11:37:08.728609 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:37:08Z","lastTransitionTime":"2025-11-25T11:37:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:37:08 crc kubenswrapper[4706]: I1125 11:37:08.832232 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:37:08 crc kubenswrapper[4706]: I1125 11:37:08.832336 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:37:08 crc kubenswrapper[4706]: I1125 11:37:08.832351 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:37:08 crc kubenswrapper[4706]: I1125 11:37:08.832371 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:37:08 crc kubenswrapper[4706]: I1125 11:37:08.832385 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:37:08Z","lastTransitionTime":"2025-11-25T11:37:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:37:08 crc kubenswrapper[4706]: I1125 11:37:08.935132 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:37:08 crc kubenswrapper[4706]: I1125 11:37:08.935219 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:37:08 crc kubenswrapper[4706]: I1125 11:37:08.935238 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:37:08 crc kubenswrapper[4706]: I1125 11:37:08.935259 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:37:08 crc kubenswrapper[4706]: I1125 11:37:08.935272 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:37:08Z","lastTransitionTime":"2025-11-25T11:37:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:37:09 crc kubenswrapper[4706]: I1125 11:37:09.038115 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:37:09 crc kubenswrapper[4706]: I1125 11:37:09.038161 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:37:09 crc kubenswrapper[4706]: I1125 11:37:09.038171 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:37:09 crc kubenswrapper[4706]: I1125 11:37:09.038194 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:37:09 crc kubenswrapper[4706]: I1125 11:37:09.038206 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:37:09Z","lastTransitionTime":"2025-11-25T11:37:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:37:09 crc kubenswrapper[4706]: I1125 11:37:09.140928 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:37:09 crc kubenswrapper[4706]: I1125 11:37:09.140981 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:37:09 crc kubenswrapper[4706]: I1125 11:37:09.141015 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:37:09 crc kubenswrapper[4706]: I1125 11:37:09.141036 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:37:09 crc kubenswrapper[4706]: I1125 11:37:09.141051 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:37:09Z","lastTransitionTime":"2025-11-25T11:37:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:37:09 crc kubenswrapper[4706]: I1125 11:37:09.243387 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:37:09 crc kubenswrapper[4706]: I1125 11:37:09.243437 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:37:09 crc kubenswrapper[4706]: I1125 11:37:09.243446 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:37:09 crc kubenswrapper[4706]: I1125 11:37:09.243464 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:37:09 crc kubenswrapper[4706]: I1125 11:37:09.243473 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:37:09Z","lastTransitionTime":"2025-11-25T11:37:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:37:09 crc kubenswrapper[4706]: I1125 11:37:09.346019 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:37:09 crc kubenswrapper[4706]: I1125 11:37:09.346082 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:37:09 crc kubenswrapper[4706]: I1125 11:37:09.346093 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:37:09 crc kubenswrapper[4706]: I1125 11:37:09.346110 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:37:09 crc kubenswrapper[4706]: I1125 11:37:09.346121 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:37:09Z","lastTransitionTime":"2025-11-25T11:37:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:37:09 crc kubenswrapper[4706]: I1125 11:37:09.404839 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/14d69237-a4b7-43ea-ac81-f165eb532669-metrics-certs\") pod \"network-metrics-daemon-l99rd\" (UID: \"14d69237-a4b7-43ea-ac81-f165eb532669\") " pod="openshift-multus/network-metrics-daemon-l99rd" Nov 25 11:37:09 crc kubenswrapper[4706]: E1125 11:37:09.405043 4706 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Nov 25 11:37:09 crc kubenswrapper[4706]: E1125 11:37:09.405122 4706 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/14d69237-a4b7-43ea-ac81-f165eb532669-metrics-certs podName:14d69237-a4b7-43ea-ac81-f165eb532669 nodeName:}" failed. No retries permitted until 2025-11-25 11:37:11.405101732 +0000 UTC m=+40.319659113 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/14d69237-a4b7-43ea-ac81-f165eb532669-metrics-certs") pod "network-metrics-daemon-l99rd" (UID: "14d69237-a4b7-43ea-ac81-f165eb532669") : object "openshift-multus"/"metrics-daemon-secret" not registered Nov 25 11:37:09 crc kubenswrapper[4706]: I1125 11:37:09.451692 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:37:09 crc kubenswrapper[4706]: I1125 11:37:09.451748 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:37:09 crc kubenswrapper[4706]: I1125 11:37:09.451759 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:37:09 crc kubenswrapper[4706]: I1125 11:37:09.451778 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:37:09 crc kubenswrapper[4706]: I1125 11:37:09.451807 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:37:09Z","lastTransitionTime":"2025-11-25T11:37:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:37:09 crc kubenswrapper[4706]: I1125 11:37:09.554426 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:37:09 crc kubenswrapper[4706]: I1125 11:37:09.554486 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:37:09 crc kubenswrapper[4706]: I1125 11:37:09.554497 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:37:09 crc kubenswrapper[4706]: I1125 11:37:09.554520 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:37:09 crc kubenswrapper[4706]: I1125 11:37:09.554535 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:37:09Z","lastTransitionTime":"2025-11-25T11:37:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:37:09 crc kubenswrapper[4706]: I1125 11:37:09.657762 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:37:09 crc kubenswrapper[4706]: I1125 11:37:09.657841 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:37:09 crc kubenswrapper[4706]: I1125 11:37:09.657855 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:37:09 crc kubenswrapper[4706]: I1125 11:37:09.657877 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:37:09 crc kubenswrapper[4706]: I1125 11:37:09.657892 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:37:09Z","lastTransitionTime":"2025-11-25T11:37:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:37:09 crc kubenswrapper[4706]: I1125 11:37:09.761062 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:37:09 crc kubenswrapper[4706]: I1125 11:37:09.761111 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:37:09 crc kubenswrapper[4706]: I1125 11:37:09.761123 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:37:09 crc kubenswrapper[4706]: I1125 11:37:09.761592 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:37:09 crc kubenswrapper[4706]: I1125 11:37:09.761612 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:37:09Z","lastTransitionTime":"2025-11-25T11:37:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:37:09 crc kubenswrapper[4706]: I1125 11:37:09.865032 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:37:09 crc kubenswrapper[4706]: I1125 11:37:09.865075 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:37:09 crc kubenswrapper[4706]: I1125 11:37:09.865084 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:37:09 crc kubenswrapper[4706]: I1125 11:37:09.865102 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:37:09 crc kubenswrapper[4706]: I1125 11:37:09.865113 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:37:09Z","lastTransitionTime":"2025-11-25T11:37:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 11:37:09 crc kubenswrapper[4706]: I1125 11:37:09.910080 4706 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 25 11:37:09 crc kubenswrapper[4706]: I1125 11:37:09.921576 4706 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 11:37:09 crc kubenswrapper[4706]: I1125 11:37:09.921624 4706 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 11:37:09 crc kubenswrapper[4706]: I1125 11:37:09.921600 4706 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 11:37:09 crc kubenswrapper[4706]: E1125 11:37:09.921733 4706 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 25 11:37:09 crc kubenswrapper[4706]: E1125 11:37:09.921804 4706 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 25 11:37:09 crc kubenswrapper[4706]: I1125 11:37:09.921848 4706 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-l99rd" Nov 25 11:37:09 crc kubenswrapper[4706]: E1125 11:37:09.921908 4706 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 25 11:37:09 crc kubenswrapper[4706]: E1125 11:37:09.922121 4706 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-l99rd" podUID="14d69237-a4b7-43ea-ac81-f165eb532669" Nov 25 11:37:09 crc kubenswrapper[4706]: I1125 11:37:09.931038 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"21277b4b-1e5d-4345-ba2a-39957194f021\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e336808761e1c6c5eaa04fd06cbb4d0c0384a2cbd3dfd4c1b3a877e7e0f0c82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49
117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cfaf9f13d49eb5c52817b0d082263791cc1dca82a23282452f1393dd693ca27a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://634b7b0df29329562f6ead9641186eee129945efc5a2d784ff6474d213b2baea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\
\\"startedAt\\\":\\\"2025-11-25T11:36:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b3642576d5ecf314b809b90f8a76244e5ea54178f78729eb6521b09b7daa9c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b63b9c87fed8e56acef62af3c5b75cf637a058ada9dd8ef5afc317e99e12162\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\
\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://adfea6c3d1d62c9ce2656cae203bccf32ab19165305db3731e3db92915dd7d4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://adfea6c3d1d62c9ce2656cae203bccf32ab19165305db3731e3db92915dd7d4c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T11:36:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T11:36:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://29e8fd847ab683471a692fda5b7c7d6105db11c5aecf09bb23080c58cd97c06a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://29e8fd847ab683471a692fda5b7c7d6105db11c5aecf09bb23080c58cd97c06a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T11:36:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T11:36:34Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://a4b1f85fd0291c239854093c0b26f0802291cbe4c4bab384fc69a5d21165343a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\
\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a4b1f85fd0291c239854093c0b26f0802291cbe4c4bab384fc69a5d21165343a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T11:36:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T11:36:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T11:36:32Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:37:09Z is after 2025-08-24T17:21:41Z" Nov 25 11:37:09 crc kubenswrapper[4706]: I1125 11:37:09.945942 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://23abd4bcc68d2a090882edb55d0e8569032affe5f4ebf05279e18ba3e9f9d8db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a068e34d29a7f39157ffd6e364ce643f5280f5184c13a281043247117d451364\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:37:09Z is after 2025-08-24T17:21:41Z" Nov 25 11:37:09 crc kubenswrapper[4706]: I1125 11:37:09.962027 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-cjmvf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"150b96fa-570a-4b32-a82a-3275127d5b51\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:37:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:37:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:37:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de18c07bf8490d7495947e9a271e3e7273b9ffdcc43afd2a0468394af0ae0b0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:37:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2ml6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f9f9981b5f064aa5b007f4b2a2ecdc7f783e1a33e73b9e8b157eccfc54e93ff6\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f9f9981b5f064aa5b007f4b2a2ecdc7f783e1a33e73b9e8b157eccfc54e93ff6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T11:36:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T11:36:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2ml6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e1e9db3e634932b935a1eb04923d02faf743f2831039edeba41d172ea6d8c52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e1e9db3e634932b935a1eb04923d02faf743f2831039edeba41d172ea6d8c52\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T11:36:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T11:36:57Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2ml6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0cee50b6983d9c650efbb5959311b6c33c2e0e2ff504fceadc8ff807f368c36e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0cee50b6983d9c650efbb5959311b6c33c2e0e2ff504fceadc8ff807f368c36e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T11:36:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T11:36:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2ml6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://29281
b46d740a7e527313a667c3896430eb51ba2c50c5e406fb94d8959dbe855\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://29281b46d740a7e527313a667c3896430eb51ba2c50c5e406fb94d8959dbe855\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T11:36:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T11:36:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2ml6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0ff2d1408b3b635ada726fc15a15472d3fd7c61e21ffe0379d137fdd543c436\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b0ff2d1408b3b635ada726fc15a15472d3fd7c61e21ffe0379d137fdd543c436\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T11:37:01Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2025-11-25T11:37:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2ml6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c3b94746fe10e0f9375491a41d10973d2576eb69f0883cef3ef0132efb0e8fc9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c3b94746fe10e0f9375491a41d10973d2576eb69f0883cef3ef0132efb0e8fc9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T11:37:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T11:37:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2ml6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T11:36:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-cjmvf\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:37:09Z is after 2025-08-24T17:21:41Z" Nov 25 11:37:09 crc kubenswrapper[4706]: I1125 11:37:09.967669 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:37:09 crc kubenswrapper[4706]: I1125 11:37:09.967742 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:37:09 crc kubenswrapper[4706]: I1125 11:37:09.967761 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:37:09 crc kubenswrapper[4706]: I1125 11:37:09.967788 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:37:09 crc kubenswrapper[4706]: I1125 11:37:09.967803 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:37:09Z","lastTransitionTime":"2025-11-25T11:37:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:37:09 crc kubenswrapper[4706]: I1125 11:37:09.974526 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-l99rd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"14d69237-a4b7-43ea-ac81-f165eb532669\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:37:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:37:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:37:07Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:37:07Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mmr9l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mmr9l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T11:37:07Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-l99rd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:37:09Z is after 2025-08-24T17:21:41Z" Nov 25 11:37:09 crc 
kubenswrapper[4706]: I1125 11:37:09.988484 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ce0e2e75-834b-46fb-bc84-229e60f904b1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:37:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:37:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://86001c3abc077d36ed1fa0c37bb6163896fb9cde28b58affd2f67fb8a024165b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://24c326f147def4
77e6dd794576cbdc9aed69f799cc18984f475496748b05eb32\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c65af8b438f57256d8c22cb34f68922d628338e384ca97d694b0dbf2d41a5e27\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://db08dd21321e0e49c2bcec934b9c4ca65e93ed3eff5d3d110b0137d37ebe255e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"te
rminated\\\":{\\\"containerID\\\":\\\"cri-o://333951d9a31cf3e7c1e98d27f636e2425f87cd082a8a5acae66533a76f5ad206\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-25T11:36:51Z\\\",\\\"message\\\":\\\" shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI1125 11:36:51.292762 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI1125 11:36:51.292767 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI1125 11:36:51.292853 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI1125 11:36:51.292876 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI1125 11:36:51.293041 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-1230105117/tls.crt::/tmp/serving-cert-1230105117/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1764070595\\\\\\\\\\\\\\\" (2025-11-25 11:36:34 +0000 UTC to 2025-12-25 11:36:35 +0000 UTC (now=2025-11-25 11:36:51.29301304 +0000 UTC))\\\\\\\"\\\\nI1125 11:36:51.293171 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1230105117/tls.crt::/tmp/serving-cert-1230105117/tls.key\\\\\\\"\\\\nI1125 11:36:51.293210 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1764070605\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] 
issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1764070605\\\\\\\\\\\\\\\" (2025-11-25 10:36:45 +0000 UTC to 2026-11-25 10:36:45 +0000 UTC (now=2025-11-25 11:36:51.293188774 +0000 UTC))\\\\\\\"\\\\nI1125 11:36:51.293233 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI1125 11:36:51.293259 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI1125 11:36:51.293279 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nF1125 11:36:51.293378 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T11:36:34Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe85a38abd8df52ad0fbd3dd6b048b8c42390b6064d3601996727dadb3fcbe69\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:34Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea87e7399f4267877f5e967eb62bd500b646e88ee8ee20a71c3bf7c0941f02a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ea87e7399f4267877f5e967eb62bd500b646e88ee8ee20a71c3bf7c0941f02a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T11:36:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T11:36:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T11:36:32Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:37:09Z is after 2025-08-24T17:21:41Z" Nov 25 11:37:10 crc kubenswrapper[4706]: I1125 11:37:10.001809 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-dhfpm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0930887a-320c-4506-8c9c-f94d6d64516a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://736e37ff944f81ac9808ff8a76d36837aeabc76a4c08bbeba3f707616e1f0884\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g7sgt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://86f4bfd310c27ea3b77c2f58c91e153db5f17948
71a3fbeb5711cc119aa81e38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g7sgt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T11:36:53Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-dhfpm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:37:09Z is after 2025-08-24T17:21:41Z" Nov 25 11:37:10 crc kubenswrapper[4706]: I1125 11:37:10.014792 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-nh9sc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7813e79d-885d-4cf1-ac27-039e998473b7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea634334242536d35bf36e9078539cad4658b161b61e6051d9bb6d8544e71f5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g9gvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T11:36:53Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-nh9sc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:37:10Z is after 2025-08-24T17:21:41Z" Nov 25 11:37:10 crc kubenswrapper[4706]: I1125 11:37:10.030574 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-qkkfz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dc09de93-57e8-4697-8ce8-70bfc1b693e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:37:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:37:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:37:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:37:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6daff2070c60f609fd06be9589e3cd8d304d131f7b9669c7be4b8e9178df8f8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d7
93426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:37:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hmrl8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://39eec3aac772cc9463505277d6b3f7cf2eb7621e4add4f14e53110e3db8c4cdc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:37:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hmrl8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T11:37:06Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-qkkfz\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:37:10Z is after 2025-08-24T17:21:41Z" Nov 25 11:37:10 crc kubenswrapper[4706]: I1125 11:37:10.042720 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad79bed891e80837fc120b01cb2b41a16493f2f5281c83a6bb489cc17c6da995\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:37:10Z is after 2025-08-24T17:21:41Z" Nov 25 11:37:10 crc kubenswrapper[4706]: I1125 11:37:10.053075 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-lpc7s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3ec2e656-a68d-4339-92d5-0c157f7f7783\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c3a1481dd8cb88b79d8addfbfd40caf18850769e4492c2af316105b7f6779f9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/op
enshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w54mf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T11:36:56Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lpc7s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:37:10Z is after 2025-08-24T17:21:41Z" Nov 25 11:37:10 crc kubenswrapper[4706]: I1125 11:37:10.066180 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"363ff191-6229-47e9-a7d0-1c72f21e7c61\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://71b496da1a81efbb50a84766e610a6b03e032a4e2cb5a71191395ffb85f6b1f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://83b1d9c60793e3e0b5943d7cccd50656df78c4655b84e12c8dd1ba7d99a7990d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab8621c83015577b9039ac2ba9ce46f8b29f66d77da31a02d179132d923741bd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4d0ce4e175dd8da8d15b26e60ced87ee11dc8079ce730cfbdce1b3f4f08b1d2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-11-25T11:36:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T11:36:32Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:37:10Z is after 2025-08-24T17:21:41Z" Nov 25 11:37:10 crc kubenswrapper[4706]: I1125 11:37:10.069865 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:37:10 crc kubenswrapper[4706]: I1125 11:37:10.069907 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:37:10 crc kubenswrapper[4706]: I1125 11:37:10.069915 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:37:10 crc kubenswrapper[4706]: I1125 11:37:10.069933 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:37:10 crc kubenswrapper[4706]: I1125 11:37:10.069943 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:37:10Z","lastTransitionTime":"2025-11-25T11:37:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns 
error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 11:37:10 crc kubenswrapper[4706]: I1125 11:37:10.079873 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://998291d5af3be798ff4e2f00d043f615e086fef44e541071bbaf781983955ce6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursive
ReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:37:10Z is after 2025-08-24T17:21:41Z" Nov 25 11:37:10 crc kubenswrapper[4706]: I1125 11:37:10.093720 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:51Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:37:10Z is after 2025-08-24T17:21:41Z" Nov 25 11:37:10 crc kubenswrapper[4706]: I1125 11:37:10.107731 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:51Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:37:10Z is after 2025-08-24T17:21:41Z" Nov 25 11:37:10 crc kubenswrapper[4706]: I1125 11:37:10.122419 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:51Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:37:10Z is after 2025-08-24T17:21:41Z" Nov 25 11:37:10 crc kubenswrapper[4706]: I1125 11:37:10.138046 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-s47nr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9912058e-28f5-4cec-9eeb-03e37e0dc5c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d03353478b53d9441951702b66365bb3a08ad9c509347472bbb31049851435a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wfqx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T11:36:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-s47nr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:37:10Z is after 2025-08-24T17:21:41Z" Nov 25 11:37:10 crc kubenswrapper[4706]: I1125 11:37:10.160449 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-q9rpr" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f1218bae-4153-4490-8847-ab2d07ca0ab6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:53Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:53Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da5cea02464a703174faaa2a8a7dc6ba3c26bca96be0219f7304d81aba5be54e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b55sf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e92e9ade6889e5400b3c3ddff066aa544d425cf0637b75071678b8c63f8e35f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b55sf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca28080773ed8c026159b2309297e1c8ccd7cf79c4c19e3a62d89bc5a95851fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b55sf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://86d79d5837993b0bfb40c7114fd69f45a9bfd2e956b5b0fe062706e920fecd48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:55Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b55sf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7df3bf6c507e0fd5fb0f32a8785d67c96f47255fdc5d2aafb8838260ac334d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b55sf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://96aa7fcebdc88f01d2260f95d255244e28c30d422f954da2222a5b7c17d05b96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b55sf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://408d84ea146425bb2b2ac6cfb181cd139a8465caa12eb3d4b0e2b738d1f52484\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b1486d0475f4d248f425b711ee757032370a9bdddb8d33c83ba9db41549d1dd9\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-25T11:37:03Z\\\",\\\"message\\\":\\\"ler/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140\\\\nI1125 11:37:03.877645 5942 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1125 11:37:03.877722 5942 reflector.go:311] Stopping reflector *v1.Service (0s) from 
k8s.io/client-go/informers/factory.go:160\\\\nI1125 11:37:03.877827 5942 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI1125 11:37:03.877916 5942 reflector.go:311] Stopping reflector *v1.NetworkAttachmentDefinition (0s) from github.com/k8snetworkplumbingwg/network-attachment-definition-client/pkg/client/informers/externalversions/factory.go:117\\\\nI1125 11:37:03.878321 5942 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1125 11:37:03.878375 5942 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1125 11:37:03.878382 5942 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1125 11:37:03.878437 5942 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1125 11:37:03.878443 5942 factory.go:656] Stopping watch factory\\\\nI1125 11:37:03.878453 5942 handler.go:208] Removed *v1.Node event handler 2\\\\nI1125 11:37:03.878465 5942 ovnkube.go:599] Stopped ovnkube\\\\nI11\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T11:37:00Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://408d84ea146425bb2b2ac6cfb181cd139a8465caa12eb3d4b0e2b738d1f52484\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-25T11:37:05Z\\\",\\\"message\\\":\\\"alse]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.150:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {6ea1fd71-2b40-4361-92ee-3f1ab4ec7414}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI1125 11:37:04.999467 6129 obj_retry.go:409] Going to retry *v1.Pod resource setup for 14 objects: [openshift-kube-controller-manager/kube-controller-manager-crc 
openshift-machine-config-operator/machine-config-daemon-dhfpm openshift-multus/multus-additional-cni-plugins-cjmvf openshift-network-diagnostics/network-check-source-55646444c4-trplf openshift-dns/node-resolver-nh9sc openshift-image-registry/node-ca-lpc7s openshift-multus/multus-s47nr openshift-network-diagnostics/network-check-target-xd92c openshift-ovn-kubernetes/ovnkube-node-q9rpr openshift-etcd/etcd-crc openshift-network-console/networking-console-plugin-85b44fc459-gdk6g openshift-network-node-identity/network-node-identity-vrzqb openshift-kube-apiserver/kube-apiserver-crc openshift-network-operator/iptables-alerter-4ln5h]\\\\nF1125 11:37:04.999486 6129 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller ini\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T11:37:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mou
ntPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b55sf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://62c923d955013808a55d99cb73f4239900fc83a2f53e1e8cceff3e9bc5768188\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\
\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b55sf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56474d5374e1047078d38c60dcbd00f4495bcc0d651a9e75fa70d64e34b10baa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://56474d5374e1047078d38c60dcbd00f4495bcc0d651a9e75fa70d64e34b10baa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T11:36:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T11:36:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b55sf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T11:36:53Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-q9rpr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:37:10Z is after 2025-08-24T17:21:41Z" Nov 25 11:37:10 crc kubenswrapper[4706]: I1125 11:37:10.172408 4706 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:37:10 crc kubenswrapper[4706]: I1125 11:37:10.172460 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:37:10 crc kubenswrapper[4706]: I1125 11:37:10.172471 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:37:10 crc kubenswrapper[4706]: I1125 11:37:10.172490 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:37:10 crc kubenswrapper[4706]: I1125 11:37:10.172506 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:37:10Z","lastTransitionTime":"2025-11-25T11:37:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:37:10 crc kubenswrapper[4706]: I1125 11:37:10.275337 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:37:10 crc kubenswrapper[4706]: I1125 11:37:10.275396 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:37:10 crc kubenswrapper[4706]: I1125 11:37:10.275407 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:37:10 crc kubenswrapper[4706]: I1125 11:37:10.275429 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:37:10 crc kubenswrapper[4706]: I1125 11:37:10.275443 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:37:10Z","lastTransitionTime":"2025-11-25T11:37:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:37:10 crc kubenswrapper[4706]: I1125 11:37:10.378041 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:37:10 crc kubenswrapper[4706]: I1125 11:37:10.378105 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:37:10 crc kubenswrapper[4706]: I1125 11:37:10.378118 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:37:10 crc kubenswrapper[4706]: I1125 11:37:10.378136 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:37:10 crc kubenswrapper[4706]: I1125 11:37:10.378146 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:37:10Z","lastTransitionTime":"2025-11-25T11:37:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:37:10 crc kubenswrapper[4706]: I1125 11:37:10.480625 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:37:10 crc kubenswrapper[4706]: I1125 11:37:10.480675 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:37:10 crc kubenswrapper[4706]: I1125 11:37:10.480684 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:37:10 crc kubenswrapper[4706]: I1125 11:37:10.480704 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:37:10 crc kubenswrapper[4706]: I1125 11:37:10.480716 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:37:10Z","lastTransitionTime":"2025-11-25T11:37:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:37:10 crc kubenswrapper[4706]: I1125 11:37:10.583428 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:37:10 crc kubenswrapper[4706]: I1125 11:37:10.583488 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:37:10 crc kubenswrapper[4706]: I1125 11:37:10.583500 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:37:10 crc kubenswrapper[4706]: I1125 11:37:10.583523 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:37:10 crc kubenswrapper[4706]: I1125 11:37:10.583537 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:37:10Z","lastTransitionTime":"2025-11-25T11:37:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:37:10 crc kubenswrapper[4706]: I1125 11:37:10.685825 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:37:10 crc kubenswrapper[4706]: I1125 11:37:10.685871 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:37:10 crc kubenswrapper[4706]: I1125 11:37:10.685889 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:37:10 crc kubenswrapper[4706]: I1125 11:37:10.685910 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:37:10 crc kubenswrapper[4706]: I1125 11:37:10.685967 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:37:10Z","lastTransitionTime":"2025-11-25T11:37:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:37:10 crc kubenswrapper[4706]: I1125 11:37:10.789091 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:37:10 crc kubenswrapper[4706]: I1125 11:37:10.789145 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:37:10 crc kubenswrapper[4706]: I1125 11:37:10.789155 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:37:10 crc kubenswrapper[4706]: I1125 11:37:10.789171 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:37:10 crc kubenswrapper[4706]: I1125 11:37:10.789184 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:37:10Z","lastTransitionTime":"2025-11-25T11:37:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:37:10 crc kubenswrapper[4706]: I1125 11:37:10.891670 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:37:10 crc kubenswrapper[4706]: I1125 11:37:10.891726 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:37:10 crc kubenswrapper[4706]: I1125 11:37:10.891738 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:37:10 crc kubenswrapper[4706]: I1125 11:37:10.891758 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:37:10 crc kubenswrapper[4706]: I1125 11:37:10.891769 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:37:10Z","lastTransitionTime":"2025-11-25T11:37:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:37:10 crc kubenswrapper[4706]: I1125 11:37:10.994985 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:37:10 crc kubenswrapper[4706]: I1125 11:37:10.995043 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:37:10 crc kubenswrapper[4706]: I1125 11:37:10.995055 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:37:10 crc kubenswrapper[4706]: I1125 11:37:10.995075 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:37:10 crc kubenswrapper[4706]: I1125 11:37:10.995088 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:37:10Z","lastTransitionTime":"2025-11-25T11:37:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:37:11 crc kubenswrapper[4706]: I1125 11:37:11.097757 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:37:11 crc kubenswrapper[4706]: I1125 11:37:11.097808 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:37:11 crc kubenswrapper[4706]: I1125 11:37:11.097818 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:37:11 crc kubenswrapper[4706]: I1125 11:37:11.097839 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:37:11 crc kubenswrapper[4706]: I1125 11:37:11.097851 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:37:11Z","lastTransitionTime":"2025-11-25T11:37:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:37:11 crc kubenswrapper[4706]: I1125 11:37:11.200316 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:37:11 crc kubenswrapper[4706]: I1125 11:37:11.200386 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:37:11 crc kubenswrapper[4706]: I1125 11:37:11.200396 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:37:11 crc kubenswrapper[4706]: I1125 11:37:11.200416 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:37:11 crc kubenswrapper[4706]: I1125 11:37:11.200444 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:37:11Z","lastTransitionTime":"2025-11-25T11:37:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:37:11 crc kubenswrapper[4706]: I1125 11:37:11.303457 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:37:11 crc kubenswrapper[4706]: I1125 11:37:11.303512 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:37:11 crc kubenswrapper[4706]: I1125 11:37:11.303524 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:37:11 crc kubenswrapper[4706]: I1125 11:37:11.303542 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:37:11 crc kubenswrapper[4706]: I1125 11:37:11.303553 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:37:11Z","lastTransitionTime":"2025-11-25T11:37:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:37:11 crc kubenswrapper[4706]: I1125 11:37:11.406427 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:37:11 crc kubenswrapper[4706]: I1125 11:37:11.406473 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:37:11 crc kubenswrapper[4706]: I1125 11:37:11.406482 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:37:11 crc kubenswrapper[4706]: I1125 11:37:11.406499 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:37:11 crc kubenswrapper[4706]: I1125 11:37:11.406509 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:37:11Z","lastTransitionTime":"2025-11-25T11:37:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:37:11 crc kubenswrapper[4706]: I1125 11:37:11.427057 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/14d69237-a4b7-43ea-ac81-f165eb532669-metrics-certs\") pod \"network-metrics-daemon-l99rd\" (UID: \"14d69237-a4b7-43ea-ac81-f165eb532669\") " pod="openshift-multus/network-metrics-daemon-l99rd" Nov 25 11:37:11 crc kubenswrapper[4706]: E1125 11:37:11.427395 4706 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Nov 25 11:37:11 crc kubenswrapper[4706]: E1125 11:37:11.427572 4706 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/14d69237-a4b7-43ea-ac81-f165eb532669-metrics-certs podName:14d69237-a4b7-43ea-ac81-f165eb532669 nodeName:}" failed. No retries permitted until 2025-11-25 11:37:15.427534331 +0000 UTC m=+44.342091852 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/14d69237-a4b7-43ea-ac81-f165eb532669-metrics-certs") pod "network-metrics-daemon-l99rd" (UID: "14d69237-a4b7-43ea-ac81-f165eb532669") : object "openshift-multus"/"metrics-daemon-secret" not registered Nov 25 11:37:11 crc kubenswrapper[4706]: I1125 11:37:11.509763 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:37:11 crc kubenswrapper[4706]: I1125 11:37:11.509813 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:37:11 crc kubenswrapper[4706]: I1125 11:37:11.509822 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:37:11 crc kubenswrapper[4706]: I1125 11:37:11.509840 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:37:11 crc kubenswrapper[4706]: I1125 11:37:11.509853 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:37:11Z","lastTransitionTime":"2025-11-25T11:37:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:37:11 crc kubenswrapper[4706]: I1125 11:37:11.612632 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:37:11 crc kubenswrapper[4706]: I1125 11:37:11.612687 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:37:11 crc kubenswrapper[4706]: I1125 11:37:11.612700 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:37:11 crc kubenswrapper[4706]: I1125 11:37:11.612719 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:37:11 crc kubenswrapper[4706]: I1125 11:37:11.612733 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:37:11Z","lastTransitionTime":"2025-11-25T11:37:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:37:11 crc kubenswrapper[4706]: I1125 11:37:11.714988 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:37:11 crc kubenswrapper[4706]: I1125 11:37:11.715020 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:37:11 crc kubenswrapper[4706]: I1125 11:37:11.715027 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:37:11 crc kubenswrapper[4706]: I1125 11:37:11.715042 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:37:11 crc kubenswrapper[4706]: I1125 11:37:11.715054 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:37:11Z","lastTransitionTime":"2025-11-25T11:37:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:37:11 crc kubenswrapper[4706]: I1125 11:37:11.817402 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:37:11 crc kubenswrapper[4706]: I1125 11:37:11.817462 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:37:11 crc kubenswrapper[4706]: I1125 11:37:11.817474 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:37:11 crc kubenswrapper[4706]: I1125 11:37:11.817494 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:37:11 crc kubenswrapper[4706]: I1125 11:37:11.817508 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:37:11Z","lastTransitionTime":"2025-11-25T11:37:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:37:11 crc kubenswrapper[4706]: I1125 11:37:11.920775 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:37:11 crc kubenswrapper[4706]: I1125 11:37:11.920826 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:37:11 crc kubenswrapper[4706]: I1125 11:37:11.920838 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:37:11 crc kubenswrapper[4706]: I1125 11:37:11.920858 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:37:11 crc kubenswrapper[4706]: I1125 11:37:11.920876 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:37:11Z","lastTransitionTime":"2025-11-25T11:37:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 11:37:11 crc kubenswrapper[4706]: I1125 11:37:11.921296 4706 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 11:37:11 crc kubenswrapper[4706]: E1125 11:37:11.921435 4706 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 25 11:37:11 crc kubenswrapper[4706]: I1125 11:37:11.921923 4706 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 11:37:11 crc kubenswrapper[4706]: I1125 11:37:11.921968 4706 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-l99rd" Nov 25 11:37:11 crc kubenswrapper[4706]: E1125 11:37:11.921984 4706 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 25 11:37:11 crc kubenswrapper[4706]: I1125 11:37:11.922027 4706 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 11:37:11 crc kubenswrapper[4706]: E1125 11:37:11.922080 4706 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-l99rd" podUID="14d69237-a4b7-43ea-ac81-f165eb532669" Nov 25 11:37:11 crc kubenswrapper[4706]: E1125 11:37:11.922120 4706 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 25 11:37:11 crc kubenswrapper[4706]: I1125 11:37:11.938228 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ce0e2e75-834b-46fb-bc84-229e60f904b1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:37:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:37:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://86001c3abc077d36ed1fa0c37bb6163896fb9cde28b58affd2f67fb8a024165b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\
",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://24c326f147def477e6dd794576cbdc9aed69f799cc18984f475496748b05eb32\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c65af8b438f57256d8c22cb34f68922d628338e384ca97d694b0dbf2d41a5e27\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://db08dd21321e0e49c2bcec934b9c4ca65e93ed3eff5d3d110b0137d37ebe255e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\
\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://333951d9a31cf3e7c1e98d27f636e2425f87cd082a8a5acae66533a76f5ad206\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-25T11:36:51Z\\\",\\\"message\\\":\\\" shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI1125 11:36:51.292762 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI1125 11:36:51.292767 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI1125 11:36:51.292853 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI1125 11:36:51.292876 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI1125 11:36:51.293041 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-1230105117/tls.crt::/tmp/serving-cert-1230105117/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1764070595\\\\\\\\\\\\\\\" (2025-11-25 11:36:34 +0000 UTC to 2025-12-25 11:36:35 +0000 UTC (now=2025-11-25 11:36:51.29301304 +0000 UTC))\\\\\\\"\\\\nI1125 11:36:51.293171 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1230105117/tls.crt::/tmp/serving-cert-1230105117/tls.key\\\\\\\"\\\\nI1125 11:36:51.293210 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" 
certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1764070605\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1764070605\\\\\\\\\\\\\\\" (2025-11-25 10:36:45 +0000 UTC to 2026-11-25 10:36:45 +0000 UTC (now=2025-11-25 11:36:51.293188774 +0000 UTC))\\\\\\\"\\\\nI1125 11:36:51.293233 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI1125 11:36:51.293259 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI1125 11:36:51.293279 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nF1125 11:36:51.293378 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T11:36:34Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe85a38abd8df52ad0fbd3dd6b048b8c42390b6064d3601996727dadb3fcbe69\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:34Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerI
D\\\":\\\"cri-o://ea87e7399f4267877f5e967eb62bd500b646e88ee8ee20a71c3bf7c0941f02a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ea87e7399f4267877f5e967eb62bd500b646e88ee8ee20a71c3bf7c0941f02a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T11:36:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T11:36:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T11:36:32Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:37:11Z is after 2025-08-24T17:21:41Z" Nov 25 11:37:11 crc kubenswrapper[4706]: I1125 11:37:11.952355 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-dhfpm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0930887a-320c-4506-8c9c-f94d6d64516a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://736e37ff944f81ac9808ff8a76d36837aeabc76a4c08bbeba3f707616e1f0884\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g7sgt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://86f4bfd310c27ea3b77c2f58c91e153db5f17948
71a3fbeb5711cc119aa81e38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g7sgt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T11:36:53Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-dhfpm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:37:11Z is after 2025-08-24T17:21:41Z" Nov 25 11:37:11 crc kubenswrapper[4706]: I1125 11:37:11.965765 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-nh9sc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7813e79d-885d-4cf1-ac27-039e998473b7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea634334242536d35bf36e9078539cad4658b161b61e6051d9bb6d8544e71f5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g9gvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T11:36:53Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-nh9sc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:37:11Z is after 2025-08-24T17:21:41Z" Nov 25 11:37:11 crc kubenswrapper[4706]: I1125 11:37:11.980292 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-qkkfz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dc09de93-57e8-4697-8ce8-70bfc1b693e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:37:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:37:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:37:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:37:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6daff2070c60f609fd06be9589e3cd8d304d131f7b9669c7be4b8e9178df8f8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d7
93426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:37:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hmrl8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://39eec3aac772cc9463505277d6b3f7cf2eb7621e4add4f14e53110e3db8c4cdc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:37:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hmrl8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T11:37:06Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-qkkfz\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:37:11Z is after 2025-08-24T17:21:41Z" Nov 25 11:37:11 crc kubenswrapper[4706]: I1125 11:37:11.995640 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad79bed891e80837fc120b01cb2b41a16493f2f5281c83a6bb489cc17c6da995\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:37:11Z is after 2025-08-24T17:21:41Z" Nov 25 11:37:12 crc kubenswrapper[4706]: I1125 11:37:12.013291 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-lpc7s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3ec2e656-a68d-4339-92d5-0c157f7f7783\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c3a1481dd8cb88b79d8addfbfd40caf18850769e4492c2af316105b7f6779f9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/op
enshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w54mf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T11:36:56Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lpc7s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:37:12Z is after 2025-08-24T17:21:41Z" Nov 25 11:37:12 crc kubenswrapper[4706]: I1125 11:37:12.023770 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:37:12 crc kubenswrapper[4706]: I1125 11:37:12.023825 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:37:12 crc kubenswrapper[4706]: I1125 11:37:12.023842 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:37:12 crc kubenswrapper[4706]: I1125 11:37:12.023869 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:37:12 crc 
kubenswrapper[4706]: I1125 11:37:12.023880 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:37:12Z","lastTransitionTime":"2025-11-25T11:37:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 11:37:12 crc kubenswrapper[4706]: I1125 11:37:12.033673 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-q9rpr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f1218bae-4153-4490-8847-ab2d07ca0ab6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:53Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:53Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da5cea02464a703174faaa2a8a7dc6ba3c26bca96be0219f7304d81aba5be54e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b55sf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e92e9ade6889e5400b3c3ddff066aa544d425cf0637b75071678b8c63f8e35f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b55sf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca28080773ed8c026159b2309297e1c8ccd7cf79c4c19e3a62d89bc5a95851fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b55sf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://86d79d5837993b0bfb40c7114fd69f45a9bfd2e956b5b0fe062706e920fecd48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:55Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b55sf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7df3bf6c507e0fd5fb0f32a8785d67c96f47255fdc5d2aafb8838260ac334d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b55sf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://96aa7fcebdc88f01d2260f95d255244e28c30d422f954da2222a5b7c17d05b96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b55sf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://408d84ea146425bb2b2ac6cfb181cd139a8465caa12eb3d4b0e2b738d1f52484\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b1486d0475f4d248f425b711ee757032370a9bdddb8d33c83ba9db41549d1dd9\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-25T11:37:03Z\\\",\\\"message\\\":\\\"ler/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140\\\\nI1125 11:37:03.877645 5942 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1125 11:37:03.877722 5942 reflector.go:311] Stopping reflector *v1.Service (0s) from 
k8s.io/client-go/informers/factory.go:160\\\\nI1125 11:37:03.877827 5942 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI1125 11:37:03.877916 5942 reflector.go:311] Stopping reflector *v1.NetworkAttachmentDefinition (0s) from github.com/k8snetworkplumbingwg/network-attachment-definition-client/pkg/client/informers/externalversions/factory.go:117\\\\nI1125 11:37:03.878321 5942 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1125 11:37:03.878375 5942 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1125 11:37:03.878382 5942 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1125 11:37:03.878437 5942 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1125 11:37:03.878443 5942 factory.go:656] Stopping watch factory\\\\nI1125 11:37:03.878453 5942 handler.go:208] Removed *v1.Node event handler 2\\\\nI1125 11:37:03.878465 5942 ovnkube.go:599] Stopped ovnkube\\\\nI11\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T11:37:00Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://408d84ea146425bb2b2ac6cfb181cd139a8465caa12eb3d4b0e2b738d1f52484\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-25T11:37:05Z\\\",\\\"message\\\":\\\"alse]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.150:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {6ea1fd71-2b40-4361-92ee-3f1ab4ec7414}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI1125 11:37:04.999467 6129 obj_retry.go:409] Going to retry *v1.Pod resource setup for 14 objects: [openshift-kube-controller-manager/kube-controller-manager-crc 
openshift-machine-config-operator/machine-config-daemon-dhfpm openshift-multus/multus-additional-cni-plugins-cjmvf openshift-network-diagnostics/network-check-source-55646444c4-trplf openshift-dns/node-resolver-nh9sc openshift-image-registry/node-ca-lpc7s openshift-multus/multus-s47nr openshift-network-diagnostics/network-check-target-xd92c openshift-ovn-kubernetes/ovnkube-node-q9rpr openshift-etcd/etcd-crc openshift-network-console/networking-console-plugin-85b44fc459-gdk6g openshift-network-node-identity/network-node-identity-vrzqb openshift-kube-apiserver/kube-apiserver-crc openshift-network-operator/iptables-alerter-4ln5h]\\\\nF1125 11:37:04.999486 6129 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller ini\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T11:37:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mou
ntPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b55sf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://62c923d955013808a55d99cb73f4239900fc83a2f53e1e8cceff3e9bc5768188\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\
\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b55sf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56474d5374e1047078d38c60dcbd00f4495bcc0d651a9e75fa70d64e34b10baa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://56474d5374e1047078d38c60dcbd00f4495bcc0d651a9e75fa70d64e34b10baa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T11:36:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T11:36:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b55sf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T11:36:53Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-q9rpr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:37:12Z is after 2025-08-24T17:21:41Z" Nov 25 11:37:12 crc kubenswrapper[4706]: I1125 11:37:12.048808 4706 status_manager.go:875] 
"Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"363ff191-6229-47e9-a7d0-1c72f21e7c61\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://71b496da1a81efbb50a84766e610a6b03e032a4e2cb5a71191395ffb85f6b1f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://83b1d9c60793e3e0b5943d7cccd50656df78c4655b84e12c8dd1ba7d99a7990d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha25
6:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab8621c83015577b9039ac2ba9ce46f8b29f66d77da31a02d179132d923741bd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4d0ce4e175dd8da8d15b26e60ced87ee11dc8079ce730cfbdce1b3f4f08b1d2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-
recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T11:36:32Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:37:12Z is after 2025-08-24T17:21:41Z" Nov 25 11:37:12 crc kubenswrapper[4706]: I1125 11:37:12.063507 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://998291d5af3be798ff4e2f00d043f615e086fef44e541071bbaf781983955ce6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-11-25T11:37:12Z is after 2025-08-24T17:21:41Z" Nov 25 11:37:12 crc kubenswrapper[4706]: I1125 11:37:12.079867 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:51Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:37:12Z is after 2025-08-24T17:21:41Z" Nov 25 11:37:12 crc kubenswrapper[4706]: I1125 11:37:12.095390 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:51Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:37:12Z is after 2025-08-24T17:21:41Z" Nov 25 11:37:12 crc kubenswrapper[4706]: I1125 11:37:12.108188 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:51Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:37:12Z is after 2025-08-24T17:21:41Z" Nov 25 11:37:12 crc kubenswrapper[4706]: I1125 11:37:12.123932 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-s47nr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9912058e-28f5-4cec-9eeb-03e37e0dc5c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d03353478b53d9441951702b66365bb3a08ad9c509347472bbb31049851435a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wfqx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T11:36:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-s47nr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:37:12Z is after 2025-08-24T17:21:41Z" Nov 25 11:37:12 crc kubenswrapper[4706]: I1125 11:37:12.127158 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:37:12 crc 
kubenswrapper[4706]: I1125 11:37:12.127199 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:37:12 crc kubenswrapper[4706]: I1125 11:37:12.127209 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:37:12 crc kubenswrapper[4706]: I1125 11:37:12.127229 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:37:12 crc kubenswrapper[4706]: I1125 11:37:12.127242 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:37:12Z","lastTransitionTime":"2025-11-25T11:37:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 11:37:12 crc kubenswrapper[4706]: I1125 11:37:12.145456 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"21277b4b-1e5d-4345-ba2a-39957194f021\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e336808761e1c6c5eaa04fd06cbb4d0c0384a2cbd3dfd4c1b3a877e7e0f0c82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cfaf9f13d49eb5c52817b0d082263791cc1dca82a23282452f1393dd693ca27a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://634b7b0df29329562f6ead9641186eee129945efc5a2d784ff6474d213b2baea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b3642576d5ecf314b809b90f8a76244e5ea54178f78729eb6521b09b7daa9c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b63b9c87fed8e56acef62af3c5b75cf637a058ada9dd8ef5afc317e99e12162\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://adfea6c3d1d62c9ce2656cae203bccf32ab19165305db3731e3db92915dd7d4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://adfea6c3d1d62c9ce2656cae203bccf32ab19165305db3731e3db92915dd7d4c\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2025-11-25T11:36:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T11:36:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://29e8fd847ab683471a692fda5b7c7d6105db11c5aecf09bb23080c58cd97c06a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://29e8fd847ab683471a692fda5b7c7d6105db11c5aecf09bb23080c58cd97c06a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T11:36:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T11:36:34Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://a4b1f85fd0291c239854093c0b26f0802291cbe4c4bab384fc69a5d21165343a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a4b1f85fd0291c239854093c0b26f0802291cbe4c4bab384fc69a5d21165343a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T11:36:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T11:36:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T11:36:32Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:37:12Z is after 2025-08-24T17:21:41Z" Nov 25 11:37:12 crc kubenswrapper[4706]: I1125 11:37:12.158393 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://23abd4bcc68d2a090882edb55d0e8569032affe5f4ebf05279e18ba3e9f9d8db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount
\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a068e34d29a7f39157ffd6e364ce643f5280f5184c13a281043247117d451364\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:37:12Z is after 2025-08-24T17:21:41Z" Nov 25 11:37:12 crc kubenswrapper[4706]: I1125 11:37:12.176757 4706 
status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-cjmvf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"150b96fa-570a-4b32-a82a-3275127d5b51\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:37:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:37:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:37:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de18c07bf8490d7495947e9a271e3e7273b9ffdcc43afd2a0468394af0ae0b0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:37:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2ml6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.16
8.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f9f9981b5f064aa5b007f4b2a2ecdc7f783e1a33e73b9e8b157eccfc54e93ff6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f9f9981b5f064aa5b007f4b2a2ecdc7f783e1a33e73b9e8b157eccfc54e93ff6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T11:36:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T11:36:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2ml6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e1e9db3e634932b935a1eb04923d02faf743f2831039edeba41d172ea6d8c52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e1e9db3e634932b935a1eb04923d02faf743f2831039edeba41d172ea6d8c52\\\",\\\"exit
Code\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T11:36:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T11:36:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2ml6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0cee50b6983d9c650efbb5959311b6c33c2e0e2ff504fceadc8ff807f368c36e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0cee50b6983d9c650efbb5959311b6c33c2e0e2ff504fceadc8ff807f368c36e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T11:36:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T11:36:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"n
ame\\\":\\\"kube-api-access-d2ml6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://29281b46d740a7e527313a667c3896430eb51ba2c50c5e406fb94d8959dbe855\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://29281b46d740a7e527313a667c3896430eb51ba2c50c5e406fb94d8959dbe855\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T11:36:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T11:36:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2ml6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0ff2d1408b3b635ada726fc15a15472d3fd7c61e21ffe0379d137fdd543c436\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b0ff2d1408b3b
635ada726fc15a15472d3fd7c61e21ffe0379d137fdd543c436\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T11:37:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T11:37:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2ml6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c3b94746fe10e0f9375491a41d10973d2576eb69f0883cef3ef0132efb0e8fc9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c3b94746fe10e0f9375491a41d10973d2576eb69f0883cef3ef0132efb0e8fc9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T11:37:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T11:37:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2ml6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\
\\":\\\"2025-11-25T11:36:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-cjmvf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:37:12Z is after 2025-08-24T17:21:41Z" Nov 25 11:37:12 crc kubenswrapper[4706]: I1125 11:37:12.192703 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-l99rd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"14d69237-a4b7-43ea-ac81-f165eb532669\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:37:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:37:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:37:07Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:37:07Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mmr9l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mmr9l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T11:37:07Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-l99rd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:37:12Z is after 2025-08-24T17:21:41Z" Nov 25 11:37:12 crc 
kubenswrapper[4706]: I1125 11:37:12.229254 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:37:12 crc kubenswrapper[4706]: I1125 11:37:12.229328 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:37:12 crc kubenswrapper[4706]: I1125 11:37:12.229339 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:37:12 crc kubenswrapper[4706]: I1125 11:37:12.229362 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:37:12 crc kubenswrapper[4706]: I1125 11:37:12.229374 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:37:12Z","lastTransitionTime":"2025-11-25T11:37:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:37:12 crc kubenswrapper[4706]: I1125 11:37:12.331894 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:37:12 crc kubenswrapper[4706]: I1125 11:37:12.331938 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:37:12 crc kubenswrapper[4706]: I1125 11:37:12.331946 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:37:12 crc kubenswrapper[4706]: I1125 11:37:12.331966 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:37:12 crc kubenswrapper[4706]: I1125 11:37:12.331981 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:37:12Z","lastTransitionTime":"2025-11-25T11:37:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:37:12 crc kubenswrapper[4706]: I1125 11:37:12.434649 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:37:12 crc kubenswrapper[4706]: I1125 11:37:12.434717 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:37:12 crc kubenswrapper[4706]: I1125 11:37:12.434729 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:37:12 crc kubenswrapper[4706]: I1125 11:37:12.434749 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:37:12 crc kubenswrapper[4706]: I1125 11:37:12.434762 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:37:12Z","lastTransitionTime":"2025-11-25T11:37:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:37:12 crc kubenswrapper[4706]: I1125 11:37:12.537277 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:37:12 crc kubenswrapper[4706]: I1125 11:37:12.537355 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:37:12 crc kubenswrapper[4706]: I1125 11:37:12.537368 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:37:12 crc kubenswrapper[4706]: I1125 11:37:12.537389 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:37:12 crc kubenswrapper[4706]: I1125 11:37:12.537403 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:37:12Z","lastTransitionTime":"2025-11-25T11:37:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:37:12 crc kubenswrapper[4706]: I1125 11:37:12.639557 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:37:12 crc kubenswrapper[4706]: I1125 11:37:12.640036 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:37:12 crc kubenswrapper[4706]: I1125 11:37:12.640050 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:37:12 crc kubenswrapper[4706]: I1125 11:37:12.640070 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:37:12 crc kubenswrapper[4706]: I1125 11:37:12.640088 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:37:12Z","lastTransitionTime":"2025-11-25T11:37:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:37:12 crc kubenswrapper[4706]: I1125 11:37:12.743544 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:37:12 crc kubenswrapper[4706]: I1125 11:37:12.743611 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:37:12 crc kubenswrapper[4706]: I1125 11:37:12.743624 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:37:12 crc kubenswrapper[4706]: I1125 11:37:12.743651 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:37:12 crc kubenswrapper[4706]: I1125 11:37:12.743664 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:37:12Z","lastTransitionTime":"2025-11-25T11:37:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:37:12 crc kubenswrapper[4706]: I1125 11:37:12.846408 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:37:12 crc kubenswrapper[4706]: I1125 11:37:12.846466 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:37:12 crc kubenswrapper[4706]: I1125 11:37:12.846476 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:37:12 crc kubenswrapper[4706]: I1125 11:37:12.846495 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:37:12 crc kubenswrapper[4706]: I1125 11:37:12.846507 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:37:12Z","lastTransitionTime":"2025-11-25T11:37:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:37:12 crc kubenswrapper[4706]: I1125 11:37:12.949283 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:37:12 crc kubenswrapper[4706]: I1125 11:37:12.949355 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:37:12 crc kubenswrapper[4706]: I1125 11:37:12.949367 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:37:12 crc kubenswrapper[4706]: I1125 11:37:12.949388 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:37:12 crc kubenswrapper[4706]: I1125 11:37:12.949402 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:37:12Z","lastTransitionTime":"2025-11-25T11:37:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:37:13 crc kubenswrapper[4706]: I1125 11:37:13.052242 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:37:13 crc kubenswrapper[4706]: I1125 11:37:13.052322 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:37:13 crc kubenswrapper[4706]: I1125 11:37:13.052337 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:37:13 crc kubenswrapper[4706]: I1125 11:37:13.052357 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:37:13 crc kubenswrapper[4706]: I1125 11:37:13.052372 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:37:13Z","lastTransitionTime":"2025-11-25T11:37:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:37:13 crc kubenswrapper[4706]: I1125 11:37:13.154756 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:37:13 crc kubenswrapper[4706]: I1125 11:37:13.154805 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:37:13 crc kubenswrapper[4706]: I1125 11:37:13.154816 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:37:13 crc kubenswrapper[4706]: I1125 11:37:13.154837 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:37:13 crc kubenswrapper[4706]: I1125 11:37:13.154855 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:37:13Z","lastTransitionTime":"2025-11-25T11:37:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:37:13 crc kubenswrapper[4706]: I1125 11:37:13.257337 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:37:13 crc kubenswrapper[4706]: I1125 11:37:13.257373 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:37:13 crc kubenswrapper[4706]: I1125 11:37:13.257381 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:37:13 crc kubenswrapper[4706]: I1125 11:37:13.257398 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:37:13 crc kubenswrapper[4706]: I1125 11:37:13.257411 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:37:13Z","lastTransitionTime":"2025-11-25T11:37:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:37:13 crc kubenswrapper[4706]: I1125 11:37:13.359360 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:37:13 crc kubenswrapper[4706]: I1125 11:37:13.359409 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:37:13 crc kubenswrapper[4706]: I1125 11:37:13.359421 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:37:13 crc kubenswrapper[4706]: I1125 11:37:13.359439 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:37:13 crc kubenswrapper[4706]: I1125 11:37:13.359451 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:37:13Z","lastTransitionTime":"2025-11-25T11:37:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:37:13 crc kubenswrapper[4706]: I1125 11:37:13.461591 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:37:13 crc kubenswrapper[4706]: I1125 11:37:13.461635 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:37:13 crc kubenswrapper[4706]: I1125 11:37:13.461644 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:37:13 crc kubenswrapper[4706]: I1125 11:37:13.461660 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:37:13 crc kubenswrapper[4706]: I1125 11:37:13.461672 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:37:13Z","lastTransitionTime":"2025-11-25T11:37:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:37:13 crc kubenswrapper[4706]: I1125 11:37:13.564470 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:37:13 crc kubenswrapper[4706]: I1125 11:37:13.564518 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:37:13 crc kubenswrapper[4706]: I1125 11:37:13.564543 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:37:13 crc kubenswrapper[4706]: I1125 11:37:13.564570 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:37:13 crc kubenswrapper[4706]: I1125 11:37:13.564586 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:37:13Z","lastTransitionTime":"2025-11-25T11:37:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:37:13 crc kubenswrapper[4706]: I1125 11:37:13.666966 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:37:13 crc kubenswrapper[4706]: I1125 11:37:13.667010 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:37:13 crc kubenswrapper[4706]: I1125 11:37:13.667028 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:37:13 crc kubenswrapper[4706]: I1125 11:37:13.667049 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:37:13 crc kubenswrapper[4706]: I1125 11:37:13.667062 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:37:13Z","lastTransitionTime":"2025-11-25T11:37:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:37:13 crc kubenswrapper[4706]: I1125 11:37:13.770443 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:37:13 crc kubenswrapper[4706]: I1125 11:37:13.770502 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:37:13 crc kubenswrapper[4706]: I1125 11:37:13.770514 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:37:13 crc kubenswrapper[4706]: I1125 11:37:13.770537 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:37:13 crc kubenswrapper[4706]: I1125 11:37:13.770552 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:37:13Z","lastTransitionTime":"2025-11-25T11:37:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:37:13 crc kubenswrapper[4706]: I1125 11:37:13.873015 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:37:13 crc kubenswrapper[4706]: I1125 11:37:13.873084 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:37:13 crc kubenswrapper[4706]: I1125 11:37:13.873099 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:37:13 crc kubenswrapper[4706]: I1125 11:37:13.873121 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:37:13 crc kubenswrapper[4706]: I1125 11:37:13.873133 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:37:13Z","lastTransitionTime":"2025-11-25T11:37:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 11:37:13 crc kubenswrapper[4706]: I1125 11:37:13.921671 4706 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 11:37:13 crc kubenswrapper[4706]: I1125 11:37:13.921752 4706 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-l99rd" Nov 25 11:37:13 crc kubenswrapper[4706]: I1125 11:37:13.921786 4706 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 11:37:13 crc kubenswrapper[4706]: E1125 11:37:13.921848 4706 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 25 11:37:13 crc kubenswrapper[4706]: I1125 11:37:13.921870 4706 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 11:37:13 crc kubenswrapper[4706]: E1125 11:37:13.922033 4706 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-l99rd" podUID="14d69237-a4b7-43ea-ac81-f165eb532669" Nov 25 11:37:13 crc kubenswrapper[4706]: E1125 11:37:13.922080 4706 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 25 11:37:13 crc kubenswrapper[4706]: E1125 11:37:13.922252 4706 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 25 11:37:13 crc kubenswrapper[4706]: I1125 11:37:13.976482 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:37:13 crc kubenswrapper[4706]: I1125 11:37:13.976525 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:37:13 crc kubenswrapper[4706]: I1125 11:37:13.976534 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:37:13 crc kubenswrapper[4706]: I1125 11:37:13.976555 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:37:13 crc kubenswrapper[4706]: I1125 11:37:13.976567 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:37:13Z","lastTransitionTime":"2025-11-25T11:37:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:37:14 crc kubenswrapper[4706]: I1125 11:37:14.079362 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:37:14 crc kubenswrapper[4706]: I1125 11:37:14.079421 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:37:14 crc kubenswrapper[4706]: I1125 11:37:14.079431 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:37:14 crc kubenswrapper[4706]: I1125 11:37:14.079451 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:37:14 crc kubenswrapper[4706]: I1125 11:37:14.079469 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:37:14Z","lastTransitionTime":"2025-11-25T11:37:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:37:14 crc kubenswrapper[4706]: I1125 11:37:14.181652 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:37:14 crc kubenswrapper[4706]: I1125 11:37:14.181714 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:37:14 crc kubenswrapper[4706]: I1125 11:37:14.181727 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:37:14 crc kubenswrapper[4706]: I1125 11:37:14.181755 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:37:14 crc kubenswrapper[4706]: I1125 11:37:14.181769 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:37:14Z","lastTransitionTime":"2025-11-25T11:37:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:37:14 crc kubenswrapper[4706]: I1125 11:37:14.288246 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:37:14 crc kubenswrapper[4706]: I1125 11:37:14.288324 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:37:14 crc kubenswrapper[4706]: I1125 11:37:14.288337 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:37:14 crc kubenswrapper[4706]: I1125 11:37:14.288357 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:37:14 crc kubenswrapper[4706]: I1125 11:37:14.288369 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:37:14Z","lastTransitionTime":"2025-11-25T11:37:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:37:14 crc kubenswrapper[4706]: I1125 11:37:14.391169 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:37:14 crc kubenswrapper[4706]: I1125 11:37:14.391242 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:37:14 crc kubenswrapper[4706]: I1125 11:37:14.391269 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:37:14 crc kubenswrapper[4706]: I1125 11:37:14.391320 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:37:14 crc kubenswrapper[4706]: I1125 11:37:14.391345 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:37:14Z","lastTransitionTime":"2025-11-25T11:37:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:37:14 crc kubenswrapper[4706]: I1125 11:37:14.494751 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:37:14 crc kubenswrapper[4706]: I1125 11:37:14.495114 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:37:14 crc kubenswrapper[4706]: I1125 11:37:14.495227 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:37:14 crc kubenswrapper[4706]: I1125 11:37:14.495366 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:37:14 crc kubenswrapper[4706]: I1125 11:37:14.495474 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:37:14Z","lastTransitionTime":"2025-11-25T11:37:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:37:14 crc kubenswrapper[4706]: I1125 11:37:14.598583 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:37:14 crc kubenswrapper[4706]: I1125 11:37:14.598658 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:37:14 crc kubenswrapper[4706]: I1125 11:37:14.598676 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:37:14 crc kubenswrapper[4706]: I1125 11:37:14.598705 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:37:14 crc kubenswrapper[4706]: I1125 11:37:14.598723 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:37:14Z","lastTransitionTime":"2025-11-25T11:37:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:37:14 crc kubenswrapper[4706]: I1125 11:37:14.701816 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:37:14 crc kubenswrapper[4706]: I1125 11:37:14.701878 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:37:14 crc kubenswrapper[4706]: I1125 11:37:14.701890 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:37:14 crc kubenswrapper[4706]: I1125 11:37:14.701911 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:37:14 crc kubenswrapper[4706]: I1125 11:37:14.701924 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:37:14Z","lastTransitionTime":"2025-11-25T11:37:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:37:14 crc kubenswrapper[4706]: I1125 11:37:14.804868 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:37:14 crc kubenswrapper[4706]: I1125 11:37:14.806340 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:37:14 crc kubenswrapper[4706]: I1125 11:37:14.806355 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:37:14 crc kubenswrapper[4706]: I1125 11:37:14.806373 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:37:14 crc kubenswrapper[4706]: I1125 11:37:14.806385 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:37:14Z","lastTransitionTime":"2025-11-25T11:37:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:37:14 crc kubenswrapper[4706]: I1125 11:37:14.908958 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:37:14 crc kubenswrapper[4706]: I1125 11:37:14.908997 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:37:14 crc kubenswrapper[4706]: I1125 11:37:14.909007 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:37:14 crc kubenswrapper[4706]: I1125 11:37:14.909023 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:37:14 crc kubenswrapper[4706]: I1125 11:37:14.909033 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:37:14Z","lastTransitionTime":"2025-11-25T11:37:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:37:15 crc kubenswrapper[4706]: I1125 11:37:15.011810 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:37:15 crc kubenswrapper[4706]: I1125 11:37:15.011876 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:37:15 crc kubenswrapper[4706]: I1125 11:37:15.011889 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:37:15 crc kubenswrapper[4706]: I1125 11:37:15.011910 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:37:15 crc kubenswrapper[4706]: I1125 11:37:15.011922 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:37:15Z","lastTransitionTime":"2025-11-25T11:37:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:37:15 crc kubenswrapper[4706]: I1125 11:37:15.114548 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:37:15 crc kubenswrapper[4706]: I1125 11:37:15.114592 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:37:15 crc kubenswrapper[4706]: I1125 11:37:15.114600 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:37:15 crc kubenswrapper[4706]: I1125 11:37:15.114618 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:37:15 crc kubenswrapper[4706]: I1125 11:37:15.114633 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:37:15Z","lastTransitionTime":"2025-11-25T11:37:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:37:15 crc kubenswrapper[4706]: I1125 11:37:15.217280 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:37:15 crc kubenswrapper[4706]: I1125 11:37:15.217346 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:37:15 crc kubenswrapper[4706]: I1125 11:37:15.217356 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:37:15 crc kubenswrapper[4706]: I1125 11:37:15.217375 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:37:15 crc kubenswrapper[4706]: I1125 11:37:15.217385 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:37:15Z","lastTransitionTime":"2025-11-25T11:37:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:37:15 crc kubenswrapper[4706]: I1125 11:37:15.320917 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:37:15 crc kubenswrapper[4706]: I1125 11:37:15.320981 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:37:15 crc kubenswrapper[4706]: I1125 11:37:15.320996 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:37:15 crc kubenswrapper[4706]: I1125 11:37:15.321019 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:37:15 crc kubenswrapper[4706]: I1125 11:37:15.321036 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:37:15Z","lastTransitionTime":"2025-11-25T11:37:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:37:15 crc kubenswrapper[4706]: I1125 11:37:15.424275 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:37:15 crc kubenswrapper[4706]: I1125 11:37:15.424337 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:37:15 crc kubenswrapper[4706]: I1125 11:37:15.424349 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:37:15 crc kubenswrapper[4706]: I1125 11:37:15.424367 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:37:15 crc kubenswrapper[4706]: I1125 11:37:15.424382 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:37:15Z","lastTransitionTime":"2025-11-25T11:37:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:37:15 crc kubenswrapper[4706]: I1125 11:37:15.463340 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/14d69237-a4b7-43ea-ac81-f165eb532669-metrics-certs\") pod \"network-metrics-daemon-l99rd\" (UID: \"14d69237-a4b7-43ea-ac81-f165eb532669\") " pod="openshift-multus/network-metrics-daemon-l99rd" Nov 25 11:37:15 crc kubenswrapper[4706]: E1125 11:37:15.463539 4706 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Nov 25 11:37:15 crc kubenswrapper[4706]: E1125 11:37:15.463657 4706 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/14d69237-a4b7-43ea-ac81-f165eb532669-metrics-certs podName:14d69237-a4b7-43ea-ac81-f165eb532669 nodeName:}" failed. No retries permitted until 2025-11-25 11:37:23.463632753 +0000 UTC m=+52.378190334 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/14d69237-a4b7-43ea-ac81-f165eb532669-metrics-certs") pod "network-metrics-daemon-l99rd" (UID: "14d69237-a4b7-43ea-ac81-f165eb532669") : object "openshift-multus"/"metrics-daemon-secret" not registered Nov 25 11:37:15 crc kubenswrapper[4706]: I1125 11:37:15.471233 4706 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-q9rpr" Nov 25 11:37:15 crc kubenswrapper[4706]: I1125 11:37:15.472215 4706 scope.go:117] "RemoveContainer" containerID="408d84ea146425bb2b2ac6cfb181cd139a8465caa12eb3d4b0e2b738d1f52484" Nov 25 11:37:15 crc kubenswrapper[4706]: I1125 11:37:15.485949 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-l99rd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"14d69237-a4b7-43ea-ac81-f165eb532669\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:37:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:37:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:37:07Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:37:07Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mmr9l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mmr9l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T11:37:07Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-l99rd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:37:15Z is after 2025-08-24T17:21:41Z" Nov 25 11:37:15 crc 
kubenswrapper[4706]: I1125 11:37:15.508041 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"21277b4b-1e5d-4345-ba2a-39957194f021\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e336808761e1c6c5eaa04fd06cbb4d0c0384a2cbd3dfd4c1b3a877e7e0f0c82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}
]},{\\\"containerID\\\":\\\"cri-o://cfaf9f13d49eb5c52817b0d082263791cc1dca82a23282452f1393dd693ca27a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://634b7b0df29329562f6ead9641186eee129945efc5a2d784ff6474d213b2baea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b3642576d5ecf314b809b90f8a76244e5ea54178f78729eb6521b09b7daa9c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779
036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b63b9c87fed8e56acef62af3c5b75cf637a058ada9dd8ef5afc317e99e12162\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://adfea6c3d1d62c9ce2656cae203bccf32ab19165305db3731e3db92915dd7d4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"sta
te\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://adfea6c3d1d62c9ce2656cae203bccf32ab19165305db3731e3db92915dd7d4c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T11:36:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T11:36:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://29e8fd847ab683471a692fda5b7c7d6105db11c5aecf09bb23080c58cd97c06a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://29e8fd847ab683471a692fda5b7c7d6105db11c5aecf09bb23080c58cd97c06a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T11:36:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T11:36:34Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://a4b1f85fd0291c239854093c0b26f0802291cbe4c4bab384fc69a5d21165343a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a4b1f85fd0291c239854093c0b26f0802291cbe4c4bab384fc69a5d21165343a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T11:36:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T11:36:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T11:36:32Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:37:15Z is after 2025-08-24T17:21:41Z" Nov 25 11:37:15 crc kubenswrapper[4706]: I1125 11:37:15.523747 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://23abd4bcc68d2a090882edb55d0e8569032affe5f4ebf05279e18ba3e9f9d8db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a068e34d29a7f39157ffd6e364ce643f5280f5184c13a281043247117d451364\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:37:15Z is after 2025-08-24T17:21:41Z" Nov 25 11:37:15 crc kubenswrapper[4706]: I1125 11:37:15.527671 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:37:15 crc kubenswrapper[4706]: I1125 11:37:15.527699 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:37:15 crc kubenswrapper[4706]: I1125 11:37:15.527708 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:37:15 crc kubenswrapper[4706]: I1125 11:37:15.527725 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:37:15 crc kubenswrapper[4706]: I1125 11:37:15.527735 4706 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:37:15Z","lastTransitionTime":"2025-11-25T11:37:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 11:37:15 crc kubenswrapper[4706]: I1125 11:37:15.543701 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-cjmvf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"150b96fa-570a-4b32-a82a-3275127d5b51\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:37:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:37:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:37:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de18c07bf8490d7495947e9a271e3e7273b9ffdcc43afd2a0468394af0ae0b0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-ad
ditional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:37:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2ml6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f9f9981b5f064aa5b007f4b2a2ecdc7f783e1a33e73b9e8b157eccfc54e93ff6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f9f9981b5f064aa5b007f4b2a2ecdc7f783e1a33e73b9e8b157eccfc54e93ff6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T11:36:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T11:36:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2ml6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e1e9db3e634932b935a1eb04923d02faf743f2831039edeba41d172ea6d8c52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df31
2ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e1e9db3e634932b935a1eb04923d02faf743f2831039edeba41d172ea6d8c52\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T11:36:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T11:36:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2ml6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0cee50b6983d9c650efbb5959311b6c33c2e0e2ff504fceadc8ff807f368c36e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0cee50b6983d9c650efbb5959311b6c33c2e0e2ff504fceadc8ff807f368c36e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T11:36:58Z\\\",\\\"reason\\\":\\\"Complet
ed\\\",\\\"startedAt\\\":\\\"2025-11-25T11:36:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2ml6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://29281b46d740a7e527313a667c3896430eb51ba2c50c5e406fb94d8959dbe855\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://29281b46d740a7e527313a667c3896430eb51ba2c50c5e406fb94d8959dbe855\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T11:36:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T11:36:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2ml6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0ff2d1408b3b635ada726fc15a15472d3fd7c61e21ffe0379d137fdd543c436\\\",\\\"image\\\":\\\"quay.io/openshift-relea
se-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b0ff2d1408b3b635ada726fc15a15472d3fd7c61e21ffe0379d137fdd543c436\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T11:37:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T11:37:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2ml6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c3b94746fe10e0f9375491a41d10973d2576eb69f0883cef3ef0132efb0e8fc9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c3b94746fe10e0f9375491a41d10973d2576eb69f0883cef3ef0132efb0e8fc9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T11:37:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T11:37:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/
host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2ml6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T11:36:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-cjmvf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:37:15Z is after 2025-08-24T17:21:41Z" Nov 25 11:37:15 crc kubenswrapper[4706]: I1125 11:37:15.562438 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ce0e2e75-834b-46fb-bc84-229e60f904b1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:37:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:37:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://86001c3abc077d36ed1fa0c37bb6163896fb9cde28b58affd2f67fb8a024165b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://24c326f147def477e6dd794576cbdc9aed69f799cc18984f475496748b05eb32\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c65af8b438f57256d8c22cb34f68922d628338e384ca97d694b0dbf2d41a5e27\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://db08dd21321e0e49c2bcec934b9c4ca65e93ed3eff5d3d110b0137d37ebe255e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://333951d9a31cf3e7c1e98d27f636e2425f87cd082a8a5acae66533a76f5ad206\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-25T11:36:51Z\\\"
,\\\"message\\\":\\\" shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI1125 11:36:51.292762 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI1125 11:36:51.292767 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI1125 11:36:51.292853 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI1125 11:36:51.292876 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI1125 11:36:51.293041 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-1230105117/tls.crt::/tmp/serving-cert-1230105117/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1764070595\\\\\\\\\\\\\\\" (2025-11-25 11:36:34 +0000 UTC to 2025-12-25 11:36:35 +0000 UTC (now=2025-11-25 11:36:51.29301304 +0000 UTC))\\\\\\\"\\\\nI1125 11:36:51.293171 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1230105117/tls.crt::/tmp/serving-cert-1230105117/tls.key\\\\\\\"\\\\nI1125 11:36:51.293210 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1764070605\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1764070605\\\\\\\\\\\\\\\" (2025-11-25 10:36:45 +0000 UTC to 2026-11-25 10:36:45 +0000 UTC (now=2025-11-25 11:36:51.293188774 +0000 
UTC))\\\\\\\"\\\\nI1125 11:36:51.293233 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI1125 11:36:51.293259 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI1125 11:36:51.293279 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nF1125 11:36:51.293378 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T11:36:34Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe85a38abd8df52ad0fbd3dd6b048b8c42390b6064d3601996727dadb3fcbe69\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:34Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea87e7399f4267877f5e967eb62bd500b646e88ee8ee20a71c3bf7c0941f02a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d3472
0243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ea87e7399f4267877f5e967eb62bd500b646e88ee8ee20a71c3bf7c0941f02a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T11:36:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T11:36:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T11:36:32Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:37:15Z is after 2025-08-24T17:21:41Z" Nov 25 11:37:15 crc kubenswrapper[4706]: I1125 11:37:15.577685 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-dhfpm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0930887a-320c-4506-8c9c-f94d6d64516a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://736e37ff944f81ac9808ff8a76d36837aeabc76a4c08bbeba3f707616e1f0884\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g7sgt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://86f4bfd310c27ea3b77c2f58c91e153db5f17948
71a3fbeb5711cc119aa81e38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g7sgt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T11:36:53Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-dhfpm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:37:15Z is after 2025-08-24T17:21:41Z" Nov 25 11:37:15 crc kubenswrapper[4706]: I1125 11:37:15.590581 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-nh9sc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7813e79d-885d-4cf1-ac27-039e998473b7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea634334242536d35bf36e9078539cad4658b161b61e6051d9bb6d8544e71f5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g9gvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T11:36:53Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-nh9sc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:37:15Z is after 2025-08-24T17:21:41Z" Nov 25 11:37:15 crc kubenswrapper[4706]: I1125 11:37:15.604618 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-qkkfz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dc09de93-57e8-4697-8ce8-70bfc1b693e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:37:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:37:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:37:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:37:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6daff2070c60f609fd06be9589e3cd8d304d131f7b9669c7be4b8e9178df8f8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d7
93426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:37:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hmrl8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://39eec3aac772cc9463505277d6b3f7cf2eb7621e4add4f14e53110e3db8c4cdc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:37:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hmrl8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T11:37:06Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-qkkfz\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:37:15Z is after 2025-08-24T17:21:41Z" Nov 25 11:37:15 crc kubenswrapper[4706]: I1125 11:37:15.624854 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad79bed891e80837fc120b01cb2b41a16493f2f5281c83a6bb489cc17c6da995\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:37:15Z is after 2025-08-24T17:21:41Z" Nov 25 11:37:15 crc kubenswrapper[4706]: I1125 11:37:15.630848 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:37:15 crc kubenswrapper[4706]: I1125 11:37:15.630906 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:37:15 crc kubenswrapper[4706]: I1125 11:37:15.630918 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:37:15 crc kubenswrapper[4706]: I1125 11:37:15.630955 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:37:15 crc kubenswrapper[4706]: I1125 11:37:15.630968 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:37:15Z","lastTransitionTime":"2025-11-25T11:37:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:37:15 crc kubenswrapper[4706]: I1125 11:37:15.641617 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-lpc7s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3ec2e656-a68d-4339-92d5-0c157f7f7783\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c3a1481dd8cb88b79d8addfbfd40caf18850769e4492c2af316105b7f6779f9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes
.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w54mf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T11:36:56Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lpc7s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:37:15Z is after 2025-08-24T17:21:41Z" Nov 25 11:37:15 crc kubenswrapper[4706]: I1125 11:37:15.656816 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:51Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:37:15Z is after 2025-08-24T17:21:41Z" Nov 25 11:37:15 crc kubenswrapper[4706]: I1125 11:37:15.673919 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:51Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:37:15Z is after 2025-08-24T17:21:41Z" Nov 25 11:37:15 crc kubenswrapper[4706]: I1125 11:37:15.691791 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-s47nr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9912058e-28f5-4cec-9eeb-03e37e0dc5c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d03353478b53d9441951702b66365bb3a08ad9c509347472bbb31049851435a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wfqx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T11:36:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-s47nr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:37:15Z is after 2025-08-24T17:21:41Z" Nov 25 11:37:15 crc kubenswrapper[4706]: I1125 11:37:15.716415 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-q9rpr" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f1218bae-4153-4490-8847-ab2d07ca0ab6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:53Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:53Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da5cea02464a703174faaa2a8a7dc6ba3c26bca96be0219f7304d81aba5be54e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b55sf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e92e9ade6889e5400b3c3ddff066aa544d425cf0637b75071678b8c63f8e35f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b55sf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca28080773ed8c026159b2309297e1c8ccd7cf79c4c19e3a62d89bc5a95851fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b55sf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://86d79d5837993b0bfb40c7114fd69f45a9bfd2e956b5b0fe062706e920fecd48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:55Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b55sf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7df3bf6c507e0fd5fb0f32a8785d67c96f47255fdc5d2aafb8838260ac334d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b55sf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://96aa7fcebdc88f01d2260f95d255244e28c30d422f954da2222a5b7c17d05b96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b55sf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://408d84ea146425bb2b2ac6cfb181cd139a8465caa12eb3d4b0e2b738d1f52484\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://408d84ea146425bb2b2ac6cfb181cd139a8465caa12eb3d4b0e2b738d1f52484\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-25T11:37:05Z\\\",\\\"message\\\":\\\"alse]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.150:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {6ea1fd71-2b40-4361-92ee-3f1ab4ec7414}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI1125 
11:37:04.999467 6129 obj_retry.go:409] Going to retry *v1.Pod resource setup for 14 objects: [openshift-kube-controller-manager/kube-controller-manager-crc openshift-machine-config-operator/machine-config-daemon-dhfpm openshift-multus/multus-additional-cni-plugins-cjmvf openshift-network-diagnostics/network-check-source-55646444c4-trplf openshift-dns/node-resolver-nh9sc openshift-image-registry/node-ca-lpc7s openshift-multus/multus-s47nr openshift-network-diagnostics/network-check-target-xd92c openshift-ovn-kubernetes/ovnkube-node-q9rpr openshift-etcd/etcd-crc openshift-network-console/networking-console-plugin-85b44fc459-gdk6g openshift-network-node-identity/network-node-identity-vrzqb openshift-kube-apiserver/kube-apiserver-crc openshift-network-operator/iptables-alerter-4ln5h]\\\\nF1125 11:37:04.999486 6129 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller ini\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T11:37:04Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-q9rpr_openshift-ovn-kubernetes(f1218bae-4153-4490-8847-ab2d07ca0ab6)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b55sf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://62c923d955013808a55d99cb73f4239900fc83a2f53e1e8cceff3e9bc5768188\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b55sf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56474d5374e1047078d38c60dcbd00f4495bcc0d651a9e75fa70d64e34b10baa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://56474d5374e1047078
d38c60dcbd00f4495bcc0d651a9e75fa70d64e34b10baa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T11:36:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T11:36:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b55sf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T11:36:53Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-q9rpr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:37:15Z is after 2025-08-24T17:21:41Z" Nov 25 11:37:15 crc kubenswrapper[4706]: I1125 11:37:15.731131 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"363ff191-6229-47e9-a7d0-1c72f21e7c61\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://71b496da1a81efbb50a84766e610a6b03e032a4e2cb5a71191395ffb85f6b1f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://83b1d9c60793e3e0b5943d7cccd50656df78c4655b84e12c8dd1ba7d99a7990d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab8621c83015577b9039ac2ba9ce46f8b29f66d77da31a02d179132d923741bd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4d0ce4e175dd8da8d15b26e60ced87ee11dc8079ce730cfbdce1b3f4f08b1d2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-11-25T11:36:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T11:36:32Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:37:15Z is after 2025-08-24T17:21:41Z" Nov 25 11:37:15 crc kubenswrapper[4706]: I1125 11:37:15.733146 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:37:15 crc kubenswrapper[4706]: I1125 11:37:15.733191 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:37:15 crc kubenswrapper[4706]: I1125 11:37:15.733205 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:37:15 crc kubenswrapper[4706]: I1125 11:37:15.733226 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:37:15 crc kubenswrapper[4706]: I1125 11:37:15.733240 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:37:15Z","lastTransitionTime":"2025-11-25T11:37:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns 
error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 11:37:15 crc kubenswrapper[4706]: I1125 11:37:15.746166 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://998291d5af3be798ff4e2f00d043f615e086fef44e541071bbaf781983955ce6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursive
ReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:37:15Z is after 2025-08-24T17:21:41Z" Nov 25 11:37:15 crc kubenswrapper[4706]: I1125 11:37:15.760034 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:51Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:37:15Z is after 2025-08-24T17:21:41Z" Nov 25 11:37:15 crc kubenswrapper[4706]: I1125 11:37:15.836389 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:37:15 crc kubenswrapper[4706]: I1125 11:37:15.836440 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:37:15 crc kubenswrapper[4706]: I1125 11:37:15.836451 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:37:15 crc kubenswrapper[4706]: I1125 11:37:15.836472 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:37:15 crc kubenswrapper[4706]: I1125 11:37:15.836485 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:37:15Z","lastTransitionTime":"2025-11-25T11:37:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 11:37:15 crc kubenswrapper[4706]: I1125 11:37:15.921541 4706 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 11:37:15 crc kubenswrapper[4706]: E1125 11:37:15.921713 4706 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 25 11:37:15 crc kubenswrapper[4706]: I1125 11:37:15.922148 4706 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 11:37:15 crc kubenswrapper[4706]: E1125 11:37:15.922197 4706 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 25 11:37:15 crc kubenswrapper[4706]: I1125 11:37:15.922251 4706 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-l99rd" Nov 25 11:37:15 crc kubenswrapper[4706]: E1125 11:37:15.922332 4706 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-l99rd" podUID="14d69237-a4b7-43ea-ac81-f165eb532669" Nov 25 11:37:15 crc kubenswrapper[4706]: I1125 11:37:15.922472 4706 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 11:37:15 crc kubenswrapper[4706]: E1125 11:37:15.922528 4706 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 25 11:37:15 crc kubenswrapper[4706]: I1125 11:37:15.938946 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:37:15 crc kubenswrapper[4706]: I1125 11:37:15.938986 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:37:15 crc kubenswrapper[4706]: I1125 11:37:15.938995 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:37:15 crc kubenswrapper[4706]: I1125 11:37:15.939011 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:37:15 crc kubenswrapper[4706]: I1125 11:37:15.939022 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:37:15Z","lastTransitionTime":"2025-11-25T11:37:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:37:16 crc kubenswrapper[4706]: I1125 11:37:16.042377 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:37:16 crc kubenswrapper[4706]: I1125 11:37:16.042417 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:37:16 crc kubenswrapper[4706]: I1125 11:37:16.042425 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:37:16 crc kubenswrapper[4706]: I1125 11:37:16.042441 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:37:16 crc kubenswrapper[4706]: I1125 11:37:16.042450 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:37:16Z","lastTransitionTime":"2025-11-25T11:37:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:37:16 crc kubenswrapper[4706]: I1125 11:37:16.145929 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:37:16 crc kubenswrapper[4706]: I1125 11:37:16.145980 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:37:16 crc kubenswrapper[4706]: I1125 11:37:16.145995 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:37:16 crc kubenswrapper[4706]: I1125 11:37:16.146016 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:37:16 crc kubenswrapper[4706]: I1125 11:37:16.146030 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:37:16Z","lastTransitionTime":"2025-11-25T11:37:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:37:16 crc kubenswrapper[4706]: I1125 11:37:16.189366 4706 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-q9rpr_f1218bae-4153-4490-8847-ab2d07ca0ab6/ovnkube-controller/1.log" Nov 25 11:37:16 crc kubenswrapper[4706]: I1125 11:37:16.193167 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-q9rpr" event={"ID":"f1218bae-4153-4490-8847-ab2d07ca0ab6","Type":"ContainerStarted","Data":"67aac9b1fc77bcf7bb71812ee95214930edbb62bf5efb82d5128c53fd392a346"} Nov 25 11:37:16 crc kubenswrapper[4706]: I1125 11:37:16.193682 4706 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-q9rpr" Nov 25 11:37:16 crc kubenswrapper[4706]: I1125 11:37:16.210636 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ce0e2e75-834b-46fb-bc84-229e60f904b1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:37:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:37:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://86001c3abc077d36ed1fa0c37bb6163896fb9cde28b58affd2f67fb8a024165b\\\",\\\"image\\\":\\\"quay.io/openshift-
release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://24c326f147def477e6dd794576cbdc9aed69f799cc18984f475496748b05eb32\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c65af8b438f57256d8c22cb34f68922d628338e384ca97d694b0dbf2d41a5e27\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"
ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://db08dd21321e0e49c2bcec934b9c4ca65e93ed3eff5d3d110b0137d37ebe255e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://333951d9a31cf3e7c1e98d27f636e2425f87cd082a8a5acae66533a76f5ad206\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-25T11:36:51Z\\\",\\\"message\\\":\\\" shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI1125 11:36:51.292762 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI1125 11:36:51.292767 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI1125 11:36:51.292853 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI1125 11:36:51.292876 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI1125 11:36:51.293041 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-1230105117/tls.crt::/tmp/serving-cert-1230105117/tls.key\\\\\\\" 
certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1764070595\\\\\\\\\\\\\\\" (2025-11-25 11:36:34 +0000 UTC to 2025-12-25 11:36:35 +0000 UTC (now=2025-11-25 11:36:51.29301304 +0000 UTC))\\\\\\\"\\\\nI1125 11:36:51.293171 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1230105117/tls.crt::/tmp/serving-cert-1230105117/tls.key\\\\\\\"\\\\nI1125 11:36:51.293210 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1764070605\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1764070605\\\\\\\\\\\\\\\" (2025-11-25 10:36:45 +0000 UTC to 2026-11-25 10:36:45 +0000 UTC (now=2025-11-25 11:36:51.293188774 +0000 UTC))\\\\\\\"\\\\nI1125 11:36:51.293233 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI1125 11:36:51.293259 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI1125 11:36:51.293279 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nF1125 11:36:51.293378 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T11:36:34Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe85a38abd8df52ad0fbd3dd6b048b8c42390b6064d3601996727dadb3fcbe69\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:34Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea87e7399f4267877f5e967eb62bd500b646e88ee8ee20a71c3bf7c0941f02a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ea87e7399f4267877f5e967eb62bd500b646e88ee8ee20a71c3bf7c0941f02a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T11:36:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"sta
rtedAt\\\":\\\"2025-11-25T11:36:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T11:36:32Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:37:16Z is after 2025-08-24T17:21:41Z" Nov 25 11:37:16 crc kubenswrapper[4706]: I1125 11:37:16.222594 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-dhfpm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0930887a-320c-4506-8c9c-f94d6d64516a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://736e37ff944f81ac9808ff8a76d36837aeabc76a4c08bbeba3f707616e1f0884\\\",\\\"image\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g7sgt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://86f4bfd310c27ea3b77c2f58c91e153db5f1794871a3fbeb5711cc119aa81e38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g7sgt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T11:36:53Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-dhfpm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:37:16Z is after 2025-08-24T17:21:41Z" Nov 25 11:37:16 crc kubenswrapper[4706]: I1125 11:37:16.235172 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-nh9sc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7813e79d-885d-4cf1-ac27-039e998473b7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea634334242536d35bf36e9078539cad4658b161b61e6051d9bb6d8544e71f5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready
\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g9gvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T11:36:53Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-nh9sc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:37:16Z is after 2025-08-24T17:21:41Z" Nov 25 11:37:16 crc kubenswrapper[4706]: I1125 11:37:16.248613 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:37:16 crc kubenswrapper[4706]: I1125 11:37:16.248652 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:37:16 crc kubenswrapper[4706]: I1125 11:37:16.248661 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:37:16 crc kubenswrapper[4706]: I1125 11:37:16.248678 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:37:16 crc kubenswrapper[4706]: I1125 11:37:16.248688 4706 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:37:16Z","lastTransitionTime":"2025-11-25T11:37:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 11:37:16 crc kubenswrapper[4706]: I1125 11:37:16.253072 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-qkkfz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dc09de93-57e8-4697-8ce8-70bfc1b693e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:37:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:37:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:37:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:37:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6daff2070c60f609fd06be9589e3cd8d304d131f7b9669c7be4b8e9178df8f8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kub
e-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:37:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hmrl8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://39eec3aac772cc9463505277d6b3f7cf2eb7621e4add4f14e53110e3db8c4cdc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:37:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hmrl8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T11:37:06Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-qkkfz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:37:16Z is after 2025-08-24T17:21:41Z" Nov 25 11:37:16 crc kubenswrapper[4706]: I1125 11:37:16.268880 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad79bed891e80837fc120b01cb2b41a16493f2f5281c83a6bb489cc17c6da995\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name
\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:37:16Z is after 2025-08-24T17:21:41Z" Nov 25 11:37:16 crc kubenswrapper[4706]: I1125 11:37:16.284665 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-lpc7s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3ec2e656-a68d-4339-92d5-0c157f7f7783\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c3a1481dd8cb88b79d8addfbfd40caf18850769e4492c2af316105b7f6779f9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e9
6f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w54mf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T11:36:56Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lpc7s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:37:16Z is after 2025-08-24T17:21:41Z" Nov 25 11:37:16 crc kubenswrapper[4706]: I1125 11:37:16.298480 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:37:16Z is after 2025-08-24T17:21:41Z" Nov 25 11:37:16 crc kubenswrapper[4706]: I1125 11:37:16.311208 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:51Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:37:16Z is after 2025-08-24T17:21:41Z" Nov 25 11:37:16 crc kubenswrapper[4706]: I1125 11:37:16.325178 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-s47nr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9912058e-28f5-4cec-9eeb-03e37e0dc5c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d03353478b53d9441951702b66365bb3a08ad9c509347472bbb31049851435a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wfqx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T11:36:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-s47nr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:37:16Z is after 2025-08-24T17:21:41Z" Nov 25 11:37:16 crc kubenswrapper[4706]: I1125 11:37:16.343678 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-q9rpr" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f1218bae-4153-4490-8847-ab2d07ca0ab6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:53Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:53Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da5cea02464a703174faaa2a8a7dc6ba3c26bca96be0219f7304d81aba5be54e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b55sf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e92e9ade6889e5400b3c3ddff066aa544d425cf0637b75071678b8c63f8e35f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b55sf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca28080773ed8c026159b2309297e1c8ccd7cf79c4c19e3a62d89bc5a95851fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b55sf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://86d79d5837993b0bfb40c7114fd69f45a9bfd2e956b5b0fe062706e920fecd48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:55Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b55sf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7df3bf6c507e0fd5fb0f32a8785d67c96f47255fdc5d2aafb8838260ac334d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b55sf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://96aa7fcebdc88f01d2260f95d255244e28c30d422f954da2222a5b7c17d05b96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b55sf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://67aac9b1fc77bcf7bb71812ee95214930edbb62bf5efb82d5128c53fd392a346\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://408d84ea146425bb2b2ac6cfb181cd139a8465caa12eb3d4b0e2b738d1f52484\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-25T11:37:05Z\\\",\\\"message\\\":\\\"alse]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.150:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {6ea1fd71-2b40-4361-92ee-3f1ab4ec7414}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI1125 
11:37:04.999467 6129 obj_retry.go:409] Going to retry *v1.Pod resource setup for 14 objects: [openshift-kube-controller-manager/kube-controller-manager-crc openshift-machine-config-operator/machine-config-daemon-dhfpm openshift-multus/multus-additional-cni-plugins-cjmvf openshift-network-diagnostics/network-check-source-55646444c4-trplf openshift-dns/node-resolver-nh9sc openshift-image-registry/node-ca-lpc7s openshift-multus/multus-s47nr openshift-network-diagnostics/network-check-target-xd92c openshift-ovn-kubernetes/ovnkube-node-q9rpr openshift-etcd/etcd-crc openshift-network-console/networking-console-plugin-85b44fc459-gdk6g openshift-network-node-identity/network-node-identity-vrzqb openshift-kube-apiserver/kube-apiserver-crc openshift-network-operator/iptables-alerter-4ln5h]\\\\nF1125 11:37:04.999486 6129 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller 
ini\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T11:37:04Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:37:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\
\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b55sf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://62c923d955013808a55d99cb73f4239900fc83a2f53e1e8cceff3e9bc5768188\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b55sf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56474d5374e1047078d38c60dcbd00f4495bcc0d651a9e75fa70d64e34b10baa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\
\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://56474d5374e1047078d38c60dcbd00f4495bcc0d651a9e75fa70d64e34b10baa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T11:36:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T11:36:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b55sf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T11:36:53Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-q9rpr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:37:16Z is after 2025-08-24T17:21:41Z" Nov 25 11:37:16 crc kubenswrapper[4706]: I1125 11:37:16.351624 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:37:16 crc kubenswrapper[4706]: I1125 11:37:16.351670 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:37:16 crc kubenswrapper[4706]: I1125 11:37:16.351678 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:37:16 crc kubenswrapper[4706]: I1125 11:37:16.351697 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:37:16 crc kubenswrapper[4706]: I1125 11:37:16.351708 4706 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:37:16Z","lastTransitionTime":"2025-11-25T11:37:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 11:37:16 crc kubenswrapper[4706]: I1125 11:37:16.355828 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"363ff191-6229-47e9-a7d0-1c72f21e7c61\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://71b496da1a81efbb50a84766e610a6b03e032a4e2cb5a71191395ffb85f6b1f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\
\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://83b1d9c60793e3e0b5943d7cccd50656df78c4655b84e12c8dd1ba7d99a7990d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab8621c83015577b9039ac2ba9ce46f8b29f66d77da31a02d179132d923741bd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\
\":\\\"cri-o://a4d0ce4e175dd8da8d15b26e60ced87ee11dc8079ce730cfbdce1b3f4f08b1d2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T11:36:32Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:37:16Z is after 2025-08-24T17:21:41Z" Nov 25 11:37:16 crc kubenswrapper[4706]: I1125 11:37:16.369103 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://998291d5af3be798ff4e2f00d043f615e086fef44e541071bbaf781983955ce6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-11-25T11:37:16Z is after 2025-08-24T17:21:41Z" Nov 25 11:37:16 crc kubenswrapper[4706]: I1125 11:37:16.383879 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:51Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:37:16Z is after 2025-08-24T17:21:41Z" Nov 25 11:37:16 crc kubenswrapper[4706]: I1125 11:37:16.396078 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-l99rd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"14d69237-a4b7-43ea-ac81-f165eb532669\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:37:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:37:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:37:07Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:37:07Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mmr9l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mmr9l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T11:37:07Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-l99rd\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:37:16Z is after 2025-08-24T17:21:41Z" Nov 25 11:37:16 crc kubenswrapper[4706]: I1125 11:37:16.415421 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"21277b4b-1e5d-4345-ba2a-39957194f021\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e336808761e1c6c5eaa04fd06cbb4d0c0384a2cbd3dfd4c1b3a877e7e0f0c82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\
":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cfaf9f13d49eb5c52817b0d082263791cc1dca82a23282452f1393dd693ca27a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://634b7b0df29329562f6ead9641186eee129945efc5a2d784ff6474d213b2baea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b3642576d5ecf314b809b90f8a76244e
5ea54178f78729eb6521b09b7daa9c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b63b9c87fed8e56acef62af3c5b75cf637a058ada9dd8ef5afc317e99e12162\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://adfea6c3d1d62c9ce2656cae203bccf32ab19165305db3731e3db92915dd7d4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e
49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://adfea6c3d1d62c9ce2656cae203bccf32ab19165305db3731e3db92915dd7d4c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T11:36:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T11:36:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://29e8fd847ab683471a692fda5b7c7d6105db11c5aecf09bb23080c58cd97c06a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://29e8fd847ab683471a692fda5b7c7d6105db11c5aecf09bb23080c58cd97c06a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T11:36:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T11:36:34Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://a4b1f85fd0291c239854093c0b26f0802291cbe4c4bab384fc69a5d21165343a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminate
d\\\":{\\\"containerID\\\":\\\"cri-o://a4b1f85fd0291c239854093c0b26f0802291cbe4c4bab384fc69a5d21165343a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T11:36:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T11:36:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T11:36:32Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:37:16Z is after 2025-08-24T17:21:41Z" Nov 25 11:37:16 crc kubenswrapper[4706]: I1125 11:37:16.428279 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://23abd4bcc68d2a090882edb55d0e8569032affe5f4ebf05279e18ba3e9f9d8db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a068e34d29a7f39157ffd6e364ce643f5280f5184c13a281043247117d451364\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:37:16Z is after 2025-08-24T17:21:41Z" Nov 25 11:37:16 crc kubenswrapper[4706]: I1125 11:37:16.441930 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-cjmvf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"150b96fa-570a-4b32-a82a-3275127d5b51\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:37:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:37:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:37:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de18c07bf8490d7495947e9a271e3e7273b9ffdcc43afd2a0468394af0ae0b0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:37:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2ml6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f9f9981b5f064aa5b007f4b2a2ecdc7f783e1a33e73b9e8b157eccfc54e93ff6\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f9f9981b5f064aa5b007f4b2a2ecdc7f783e1a33e73b9e8b157eccfc54e93ff6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T11:36:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T11:36:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2ml6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e1e9db3e634932b935a1eb04923d02faf743f2831039edeba41d172ea6d8c52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e1e9db3e634932b935a1eb04923d02faf743f2831039edeba41d172ea6d8c52\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T11:36:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T11:36:57Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2ml6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0cee50b6983d9c650efbb5959311b6c33c2e0e2ff504fceadc8ff807f368c36e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0cee50b6983d9c650efbb5959311b6c33c2e0e2ff504fceadc8ff807f368c36e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T11:36:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T11:36:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2ml6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://29281
b46d740a7e527313a667c3896430eb51ba2c50c5e406fb94d8959dbe855\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://29281b46d740a7e527313a667c3896430eb51ba2c50c5e406fb94d8959dbe855\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T11:36:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T11:36:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2ml6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0ff2d1408b3b635ada726fc15a15472d3fd7c61e21ffe0379d137fdd543c436\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b0ff2d1408b3b635ada726fc15a15472d3fd7c61e21ffe0379d137fdd543c436\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T11:37:01Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2025-11-25T11:37:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2ml6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c3b94746fe10e0f9375491a41d10973d2576eb69f0883cef3ef0132efb0e8fc9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c3b94746fe10e0f9375491a41d10973d2576eb69f0883cef3ef0132efb0e8fc9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T11:37:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T11:37:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2ml6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T11:36:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-cjmvf\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:37:16Z is after 2025-08-24T17:21:41Z" Nov 25 11:37:16 crc kubenswrapper[4706]: I1125 11:37:16.453882 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:37:16 crc kubenswrapper[4706]: I1125 11:37:16.454015 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:37:16 crc kubenswrapper[4706]: I1125 11:37:16.454030 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:37:16 crc kubenswrapper[4706]: I1125 11:37:16.454053 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:37:16 crc kubenswrapper[4706]: I1125 11:37:16.454068 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:37:16Z","lastTransitionTime":"2025-11-25T11:37:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:37:16 crc kubenswrapper[4706]: I1125 11:37:16.556947 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:37:16 crc kubenswrapper[4706]: I1125 11:37:16.557009 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:37:16 crc kubenswrapper[4706]: I1125 11:37:16.557021 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:37:16 crc kubenswrapper[4706]: I1125 11:37:16.557070 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:37:16 crc kubenswrapper[4706]: I1125 11:37:16.557087 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:37:16Z","lastTransitionTime":"2025-11-25T11:37:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:37:16 crc kubenswrapper[4706]: I1125 11:37:16.659924 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:37:16 crc kubenswrapper[4706]: I1125 11:37:16.659956 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:37:16 crc kubenswrapper[4706]: I1125 11:37:16.659966 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:37:16 crc kubenswrapper[4706]: I1125 11:37:16.659982 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:37:16 crc kubenswrapper[4706]: I1125 11:37:16.659992 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:37:16Z","lastTransitionTime":"2025-11-25T11:37:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:37:16 crc kubenswrapper[4706]: I1125 11:37:16.762193 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:37:16 crc kubenswrapper[4706]: I1125 11:37:16.762256 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:37:16 crc kubenswrapper[4706]: I1125 11:37:16.762268 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:37:16 crc kubenswrapper[4706]: I1125 11:37:16.762286 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:37:16 crc kubenswrapper[4706]: I1125 11:37:16.762314 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:37:16Z","lastTransitionTime":"2025-11-25T11:37:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:37:16 crc kubenswrapper[4706]: I1125 11:37:16.865437 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:37:16 crc kubenswrapper[4706]: I1125 11:37:16.865496 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:37:16 crc kubenswrapper[4706]: I1125 11:37:16.865509 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:37:16 crc kubenswrapper[4706]: I1125 11:37:16.865531 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:37:16 crc kubenswrapper[4706]: I1125 11:37:16.865544 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:37:16Z","lastTransitionTime":"2025-11-25T11:37:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:37:16 crc kubenswrapper[4706]: I1125 11:37:16.968179 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:37:16 crc kubenswrapper[4706]: I1125 11:37:16.968272 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:37:16 crc kubenswrapper[4706]: I1125 11:37:16.968286 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:37:16 crc kubenswrapper[4706]: I1125 11:37:16.968340 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:37:16 crc kubenswrapper[4706]: I1125 11:37:16.968354 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:37:16Z","lastTransitionTime":"2025-11-25T11:37:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:37:17 crc kubenswrapper[4706]: I1125 11:37:17.071489 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:37:17 crc kubenswrapper[4706]: I1125 11:37:17.071543 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:37:17 crc kubenswrapper[4706]: I1125 11:37:17.071554 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:37:17 crc kubenswrapper[4706]: I1125 11:37:17.071573 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:37:17 crc kubenswrapper[4706]: I1125 11:37:17.071587 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:37:17Z","lastTransitionTime":"2025-11-25T11:37:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:37:17 crc kubenswrapper[4706]: I1125 11:37:17.174449 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:37:17 crc kubenswrapper[4706]: I1125 11:37:17.174510 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:37:17 crc kubenswrapper[4706]: I1125 11:37:17.174523 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:37:17 crc kubenswrapper[4706]: I1125 11:37:17.174548 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:37:17 crc kubenswrapper[4706]: I1125 11:37:17.174564 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:37:17Z","lastTransitionTime":"2025-11-25T11:37:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:37:17 crc kubenswrapper[4706]: I1125 11:37:17.200510 4706 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-q9rpr_f1218bae-4153-4490-8847-ab2d07ca0ab6/ovnkube-controller/2.log" Nov 25 11:37:17 crc kubenswrapper[4706]: I1125 11:37:17.201428 4706 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-q9rpr_f1218bae-4153-4490-8847-ab2d07ca0ab6/ovnkube-controller/1.log" Nov 25 11:37:17 crc kubenswrapper[4706]: I1125 11:37:17.204464 4706 generic.go:334] "Generic (PLEG): container finished" podID="f1218bae-4153-4490-8847-ab2d07ca0ab6" containerID="67aac9b1fc77bcf7bb71812ee95214930edbb62bf5efb82d5128c53fd392a346" exitCode=1 Nov 25 11:37:17 crc kubenswrapper[4706]: I1125 11:37:17.204539 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-q9rpr" event={"ID":"f1218bae-4153-4490-8847-ab2d07ca0ab6","Type":"ContainerDied","Data":"67aac9b1fc77bcf7bb71812ee95214930edbb62bf5efb82d5128c53fd392a346"} Nov 25 11:37:17 crc kubenswrapper[4706]: I1125 11:37:17.204609 4706 scope.go:117] "RemoveContainer" containerID="408d84ea146425bb2b2ac6cfb181cd139a8465caa12eb3d4b0e2b738d1f52484" Nov 25 11:37:17 crc kubenswrapper[4706]: I1125 11:37:17.205404 4706 scope.go:117] "RemoveContainer" containerID="67aac9b1fc77bcf7bb71812ee95214930edbb62bf5efb82d5128c53fd392a346" Nov 25 11:37:17 crc kubenswrapper[4706]: E1125 11:37:17.205810 4706 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-q9rpr_openshift-ovn-kubernetes(f1218bae-4153-4490-8847-ab2d07ca0ab6)\"" pod="openshift-ovn-kubernetes/ovnkube-node-q9rpr" podUID="f1218bae-4153-4490-8847-ab2d07ca0ab6" Nov 25 11:37:17 crc kubenswrapper[4706]: I1125 11:37:17.220724 4706 status_manager.go:875] "Failed to update 
status for pod" pod="openshift-image-registry/node-ca-lpc7s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3ec2e656-a68d-4339-92d5-0c157f7f7783\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c3a1481dd8cb88b79d8addfbfd40caf18850769e4492c2af316105b7f6779f9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w54mf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\
\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T11:36:56Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lpc7s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:37:17Z is after 2025-08-24T17:21:41Z" Nov 25 11:37:17 crc kubenswrapper[4706]: I1125 11:37:17.236710 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad79bed891e80837fc120b01cb2b41a16493f2f5281c83a6bb489cc17c6da995\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":
0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:37:17Z is after 2025-08-24T17:21:41Z" Nov 25 11:37:17 crc kubenswrapper[4706]: I1125 11:37:17.255542 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://998291d5af3be798ff4e2f00d043f615e086fef44e541071bbaf781983955ce6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-11-25T11:37:17Z is after 2025-08-24T17:21:41Z" Nov 25 11:37:17 crc kubenswrapper[4706]: I1125 11:37:17.271855 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:51Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:37:17Z is after 2025-08-24T17:21:41Z" Nov 25 11:37:17 crc kubenswrapper[4706]: I1125 11:37:17.276856 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:37:17 crc kubenswrapper[4706]: I1125 11:37:17.276894 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:37:17 crc kubenswrapper[4706]: I1125 11:37:17.276905 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:37:17 crc kubenswrapper[4706]: I1125 11:37:17.276925 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:37:17 crc kubenswrapper[4706]: I1125 11:37:17.276936 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:37:17Z","lastTransitionTime":"2025-11-25T11:37:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 11:37:17 crc kubenswrapper[4706]: I1125 11:37:17.287555 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:37:17Z is after 2025-08-24T17:21:41Z" Nov 25 11:37:17 crc kubenswrapper[4706]: I1125 11:37:17.303825 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:51Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:37:17Z is after 2025-08-24T17:21:41Z" Nov 25 11:37:17 crc kubenswrapper[4706]: I1125 11:37:17.318843 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-s47nr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9912058e-28f5-4cec-9eeb-03e37e0dc5c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d03353478b53d9441951702b66365bb3a08ad9c509347472bbb31049851435a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wfqx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T11:36:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-s47nr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:37:17Z is after 2025-08-24T17:21:41Z" Nov 25 11:37:17 crc kubenswrapper[4706]: I1125 11:37:17.335753 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-q9rpr" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f1218bae-4153-4490-8847-ab2d07ca0ab6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:53Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:53Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da5cea02464a703174faaa2a8a7dc6ba3c26bca96be0219f7304d81aba5be54e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b55sf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e92e9ade6889e5400b3c3ddff066aa544d425cf0637b75071678b8c63f8e35f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b55sf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca28080773ed8c026159b2309297e1c8ccd7cf79c4c19e3a62d89bc5a95851fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b55sf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://86d79d5837993b0bfb40c7114fd69f45a9bfd2e956b5b0fe062706e920fecd48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:55Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b55sf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7df3bf6c507e0fd5fb0f32a8785d67c96f47255fdc5d2aafb8838260ac334d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b55sf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://96aa7fcebdc88f01d2260f95d255244e28c30d422f954da2222a5b7c17d05b96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b55sf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://67aac9b1fc77bcf7bb71812ee95214930edbb62bf5efb82d5128c53fd392a346\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://408d84ea146425bb2b2ac6cfb181cd139a8465caa12eb3d4b0e2b738d1f52484\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-25T11:37:05Z\\\",\\\"message\\\":\\\"alse]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.150:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {6ea1fd71-2b40-4361-92ee-3f1ab4ec7414}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI1125 
11:37:04.999467 6129 obj_retry.go:409] Going to retry *v1.Pod resource setup for 14 objects: [openshift-kube-controller-manager/kube-controller-manager-crc openshift-machine-config-operator/machine-config-daemon-dhfpm openshift-multus/multus-additional-cni-plugins-cjmvf openshift-network-diagnostics/network-check-source-55646444c4-trplf openshift-dns/node-resolver-nh9sc openshift-image-registry/node-ca-lpc7s openshift-multus/multus-s47nr openshift-network-diagnostics/network-check-target-xd92c openshift-ovn-kubernetes/ovnkube-node-q9rpr openshift-etcd/etcd-crc openshift-network-console/networking-console-plugin-85b44fc459-gdk6g openshift-network-node-identity/network-node-identity-vrzqb openshift-kube-apiserver/kube-apiserver-crc openshift-network-operator/iptables-alerter-4ln5h]\\\\nF1125 11:37:04.999486 6129 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller ini\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T11:37:04Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://67aac9b1fc77bcf7bb71812ee95214930edbb62bf5efb82d5128c53fd392a346\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-25T11:37:16Z\\\",\\\"message\\\":\\\"e Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:04 10.217.0.4]} options:{GoMap:map[iface-id-ver:3b6479f0-333b-4a96-9adf-2099afdc2447 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:04 10.217.0.4]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {61897e97-c771-4738-8709-09636387cb00}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI1125 11:37:16.268126 6342 
model_client.go:398] Mutate operations generated as: [{Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:61897e97-c771-4738-8709-09636387cb00}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {7e8bb06a-06a5-45bc-a752-26a17d322811}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF1125 11:37:16.268101 6342 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T11:37:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\
\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b55sf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://62c923d955013808a55d99cb73f4239900fc83a2f53e1e8cceff3e9bc5768188\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env
\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b55sf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56474d5374e1047078d38c60dcbd00f4495bcc0d651a9e75fa70d64e34b10baa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://56474d5374e1047078d38c60dcbd00f4495bcc0d651a9e75fa70d64e34b10baa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T11:36:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T11:36:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b55sf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T11:36:53Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-q9rpr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:37:17Z is after 2025-08-24T17:21:41Z" Nov 25 11:37:17 crc 
kubenswrapper[4706]: I1125 11:37:17.348095 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"363ff191-6229-47e9-a7d0-1c72f21e7c61\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://71b496da1a81efbb50a84766e610a6b03e032a4e2cb5a71191395ffb85f6b1f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://83b1d9c60793e3e0b5943d7cccd50656df78c4655b84e12c8dd1ba7d99a7990d\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab8621c83015577b9039ac2ba9ce46f8b29f66d77da31a02d179132d923741bd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4d0ce4e175dd8da8d15b26e60ced87ee11dc8079ce730cfbdce1b3f4f08b1d2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17
ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T11:36:32Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:37:17Z is after 2025-08-24T17:21:41Z" Nov 25 11:37:17 crc kubenswrapper[4706]: I1125 11:37:17.365701 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"21277b4b-1e5d-4345-ba2a-39957194f021\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e336808761e1c6c5eaa04fd06cbb4d0c0384a2cbd3dfd4c1b3a877e7e0f0c82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cfaf9f13d49eb5c52817b0d082263791cc1dca82a23282452f1393dd693ca27a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://634b7b0df29329562f6ead9641186eee129945efc5a2d784ff6474d213b2baea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b3642576d5ecf314b809b90f8a76244e5ea54178f78729eb6521b09b7daa9c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b63b9c87fed8e56acef62af3c5b75cf637a058ada9dd8ef5afc317e99e12162\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://adfea6c3d1d62c9ce2656cae203bccf32ab19165305db3731e3db92915dd7d4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://adfea6c3d1d62c9ce2656cae203bccf32ab19165305db3731e3db92915dd7d4c\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2025-11-25T11:36:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T11:36:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://29e8fd847ab683471a692fda5b7c7d6105db11c5aecf09bb23080c58cd97c06a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://29e8fd847ab683471a692fda5b7c7d6105db11c5aecf09bb23080c58cd97c06a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T11:36:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T11:36:34Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://a4b1f85fd0291c239854093c0b26f0802291cbe4c4bab384fc69a5d21165343a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a4b1f85fd0291c239854093c0b26f0802291cbe4c4bab384fc69a5d21165343a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T11:36:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T11:36:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T11:36:32Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:37:17Z is after 2025-08-24T17:21:41Z" Nov 25 11:37:17 crc kubenswrapper[4706]: I1125 11:37:17.378432 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://23abd4bcc68d2a090882edb55d0e8569032affe5f4ebf05279e18ba3e9f9d8db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount
\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a068e34d29a7f39157ffd6e364ce643f5280f5184c13a281043247117d451364\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:37:17Z is after 2025-08-24T17:21:41Z" Nov 25 11:37:17 crc kubenswrapper[4706]: I1125 11:37:17.379384 4706 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:37:17 crc kubenswrapper[4706]: I1125 11:37:17.379415 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:37:17 crc kubenswrapper[4706]: I1125 11:37:17.379426 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:37:17 crc kubenswrapper[4706]: I1125 11:37:17.379443 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:37:17 crc kubenswrapper[4706]: I1125 11:37:17.379452 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:37:17Z","lastTransitionTime":"2025-11-25T11:37:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:37:17 crc kubenswrapper[4706]: I1125 11:37:17.394370 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-cjmvf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"150b96fa-570a-4b32-a82a-3275127d5b51\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:37:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:37:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:37:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de18c07bf8490d7495947e9a271e3e7273b9ffdcc43afd2a0468394af0ae0b0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:37:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2ml6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f9f9981b5f064aa5b007f4b2a2ecdc7f783e1a33e73b9e8b157eccfc54e93ff6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f9f9981b5f064aa5b007f4b2a2ecdc7f783e1a33e73b9e8b157eccfc54e93ff6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T11:36:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T11:36:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2ml6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e1e9db3e634932b935a1eb04923d02faf743f2831039edeba41d172ea6d8c52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"
containerID\\\":\\\"cri-o://4e1e9db3e634932b935a1eb04923d02faf743f2831039edeba41d172ea6d8c52\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T11:36:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T11:36:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2ml6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0cee50b6983d9c650efbb5959311b6c33c2e0e2ff504fceadc8ff807f368c36e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0cee50b6983d9c650efbb5959311b6c33c2e0e2ff504fceadc8ff807f368c36e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T11:36:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T11:36:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveRead
Only\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2ml6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://29281b46d740a7e527313a667c3896430eb51ba2c50c5e406fb94d8959dbe855\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://29281b46d740a7e527313a667c3896430eb51ba2c50c5e406fb94d8959dbe855\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T11:36:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T11:36:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2ml6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0ff2d1408b3b635ada726fc15a15472d3fd7c61e21ffe0379d137fdd543c436\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\"
:0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b0ff2d1408b3b635ada726fc15a15472d3fd7c61e21ffe0379d137fdd543c436\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T11:37:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T11:37:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2ml6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c3b94746fe10e0f9375491a41d10973d2576eb69f0883cef3ef0132efb0e8fc9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c3b94746fe10e0f9375491a41d10973d2576eb69f0883cef3ef0132efb0e8fc9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T11:37:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T11:37:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2ml6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\"
,\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T11:36:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-cjmvf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:37:17Z is after 2025-08-24T17:21:41Z" Nov 25 11:37:17 crc kubenswrapper[4706]: I1125 11:37:17.406399 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-l99rd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"14d69237-a4b7-43ea-ac81-f165eb532669\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:37:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:37:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:37:07Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:37:07Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mmr9l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mmr9l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T11:37:07Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-l99rd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:37:17Z is after 2025-08-24T17:21:41Z" Nov 25 11:37:17 crc 
kubenswrapper[4706]: I1125 11:37:17.418059 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-dhfpm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0930887a-320c-4506-8c9c-f94d6d64516a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://736e37ff944f81ac9808ff8a76d36837aeabc76a4c08bbeba3f707616e1f0884\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/se
rviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g7sgt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://86f4bfd310c27ea3b77c2f58c91e153db5f1794871a3fbeb5711cc119aa81e38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g7sgt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T11:36:53Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-dhfpm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:37:17Z is after 2025-08-24T17:21:41Z" Nov 25 11:37:17 crc kubenswrapper[4706]: I1125 11:37:17.429545 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-nh9sc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7813e79d-885d-4cf1-ac27-039e998473b7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea634334242536d35bf36e9078539cad4658b161b61e6051d9bb6d8544e71f5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g9gvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T11:36:53Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-nh9sc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:37:17Z is after 2025-08-24T17:21:41Z" Nov 25 11:37:17 crc kubenswrapper[4706]: I1125 11:37:17.441240 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-qkkfz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dc09de93-57e8-4697-8ce8-70bfc1b693e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:37:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:37:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:37:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:37:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6daff2070c60f609fd06be9589e3cd8d304d131f7b9669c7be4b8e9178df8f8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d7
93426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:37:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hmrl8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://39eec3aac772cc9463505277d6b3f7cf2eb7621e4add4f14e53110e3db8c4cdc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:37:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hmrl8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T11:37:06Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-qkkfz\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:37:17Z is after 2025-08-24T17:21:41Z" Nov 25 11:37:17 crc kubenswrapper[4706]: I1125 11:37:17.455328 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ce0e2e75-834b-46fb-bc84-229e60f904b1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:37:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:37:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://86001c3abc077d36ed1fa0c37bb6163896fb9cde28b58affd2f67fb8a024165b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubern
etes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://24c326f147def477e6dd794576cbdc9aed69f799cc18984f475496748b05eb32\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c65af8b438f57256d8c22cb34f68922d628338e384ca97d694b0dbf2d41a5e27\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://db08dd21321e0e49c2bcec934b9c4ca65e93ed3eff5d3d110b0137d37ebe255e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cl
uster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://333951d9a31cf3e7c1e98d27f636e2425f87cd082a8a5acae66533a76f5ad206\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-25T11:36:51Z\\\",\\\"message\\\":\\\" shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI1125 11:36:51.292762 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI1125 11:36:51.292767 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI1125 11:36:51.292853 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI1125 11:36:51.292876 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI1125 11:36:51.293041 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-1230105117/tls.crt::/tmp/serving-cert-1230105117/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1764070595\\\\\\\\\\\\\\\" (2025-11-25 11:36:34 +0000 UTC to 2025-12-25 11:36:35 +0000 UTC (now=2025-11-25 11:36:51.29301304 +0000 UTC))\\\\\\\"\\\\nI1125 11:36:51.293171 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1230105117/tls.crt::/tmp/serving-cert-1230105117/tls.key\\\\\\\"\\\\nI1125 
11:36:51.293210 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1764070605\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1764070605\\\\\\\\\\\\\\\" (2025-11-25 10:36:45 +0000 UTC to 2026-11-25 10:36:45 +0000 UTC (now=2025-11-25 11:36:51.293188774 +0000 UTC))\\\\\\\"\\\\nI1125 11:36:51.293233 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI1125 11:36:51.293259 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI1125 11:36:51.293279 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nF1125 11:36:51.293378 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T11:36:34Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe85a38abd8df52ad0fbd3dd6b048b8c42390b6064d3601996727dadb3fcbe69\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:34Z\\\"}}}],\\\"host
IP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea87e7399f4267877f5e967eb62bd500b646e88ee8ee20a71c3bf7c0941f02a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ea87e7399f4267877f5e967eb62bd500b646e88ee8ee20a71c3bf7c0941f02a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T11:36:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T11:36:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T11:36:32Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:37:17Z is after 2025-08-24T17:21:41Z" Nov 25 11:37:17 crc kubenswrapper[4706]: I1125 11:37:17.482541 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:37:17 crc kubenswrapper[4706]: I1125 11:37:17.482584 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:37:17 crc kubenswrapper[4706]: I1125 11:37:17.482595 4706 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:37:17 crc kubenswrapper[4706]: I1125 11:37:17.482613 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:37:17 crc kubenswrapper[4706]: I1125 11:37:17.482626 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:37:17Z","lastTransitionTime":"2025-11-25T11:37:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 11:37:17 crc kubenswrapper[4706]: I1125 11:37:17.585277 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:37:17 crc kubenswrapper[4706]: I1125 11:37:17.585611 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:37:17 crc kubenswrapper[4706]: I1125 11:37:17.585769 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:37:17 crc kubenswrapper[4706]: I1125 11:37:17.585895 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:37:17 crc kubenswrapper[4706]: I1125 11:37:17.586003 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:37:17Z","lastTransitionTime":"2025-11-25T11:37:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:37:17 crc kubenswrapper[4706]: I1125 11:37:17.689005 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:37:17 crc kubenswrapper[4706]: I1125 11:37:17.689074 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:37:17 crc kubenswrapper[4706]: I1125 11:37:17.689090 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:37:17 crc kubenswrapper[4706]: I1125 11:37:17.689113 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:37:17 crc kubenswrapper[4706]: I1125 11:37:17.689127 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:37:17Z","lastTransitionTime":"2025-11-25T11:37:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:37:17 crc kubenswrapper[4706]: I1125 11:37:17.791639 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:37:17 crc kubenswrapper[4706]: I1125 11:37:17.791901 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:37:17 crc kubenswrapper[4706]: I1125 11:37:17.791965 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:37:17 crc kubenswrapper[4706]: I1125 11:37:17.792037 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:37:17 crc kubenswrapper[4706]: I1125 11:37:17.792161 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:37:17Z","lastTransitionTime":"2025-11-25T11:37:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:37:17 crc kubenswrapper[4706]: I1125 11:37:17.894271 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:37:17 crc kubenswrapper[4706]: I1125 11:37:17.894355 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:37:17 crc kubenswrapper[4706]: I1125 11:37:17.894367 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:37:17 crc kubenswrapper[4706]: I1125 11:37:17.894387 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:37:17 crc kubenswrapper[4706]: I1125 11:37:17.894402 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:37:17Z","lastTransitionTime":"2025-11-25T11:37:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 11:37:17 crc kubenswrapper[4706]: I1125 11:37:17.921745 4706 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-l99rd" Nov 25 11:37:17 crc kubenswrapper[4706]: I1125 11:37:17.921807 4706 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 11:37:17 crc kubenswrapper[4706]: I1125 11:37:17.921758 4706 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 11:37:17 crc kubenswrapper[4706]: I1125 11:37:17.921760 4706 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 11:37:17 crc kubenswrapper[4706]: E1125 11:37:17.921938 4706 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-l99rd" podUID="14d69237-a4b7-43ea-ac81-f165eb532669" Nov 25 11:37:17 crc kubenswrapper[4706]: E1125 11:37:17.922101 4706 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 25 11:37:17 crc kubenswrapper[4706]: E1125 11:37:17.922226 4706 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 25 11:37:17 crc kubenswrapper[4706]: E1125 11:37:17.922324 4706 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 25 11:37:17 crc kubenswrapper[4706]: I1125 11:37:17.996921 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:37:17 crc kubenswrapper[4706]: I1125 11:37:17.996962 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:37:17 crc kubenswrapper[4706]: I1125 11:37:17.996971 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:37:17 crc kubenswrapper[4706]: I1125 11:37:17.996988 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:37:17 crc kubenswrapper[4706]: I1125 11:37:17.996999 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:37:17Z","lastTransitionTime":"2025-11-25T11:37:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:37:18 crc kubenswrapper[4706]: I1125 11:37:18.096131 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:37:18 crc kubenswrapper[4706]: I1125 11:37:18.096173 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:37:18 crc kubenswrapper[4706]: I1125 11:37:18.096184 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:37:18 crc kubenswrapper[4706]: I1125 11:37:18.096205 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:37:18 crc kubenswrapper[4706]: I1125 11:37:18.096218 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:37:18Z","lastTransitionTime":"2025-11-25T11:37:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:37:18 crc kubenswrapper[4706]: E1125 11:37:18.110926 4706 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T11:37:18Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T11:37:18Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T11:37:18Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T11:37:18Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T11:37:18Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T11:37:18Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T11:37:18Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T11:37:18Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"30198dc8-e58c-4847-a541-041da1924c5c\\\",\\\"systemUUID\\\":\\\"7dac62ec-3979-4862-b1af-b63212907795\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:37:18Z is after 2025-08-24T17:21:41Z" Nov 25 11:37:18 crc kubenswrapper[4706]: I1125 11:37:18.114689 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:37:18 crc kubenswrapper[4706]: I1125 11:37:18.114727 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:37:18 crc kubenswrapper[4706]: I1125 11:37:18.114736 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:37:18 crc kubenswrapper[4706]: I1125 11:37:18.114753 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:37:18 crc kubenswrapper[4706]: I1125 11:37:18.114765 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:37:18Z","lastTransitionTime":"2025-11-25T11:37:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:37:18 crc kubenswrapper[4706]: E1125 11:37:18.127855 4706 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T11:37:18Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T11:37:18Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T11:37:18Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T11:37:18Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T11:37:18Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T11:37:18Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T11:37:18Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T11:37:18Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"30198dc8-e58c-4847-a541-041da1924c5c\\\",\\\"systemUUID\\\":\\\"7dac62ec-3979-4862-b1af-b63212907795\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:37:18Z is after 2025-08-24T17:21:41Z" Nov 25 11:37:18 crc kubenswrapper[4706]: I1125 11:37:18.131844 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:37:18 crc kubenswrapper[4706]: I1125 11:37:18.131986 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:37:18 crc kubenswrapper[4706]: I1125 11:37:18.132108 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:37:18 crc kubenswrapper[4706]: I1125 11:37:18.132194 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:37:18 crc kubenswrapper[4706]: I1125 11:37:18.132271 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:37:18Z","lastTransitionTime":"2025-11-25T11:37:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:37:18 crc kubenswrapper[4706]: E1125 11:37:18.144541 4706 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T11:37:18Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T11:37:18Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T11:37:18Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T11:37:18Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T11:37:18Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T11:37:18Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T11:37:18Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T11:37:18Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"30198dc8-e58c-4847-a541-041da1924c5c\\\",\\\"systemUUID\\\":\\\"7dac62ec-3979-4862-b1af-b63212907795\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:37:18Z is after 2025-08-24T17:21:41Z" Nov 25 11:37:18 crc kubenswrapper[4706]: I1125 11:37:18.149146 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:37:18 crc kubenswrapper[4706]: I1125 11:37:18.149193 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:37:18 crc kubenswrapper[4706]: I1125 11:37:18.149203 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:37:18 crc kubenswrapper[4706]: I1125 11:37:18.149222 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:37:18 crc kubenswrapper[4706]: I1125 11:37:18.149234 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:37:18Z","lastTransitionTime":"2025-11-25T11:37:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:37:18 crc kubenswrapper[4706]: E1125 11:37:18.163280 4706 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T11:37:18Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T11:37:18Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T11:37:18Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T11:37:18Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T11:37:18Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T11:37:18Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T11:37:18Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T11:37:18Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"30198dc8-e58c-4847-a541-041da1924c5c\\\",\\\"systemUUID\\\":\\\"7dac62ec-3979-4862-b1af-b63212907795\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:37:18Z is after 2025-08-24T17:21:41Z" Nov 25 11:37:18 crc kubenswrapper[4706]: I1125 11:37:18.167795 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:37:18 crc kubenswrapper[4706]: I1125 11:37:18.167845 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:37:18 crc kubenswrapper[4706]: I1125 11:37:18.167857 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:37:18 crc kubenswrapper[4706]: I1125 11:37:18.167876 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:37:18 crc kubenswrapper[4706]: I1125 11:37:18.167887 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:37:18Z","lastTransitionTime":"2025-11-25T11:37:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:37:18 crc kubenswrapper[4706]: E1125 11:37:18.183398 4706 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T11:37:18Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T11:37:18Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T11:37:18Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T11:37:18Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T11:37:18Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T11:37:18Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T11:37:18Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T11:37:18Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"30198dc8-e58c-4847-a541-041da1924c5c\\\",\\\"systemUUID\\\":\\\"7dac62ec-3979-4862-b1af-b63212907795\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:37:18Z is after 2025-08-24T17:21:41Z" Nov 25 11:37:18 crc kubenswrapper[4706]: E1125 11:37:18.183832 4706 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Nov 25 11:37:18 crc kubenswrapper[4706]: I1125 11:37:18.186107 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:37:18 crc kubenswrapper[4706]: I1125 11:37:18.186406 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:37:18 crc kubenswrapper[4706]: I1125 11:37:18.186477 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:37:18 crc kubenswrapper[4706]: I1125 11:37:18.186581 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:37:18 crc kubenswrapper[4706]: I1125 11:37:18.186773 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:37:18Z","lastTransitionTime":"2025-11-25T11:37:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:37:18 crc kubenswrapper[4706]: I1125 11:37:18.209765 4706 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-q9rpr_f1218bae-4153-4490-8847-ab2d07ca0ab6/ovnkube-controller/2.log" Nov 25 11:37:18 crc kubenswrapper[4706]: I1125 11:37:18.212832 4706 scope.go:117] "RemoveContainer" containerID="67aac9b1fc77bcf7bb71812ee95214930edbb62bf5efb82d5128c53fd392a346" Nov 25 11:37:18 crc kubenswrapper[4706]: E1125 11:37:18.212987 4706 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-q9rpr_openshift-ovn-kubernetes(f1218bae-4153-4490-8847-ab2d07ca0ab6)\"" pod="openshift-ovn-kubernetes/ovnkube-node-q9rpr" podUID="f1218bae-4153-4490-8847-ab2d07ca0ab6" Nov 25 11:37:18 crc kubenswrapper[4706]: I1125 11:37:18.236753 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"363ff191-6229-47e9-a7d0-1c72f21e7c61\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://71b496da1a81efbb50a84766e610a6b03e032a4e2cb5a71191395ffb85f6b1f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://83b1d9c60793e3e0b5943d7cccd50656df78c4655b84e12c8dd1ba7d99a7990d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab8621c83015577b9039ac2ba9ce46f8b29f66d77da31a02d179132d923741bd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4d0ce4e175dd8da8d15b26e60ced87ee11dc8079ce730cfbdce1b3f4f08b1d2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-11-25T11:36:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T11:36:32Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:37:18Z is after 2025-08-24T17:21:41Z" Nov 25 11:37:18 crc kubenswrapper[4706]: I1125 11:37:18.251247 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://998291d5af3be798ff4e2f00d043f615e086fef44e541071bbaf781983955ce6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-11-25T11:37:18Z is after 2025-08-24T17:21:41Z" Nov 25 11:37:18 crc kubenswrapper[4706]: I1125 11:37:18.264129 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:51Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:37:18Z is after 2025-08-24T17:21:41Z" Nov 25 11:37:18 crc kubenswrapper[4706]: I1125 11:37:18.277041 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:51Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:37:18Z is after 2025-08-24T17:21:41Z" Nov 25 11:37:18 crc kubenswrapper[4706]: I1125 11:37:18.289365 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:51Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:37:18Z is after 2025-08-24T17:21:41Z" Nov 25 11:37:18 crc kubenswrapper[4706]: I1125 11:37:18.290068 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:37:18 crc kubenswrapper[4706]: I1125 11:37:18.290105 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:37:18 crc kubenswrapper[4706]: I1125 11:37:18.290115 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:37:18 crc kubenswrapper[4706]: I1125 11:37:18.290136 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:37:18 crc kubenswrapper[4706]: I1125 11:37:18.290151 4706 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:37:18Z","lastTransitionTime":"2025-11-25T11:37:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 11:37:18 crc kubenswrapper[4706]: I1125 11:37:18.303566 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-s47nr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9912058e-28f5-4cec-9eeb-03e37e0dc5c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d03353478b53d9441951702b66365bb3a08ad9c509347472bbb31049851435a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\
\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wfqx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T11:
36:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-s47nr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:37:18Z is after 2025-08-24T17:21:41Z" Nov 25 11:37:18 crc kubenswrapper[4706]: I1125 11:37:18.324199 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-q9rpr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f1218bae-4153-4490-8847-ab2d07ca0ab6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:53Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:53Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da5cea02464a703174faaa2a8a7dc6ba3c26bca96be0219f7304d81aba5be54e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b55sf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e92e9ade6889e5400b3c3ddff066aa544d425cf0637b75071678b8c63f8e35f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b55sf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca28080773ed8c026159b2309297e1c8ccd7cf79c4c19e3a62d89bc5a95851fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b55sf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://86d79d5837993b0bfb40c7114fd69f45a9bfd2e956b5b0fe062706e920fecd48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:55Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b55sf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7df3bf6c507e0fd5fb0f32a8785d67c96f47255fdc5d2aafb8838260ac334d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b55sf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://96aa7fcebdc88f01d2260f95d255244e28c30d422f954da2222a5b7c17d05b96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b55sf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://67aac9b1fc77bcf7bb71812ee95214930edbb62bf5efb82d5128c53fd392a346\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://67aac9b1fc77bcf7bb71812ee95214930edbb62bf5efb82d5128c53fd392a346\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-25T11:37:16Z\\\",\\\"message\\\":\\\"e Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:04 10.217.0.4]} options:{GoMap:map[iface-id-ver:3b6479f0-333b-4a96-9adf-2099afdc2447 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:04 10.217.0.4]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == 
{61897e97-c771-4738-8709-09636387cb00}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI1125 11:37:16.268126 6342 model_client.go:398] Mutate operations generated as: [{Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:61897e97-c771-4738-8709-09636387cb00}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {7e8bb06a-06a5-45bc-a752-26a17d322811}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF1125 11:37:16.268101 6342 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T11:37:15Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-q9rpr_openshift-ovn-kubernetes(f1218bae-4153-4490-8847-ab2d07ca0ab6)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b55sf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://62c923d955013808a55d99cb73f4239900fc83a2f53e1e8cceff3e9bc5768188\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b55sf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56474d5374e1047078d38c60dcbd00f4495bcc0d651a9e75fa70d64e34b10baa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://56474d5374e1047078
d38c60dcbd00f4495bcc0d651a9e75fa70d64e34b10baa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T11:36:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T11:36:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b55sf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T11:36:53Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-q9rpr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:37:18Z is after 2025-08-24T17:21:41Z" Nov 25 11:37:18 crc kubenswrapper[4706]: I1125 11:37:18.348066 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"21277b4b-1e5d-4345-ba2a-39957194f021\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e336808761e1c6c5eaa04fd06cbb4d0c0384a2cbd3dfd4c1b3a877e7e0f0c82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cfaf9f13d49eb5c52817b0d082263791cc1dca82a23282452f1393dd693ca27a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://634b7b0df29329562f6ead9641186eee129945efc5a2d784ff6474d213b2baea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b3642576d5ecf314b809b90f8a76244e5ea54178f78729eb6521b09b7daa9c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b63b9c87fed8e56acef62af3c5b75cf637a058ada9dd8ef5afc317e99e12162\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://adfea6c3d1d62c9ce2656cae203bccf32ab19165305db3731e3db92915dd7d4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://adfea6c3d1d62c9ce2656cae203bccf32ab19165305db3731e3db92915dd7d4c\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2025-11-25T11:36:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T11:36:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://29e8fd847ab683471a692fda5b7c7d6105db11c5aecf09bb23080c58cd97c06a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://29e8fd847ab683471a692fda5b7c7d6105db11c5aecf09bb23080c58cd97c06a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T11:36:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T11:36:34Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://a4b1f85fd0291c239854093c0b26f0802291cbe4c4bab384fc69a5d21165343a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a4b1f85fd0291c239854093c0b26f0802291cbe4c4bab384fc69a5d21165343a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T11:36:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T11:36:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T11:36:32Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:37:18Z is after 2025-08-24T17:21:41Z" Nov 25 11:37:18 crc kubenswrapper[4706]: I1125 11:37:18.362874 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://23abd4bcc68d2a090882edb55d0e8569032affe5f4ebf05279e18ba3e9f9d8db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount
\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a068e34d29a7f39157ffd6e364ce643f5280f5184c13a281043247117d451364\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:37:18Z is after 2025-08-24T17:21:41Z" Nov 25 11:37:18 crc kubenswrapper[4706]: I1125 11:37:18.377607 4706 
status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-cjmvf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"150b96fa-570a-4b32-a82a-3275127d5b51\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:37:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:37:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:37:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de18c07bf8490d7495947e9a271e3e7273b9ffdcc43afd2a0468394af0ae0b0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:37:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2ml6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.16
8.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f9f9981b5f064aa5b007f4b2a2ecdc7f783e1a33e73b9e8b157eccfc54e93ff6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f9f9981b5f064aa5b007f4b2a2ecdc7f783e1a33e73b9e8b157eccfc54e93ff6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T11:36:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T11:36:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2ml6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e1e9db3e634932b935a1eb04923d02faf743f2831039edeba41d172ea6d8c52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e1e9db3e634932b935a1eb04923d02faf743f2831039edeba41d172ea6d8c52\\\",\\\"exit
Code\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T11:36:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T11:36:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2ml6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0cee50b6983d9c650efbb5959311b6c33c2e0e2ff504fceadc8ff807f368c36e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0cee50b6983d9c650efbb5959311b6c33c2e0e2ff504fceadc8ff807f368c36e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T11:36:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T11:36:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"n
ame\\\":\\\"kube-api-access-d2ml6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://29281b46d740a7e527313a667c3896430eb51ba2c50c5e406fb94d8959dbe855\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://29281b46d740a7e527313a667c3896430eb51ba2c50c5e406fb94d8959dbe855\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T11:36:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T11:36:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2ml6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0ff2d1408b3b635ada726fc15a15472d3fd7c61e21ffe0379d137fdd543c436\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b0ff2d1408b3b
635ada726fc15a15472d3fd7c61e21ffe0379d137fdd543c436\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T11:37:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T11:37:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2ml6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c3b94746fe10e0f9375491a41d10973d2576eb69f0883cef3ef0132efb0e8fc9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c3b94746fe10e0f9375491a41d10973d2576eb69f0883cef3ef0132efb0e8fc9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T11:37:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T11:37:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2ml6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\
\\":\\\"2025-11-25T11:36:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-cjmvf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:37:18Z is after 2025-08-24T17:21:41Z" Nov 25 11:37:18 crc kubenswrapper[4706]: I1125 11:37:18.389814 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-l99rd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"14d69237-a4b7-43ea-ac81-f165eb532669\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:37:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:37:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:37:07Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:37:07Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mmr9l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mmr9l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T11:37:07Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-l99rd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:37:18Z is after 2025-08-24T17:21:41Z" Nov 25 11:37:18 crc 
kubenswrapper[4706]: I1125 11:37:18.392657 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:37:18 crc kubenswrapper[4706]: I1125 11:37:18.392693 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:37:18 crc kubenswrapper[4706]: I1125 11:37:18.392705 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:37:18 crc kubenswrapper[4706]: I1125 11:37:18.392725 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:37:18 crc kubenswrapper[4706]: I1125 11:37:18.392737 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:37:18Z","lastTransitionTime":"2025-11-25T11:37:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:37:18 crc kubenswrapper[4706]: I1125 11:37:18.403779 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ce0e2e75-834b-46fb-bc84-229e60f904b1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:37:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:37:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://86001c3abc077d36ed1fa0c37bb6163896fb9cde28b58affd2f67fb8a024165b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-d
ir\\\"}]},{\\\"containerID\\\":\\\"cri-o://24c326f147def477e6dd794576cbdc9aed69f799cc18984f475496748b05eb32\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c65af8b438f57256d8c22cb34f68922d628338e384ca97d694b0dbf2d41a5e27\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://db08dd21321e0e49c2bcec934b9c4ca65e93ed3eff5d3d110b0137d37ebe255e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945
c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://333951d9a31cf3e7c1e98d27f636e2425f87cd082a8a5acae66533a76f5ad206\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-25T11:36:51Z\\\",\\\"message\\\":\\\" shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI1125 11:36:51.292762 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI1125 11:36:51.292767 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI1125 11:36:51.292853 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI1125 11:36:51.292876 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI1125 11:36:51.293041 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-1230105117/tls.crt::/tmp/serving-cert-1230105117/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1764070595\\\\\\\\\\\\\\\" (2025-11-25 11:36:34 +0000 UTC to 2025-12-25 11:36:35 +0000 UTC (now=2025-11-25 11:36:51.29301304 +0000 UTC))\\\\\\\"\\\\nI1125 11:36:51.293171 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1230105117/tls.crt::/tmp/serving-cert-1230105117/tls.key\\\\\\\"\\\\nI1125 11:36:51.293210 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1764070605\\\\\\\\\\\\\\\" [serving] 
validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1764070605\\\\\\\\\\\\\\\" (2025-11-25 10:36:45 +0000 UTC to 2026-11-25 10:36:45 +0000 UTC (now=2025-11-25 11:36:51.293188774 +0000 UTC))\\\\\\\"\\\\nI1125 11:36:51.293233 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI1125 11:36:51.293259 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI1125 11:36:51.293279 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nF1125 11:36:51.293378 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T11:36:34Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe85a38abd8df52ad0fbd3dd6b048b8c42390b6064d3601996727dadb3fcbe69\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:34Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea87e7399f4267877f5e967eb62bd500b646e88ee8ee20a71c3bf7c0941f02a8\\\",\\\"image\\
\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ea87e7399f4267877f5e967eb62bd500b646e88ee8ee20a71c3bf7c0941f02a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T11:36:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T11:36:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T11:36:32Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:37:18Z is after 2025-08-24T17:21:41Z" Nov 25 11:37:18 crc kubenswrapper[4706]: I1125 11:37:18.418547 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-dhfpm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0930887a-320c-4506-8c9c-f94d6d64516a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://736e37ff944f81ac9808ff8a76d36837aeabc76a4c08bbeba3f707616e1f0884\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g7sgt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://86f4bfd310c27ea3b77c2f58c91e153db5f17948
71a3fbeb5711cc119aa81e38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g7sgt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T11:36:53Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-dhfpm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:37:18Z is after 2025-08-24T17:21:41Z" Nov 25 11:37:18 crc kubenswrapper[4706]: I1125 11:37:18.428033 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-nh9sc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7813e79d-885d-4cf1-ac27-039e998473b7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea634334242536d35bf36e9078539cad4658b161b61e6051d9bb6d8544e71f5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g9gvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T11:36:53Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-nh9sc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:37:18Z is after 2025-08-24T17:21:41Z" Nov 25 11:37:18 crc kubenswrapper[4706]: I1125 11:37:18.437451 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-qkkfz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dc09de93-57e8-4697-8ce8-70bfc1b693e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:37:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:37:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:37:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:37:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6daff2070c60f609fd06be9589e3cd8d304d131f7b9669c7be4b8e9178df8f8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d7
93426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:37:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hmrl8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://39eec3aac772cc9463505277d6b3f7cf2eb7621e4add4f14e53110e3db8c4cdc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:37:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hmrl8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T11:37:06Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-qkkfz\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:37:18Z is after 2025-08-24T17:21:41Z" Nov 25 11:37:18 crc kubenswrapper[4706]: I1125 11:37:18.450891 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad79bed891e80837fc120b01cb2b41a16493f2f5281c83a6bb489cc17c6da995\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:37:18Z is after 2025-08-24T17:21:41Z" Nov 25 11:37:18 crc kubenswrapper[4706]: I1125 11:37:18.463257 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-lpc7s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3ec2e656-a68d-4339-92d5-0c157f7f7783\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c3a1481dd8cb88b79d8addfbfd40caf18850769e4492c2af316105b7f6779f9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/op
enshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w54mf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T11:36:56Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lpc7s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:37:18Z is after 2025-08-24T17:21:41Z" Nov 25 11:37:18 crc kubenswrapper[4706]: I1125 11:37:18.495533 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:37:18 crc kubenswrapper[4706]: I1125 11:37:18.495581 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:37:18 crc kubenswrapper[4706]: I1125 11:37:18.495593 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:37:18 crc kubenswrapper[4706]: I1125 11:37:18.495612 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:37:18 crc 
kubenswrapper[4706]: I1125 11:37:18.495626 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:37:18Z","lastTransitionTime":"2025-11-25T11:37:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 11:37:18 crc kubenswrapper[4706]: I1125 11:37:18.598334 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:37:18 crc kubenswrapper[4706]: I1125 11:37:18.598458 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:37:18 crc kubenswrapper[4706]: I1125 11:37:18.598477 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:37:18 crc kubenswrapper[4706]: I1125 11:37:18.598496 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:37:18 crc kubenswrapper[4706]: I1125 11:37:18.598514 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:37:18Z","lastTransitionTime":"2025-11-25T11:37:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:37:18 crc kubenswrapper[4706]: I1125 11:37:18.700904 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:37:18 crc kubenswrapper[4706]: I1125 11:37:18.700944 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:37:18 crc kubenswrapper[4706]: I1125 11:37:18.700953 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:37:18 crc kubenswrapper[4706]: I1125 11:37:18.700974 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:37:18 crc kubenswrapper[4706]: I1125 11:37:18.700984 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:37:18Z","lastTransitionTime":"2025-11-25T11:37:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:37:18 crc kubenswrapper[4706]: I1125 11:37:18.804075 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:37:18 crc kubenswrapper[4706]: I1125 11:37:18.804131 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:37:18 crc kubenswrapper[4706]: I1125 11:37:18.804144 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:37:18 crc kubenswrapper[4706]: I1125 11:37:18.804163 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:37:18 crc kubenswrapper[4706]: I1125 11:37:18.804174 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:37:18Z","lastTransitionTime":"2025-11-25T11:37:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:37:18 crc kubenswrapper[4706]: I1125 11:37:18.906194 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:37:18 crc kubenswrapper[4706]: I1125 11:37:18.906235 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:37:18 crc kubenswrapper[4706]: I1125 11:37:18.906250 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:37:18 crc kubenswrapper[4706]: I1125 11:37:18.906466 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:37:18 crc kubenswrapper[4706]: I1125 11:37:18.906479 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:37:18Z","lastTransitionTime":"2025-11-25T11:37:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:37:19 crc kubenswrapper[4706]: I1125 11:37:19.008962 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:37:19 crc kubenswrapper[4706]: I1125 11:37:19.008989 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:37:19 crc kubenswrapper[4706]: I1125 11:37:19.008998 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:37:19 crc kubenswrapper[4706]: I1125 11:37:19.009013 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:37:19 crc kubenswrapper[4706]: I1125 11:37:19.009023 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:37:19Z","lastTransitionTime":"2025-11-25T11:37:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:37:19 crc kubenswrapper[4706]: I1125 11:37:19.111363 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:37:19 crc kubenswrapper[4706]: I1125 11:37:19.111415 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:37:19 crc kubenswrapper[4706]: I1125 11:37:19.111426 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:37:19 crc kubenswrapper[4706]: I1125 11:37:19.111448 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:37:19 crc kubenswrapper[4706]: I1125 11:37:19.111459 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:37:19Z","lastTransitionTime":"2025-11-25T11:37:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:37:19 crc kubenswrapper[4706]: I1125 11:37:19.214750 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:37:19 crc kubenswrapper[4706]: I1125 11:37:19.214795 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:37:19 crc kubenswrapper[4706]: I1125 11:37:19.214808 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:37:19 crc kubenswrapper[4706]: I1125 11:37:19.214831 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:37:19 crc kubenswrapper[4706]: I1125 11:37:19.214845 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:37:19Z","lastTransitionTime":"2025-11-25T11:37:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:37:19 crc kubenswrapper[4706]: I1125 11:37:19.317765 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:37:19 crc kubenswrapper[4706]: I1125 11:37:19.317837 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:37:19 crc kubenswrapper[4706]: I1125 11:37:19.317855 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:37:19 crc kubenswrapper[4706]: I1125 11:37:19.317879 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:37:19 crc kubenswrapper[4706]: I1125 11:37:19.317895 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:37:19Z","lastTransitionTime":"2025-11-25T11:37:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:37:19 crc kubenswrapper[4706]: I1125 11:37:19.421063 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:37:19 crc kubenswrapper[4706]: I1125 11:37:19.421322 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:37:19 crc kubenswrapper[4706]: I1125 11:37:19.421394 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:37:19 crc kubenswrapper[4706]: I1125 11:37:19.421462 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:37:19 crc kubenswrapper[4706]: I1125 11:37:19.421533 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:37:19Z","lastTransitionTime":"2025-11-25T11:37:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:37:19 crc kubenswrapper[4706]: I1125 11:37:19.523702 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:37:19 crc kubenswrapper[4706]: I1125 11:37:19.523757 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:37:19 crc kubenswrapper[4706]: I1125 11:37:19.523773 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:37:19 crc kubenswrapper[4706]: I1125 11:37:19.523798 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:37:19 crc kubenswrapper[4706]: I1125 11:37:19.523816 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:37:19Z","lastTransitionTime":"2025-11-25T11:37:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:37:19 crc kubenswrapper[4706]: I1125 11:37:19.626645 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:37:19 crc kubenswrapper[4706]: I1125 11:37:19.626702 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:37:19 crc kubenswrapper[4706]: I1125 11:37:19.626720 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:37:19 crc kubenswrapper[4706]: I1125 11:37:19.626741 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:37:19 crc kubenswrapper[4706]: I1125 11:37:19.626753 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:37:19Z","lastTransitionTime":"2025-11-25T11:37:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:37:19 crc kubenswrapper[4706]: I1125 11:37:19.729344 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:37:19 crc kubenswrapper[4706]: I1125 11:37:19.729387 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:37:19 crc kubenswrapper[4706]: I1125 11:37:19.729396 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:37:19 crc kubenswrapper[4706]: I1125 11:37:19.729412 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:37:19 crc kubenswrapper[4706]: I1125 11:37:19.729426 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:37:19Z","lastTransitionTime":"2025-11-25T11:37:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:37:19 crc kubenswrapper[4706]: I1125 11:37:19.832469 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:37:19 crc kubenswrapper[4706]: I1125 11:37:19.832840 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:37:19 crc kubenswrapper[4706]: I1125 11:37:19.832977 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:37:19 crc kubenswrapper[4706]: I1125 11:37:19.833115 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:37:19 crc kubenswrapper[4706]: I1125 11:37:19.833247 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:37:19Z","lastTransitionTime":"2025-11-25T11:37:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 11:37:19 crc kubenswrapper[4706]: I1125 11:37:19.921663 4706 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 11:37:19 crc kubenswrapper[4706]: I1125 11:37:19.921660 4706 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 11:37:19 crc kubenswrapper[4706]: E1125 11:37:19.922121 4706 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 25 11:37:19 crc kubenswrapper[4706]: I1125 11:37:19.921660 4706 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-l99rd" Nov 25 11:37:19 crc kubenswrapper[4706]: E1125 11:37:19.922248 4706 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 25 11:37:19 crc kubenswrapper[4706]: I1125 11:37:19.921681 4706 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 11:37:19 crc kubenswrapper[4706]: E1125 11:37:19.922461 4706 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 25 11:37:19 crc kubenswrapper[4706]: E1125 11:37:19.922330 4706 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-l99rd" podUID="14d69237-a4b7-43ea-ac81-f165eb532669" Nov 25 11:37:19 crc kubenswrapper[4706]: I1125 11:37:19.936060 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:37:19 crc kubenswrapper[4706]: I1125 11:37:19.936313 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:37:19 crc kubenswrapper[4706]: I1125 11:37:19.936405 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:37:19 crc kubenswrapper[4706]: I1125 11:37:19.936493 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:37:19 crc kubenswrapper[4706]: I1125 11:37:19.936565 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:37:19Z","lastTransitionTime":"2025-11-25T11:37:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:37:20 crc kubenswrapper[4706]: I1125 11:37:20.039580 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:37:20 crc kubenswrapper[4706]: I1125 11:37:20.039615 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:37:20 crc kubenswrapper[4706]: I1125 11:37:20.039623 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:37:20 crc kubenswrapper[4706]: I1125 11:37:20.039640 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:37:20 crc kubenswrapper[4706]: I1125 11:37:20.039651 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:37:20Z","lastTransitionTime":"2025-11-25T11:37:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:37:20 crc kubenswrapper[4706]: I1125 11:37:20.141978 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:37:20 crc kubenswrapper[4706]: I1125 11:37:20.142020 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:37:20 crc kubenswrapper[4706]: I1125 11:37:20.142031 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:37:20 crc kubenswrapper[4706]: I1125 11:37:20.142050 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:37:20 crc kubenswrapper[4706]: I1125 11:37:20.142062 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:37:20Z","lastTransitionTime":"2025-11-25T11:37:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:37:20 crc kubenswrapper[4706]: I1125 11:37:20.244803 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:37:20 crc kubenswrapper[4706]: I1125 11:37:20.244856 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:37:20 crc kubenswrapper[4706]: I1125 11:37:20.244868 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:37:20 crc kubenswrapper[4706]: I1125 11:37:20.244887 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:37:20 crc kubenswrapper[4706]: I1125 11:37:20.244900 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:37:20Z","lastTransitionTime":"2025-11-25T11:37:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:37:20 crc kubenswrapper[4706]: I1125 11:37:20.347810 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:37:20 crc kubenswrapper[4706]: I1125 11:37:20.347865 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:37:20 crc kubenswrapper[4706]: I1125 11:37:20.347882 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:37:20 crc kubenswrapper[4706]: I1125 11:37:20.347904 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:37:20 crc kubenswrapper[4706]: I1125 11:37:20.347916 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:37:20Z","lastTransitionTime":"2025-11-25T11:37:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:37:20 crc kubenswrapper[4706]: I1125 11:37:20.450668 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:37:20 crc kubenswrapper[4706]: I1125 11:37:20.450709 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:37:20 crc kubenswrapper[4706]: I1125 11:37:20.450720 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:37:20 crc kubenswrapper[4706]: I1125 11:37:20.450740 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:37:20 crc kubenswrapper[4706]: I1125 11:37:20.450750 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:37:20Z","lastTransitionTime":"2025-11-25T11:37:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:37:20 crc kubenswrapper[4706]: I1125 11:37:20.553621 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:37:20 crc kubenswrapper[4706]: I1125 11:37:20.554213 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:37:20 crc kubenswrapper[4706]: I1125 11:37:20.554278 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:37:20 crc kubenswrapper[4706]: I1125 11:37:20.554363 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:37:20 crc kubenswrapper[4706]: I1125 11:37:20.554439 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:37:20Z","lastTransitionTime":"2025-11-25T11:37:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:37:20 crc kubenswrapper[4706]: I1125 11:37:20.656899 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:37:20 crc kubenswrapper[4706]: I1125 11:37:20.656944 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:37:20 crc kubenswrapper[4706]: I1125 11:37:20.656954 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:37:20 crc kubenswrapper[4706]: I1125 11:37:20.656973 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:37:20 crc kubenswrapper[4706]: I1125 11:37:20.656987 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:37:20Z","lastTransitionTime":"2025-11-25T11:37:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:37:20 crc kubenswrapper[4706]: I1125 11:37:20.758978 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:37:20 crc kubenswrapper[4706]: I1125 11:37:20.759030 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:37:20 crc kubenswrapper[4706]: I1125 11:37:20.759045 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:37:20 crc kubenswrapper[4706]: I1125 11:37:20.759069 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:37:20 crc kubenswrapper[4706]: I1125 11:37:20.759083 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:37:20Z","lastTransitionTime":"2025-11-25T11:37:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:37:20 crc kubenswrapper[4706]: I1125 11:37:20.860973 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:37:20 crc kubenswrapper[4706]: I1125 11:37:20.861008 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:37:20 crc kubenswrapper[4706]: I1125 11:37:20.861019 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:37:20 crc kubenswrapper[4706]: I1125 11:37:20.861040 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:37:20 crc kubenswrapper[4706]: I1125 11:37:20.861050 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:37:20Z","lastTransitionTime":"2025-11-25T11:37:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:37:20 crc kubenswrapper[4706]: I1125 11:37:20.963942 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:37:20 crc kubenswrapper[4706]: I1125 11:37:20.963991 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:37:20 crc kubenswrapper[4706]: I1125 11:37:20.964003 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:37:20 crc kubenswrapper[4706]: I1125 11:37:20.964021 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:37:20 crc kubenswrapper[4706]: I1125 11:37:20.964039 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:37:20Z","lastTransitionTime":"2025-11-25T11:37:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:37:21 crc kubenswrapper[4706]: I1125 11:37:21.067040 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:37:21 crc kubenswrapper[4706]: I1125 11:37:21.067331 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:37:21 crc kubenswrapper[4706]: I1125 11:37:21.067451 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:37:21 crc kubenswrapper[4706]: I1125 11:37:21.067531 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:37:21 crc kubenswrapper[4706]: I1125 11:37:21.067596 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:37:21Z","lastTransitionTime":"2025-11-25T11:37:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:37:21 crc kubenswrapper[4706]: I1125 11:37:21.169443 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:37:21 crc kubenswrapper[4706]: I1125 11:37:21.169507 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:37:21 crc kubenswrapper[4706]: I1125 11:37:21.169520 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:37:21 crc kubenswrapper[4706]: I1125 11:37:21.169544 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:37:21 crc kubenswrapper[4706]: I1125 11:37:21.169558 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:37:21Z","lastTransitionTime":"2025-11-25T11:37:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:37:21 crc kubenswrapper[4706]: I1125 11:37:21.272331 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:37:21 crc kubenswrapper[4706]: I1125 11:37:21.272391 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:37:21 crc kubenswrapper[4706]: I1125 11:37:21.272403 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:37:21 crc kubenswrapper[4706]: I1125 11:37:21.272425 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:37:21 crc kubenswrapper[4706]: I1125 11:37:21.272444 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:37:21Z","lastTransitionTime":"2025-11-25T11:37:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:37:21 crc kubenswrapper[4706]: I1125 11:37:21.375425 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:37:21 crc kubenswrapper[4706]: I1125 11:37:21.375462 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:37:21 crc kubenswrapper[4706]: I1125 11:37:21.375473 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:37:21 crc kubenswrapper[4706]: I1125 11:37:21.375491 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:37:21 crc kubenswrapper[4706]: I1125 11:37:21.375501 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:37:21Z","lastTransitionTime":"2025-11-25T11:37:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:37:21 crc kubenswrapper[4706]: I1125 11:37:21.478621 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:37:21 crc kubenswrapper[4706]: I1125 11:37:21.478678 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:37:21 crc kubenswrapper[4706]: I1125 11:37:21.478687 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:37:21 crc kubenswrapper[4706]: I1125 11:37:21.478705 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:37:21 crc kubenswrapper[4706]: I1125 11:37:21.478715 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:37:21Z","lastTransitionTime":"2025-11-25T11:37:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:37:21 crc kubenswrapper[4706]: I1125 11:37:21.581372 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:37:21 crc kubenswrapper[4706]: I1125 11:37:21.581424 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:37:21 crc kubenswrapper[4706]: I1125 11:37:21.581443 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:37:21 crc kubenswrapper[4706]: I1125 11:37:21.581468 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:37:21 crc kubenswrapper[4706]: I1125 11:37:21.581480 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:37:21Z","lastTransitionTime":"2025-11-25T11:37:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:37:21 crc kubenswrapper[4706]: I1125 11:37:21.684134 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:37:21 crc kubenswrapper[4706]: I1125 11:37:21.684194 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:37:21 crc kubenswrapper[4706]: I1125 11:37:21.684208 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:37:21 crc kubenswrapper[4706]: I1125 11:37:21.684229 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:37:21 crc kubenswrapper[4706]: I1125 11:37:21.684243 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:37:21Z","lastTransitionTime":"2025-11-25T11:37:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:37:21 crc kubenswrapper[4706]: I1125 11:37:21.787088 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:37:21 crc kubenswrapper[4706]: I1125 11:37:21.787153 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:37:21 crc kubenswrapper[4706]: I1125 11:37:21.787165 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:37:21 crc kubenswrapper[4706]: I1125 11:37:21.787185 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:37:21 crc kubenswrapper[4706]: I1125 11:37:21.787198 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:37:21Z","lastTransitionTime":"2025-11-25T11:37:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:37:21 crc kubenswrapper[4706]: I1125 11:37:21.890469 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:37:21 crc kubenswrapper[4706]: I1125 11:37:21.890999 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:37:21 crc kubenswrapper[4706]: I1125 11:37:21.891025 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:37:21 crc kubenswrapper[4706]: I1125 11:37:21.891055 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:37:21 crc kubenswrapper[4706]: I1125 11:37:21.891075 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:37:21Z","lastTransitionTime":"2025-11-25T11:37:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 11:37:21 crc kubenswrapper[4706]: I1125 11:37:21.922047 4706 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 11:37:21 crc kubenswrapper[4706]: I1125 11:37:21.922126 4706 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 11:37:21 crc kubenswrapper[4706]: I1125 11:37:21.922175 4706 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-l99rd" Nov 25 11:37:21 crc kubenswrapper[4706]: I1125 11:37:21.922288 4706 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 11:37:21 crc kubenswrapper[4706]: E1125 11:37:21.922293 4706 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 25 11:37:21 crc kubenswrapper[4706]: E1125 11:37:21.922433 4706 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 25 11:37:21 crc kubenswrapper[4706]: E1125 11:37:21.922548 4706 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 25 11:37:21 crc kubenswrapper[4706]: E1125 11:37:21.922664 4706 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-l99rd" podUID="14d69237-a4b7-43ea-ac81-f165eb532669" Nov 25 11:37:21 crc kubenswrapper[4706]: I1125 11:37:21.953844 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"21277b4b-1e5d-4345-ba2a-39957194f021\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e336808761e1c6c5eaa04fd06cbb4d0c0384a2cbd3dfd4c1b3a877e7e0f0c82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static
-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cfaf9f13d49eb5c52817b0d082263791cc1dca82a23282452f1393dd693ca27a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://634b7b0df29329562f6ead9641186eee129945efc5a2d784ff6474d213b2baea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b3642576d5ecf314b809b90f8a76244e5ea54178f78729eb6521b09b7daa9c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a
93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b63b9c87fed8e56acef62af3c5b75cf637a058ada9dd8ef5afc317e99e12162\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://adfea6c3d1d62c9ce2656cae203bccf32ab19165305db3731e3db92915dd7d4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\
\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://adfea6c3d1d62c9ce2656cae203bccf32ab19165305db3731e3db92915dd7d4c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T11:36:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T11:36:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://29e8fd847ab683471a692fda5b7c7d6105db11c5aecf09bb23080c58cd97c06a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://29e8fd847ab683471a692fda5b7c7d6105db11c5aecf09bb23080c58cd97c06a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T11:36:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T11:36:34Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://a4b1f85fd0291c239854093c0b26f0802291cbe4c4bab384fc69a5d21165343a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a4b1f85fd0291c239854093c0b26f0802291cbe4c4bab384fc69a5d21165343a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T11:36:35Z
\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T11:36:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T11:36:32Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:37:21Z is after 2025-08-24T17:21:41Z" Nov 25 11:37:21 crc kubenswrapper[4706]: I1125 11:37:21.968884 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://23abd4bcc68d2a090882edb55d0e8569032affe5f4ebf05279e18ba3e9f9d8db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a068e34d29a7f39157ffd6e364ce643f5280f5184c13a281043247117d451364\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:37:21Z is after 2025-08-24T17:21:41Z" Nov 25 11:37:21 crc kubenswrapper[4706]: I1125 11:37:21.985772 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-cjmvf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"150b96fa-570a-4b32-a82a-3275127d5b51\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:37:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:37:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:37:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de18c07bf8490d7495947e9a271e3e7273b9ffdcc43afd2a0468394af0ae0b0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:37:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2ml6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f9f9981b5f064aa5b007f4b2a2ecdc7f783e1a33e73b9e8b157eccfc54e93ff6\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f9f9981b5f064aa5b007f4b2a2ecdc7f783e1a33e73b9e8b157eccfc54e93ff6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T11:36:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T11:36:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2ml6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e1e9db3e634932b935a1eb04923d02faf743f2831039edeba41d172ea6d8c52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e1e9db3e634932b935a1eb04923d02faf743f2831039edeba41d172ea6d8c52\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T11:36:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T11:36:57Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2ml6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0cee50b6983d9c650efbb5959311b6c33c2e0e2ff504fceadc8ff807f368c36e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0cee50b6983d9c650efbb5959311b6c33c2e0e2ff504fceadc8ff807f368c36e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T11:36:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T11:36:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2ml6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://29281
b46d740a7e527313a667c3896430eb51ba2c50c5e406fb94d8959dbe855\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://29281b46d740a7e527313a667c3896430eb51ba2c50c5e406fb94d8959dbe855\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T11:36:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T11:36:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2ml6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0ff2d1408b3b635ada726fc15a15472d3fd7c61e21ffe0379d137fdd543c436\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b0ff2d1408b3b635ada726fc15a15472d3fd7c61e21ffe0379d137fdd543c436\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T11:37:01Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2025-11-25T11:37:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2ml6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c3b94746fe10e0f9375491a41d10973d2576eb69f0883cef3ef0132efb0e8fc9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c3b94746fe10e0f9375491a41d10973d2576eb69f0883cef3ef0132efb0e8fc9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T11:37:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T11:37:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2ml6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T11:36:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-cjmvf\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:37:21Z is after 2025-08-24T17:21:41Z" Nov 25 11:37:21 crc kubenswrapper[4706]: I1125 11:37:21.993410 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:37:21 crc kubenswrapper[4706]: I1125 11:37:21.993445 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:37:21 crc kubenswrapper[4706]: I1125 11:37:21.993453 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:37:21 crc kubenswrapper[4706]: I1125 11:37:21.993469 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:37:21 crc kubenswrapper[4706]: I1125 11:37:21.993479 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:37:21Z","lastTransitionTime":"2025-11-25T11:37:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:37:22 crc kubenswrapper[4706]: I1125 11:37:22.003074 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-l99rd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"14d69237-a4b7-43ea-ac81-f165eb532669\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:37:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:37:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:37:07Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:37:07Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mmr9l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mmr9l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T11:37:07Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-l99rd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:37:22Z is after 2025-08-24T17:21:41Z" Nov 25 11:37:22 crc 
kubenswrapper[4706]: I1125 11:37:22.022020 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ce0e2e75-834b-46fb-bc84-229e60f904b1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:37:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:37:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://86001c3abc077d36ed1fa0c37bb6163896fb9cde28b58affd2f67fb8a024165b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://24c326f147def4
77e6dd794576cbdc9aed69f799cc18984f475496748b05eb32\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c65af8b438f57256d8c22cb34f68922d628338e384ca97d694b0dbf2d41a5e27\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://db08dd21321e0e49c2bcec934b9c4ca65e93ed3eff5d3d110b0137d37ebe255e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"te
rminated\\\":{\\\"containerID\\\":\\\"cri-o://333951d9a31cf3e7c1e98d27f636e2425f87cd082a8a5acae66533a76f5ad206\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-25T11:36:51Z\\\",\\\"message\\\":\\\" shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI1125 11:36:51.292762 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI1125 11:36:51.292767 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI1125 11:36:51.292853 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI1125 11:36:51.292876 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI1125 11:36:51.293041 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-1230105117/tls.crt::/tmp/serving-cert-1230105117/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1764070595\\\\\\\\\\\\\\\" (2025-11-25 11:36:34 +0000 UTC to 2025-12-25 11:36:35 +0000 UTC (now=2025-11-25 11:36:51.29301304 +0000 UTC))\\\\\\\"\\\\nI1125 11:36:51.293171 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1230105117/tls.crt::/tmp/serving-cert-1230105117/tls.key\\\\\\\"\\\\nI1125 11:36:51.293210 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1764070605\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] 
issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1764070605\\\\\\\\\\\\\\\" (2025-11-25 10:36:45 +0000 UTC to 2026-11-25 10:36:45 +0000 UTC (now=2025-11-25 11:36:51.293188774 +0000 UTC))\\\\\\\"\\\\nI1125 11:36:51.293233 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI1125 11:36:51.293259 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI1125 11:36:51.293279 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nF1125 11:36:51.293378 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T11:36:34Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe85a38abd8df52ad0fbd3dd6b048b8c42390b6064d3601996727dadb3fcbe69\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:34Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea87e7399f4267877f5e967eb62bd500b646e88ee8ee20a71c3bf7c0941f02a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ea87e7399f4267877f5e967eb62bd500b646e88ee8ee20a71c3bf7c0941f02a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T11:36:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T11:36:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T11:36:32Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:37:22Z is after 2025-08-24T17:21:41Z" Nov 25 11:37:22 crc kubenswrapper[4706]: I1125 11:37:22.035635 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-dhfpm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0930887a-320c-4506-8c9c-f94d6d64516a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://736e37ff944f81ac9808ff8a76d36837aeabc76a4c08bbeba3f707616e1f0884\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g7sgt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://86f4bfd310c27ea3b77c2f58c91e153db5f17948
71a3fbeb5711cc119aa81e38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g7sgt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T11:36:53Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-dhfpm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:37:22Z is after 2025-08-24T17:21:41Z" Nov 25 11:37:22 crc kubenswrapper[4706]: I1125 11:37:22.047533 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-nh9sc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7813e79d-885d-4cf1-ac27-039e998473b7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea634334242536d35bf36e9078539cad4658b161b61e6051d9bb6d8544e71f5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g9gvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T11:36:53Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-nh9sc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:37:22Z is after 2025-08-24T17:21:41Z" Nov 25 11:37:22 crc kubenswrapper[4706]: I1125 11:37:22.059773 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-qkkfz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dc09de93-57e8-4697-8ce8-70bfc1b693e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:37:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:37:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:37:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:37:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6daff2070c60f609fd06be9589e3cd8d304d131f7b9669c7be4b8e9178df8f8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d7
93426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:37:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hmrl8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://39eec3aac772cc9463505277d6b3f7cf2eb7621e4add4f14e53110e3db8c4cdc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:37:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hmrl8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T11:37:06Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-qkkfz\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:37:22Z is after 2025-08-24T17:21:41Z" Nov 25 11:37:22 crc kubenswrapper[4706]: I1125 11:37:22.072705 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad79bed891e80837fc120b01cb2b41a16493f2f5281c83a6bb489cc17c6da995\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:37:22Z is after 2025-08-24T17:21:41Z" Nov 25 11:37:22 crc kubenswrapper[4706]: I1125 11:37:22.084884 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-lpc7s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3ec2e656-a68d-4339-92d5-0c157f7f7783\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c3a1481dd8cb88b79d8addfbfd40caf18850769e4492c2af316105b7f6779f9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/op
enshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w54mf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T11:36:56Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lpc7s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:37:22Z is after 2025-08-24T17:21:41Z" Nov 25 11:37:22 crc kubenswrapper[4706]: I1125 11:37:22.095822 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:37:22 crc kubenswrapper[4706]: I1125 11:37:22.095895 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:37:22 crc kubenswrapper[4706]: I1125 11:37:22.095906 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:37:22 crc kubenswrapper[4706]: I1125 11:37:22.095925 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:37:22 crc 
kubenswrapper[4706]: I1125 11:37:22.095942 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:37:22Z","lastTransitionTime":"2025-11-25T11:37:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 11:37:22 crc kubenswrapper[4706]: I1125 11:37:22.100040 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"363ff191-6229-47e9-a7d0-1c72f21e7c61\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://71b496da1a81efbb50a84766e610a6b03e032a4e2cb5a71191395ffb85f6b1f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\
\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://83b1d9c60793e3e0b5943d7cccd50656df78c4655b84e12c8dd1ba7d99a7990d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab8621c83015577b9039ac2ba9ce46f8b29f66d77da31a02d179132d923741bd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPa
th\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4d0ce4e175dd8da8d15b26e60ced87ee11dc8079ce730cfbdce1b3f4f08b1d2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T11:36:32Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:37:22Z is after 2025-08-24T17:21:41Z" Nov 25 11:37:22 crc kubenswrapper[4706]: I1125 11:37:22.116244 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://998291d5af3be798ff4e2f00d043f615e086fef44e541071bbaf781983955ce6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-11-25T11:37:22Z is after 2025-08-24T17:21:41Z" Nov 25 11:37:22 crc kubenswrapper[4706]: I1125 11:37:22.133360 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:51Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:37:22Z is after 2025-08-24T17:21:41Z" Nov 25 11:37:22 crc kubenswrapper[4706]: I1125 11:37:22.149586 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:51Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:37:22Z is after 2025-08-24T17:21:41Z" Nov 25 11:37:22 crc kubenswrapper[4706]: I1125 11:37:22.163857 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:51Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:37:22Z is after 2025-08-24T17:21:41Z" Nov 25 11:37:22 crc kubenswrapper[4706]: I1125 11:37:22.177026 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-s47nr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9912058e-28f5-4cec-9eeb-03e37e0dc5c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d03353478b53d9441951702b66365bb3a08ad9c509347472bbb31049851435a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wfqx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T11:36:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-s47nr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:37:22Z is after 2025-08-24T17:21:41Z" Nov 25 11:37:22 crc kubenswrapper[4706]: I1125 11:37:22.196769 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-q9rpr" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f1218bae-4153-4490-8847-ab2d07ca0ab6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:53Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:53Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da5cea02464a703174faaa2a8a7dc6ba3c26bca96be0219f7304d81aba5be54e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b55sf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e92e9ade6889e5400b3c3ddff066aa544d425cf0637b75071678b8c63f8e35f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b55sf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca28080773ed8c026159b2309297e1c8ccd7cf79c4c19e3a62d89bc5a95851fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b55sf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://86d79d5837993b0bfb40c7114fd69f45a9bfd2e956b5b0fe062706e920fecd48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:55Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b55sf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7df3bf6c507e0fd5fb0f32a8785d67c96f47255fdc5d2aafb8838260ac334d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b55sf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://96aa7fcebdc88f01d2260f95d255244e28c30d422f954da2222a5b7c17d05b96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b55sf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://67aac9b1fc77bcf7bb71812ee95214930edbb62bf5efb82d5128c53fd392a346\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://67aac9b1fc77bcf7bb71812ee95214930edbb62bf5efb82d5128c53fd392a346\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-25T11:37:16Z\\\",\\\"message\\\":\\\"e Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:04 10.217.0.4]} options:{GoMap:map[iface-id-ver:3b6479f0-333b-4a96-9adf-2099afdc2447 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:04 10.217.0.4]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == 
{61897e97-c771-4738-8709-09636387cb00}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI1125 11:37:16.268126 6342 model_client.go:398] Mutate operations generated as: [{Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:61897e97-c771-4738-8709-09636387cb00}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {7e8bb06a-06a5-45bc-a752-26a17d322811}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF1125 11:37:16.268101 6342 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T11:37:15Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-q9rpr_openshift-ovn-kubernetes(f1218bae-4153-4490-8847-ab2d07ca0ab6)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b55sf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://62c923d955013808a55d99cb73f4239900fc83a2f53e1e8cceff3e9bc5768188\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b55sf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56474d5374e1047078d38c60dcbd00f4495bcc0d651a9e75fa70d64e34b10baa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://56474d5374e1047078
d38c60dcbd00f4495bcc0d651a9e75fa70d64e34b10baa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T11:36:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T11:36:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b55sf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T11:36:53Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-q9rpr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:37:22Z is after 2025-08-24T17:21:41Z" Nov 25 11:37:22 crc kubenswrapper[4706]: I1125 11:37:22.197994 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:37:22 crc kubenswrapper[4706]: I1125 11:37:22.198037 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:37:22 crc kubenswrapper[4706]: I1125 11:37:22.198055 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:37:22 crc kubenswrapper[4706]: I1125 11:37:22.198077 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:37:22 crc kubenswrapper[4706]: I1125 11:37:22.198111 4706 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:37:22Z","lastTransitionTime":"2025-11-25T11:37:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 11:37:22 crc kubenswrapper[4706]: I1125 11:37:22.301949 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:37:22 crc kubenswrapper[4706]: I1125 11:37:22.302012 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:37:22 crc kubenswrapper[4706]: I1125 11:37:22.302027 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:37:22 crc kubenswrapper[4706]: I1125 11:37:22.302051 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:37:22 crc kubenswrapper[4706]: I1125 11:37:22.302066 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:37:22Z","lastTransitionTime":"2025-11-25T11:37:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:37:22 crc kubenswrapper[4706]: I1125 11:37:22.404824 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:37:22 crc kubenswrapper[4706]: I1125 11:37:22.404866 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:37:22 crc kubenswrapper[4706]: I1125 11:37:22.404876 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:37:22 crc kubenswrapper[4706]: I1125 11:37:22.404894 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:37:22 crc kubenswrapper[4706]: I1125 11:37:22.404906 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:37:22Z","lastTransitionTime":"2025-11-25T11:37:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:37:22 crc kubenswrapper[4706]: I1125 11:37:22.507678 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:37:22 crc kubenswrapper[4706]: I1125 11:37:22.507714 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:37:22 crc kubenswrapper[4706]: I1125 11:37:22.507723 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:37:22 crc kubenswrapper[4706]: I1125 11:37:22.507739 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:37:22 crc kubenswrapper[4706]: I1125 11:37:22.507749 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:37:22Z","lastTransitionTime":"2025-11-25T11:37:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:37:22 crc kubenswrapper[4706]: I1125 11:37:22.611074 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:37:22 crc kubenswrapper[4706]: I1125 11:37:22.611139 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:37:22 crc kubenswrapper[4706]: I1125 11:37:22.611153 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:37:22 crc kubenswrapper[4706]: I1125 11:37:22.611173 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:37:22 crc kubenswrapper[4706]: I1125 11:37:22.611186 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:37:22Z","lastTransitionTime":"2025-11-25T11:37:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:37:22 crc kubenswrapper[4706]: I1125 11:37:22.714284 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:37:22 crc kubenswrapper[4706]: I1125 11:37:22.714345 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:37:22 crc kubenswrapper[4706]: I1125 11:37:22.714355 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:37:22 crc kubenswrapper[4706]: I1125 11:37:22.714370 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:37:22 crc kubenswrapper[4706]: I1125 11:37:22.714383 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:37:22Z","lastTransitionTime":"2025-11-25T11:37:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:37:22 crc kubenswrapper[4706]: I1125 11:37:22.816913 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:37:22 crc kubenswrapper[4706]: I1125 11:37:22.816958 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:37:22 crc kubenswrapper[4706]: I1125 11:37:22.816967 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:37:22 crc kubenswrapper[4706]: I1125 11:37:22.816987 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:37:22 crc kubenswrapper[4706]: I1125 11:37:22.817000 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:37:22Z","lastTransitionTime":"2025-11-25T11:37:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:37:22 crc kubenswrapper[4706]: I1125 11:37:22.824163 4706 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Nov 25 11:37:22 crc kubenswrapper[4706]: I1125 11:37:22.833093 4706 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-crc"] Nov 25 11:37:22 crc kubenswrapper[4706]: I1125 11:37:22.838030 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad79bed891e80837fc120b01cb2b41a16493f2f5281c83a6bb489cc17c6da995\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\
"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:37:22Z is after 2025-08-24T17:21:41Z" Nov 25 11:37:22 crc kubenswrapper[4706]: I1125 11:37:22.851715 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-lpc7s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3ec2e656-a68d-4339-92d5-0c157f7f7783\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c3a1481dd8cb88b79d8addfbfd40caf18850769e4492c2af316105b7f6779f9b\\\",\\\"image\\\":\\\"quay.io/ope
nshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w54mf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T11:36:56Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lpc7s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:37:22Z is after 2025-08-24T17:21:41Z" Nov 25 11:37:22 crc kubenswrapper[4706]: I1125 11:37:22.873729 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-q9rpr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f1218bae-4153-4490-8847-ab2d07ca0ab6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:53Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:53Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da5cea02464a703174faaa2a8a7dc6ba3c26bca96be0219f7304d81aba5be54e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b55sf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e92e9ade6889e5400b3c3ddff066aa544d425cf0637b75071678b8c63f8e35f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b55sf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca28080773ed8c026159b2309297e1c8ccd7cf79c4c19e3a62d89bc5a95851fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b55sf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://86d79d5837993b0bfb40c7114fd69f45a9bfd2e956b5b0fe062706e920fecd48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:55Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b55sf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7df3bf6c507e0fd5fb0f32a8785d67c96f47255fdc5d2aafb8838260ac334d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b55sf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://96aa7fcebdc88f01d2260f95d255244e28c30d422f954da2222a5b7c17d05b96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b55sf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://67aac9b1fc77bcf7bb71812ee95214930edbb62bf5efb82d5128c53fd392a346\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://67aac9b1fc77bcf7bb71812ee95214930edbb62bf5efb82d5128c53fd392a346\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-25T11:37:16Z\\\",\\\"message\\\":\\\"e Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:04 10.217.0.4]} options:{GoMap:map[iface-id-ver:3b6479f0-333b-4a96-9adf-2099afdc2447 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:04 10.217.0.4]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == 
{61897e97-c771-4738-8709-09636387cb00}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI1125 11:37:16.268126 6342 model_client.go:398] Mutate operations generated as: [{Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:61897e97-c771-4738-8709-09636387cb00}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {7e8bb06a-06a5-45bc-a752-26a17d322811}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF1125 11:37:16.268101 6342 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T11:37:15Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-q9rpr_openshift-ovn-kubernetes(f1218bae-4153-4490-8847-ab2d07ca0ab6)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b55sf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://62c923d955013808a55d99cb73f4239900fc83a2f53e1e8cceff3e9bc5768188\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b55sf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56474d5374e1047078d38c60dcbd00f4495bcc0d651a9e75fa70d64e34b10baa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://56474d5374e1047078
d38c60dcbd00f4495bcc0d651a9e75fa70d64e34b10baa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T11:36:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T11:36:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b55sf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T11:36:53Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-q9rpr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:37:22Z is after 2025-08-24T17:21:41Z" Nov 25 11:37:22 crc kubenswrapper[4706]: I1125 11:37:22.889532 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"363ff191-6229-47e9-a7d0-1c72f21e7c61\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://71b496da1a81efbb50a84766e610a6b03e032a4e2cb5a71191395ffb85f6b1f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://83b1d9c60793e3e0b5943d7cccd50656df78c4655b84e12c8dd1ba7d99a7990d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab8621c83015577b9039ac2ba9ce46f8b29f66d77da31a02d179132d923741bd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4d0ce4e175dd8da8d15b26e60ced87ee11dc8079ce730cfbdce1b3f4f08b1d2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-11-25T11:36:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T11:36:32Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:37:22Z is after 2025-08-24T17:21:41Z" Nov 25 11:37:22 crc kubenswrapper[4706]: I1125 11:37:22.903497 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://998291d5af3be798ff4e2f00d043f615e086fef44e541071bbaf781983955ce6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-11-25T11:37:22Z is after 2025-08-24T17:21:41Z" Nov 25 11:37:22 crc kubenswrapper[4706]: I1125 11:37:22.916494 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:51Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:37:22Z is after 2025-08-24T17:21:41Z" Nov 25 11:37:22 crc kubenswrapper[4706]: I1125 11:37:22.919515 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:37:22 crc kubenswrapper[4706]: I1125 11:37:22.919648 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:37:22 crc kubenswrapper[4706]: I1125 11:37:22.919664 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:37:22 crc kubenswrapper[4706]: I1125 11:37:22.919684 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:37:22 crc kubenswrapper[4706]: I1125 11:37:22.919697 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:37:22Z","lastTransitionTime":"2025-11-25T11:37:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 11:37:22 crc kubenswrapper[4706]: I1125 11:37:22.930924 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:37:22Z is after 2025-08-24T17:21:41Z" Nov 25 11:37:22 crc kubenswrapper[4706]: I1125 11:37:22.945798 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:51Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:37:22Z is after 2025-08-24T17:21:41Z" Nov 25 11:37:22 crc kubenswrapper[4706]: I1125 11:37:22.960938 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-s47nr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9912058e-28f5-4cec-9eeb-03e37e0dc5c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d03353478b53d9441951702b66365bb3a08ad9c509347472bbb31049851435a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wfqx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T11:36:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-s47nr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:37:22Z is after 2025-08-24T17:21:41Z" Nov 25 11:37:22 crc kubenswrapper[4706]: I1125 11:37:22.981862 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"21277b4b-1e5d-4345-ba2a-39957194f021\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e336808761e1c6c5eaa04fd06cbb4d0c0384a2cbd3dfd4c1b3a877e7e0f0c82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cfaf9f13d49eb5c52817b0d082263791cc1dca82a23282452f1393dd693ca27a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://634b7b0df29329562f6ead9641186eee129945efc5a2d784ff6474d213b2baea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b3642576d5ecf314b809b90f8a76244e5ea54178f78729eb6521b09b7daa9c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b63b9c87fed8e56acef62af3c5b75cf637a058ada9dd8ef5afc317e99e12162\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://adfea6c3d1d62c9ce2656cae203bccf32ab19165305db3731e3db92915dd7d4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://adfea6c3d1d62c9ce2656cae203bccf32ab19165305db3731e3db92915dd7d4c\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2025-11-25T11:36:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T11:36:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://29e8fd847ab683471a692fda5b7c7d6105db11c5aecf09bb23080c58cd97c06a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://29e8fd847ab683471a692fda5b7c7d6105db11c5aecf09bb23080c58cd97c06a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T11:36:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T11:36:34Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://a4b1f85fd0291c239854093c0b26f0802291cbe4c4bab384fc69a5d21165343a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a4b1f85fd0291c239854093c0b26f0802291cbe4c4bab384fc69a5d21165343a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T11:36:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T11:36:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T11:36:32Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:37:22Z is after 2025-08-24T17:21:41Z" Nov 25 11:37:22 crc kubenswrapper[4706]: I1125 11:37:22.998752 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://23abd4bcc68d2a090882edb55d0e8569032affe5f4ebf05279e18ba3e9f9d8db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount
\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a068e34d29a7f39157ffd6e364ce643f5280f5184c13a281043247117d451364\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:37:22Z is after 2025-08-24T17:21:41Z" Nov 25 11:37:23 crc kubenswrapper[4706]: I1125 11:37:23.017502 4706 
status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-cjmvf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"150b96fa-570a-4b32-a82a-3275127d5b51\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:37:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:37:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:37:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de18c07bf8490d7495947e9a271e3e7273b9ffdcc43afd2a0468394af0ae0b0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:37:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2ml6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.16
8.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f9f9981b5f064aa5b007f4b2a2ecdc7f783e1a33e73b9e8b157eccfc54e93ff6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f9f9981b5f064aa5b007f4b2a2ecdc7f783e1a33e73b9e8b157eccfc54e93ff6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T11:36:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T11:36:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2ml6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e1e9db3e634932b935a1eb04923d02faf743f2831039edeba41d172ea6d8c52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e1e9db3e634932b935a1eb04923d02faf743f2831039edeba41d172ea6d8c52\\\",\\\"exit
Code\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T11:36:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T11:36:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2ml6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0cee50b6983d9c650efbb5959311b6c33c2e0e2ff504fceadc8ff807f368c36e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0cee50b6983d9c650efbb5959311b6c33c2e0e2ff504fceadc8ff807f368c36e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T11:36:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T11:36:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"n
ame\\\":\\\"kube-api-access-d2ml6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://29281b46d740a7e527313a667c3896430eb51ba2c50c5e406fb94d8959dbe855\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://29281b46d740a7e527313a667c3896430eb51ba2c50c5e406fb94d8959dbe855\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T11:36:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T11:36:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2ml6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0ff2d1408b3b635ada726fc15a15472d3fd7c61e21ffe0379d137fdd543c436\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b0ff2d1408b3b
635ada726fc15a15472d3fd7c61e21ffe0379d137fdd543c436\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T11:37:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T11:37:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2ml6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c3b94746fe10e0f9375491a41d10973d2576eb69f0883cef3ef0132efb0e8fc9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c3b94746fe10e0f9375491a41d10973d2576eb69f0883cef3ef0132efb0e8fc9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T11:37:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T11:37:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2ml6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\
\\":\\\"2025-11-25T11:36:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-cjmvf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:37:23Z is after 2025-08-24T17:21:41Z" Nov 25 11:37:23 crc kubenswrapper[4706]: I1125 11:37:23.021938 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:37:23 crc kubenswrapper[4706]: I1125 11:37:23.022000 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:37:23 crc kubenswrapper[4706]: I1125 11:37:23.022016 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:37:23 crc kubenswrapper[4706]: I1125 11:37:23.022037 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:37:23 crc kubenswrapper[4706]: I1125 11:37:23.022052 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:37:23Z","lastTransitionTime":"2025-11-25T11:37:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:37:23 crc kubenswrapper[4706]: I1125 11:37:23.028522 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-l99rd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"14d69237-a4b7-43ea-ac81-f165eb532669\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:37:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:37:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:37:07Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:37:07Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mmr9l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mmr9l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T11:37:07Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-l99rd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:37:23Z is after 2025-08-24T17:21:41Z" Nov 25 11:37:23 crc 
kubenswrapper[4706]: I1125 11:37:23.042906 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ce0e2e75-834b-46fb-bc84-229e60f904b1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:37:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:37:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://86001c3abc077d36ed1fa0c37bb6163896fb9cde28b58affd2f67fb8a024165b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://24c326f147def4
77e6dd794576cbdc9aed69f799cc18984f475496748b05eb32\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c65af8b438f57256d8c22cb34f68922d628338e384ca97d694b0dbf2d41a5e27\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://db08dd21321e0e49c2bcec934b9c4ca65e93ed3eff5d3d110b0137d37ebe255e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"te
rminated\\\":{\\\"containerID\\\":\\\"cri-o://333951d9a31cf3e7c1e98d27f636e2425f87cd082a8a5acae66533a76f5ad206\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-25T11:36:51Z\\\",\\\"message\\\":\\\" shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI1125 11:36:51.292762 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI1125 11:36:51.292767 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI1125 11:36:51.292853 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI1125 11:36:51.292876 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI1125 11:36:51.293041 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-1230105117/tls.crt::/tmp/serving-cert-1230105117/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1764070595\\\\\\\\\\\\\\\" (2025-11-25 11:36:34 +0000 UTC to 2025-12-25 11:36:35 +0000 UTC (now=2025-11-25 11:36:51.29301304 +0000 UTC))\\\\\\\"\\\\nI1125 11:36:51.293171 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1230105117/tls.crt::/tmp/serving-cert-1230105117/tls.key\\\\\\\"\\\\nI1125 11:36:51.293210 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1764070605\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] 
issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1764070605\\\\\\\\\\\\\\\" (2025-11-25 10:36:45 +0000 UTC to 2026-11-25 10:36:45 +0000 UTC (now=2025-11-25 11:36:51.293188774 +0000 UTC))\\\\\\\"\\\\nI1125 11:36:51.293233 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI1125 11:36:51.293259 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI1125 11:36:51.293279 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nF1125 11:36:51.293378 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T11:36:34Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe85a38abd8df52ad0fbd3dd6b048b8c42390b6064d3601996727dadb3fcbe69\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:34Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea87e7399f4267877f5e967eb62bd500b646e88ee8ee20a71c3bf7c0941f02a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ea87e7399f4267877f5e967eb62bd500b646e88ee8ee20a71c3bf7c0941f02a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T11:36:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T11:36:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T11:36:32Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:37:23Z is after 2025-08-24T17:21:41Z" Nov 25 11:37:23 crc kubenswrapper[4706]: I1125 11:37:23.054465 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-dhfpm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0930887a-320c-4506-8c9c-f94d6d64516a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://736e37ff944f81ac9808ff8a76d36837aeabc76a4c08bbeba3f707616e1f0884\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g7sgt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://86f4bfd310c27ea3b77c2f58c91e153db5f17948
71a3fbeb5711cc119aa81e38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g7sgt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T11:36:53Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-dhfpm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:37:23Z is after 2025-08-24T17:21:41Z" Nov 25 11:37:23 crc kubenswrapper[4706]: I1125 11:37:23.065967 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-nh9sc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7813e79d-885d-4cf1-ac27-039e998473b7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea634334242536d35bf36e9078539cad4658b161b61e6051d9bb6d8544e71f5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g9gvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T11:36:53Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-nh9sc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:37:23Z is after 2025-08-24T17:21:41Z" Nov 25 11:37:23 crc kubenswrapper[4706]: I1125 11:37:23.079598 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-qkkfz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dc09de93-57e8-4697-8ce8-70bfc1b693e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:37:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:37:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:37:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:37:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6daff2070c60f609fd06be9589e3cd8d304d131f7b9669c7be4b8e9178df8f8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d7
93426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:37:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hmrl8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://39eec3aac772cc9463505277d6b3f7cf2eb7621e4add4f14e53110e3db8c4cdc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:37:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hmrl8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T11:37:06Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-qkkfz\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:37:23Z is after 2025-08-24T17:21:41Z" Nov 25 11:37:23 crc kubenswrapper[4706]: I1125 11:37:23.124419 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:37:23 crc kubenswrapper[4706]: I1125 11:37:23.124468 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:37:23 crc kubenswrapper[4706]: I1125 11:37:23.124480 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:37:23 crc kubenswrapper[4706]: I1125 11:37:23.124500 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:37:23 crc kubenswrapper[4706]: I1125 11:37:23.124511 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:37:23Z","lastTransitionTime":"2025-11-25T11:37:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:37:23 crc kubenswrapper[4706]: I1125 11:37:23.227085 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:37:23 crc kubenswrapper[4706]: I1125 11:37:23.227132 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:37:23 crc kubenswrapper[4706]: I1125 11:37:23.227143 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:37:23 crc kubenswrapper[4706]: I1125 11:37:23.227164 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:37:23 crc kubenswrapper[4706]: I1125 11:37:23.227175 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:37:23Z","lastTransitionTime":"2025-11-25T11:37:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:37:23 crc kubenswrapper[4706]: I1125 11:37:23.330022 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:37:23 crc kubenswrapper[4706]: I1125 11:37:23.330103 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:37:23 crc kubenswrapper[4706]: I1125 11:37:23.330146 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:37:23 crc kubenswrapper[4706]: I1125 11:37:23.330169 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:37:23 crc kubenswrapper[4706]: I1125 11:37:23.330181 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:37:23Z","lastTransitionTime":"2025-11-25T11:37:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:37:23 crc kubenswrapper[4706]: I1125 11:37:23.433330 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:37:23 crc kubenswrapper[4706]: I1125 11:37:23.433381 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:37:23 crc kubenswrapper[4706]: I1125 11:37:23.433394 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:37:23 crc kubenswrapper[4706]: I1125 11:37:23.433413 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:37:23 crc kubenswrapper[4706]: I1125 11:37:23.433426 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:37:23Z","lastTransitionTime":"2025-11-25T11:37:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:37:23 crc kubenswrapper[4706]: I1125 11:37:23.536355 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:37:23 crc kubenswrapper[4706]: I1125 11:37:23.536403 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:37:23 crc kubenswrapper[4706]: I1125 11:37:23.536416 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:37:23 crc kubenswrapper[4706]: I1125 11:37:23.536435 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:37:23 crc kubenswrapper[4706]: I1125 11:37:23.536450 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:37:23Z","lastTransitionTime":"2025-11-25T11:37:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:37:23 crc kubenswrapper[4706]: I1125 11:37:23.548905 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/14d69237-a4b7-43ea-ac81-f165eb532669-metrics-certs\") pod \"network-metrics-daemon-l99rd\" (UID: \"14d69237-a4b7-43ea-ac81-f165eb532669\") " pod="openshift-multus/network-metrics-daemon-l99rd" Nov 25 11:37:23 crc kubenswrapper[4706]: E1125 11:37:23.549092 4706 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Nov 25 11:37:23 crc kubenswrapper[4706]: E1125 11:37:23.549153 4706 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/14d69237-a4b7-43ea-ac81-f165eb532669-metrics-certs podName:14d69237-a4b7-43ea-ac81-f165eb532669 nodeName:}" failed. No retries permitted until 2025-11-25 11:37:39.549133718 +0000 UTC m=+68.463691119 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/14d69237-a4b7-43ea-ac81-f165eb532669-metrics-certs") pod "network-metrics-daemon-l99rd" (UID: "14d69237-a4b7-43ea-ac81-f165eb532669") : object "openshift-multus"/"metrics-daemon-secret" not registered Nov 25 11:37:23 crc kubenswrapper[4706]: I1125 11:37:23.640491 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:37:23 crc kubenswrapper[4706]: I1125 11:37:23.640549 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:37:23 crc kubenswrapper[4706]: I1125 11:37:23.640564 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:37:23 crc kubenswrapper[4706]: I1125 11:37:23.640589 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:37:23 crc kubenswrapper[4706]: I1125 11:37:23.640604 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:37:23Z","lastTransitionTime":"2025-11-25T11:37:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:37:23 crc kubenswrapper[4706]: I1125 11:37:23.743436 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:37:23 crc kubenswrapper[4706]: I1125 11:37:23.743483 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:37:23 crc kubenswrapper[4706]: I1125 11:37:23.743494 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:37:23 crc kubenswrapper[4706]: I1125 11:37:23.743511 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:37:23 crc kubenswrapper[4706]: I1125 11:37:23.743522 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:37:23Z","lastTransitionTime":"2025-11-25T11:37:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:37:23 crc kubenswrapper[4706]: I1125 11:37:23.751403 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 11:37:23 crc kubenswrapper[4706]: I1125 11:37:23.751552 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 11:37:23 crc kubenswrapper[4706]: I1125 11:37:23.751587 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 11:37:23 crc kubenswrapper[4706]: E1125 11:37:23.751617 4706 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 11:37:55.751586843 +0000 UTC m=+84.666144224 (durationBeforeRetry 32s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 11:37:23 crc kubenswrapper[4706]: E1125 11:37:23.751724 4706 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 25 11:37:23 crc kubenswrapper[4706]: I1125 11:37:23.751749 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 11:37:23 crc kubenswrapper[4706]: E1125 11:37:23.751797 4706 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-25 11:37:55.751776968 +0000 UTC m=+84.666334339 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 25 11:37:23 crc kubenswrapper[4706]: E1125 11:37:23.751847 4706 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Nov 25 11:37:23 crc kubenswrapper[4706]: E1125 11:37:23.751886 4706 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-25 11:37:55.751878551 +0000 UTC m=+84.666436162 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Nov 25 11:37:23 crc kubenswrapper[4706]: E1125 11:37:23.751895 4706 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 25 11:37:23 crc kubenswrapper[4706]: E1125 11:37:23.751911 4706 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 25 11:37:23 crc kubenswrapper[4706]: E1125 11:37:23.751922 4706 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: 
[object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 25 11:37:23 crc kubenswrapper[4706]: E1125 11:37:23.751949 4706 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-11-25 11:37:55.751941132 +0000 UTC m=+84.666498513 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 25 11:37:23 crc kubenswrapper[4706]: I1125 11:37:23.846137 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:37:23 crc kubenswrapper[4706]: I1125 11:37:23.846192 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:37:23 crc kubenswrapper[4706]: I1125 11:37:23.846202 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:37:23 crc kubenswrapper[4706]: I1125 11:37:23.846223 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:37:23 crc kubenswrapper[4706]: I1125 11:37:23.846238 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:37:23Z","lastTransitionTime":"2025-11-25T11:37:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 11:37:23 crc kubenswrapper[4706]: I1125 11:37:23.853213 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 11:37:23 crc kubenswrapper[4706]: E1125 11:37:23.853519 4706 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 25 11:37:23 crc kubenswrapper[4706]: E1125 11:37:23.853567 4706 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 25 11:37:23 crc kubenswrapper[4706]: E1125 11:37:23.853584 4706 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 25 11:37:23 crc kubenswrapper[4706]: E1125 11:37:23.853666 4706 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-11-25 11:37:55.853642583 +0000 UTC m=+84.768200154 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 25 11:37:23 crc kubenswrapper[4706]: I1125 11:37:23.922158 4706 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 11:37:23 crc kubenswrapper[4706]: I1125 11:37:23.922265 4706 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-l99rd" Nov 25 11:37:23 crc kubenswrapper[4706]: I1125 11:37:23.922373 4706 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 11:37:23 crc kubenswrapper[4706]: E1125 11:37:23.922320 4706 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 25 11:37:23 crc kubenswrapper[4706]: I1125 11:37:23.922462 4706 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 11:37:23 crc kubenswrapper[4706]: E1125 11:37:23.922522 4706 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-l99rd" podUID="14d69237-a4b7-43ea-ac81-f165eb532669" Nov 25 11:37:23 crc kubenswrapper[4706]: E1125 11:37:23.922662 4706 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 25 11:37:23 crc kubenswrapper[4706]: E1125 11:37:23.922782 4706 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 25 11:37:23 crc kubenswrapper[4706]: I1125 11:37:23.949055 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:37:23 crc kubenswrapper[4706]: I1125 11:37:23.949138 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:37:23 crc kubenswrapper[4706]: I1125 11:37:23.949165 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:37:23 crc kubenswrapper[4706]: I1125 11:37:23.949200 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:37:23 crc kubenswrapper[4706]: I1125 11:37:23.949222 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:37:23Z","lastTransitionTime":"2025-11-25T11:37:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:37:24 crc kubenswrapper[4706]: I1125 11:37:24.051795 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:37:24 crc kubenswrapper[4706]: I1125 11:37:24.051862 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:37:24 crc kubenswrapper[4706]: I1125 11:37:24.051875 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:37:24 crc kubenswrapper[4706]: I1125 11:37:24.051895 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:37:24 crc kubenswrapper[4706]: I1125 11:37:24.051909 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:37:24Z","lastTransitionTime":"2025-11-25T11:37:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:37:24 crc kubenswrapper[4706]: I1125 11:37:24.154979 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:37:24 crc kubenswrapper[4706]: I1125 11:37:24.155031 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:37:24 crc kubenswrapper[4706]: I1125 11:37:24.155043 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:37:24 crc kubenswrapper[4706]: I1125 11:37:24.155066 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:37:24 crc kubenswrapper[4706]: I1125 11:37:24.155078 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:37:24Z","lastTransitionTime":"2025-11-25T11:37:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:37:24 crc kubenswrapper[4706]: I1125 11:37:24.258076 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:37:24 crc kubenswrapper[4706]: I1125 11:37:24.258121 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:37:24 crc kubenswrapper[4706]: I1125 11:37:24.258134 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:37:24 crc kubenswrapper[4706]: I1125 11:37:24.258151 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:37:24 crc kubenswrapper[4706]: I1125 11:37:24.258163 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:37:24Z","lastTransitionTime":"2025-11-25T11:37:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:37:24 crc kubenswrapper[4706]: I1125 11:37:24.361859 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:37:24 crc kubenswrapper[4706]: I1125 11:37:24.361935 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:37:24 crc kubenswrapper[4706]: I1125 11:37:24.361947 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:37:24 crc kubenswrapper[4706]: I1125 11:37:24.361967 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:37:24 crc kubenswrapper[4706]: I1125 11:37:24.362009 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:37:24Z","lastTransitionTime":"2025-11-25T11:37:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:37:24 crc kubenswrapper[4706]: I1125 11:37:24.464664 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:37:24 crc kubenswrapper[4706]: I1125 11:37:24.464735 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:37:24 crc kubenswrapper[4706]: I1125 11:37:24.464748 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:37:24 crc kubenswrapper[4706]: I1125 11:37:24.464769 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:37:24 crc kubenswrapper[4706]: I1125 11:37:24.464785 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:37:24Z","lastTransitionTime":"2025-11-25T11:37:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:37:24 crc kubenswrapper[4706]: I1125 11:37:24.566793 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:37:24 crc kubenswrapper[4706]: I1125 11:37:24.566840 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:37:24 crc kubenswrapper[4706]: I1125 11:37:24.566852 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:37:24 crc kubenswrapper[4706]: I1125 11:37:24.566871 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:37:24 crc kubenswrapper[4706]: I1125 11:37:24.566881 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:37:24Z","lastTransitionTime":"2025-11-25T11:37:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:37:24 crc kubenswrapper[4706]: I1125 11:37:24.670007 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:37:24 crc kubenswrapper[4706]: I1125 11:37:24.670071 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:37:24 crc kubenswrapper[4706]: I1125 11:37:24.670082 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:37:24 crc kubenswrapper[4706]: I1125 11:37:24.670103 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:37:24 crc kubenswrapper[4706]: I1125 11:37:24.670116 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:37:24Z","lastTransitionTime":"2025-11-25T11:37:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:37:24 crc kubenswrapper[4706]: I1125 11:37:24.772637 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:37:24 crc kubenswrapper[4706]: I1125 11:37:24.772679 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:37:24 crc kubenswrapper[4706]: I1125 11:37:24.772691 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:37:24 crc kubenswrapper[4706]: I1125 11:37:24.772709 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:37:24 crc kubenswrapper[4706]: I1125 11:37:24.772724 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:37:24Z","lastTransitionTime":"2025-11-25T11:37:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:37:24 crc kubenswrapper[4706]: I1125 11:37:24.875362 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:37:24 crc kubenswrapper[4706]: I1125 11:37:24.875411 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:37:24 crc kubenswrapper[4706]: I1125 11:37:24.875423 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:37:24 crc kubenswrapper[4706]: I1125 11:37:24.875444 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:37:24 crc kubenswrapper[4706]: I1125 11:37:24.875456 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:37:24Z","lastTransitionTime":"2025-11-25T11:37:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:37:24 crc kubenswrapper[4706]: I1125 11:37:24.977484 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:37:24 crc kubenswrapper[4706]: I1125 11:37:24.977522 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:37:24 crc kubenswrapper[4706]: I1125 11:37:24.977531 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:37:24 crc kubenswrapper[4706]: I1125 11:37:24.977546 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:37:24 crc kubenswrapper[4706]: I1125 11:37:24.977555 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:37:24Z","lastTransitionTime":"2025-11-25T11:37:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:37:25 crc kubenswrapper[4706]: I1125 11:37:25.080440 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:37:25 crc kubenswrapper[4706]: I1125 11:37:25.080492 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:37:25 crc kubenswrapper[4706]: I1125 11:37:25.080508 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:37:25 crc kubenswrapper[4706]: I1125 11:37:25.080528 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:37:25 crc kubenswrapper[4706]: I1125 11:37:25.080540 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:37:25Z","lastTransitionTime":"2025-11-25T11:37:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:37:25 crc kubenswrapper[4706]: I1125 11:37:25.183507 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:37:25 crc kubenswrapper[4706]: I1125 11:37:25.183549 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:37:25 crc kubenswrapper[4706]: I1125 11:37:25.183559 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:37:25 crc kubenswrapper[4706]: I1125 11:37:25.183579 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:37:25 crc kubenswrapper[4706]: I1125 11:37:25.183590 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:37:25Z","lastTransitionTime":"2025-11-25T11:37:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:37:25 crc kubenswrapper[4706]: I1125 11:37:25.285452 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:37:25 crc kubenswrapper[4706]: I1125 11:37:25.285503 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:37:25 crc kubenswrapper[4706]: I1125 11:37:25.285514 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:37:25 crc kubenswrapper[4706]: I1125 11:37:25.285532 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:37:25 crc kubenswrapper[4706]: I1125 11:37:25.285542 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:37:25Z","lastTransitionTime":"2025-11-25T11:37:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:37:25 crc kubenswrapper[4706]: I1125 11:37:25.388499 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:37:25 crc kubenswrapper[4706]: I1125 11:37:25.388569 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:37:25 crc kubenswrapper[4706]: I1125 11:37:25.388580 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:37:25 crc kubenswrapper[4706]: I1125 11:37:25.388600 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:37:25 crc kubenswrapper[4706]: I1125 11:37:25.388613 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:37:25Z","lastTransitionTime":"2025-11-25T11:37:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:37:25 crc kubenswrapper[4706]: I1125 11:37:25.492238 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:37:25 crc kubenswrapper[4706]: I1125 11:37:25.492294 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:37:25 crc kubenswrapper[4706]: I1125 11:37:25.492320 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:37:25 crc kubenswrapper[4706]: I1125 11:37:25.492343 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:37:25 crc kubenswrapper[4706]: I1125 11:37:25.492358 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:37:25Z","lastTransitionTime":"2025-11-25T11:37:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:37:25 crc kubenswrapper[4706]: I1125 11:37:25.595489 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:37:25 crc kubenswrapper[4706]: I1125 11:37:25.595540 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:37:25 crc kubenswrapper[4706]: I1125 11:37:25.595555 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:37:25 crc kubenswrapper[4706]: I1125 11:37:25.595575 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:37:25 crc kubenswrapper[4706]: I1125 11:37:25.595587 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:37:25Z","lastTransitionTime":"2025-11-25T11:37:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:37:25 crc kubenswrapper[4706]: I1125 11:37:25.699853 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:37:25 crc kubenswrapper[4706]: I1125 11:37:25.699915 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:37:25 crc kubenswrapper[4706]: I1125 11:37:25.699926 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:37:25 crc kubenswrapper[4706]: I1125 11:37:25.699947 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:37:25 crc kubenswrapper[4706]: I1125 11:37:25.699963 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:37:25Z","lastTransitionTime":"2025-11-25T11:37:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:37:25 crc kubenswrapper[4706]: I1125 11:37:25.803128 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:37:25 crc kubenswrapper[4706]: I1125 11:37:25.803171 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:37:25 crc kubenswrapper[4706]: I1125 11:37:25.803183 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:37:25 crc kubenswrapper[4706]: I1125 11:37:25.803200 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:37:25 crc kubenswrapper[4706]: I1125 11:37:25.803211 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:37:25Z","lastTransitionTime":"2025-11-25T11:37:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:37:25 crc kubenswrapper[4706]: I1125 11:37:25.905625 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:37:25 crc kubenswrapper[4706]: I1125 11:37:25.905667 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:37:25 crc kubenswrapper[4706]: I1125 11:37:25.905676 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:37:25 crc kubenswrapper[4706]: I1125 11:37:25.905693 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:37:25 crc kubenswrapper[4706]: I1125 11:37:25.905710 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:37:25Z","lastTransitionTime":"2025-11-25T11:37:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 11:37:25 crc kubenswrapper[4706]: I1125 11:37:25.921987 4706 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-l99rd" Nov 25 11:37:25 crc kubenswrapper[4706]: I1125 11:37:25.922085 4706 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 11:37:25 crc kubenswrapper[4706]: E1125 11:37:25.922162 4706 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-l99rd" podUID="14d69237-a4b7-43ea-ac81-f165eb532669" Nov 25 11:37:25 crc kubenswrapper[4706]: E1125 11:37:25.922253 4706 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 25 11:37:25 crc kubenswrapper[4706]: I1125 11:37:25.922375 4706 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 11:37:25 crc kubenswrapper[4706]: E1125 11:37:25.922439 4706 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 25 11:37:25 crc kubenswrapper[4706]: I1125 11:37:25.922502 4706 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 11:37:25 crc kubenswrapper[4706]: E1125 11:37:25.922575 4706 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 25 11:37:26 crc kubenswrapper[4706]: I1125 11:37:26.009192 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:37:26 crc kubenswrapper[4706]: I1125 11:37:26.009233 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:37:26 crc kubenswrapper[4706]: I1125 11:37:26.009246 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:37:26 crc kubenswrapper[4706]: I1125 11:37:26.009272 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:37:26 crc kubenswrapper[4706]: I1125 11:37:26.009289 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:37:26Z","lastTransitionTime":"2025-11-25T11:37:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:37:26 crc kubenswrapper[4706]: I1125 11:37:26.112162 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:37:26 crc kubenswrapper[4706]: I1125 11:37:26.112208 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:37:26 crc kubenswrapper[4706]: I1125 11:37:26.112220 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:37:26 crc kubenswrapper[4706]: I1125 11:37:26.112239 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:37:26 crc kubenswrapper[4706]: I1125 11:37:26.112250 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:37:26Z","lastTransitionTime":"2025-11-25T11:37:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:37:26 crc kubenswrapper[4706]: I1125 11:37:26.215085 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:37:26 crc kubenswrapper[4706]: I1125 11:37:26.215138 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:37:26 crc kubenswrapper[4706]: I1125 11:37:26.215148 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:37:26 crc kubenswrapper[4706]: I1125 11:37:26.215166 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:37:26 crc kubenswrapper[4706]: I1125 11:37:26.215177 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:37:26Z","lastTransitionTime":"2025-11-25T11:37:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:37:26 crc kubenswrapper[4706]: I1125 11:37:26.317104 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:37:26 crc kubenswrapper[4706]: I1125 11:37:26.317159 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:37:26 crc kubenswrapper[4706]: I1125 11:37:26.317169 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:37:26 crc kubenswrapper[4706]: I1125 11:37:26.317187 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:37:26 crc kubenswrapper[4706]: I1125 11:37:26.317210 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:37:26Z","lastTransitionTime":"2025-11-25T11:37:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:37:26 crc kubenswrapper[4706]: I1125 11:37:26.420170 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:37:26 crc kubenswrapper[4706]: I1125 11:37:26.420246 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:37:26 crc kubenswrapper[4706]: I1125 11:37:26.420291 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:37:26 crc kubenswrapper[4706]: I1125 11:37:26.420344 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:37:26 crc kubenswrapper[4706]: I1125 11:37:26.420369 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:37:26Z","lastTransitionTime":"2025-11-25T11:37:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:37:26 crc kubenswrapper[4706]: I1125 11:37:26.523277 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:37:26 crc kubenswrapper[4706]: I1125 11:37:26.523399 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:37:26 crc kubenswrapper[4706]: I1125 11:37:26.523427 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:37:26 crc kubenswrapper[4706]: I1125 11:37:26.523460 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:37:26 crc kubenswrapper[4706]: I1125 11:37:26.523478 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:37:26Z","lastTransitionTime":"2025-11-25T11:37:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:37:26 crc kubenswrapper[4706]: I1125 11:37:26.626446 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:37:26 crc kubenswrapper[4706]: I1125 11:37:26.626501 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:37:26 crc kubenswrapper[4706]: I1125 11:37:26.626513 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:37:26 crc kubenswrapper[4706]: I1125 11:37:26.626534 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:37:26 crc kubenswrapper[4706]: I1125 11:37:26.626546 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:37:26Z","lastTransitionTime":"2025-11-25T11:37:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:37:26 crc kubenswrapper[4706]: I1125 11:37:26.729229 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:37:26 crc kubenswrapper[4706]: I1125 11:37:26.729274 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:37:26 crc kubenswrapper[4706]: I1125 11:37:26.729286 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:37:26 crc kubenswrapper[4706]: I1125 11:37:26.729327 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:37:26 crc kubenswrapper[4706]: I1125 11:37:26.729340 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:37:26Z","lastTransitionTime":"2025-11-25T11:37:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:37:26 crc kubenswrapper[4706]: I1125 11:37:26.832547 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:37:26 crc kubenswrapper[4706]: I1125 11:37:26.832632 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:37:26 crc kubenswrapper[4706]: I1125 11:37:26.832650 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:37:26 crc kubenswrapper[4706]: I1125 11:37:26.832678 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:37:26 crc kubenswrapper[4706]: I1125 11:37:26.832700 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:37:26Z","lastTransitionTime":"2025-11-25T11:37:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:37:26 crc kubenswrapper[4706]: I1125 11:37:26.935658 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:37:26 crc kubenswrapper[4706]: I1125 11:37:26.935969 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:37:26 crc kubenswrapper[4706]: I1125 11:37:26.936100 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:37:26 crc kubenswrapper[4706]: I1125 11:37:26.936229 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:37:26 crc kubenswrapper[4706]: I1125 11:37:26.936402 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:37:26Z","lastTransitionTime":"2025-11-25T11:37:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:37:27 crc kubenswrapper[4706]: I1125 11:37:27.038355 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:37:27 crc kubenswrapper[4706]: I1125 11:37:27.038638 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:37:27 crc kubenswrapper[4706]: I1125 11:37:27.038823 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:37:27 crc kubenswrapper[4706]: I1125 11:37:27.038978 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:37:27 crc kubenswrapper[4706]: I1125 11:37:27.039127 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:37:27Z","lastTransitionTime":"2025-11-25T11:37:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:37:27 crc kubenswrapper[4706]: I1125 11:37:27.142483 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:37:27 crc kubenswrapper[4706]: I1125 11:37:27.142845 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:37:27 crc kubenswrapper[4706]: I1125 11:37:27.142932 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:37:27 crc kubenswrapper[4706]: I1125 11:37:27.143010 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:37:27 crc kubenswrapper[4706]: I1125 11:37:27.143096 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:37:27Z","lastTransitionTime":"2025-11-25T11:37:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:37:27 crc kubenswrapper[4706]: I1125 11:37:27.245341 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:37:27 crc kubenswrapper[4706]: I1125 11:37:27.245399 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:37:27 crc kubenswrapper[4706]: I1125 11:37:27.245414 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:37:27 crc kubenswrapper[4706]: I1125 11:37:27.245437 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:37:27 crc kubenswrapper[4706]: I1125 11:37:27.245454 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:37:27Z","lastTransitionTime":"2025-11-25T11:37:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:37:27 crc kubenswrapper[4706]: I1125 11:37:27.348126 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:37:27 crc kubenswrapper[4706]: I1125 11:37:27.348453 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:37:27 crc kubenswrapper[4706]: I1125 11:37:27.348543 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:37:27 crc kubenswrapper[4706]: I1125 11:37:27.348695 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:37:27 crc kubenswrapper[4706]: I1125 11:37:27.348787 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:37:27Z","lastTransitionTime":"2025-11-25T11:37:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:37:27 crc kubenswrapper[4706]: I1125 11:37:27.450872 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:37:27 crc kubenswrapper[4706]: I1125 11:37:27.450917 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:37:27 crc kubenswrapper[4706]: I1125 11:37:27.450925 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:37:27 crc kubenswrapper[4706]: I1125 11:37:27.450945 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:37:27 crc kubenswrapper[4706]: I1125 11:37:27.450963 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:37:27Z","lastTransitionTime":"2025-11-25T11:37:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:37:27 crc kubenswrapper[4706]: I1125 11:37:27.553836 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:37:27 crc kubenswrapper[4706]: I1125 11:37:27.553882 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:37:27 crc kubenswrapper[4706]: I1125 11:37:27.553894 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:37:27 crc kubenswrapper[4706]: I1125 11:37:27.553912 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:37:27 crc kubenswrapper[4706]: I1125 11:37:27.553924 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:37:27Z","lastTransitionTime":"2025-11-25T11:37:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:37:27 crc kubenswrapper[4706]: I1125 11:37:27.656626 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:37:27 crc kubenswrapper[4706]: I1125 11:37:27.656721 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:37:27 crc kubenswrapper[4706]: I1125 11:37:27.656738 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:37:27 crc kubenswrapper[4706]: I1125 11:37:27.656762 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:37:27 crc kubenswrapper[4706]: I1125 11:37:27.656776 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:37:27Z","lastTransitionTime":"2025-11-25T11:37:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:37:27 crc kubenswrapper[4706]: I1125 11:37:27.759937 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:37:27 crc kubenswrapper[4706]: I1125 11:37:27.760706 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:37:27 crc kubenswrapper[4706]: I1125 11:37:27.760797 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:37:27 crc kubenswrapper[4706]: I1125 11:37:27.760948 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:37:27 crc kubenswrapper[4706]: I1125 11:37:27.761085 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:37:27Z","lastTransitionTime":"2025-11-25T11:37:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:37:27 crc kubenswrapper[4706]: I1125 11:37:27.863205 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:37:27 crc kubenswrapper[4706]: I1125 11:37:27.863266 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:37:27 crc kubenswrapper[4706]: I1125 11:37:27.863279 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:37:27 crc kubenswrapper[4706]: I1125 11:37:27.863336 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:37:27 crc kubenswrapper[4706]: I1125 11:37:27.863352 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:37:27Z","lastTransitionTime":"2025-11-25T11:37:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 11:37:27 crc kubenswrapper[4706]: I1125 11:37:27.922012 4706 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 11:37:27 crc kubenswrapper[4706]: I1125 11:37:27.922038 4706 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-l99rd" Nov 25 11:37:27 crc kubenswrapper[4706]: E1125 11:37:27.922211 4706 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 25 11:37:27 crc kubenswrapper[4706]: I1125 11:37:27.922450 4706 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 11:37:27 crc kubenswrapper[4706]: E1125 11:37:27.922539 4706 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-l99rd" podUID="14d69237-a4b7-43ea-ac81-f165eb532669" Nov 25 11:37:27 crc kubenswrapper[4706]: I1125 11:37:27.922611 4706 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 11:37:27 crc kubenswrapper[4706]: E1125 11:37:27.922652 4706 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 25 11:37:27 crc kubenswrapper[4706]: E1125 11:37:27.922830 4706 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 25 11:37:27 crc kubenswrapper[4706]: I1125 11:37:27.966290 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:37:27 crc kubenswrapper[4706]: I1125 11:37:27.966436 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:37:27 crc kubenswrapper[4706]: I1125 11:37:27.966448 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:37:27 crc kubenswrapper[4706]: I1125 11:37:27.966468 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:37:27 crc kubenswrapper[4706]: I1125 11:37:27.966481 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:37:27Z","lastTransitionTime":"2025-11-25T11:37:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:37:28 crc kubenswrapper[4706]: I1125 11:37:28.069722 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:37:28 crc kubenswrapper[4706]: I1125 11:37:28.069773 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:37:28 crc kubenswrapper[4706]: I1125 11:37:28.069793 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:37:28 crc kubenswrapper[4706]: I1125 11:37:28.069815 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:37:28 crc kubenswrapper[4706]: I1125 11:37:28.069834 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:37:28Z","lastTransitionTime":"2025-11-25T11:37:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:37:28 crc kubenswrapper[4706]: I1125 11:37:28.172218 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:37:28 crc kubenswrapper[4706]: I1125 11:37:28.172267 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:37:28 crc kubenswrapper[4706]: I1125 11:37:28.172278 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:37:28 crc kubenswrapper[4706]: I1125 11:37:28.172314 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:37:28 crc kubenswrapper[4706]: I1125 11:37:28.172325 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:37:28Z","lastTransitionTime":"2025-11-25T11:37:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:37:28 crc kubenswrapper[4706]: I1125 11:37:28.275478 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:37:28 crc kubenswrapper[4706]: I1125 11:37:28.275774 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:37:28 crc kubenswrapper[4706]: I1125 11:37:28.275897 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:37:28 crc kubenswrapper[4706]: I1125 11:37:28.275976 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:37:28 crc kubenswrapper[4706]: I1125 11:37:28.276050 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:37:28Z","lastTransitionTime":"2025-11-25T11:37:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:37:28 crc kubenswrapper[4706]: I1125 11:37:28.378253 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:37:28 crc kubenswrapper[4706]: I1125 11:37:28.378330 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:37:28 crc kubenswrapper[4706]: I1125 11:37:28.378347 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:37:28 crc kubenswrapper[4706]: I1125 11:37:28.378370 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:37:28 crc kubenswrapper[4706]: I1125 11:37:28.378387 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:37:28Z","lastTransitionTime":"2025-11-25T11:37:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:37:28 crc kubenswrapper[4706]: I1125 11:37:28.386661 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:37:28 crc kubenswrapper[4706]: I1125 11:37:28.386720 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:37:28 crc kubenswrapper[4706]: I1125 11:37:28.386733 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:37:28 crc kubenswrapper[4706]: I1125 11:37:28.386754 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:37:28 crc kubenswrapper[4706]: I1125 11:37:28.386767 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:37:28Z","lastTransitionTime":"2025-11-25T11:37:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:37:28 crc kubenswrapper[4706]: E1125 11:37:28.418704 4706 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T11:37:28Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T11:37:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T11:37:28Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T11:37:28Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T11:37:28Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T11:37:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T11:37:28Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T11:37:28Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"30198dc8-e58c-4847-a541-041da1924c5c\\\",\\\"systemUUID\\\":\\\"7dac62ec-3979-4862-b1af-b63212907795\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:37:28Z is after 2025-08-24T17:21:41Z" Nov 25 11:37:28 crc kubenswrapper[4706]: I1125 11:37:28.423854 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:37:28 crc kubenswrapper[4706]: I1125 11:37:28.423927 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:37:28 crc kubenswrapper[4706]: I1125 11:37:28.423952 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:37:28 crc kubenswrapper[4706]: I1125 11:37:28.423980 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:37:28 crc kubenswrapper[4706]: I1125 11:37:28.424004 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:37:28Z","lastTransitionTime":"2025-11-25T11:37:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:37:28 crc kubenswrapper[4706]: E1125 11:37:28.461515 4706 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T11:37:28Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T11:37:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T11:37:28Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T11:37:28Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T11:37:28Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T11:37:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T11:37:28Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T11:37:28Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"30198dc8-e58c-4847-a541-041da1924c5c\\\",\\\"systemUUID\\\":\\\"7dac62ec-3979-4862-b1af-b63212907795\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:37:28Z is after 2025-08-24T17:21:41Z" Nov 25 11:37:28 crc kubenswrapper[4706]: I1125 11:37:28.466954 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:37:28 crc kubenswrapper[4706]: I1125 11:37:28.467034 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:37:28 crc kubenswrapper[4706]: I1125 11:37:28.467044 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:37:28 crc kubenswrapper[4706]: I1125 11:37:28.467064 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:37:28 crc kubenswrapper[4706]: I1125 11:37:28.467076 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:37:28Z","lastTransitionTime":"2025-11-25T11:37:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:37:28 crc kubenswrapper[4706]: E1125 11:37:28.481511 4706 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T11:37:28Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T11:37:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T11:37:28Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T11:37:28Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T11:37:28Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T11:37:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T11:37:28Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T11:37:28Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"30198dc8-e58c-4847-a541-041da1924c5c\\\",\\\"systemUUID\\\":\\\"7dac62ec-3979-4862-b1af-b63212907795\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:37:28Z is after 2025-08-24T17:21:41Z" Nov 25 11:37:28 crc kubenswrapper[4706]: I1125 11:37:28.485801 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:37:28 crc kubenswrapper[4706]: I1125 11:37:28.485858 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:37:28 crc kubenswrapper[4706]: I1125 11:37:28.485869 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:37:28 crc kubenswrapper[4706]: I1125 11:37:28.485889 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:37:28 crc kubenswrapper[4706]: I1125 11:37:28.485903 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:37:28Z","lastTransitionTime":"2025-11-25T11:37:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:37:28 crc kubenswrapper[4706]: E1125 11:37:28.500377 4706 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T11:37:28Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T11:37:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T11:37:28Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T11:37:28Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T11:37:28Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T11:37:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T11:37:28Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T11:37:28Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"30198dc8-e58c-4847-a541-041da1924c5c\\\",\\\"systemUUID\\\":\\\"7dac62ec-3979-4862-b1af-b63212907795\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:37:28Z is after 2025-08-24T17:21:41Z" Nov 25 11:37:28 crc kubenswrapper[4706]: I1125 11:37:28.504660 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:37:28 crc kubenswrapper[4706]: I1125 11:37:28.504698 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:37:28 crc kubenswrapper[4706]: I1125 11:37:28.504709 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:37:28 crc kubenswrapper[4706]: I1125 11:37:28.504727 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:37:28 crc kubenswrapper[4706]: I1125 11:37:28.504739 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:37:28Z","lastTransitionTime":"2025-11-25T11:37:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:37:28 crc kubenswrapper[4706]: E1125 11:37:28.518850 4706 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T11:37:28Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T11:37:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T11:37:28Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T11:37:28Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T11:37:28Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T11:37:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T11:37:28Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T11:37:28Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"30198dc8-e58c-4847-a541-041da1924c5c\\\",\\\"systemUUID\\\":\\\"7dac62ec-3979-4862-b1af-b63212907795\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:37:28Z is after 2025-08-24T17:21:41Z" Nov 25 11:37:28 crc kubenswrapper[4706]: E1125 11:37:28.518974 4706 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Nov 25 11:37:28 crc kubenswrapper[4706]: I1125 11:37:28.520851 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:37:28 crc kubenswrapper[4706]: I1125 11:37:28.520895 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:37:28 crc kubenswrapper[4706]: I1125 11:37:28.520908 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:37:28 crc kubenswrapper[4706]: I1125 11:37:28.520936 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:37:28 crc kubenswrapper[4706]: I1125 11:37:28.520949 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:37:28Z","lastTransitionTime":"2025-11-25T11:37:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 11:37:28 crc kubenswrapper[4706]: I1125 11:37:28.623682 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:37:28 crc kubenswrapper[4706]: I1125 11:37:28.623935 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:37:28 crc kubenswrapper[4706]: I1125 11:37:28.624061 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:37:28 crc kubenswrapper[4706]: I1125 11:37:28.624166 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:37:28 crc kubenswrapper[4706]: I1125 11:37:28.624261 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:37:28Z","lastTransitionTime":"2025-11-25T11:37:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:37:28 crc kubenswrapper[4706]: I1125 11:37:28.727786 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:37:28 crc kubenswrapper[4706]: I1125 11:37:28.727835 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:37:28 crc kubenswrapper[4706]: I1125 11:37:28.727847 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:37:28 crc kubenswrapper[4706]: I1125 11:37:28.727866 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:37:28 crc kubenswrapper[4706]: I1125 11:37:28.727877 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:37:28Z","lastTransitionTime":"2025-11-25T11:37:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:37:28 crc kubenswrapper[4706]: I1125 11:37:28.831784 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:37:28 crc kubenswrapper[4706]: I1125 11:37:28.831844 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:37:28 crc kubenswrapper[4706]: I1125 11:37:28.831856 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:37:28 crc kubenswrapper[4706]: I1125 11:37:28.831877 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:37:28 crc kubenswrapper[4706]: I1125 11:37:28.831889 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:37:28Z","lastTransitionTime":"2025-11-25T11:37:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:37:28 crc kubenswrapper[4706]: I1125 11:37:28.934501 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:37:28 crc kubenswrapper[4706]: I1125 11:37:28.934555 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:37:28 crc kubenswrapper[4706]: I1125 11:37:28.934567 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:37:28 crc kubenswrapper[4706]: I1125 11:37:28.934589 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:37:28 crc kubenswrapper[4706]: I1125 11:37:28.934604 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:37:28Z","lastTransitionTime":"2025-11-25T11:37:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:37:29 crc kubenswrapper[4706]: I1125 11:37:29.037381 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:37:29 crc kubenswrapper[4706]: I1125 11:37:29.037438 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:37:29 crc kubenswrapper[4706]: I1125 11:37:29.037448 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:37:29 crc kubenswrapper[4706]: I1125 11:37:29.037470 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:37:29 crc kubenswrapper[4706]: I1125 11:37:29.037480 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:37:29Z","lastTransitionTime":"2025-11-25T11:37:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:37:29 crc kubenswrapper[4706]: I1125 11:37:29.140509 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:37:29 crc kubenswrapper[4706]: I1125 11:37:29.140552 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:37:29 crc kubenswrapper[4706]: I1125 11:37:29.140561 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:37:29 crc kubenswrapper[4706]: I1125 11:37:29.140578 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:37:29 crc kubenswrapper[4706]: I1125 11:37:29.140591 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:37:29Z","lastTransitionTime":"2025-11-25T11:37:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:37:29 crc kubenswrapper[4706]: I1125 11:37:29.243487 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:37:29 crc kubenswrapper[4706]: I1125 11:37:29.243546 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:37:29 crc kubenswrapper[4706]: I1125 11:37:29.243559 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:37:29 crc kubenswrapper[4706]: I1125 11:37:29.243581 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:37:29 crc kubenswrapper[4706]: I1125 11:37:29.243596 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:37:29Z","lastTransitionTime":"2025-11-25T11:37:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:37:29 crc kubenswrapper[4706]: I1125 11:37:29.347369 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:37:29 crc kubenswrapper[4706]: I1125 11:37:29.347727 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:37:29 crc kubenswrapper[4706]: I1125 11:37:29.347817 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:37:29 crc kubenswrapper[4706]: I1125 11:37:29.347922 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:37:29 crc kubenswrapper[4706]: I1125 11:37:29.348013 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:37:29Z","lastTransitionTime":"2025-11-25T11:37:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:37:29 crc kubenswrapper[4706]: I1125 11:37:29.451222 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:37:29 crc kubenswrapper[4706]: I1125 11:37:29.451263 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:37:29 crc kubenswrapper[4706]: I1125 11:37:29.451274 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:37:29 crc kubenswrapper[4706]: I1125 11:37:29.451293 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:37:29 crc kubenswrapper[4706]: I1125 11:37:29.451321 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:37:29Z","lastTransitionTime":"2025-11-25T11:37:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:37:29 crc kubenswrapper[4706]: I1125 11:37:29.554473 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:37:29 crc kubenswrapper[4706]: I1125 11:37:29.554522 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:37:29 crc kubenswrapper[4706]: I1125 11:37:29.554531 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:37:29 crc kubenswrapper[4706]: I1125 11:37:29.554548 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:37:29 crc kubenswrapper[4706]: I1125 11:37:29.554561 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:37:29Z","lastTransitionTime":"2025-11-25T11:37:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:37:29 crc kubenswrapper[4706]: I1125 11:37:29.657008 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:37:29 crc kubenswrapper[4706]: I1125 11:37:29.657050 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:37:29 crc kubenswrapper[4706]: I1125 11:37:29.657064 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:37:29 crc kubenswrapper[4706]: I1125 11:37:29.657082 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:37:29 crc kubenswrapper[4706]: I1125 11:37:29.657093 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:37:29Z","lastTransitionTime":"2025-11-25T11:37:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:37:29 crc kubenswrapper[4706]: I1125 11:37:29.759753 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:37:29 crc kubenswrapper[4706]: I1125 11:37:29.759796 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:37:29 crc kubenswrapper[4706]: I1125 11:37:29.759814 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:37:29 crc kubenswrapper[4706]: I1125 11:37:29.759836 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:37:29 crc kubenswrapper[4706]: I1125 11:37:29.759850 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:37:29Z","lastTransitionTime":"2025-11-25T11:37:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:37:29 crc kubenswrapper[4706]: I1125 11:37:29.862156 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:37:29 crc kubenswrapper[4706]: I1125 11:37:29.862509 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:37:29 crc kubenswrapper[4706]: I1125 11:37:29.862546 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:37:29 crc kubenswrapper[4706]: I1125 11:37:29.862570 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:37:29 crc kubenswrapper[4706]: I1125 11:37:29.862583 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:37:29Z","lastTransitionTime":"2025-11-25T11:37:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 11:37:29 crc kubenswrapper[4706]: I1125 11:37:29.922097 4706 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 11:37:29 crc kubenswrapper[4706]: I1125 11:37:29.922154 4706 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-l99rd" Nov 25 11:37:29 crc kubenswrapper[4706]: I1125 11:37:29.922108 4706 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 11:37:29 crc kubenswrapper[4706]: I1125 11:37:29.922108 4706 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 11:37:29 crc kubenswrapper[4706]: E1125 11:37:29.922419 4706 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 25 11:37:29 crc kubenswrapper[4706]: E1125 11:37:29.922434 4706 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 25 11:37:29 crc kubenswrapper[4706]: E1125 11:37:29.922641 4706 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-l99rd" podUID="14d69237-a4b7-43ea-ac81-f165eb532669" Nov 25 11:37:29 crc kubenswrapper[4706]: E1125 11:37:29.922747 4706 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 25 11:37:29 crc kubenswrapper[4706]: I1125 11:37:29.965760 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:37:29 crc kubenswrapper[4706]: I1125 11:37:29.965820 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:37:29 crc kubenswrapper[4706]: I1125 11:37:29.965829 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:37:29 crc kubenswrapper[4706]: I1125 11:37:29.965847 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:37:29 crc kubenswrapper[4706]: I1125 11:37:29.965860 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:37:29Z","lastTransitionTime":"2025-11-25T11:37:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:37:30 crc kubenswrapper[4706]: I1125 11:37:30.068765 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:37:30 crc kubenswrapper[4706]: I1125 11:37:30.069095 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:37:30 crc kubenswrapper[4706]: I1125 11:37:30.069179 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:37:30 crc kubenswrapper[4706]: I1125 11:37:30.069280 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:37:30 crc kubenswrapper[4706]: I1125 11:37:30.069400 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:37:30Z","lastTransitionTime":"2025-11-25T11:37:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:37:30 crc kubenswrapper[4706]: I1125 11:37:30.172689 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:37:30 crc kubenswrapper[4706]: I1125 11:37:30.172766 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:37:30 crc kubenswrapper[4706]: I1125 11:37:30.172780 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:37:30 crc kubenswrapper[4706]: I1125 11:37:30.172804 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:37:30 crc kubenswrapper[4706]: I1125 11:37:30.172819 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:37:30Z","lastTransitionTime":"2025-11-25T11:37:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:37:30 crc kubenswrapper[4706]: I1125 11:37:30.275783 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:37:30 crc kubenswrapper[4706]: I1125 11:37:30.275826 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:37:30 crc kubenswrapper[4706]: I1125 11:37:30.275836 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:37:30 crc kubenswrapper[4706]: I1125 11:37:30.275853 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:37:30 crc kubenswrapper[4706]: I1125 11:37:30.275863 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:37:30Z","lastTransitionTime":"2025-11-25T11:37:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:37:30 crc kubenswrapper[4706]: I1125 11:37:30.378627 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:37:30 crc kubenswrapper[4706]: I1125 11:37:30.378889 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:37:30 crc kubenswrapper[4706]: I1125 11:37:30.378956 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:37:30 crc kubenswrapper[4706]: I1125 11:37:30.379035 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:37:30 crc kubenswrapper[4706]: I1125 11:37:30.379110 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:37:30Z","lastTransitionTime":"2025-11-25T11:37:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:37:30 crc kubenswrapper[4706]: I1125 11:37:30.481443 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:37:30 crc kubenswrapper[4706]: I1125 11:37:30.481489 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:37:30 crc kubenswrapper[4706]: I1125 11:37:30.481500 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:37:30 crc kubenswrapper[4706]: I1125 11:37:30.481519 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:37:30 crc kubenswrapper[4706]: I1125 11:37:30.481532 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:37:30Z","lastTransitionTime":"2025-11-25T11:37:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:37:30 crc kubenswrapper[4706]: I1125 11:37:30.584166 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:37:30 crc kubenswrapper[4706]: I1125 11:37:30.584221 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:37:30 crc kubenswrapper[4706]: I1125 11:37:30.584233 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:37:30 crc kubenswrapper[4706]: I1125 11:37:30.584257 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:37:30 crc kubenswrapper[4706]: I1125 11:37:30.584271 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:37:30Z","lastTransitionTime":"2025-11-25T11:37:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:37:30 crc kubenswrapper[4706]: I1125 11:37:30.687462 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:37:30 crc kubenswrapper[4706]: I1125 11:37:30.687506 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:37:30 crc kubenswrapper[4706]: I1125 11:37:30.687518 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:37:30 crc kubenswrapper[4706]: I1125 11:37:30.687537 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:37:30 crc kubenswrapper[4706]: I1125 11:37:30.687547 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:37:30Z","lastTransitionTime":"2025-11-25T11:37:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:37:30 crc kubenswrapper[4706]: I1125 11:37:30.791153 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:37:30 crc kubenswrapper[4706]: I1125 11:37:30.791200 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:37:30 crc kubenswrapper[4706]: I1125 11:37:30.791215 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:37:30 crc kubenswrapper[4706]: I1125 11:37:30.791234 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:37:30 crc kubenswrapper[4706]: I1125 11:37:30.791243 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:37:30Z","lastTransitionTime":"2025-11-25T11:37:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:37:30 crc kubenswrapper[4706]: I1125 11:37:30.893393 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:37:30 crc kubenswrapper[4706]: I1125 11:37:30.893451 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:37:30 crc kubenswrapper[4706]: I1125 11:37:30.893465 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:37:30 crc kubenswrapper[4706]: I1125 11:37:30.893486 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:37:30 crc kubenswrapper[4706]: I1125 11:37:30.893502 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:37:30Z","lastTransitionTime":"2025-11-25T11:37:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:37:30 crc kubenswrapper[4706]: I1125 11:37:30.922709 4706 scope.go:117] "RemoveContainer" containerID="67aac9b1fc77bcf7bb71812ee95214930edbb62bf5efb82d5128c53fd392a346" Nov 25 11:37:30 crc kubenswrapper[4706]: E1125 11:37:30.922950 4706 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-q9rpr_openshift-ovn-kubernetes(f1218bae-4153-4490-8847-ab2d07ca0ab6)\"" pod="openshift-ovn-kubernetes/ovnkube-node-q9rpr" podUID="f1218bae-4153-4490-8847-ab2d07ca0ab6" Nov 25 11:37:30 crc kubenswrapper[4706]: I1125 11:37:30.996344 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:37:30 crc kubenswrapper[4706]: I1125 11:37:30.996827 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:37:30 crc kubenswrapper[4706]: I1125 11:37:30.996927 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:37:30 crc kubenswrapper[4706]: I1125 11:37:30.997035 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:37:30 crc kubenswrapper[4706]: I1125 11:37:30.997138 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:37:30Z","lastTransitionTime":"2025-11-25T11:37:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:37:31 crc kubenswrapper[4706]: I1125 11:37:31.099829 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:37:31 crc kubenswrapper[4706]: I1125 11:37:31.099899 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:37:31 crc kubenswrapper[4706]: I1125 11:37:31.099913 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:37:31 crc kubenswrapper[4706]: I1125 11:37:31.099934 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:37:31 crc kubenswrapper[4706]: I1125 11:37:31.099949 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:37:31Z","lastTransitionTime":"2025-11-25T11:37:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:37:31 crc kubenswrapper[4706]: I1125 11:37:31.202414 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:37:31 crc kubenswrapper[4706]: I1125 11:37:31.202463 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:37:31 crc kubenswrapper[4706]: I1125 11:37:31.202475 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:37:31 crc kubenswrapper[4706]: I1125 11:37:31.202494 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:37:31 crc kubenswrapper[4706]: I1125 11:37:31.202506 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:37:31Z","lastTransitionTime":"2025-11-25T11:37:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:37:31 crc kubenswrapper[4706]: I1125 11:37:31.304973 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:37:31 crc kubenswrapper[4706]: I1125 11:37:31.305008 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:37:31 crc kubenswrapper[4706]: I1125 11:37:31.305016 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:37:31 crc kubenswrapper[4706]: I1125 11:37:31.305033 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:37:31 crc kubenswrapper[4706]: I1125 11:37:31.305044 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:37:31Z","lastTransitionTime":"2025-11-25T11:37:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:37:31 crc kubenswrapper[4706]: I1125 11:37:31.407332 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:37:31 crc kubenswrapper[4706]: I1125 11:37:31.407407 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:37:31 crc kubenswrapper[4706]: I1125 11:37:31.407426 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:37:31 crc kubenswrapper[4706]: I1125 11:37:31.407478 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:37:31 crc kubenswrapper[4706]: I1125 11:37:31.407496 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:37:31Z","lastTransitionTime":"2025-11-25T11:37:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:37:31 crc kubenswrapper[4706]: I1125 11:37:31.509621 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:37:31 crc kubenswrapper[4706]: I1125 11:37:31.509664 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:37:31 crc kubenswrapper[4706]: I1125 11:37:31.509676 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:37:31 crc kubenswrapper[4706]: I1125 11:37:31.509694 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:37:31 crc kubenswrapper[4706]: I1125 11:37:31.509704 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:37:31Z","lastTransitionTime":"2025-11-25T11:37:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:37:31 crc kubenswrapper[4706]: I1125 11:37:31.617084 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:37:31 crc kubenswrapper[4706]: I1125 11:37:31.617144 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:37:31 crc kubenswrapper[4706]: I1125 11:37:31.617155 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:37:31 crc kubenswrapper[4706]: I1125 11:37:31.617177 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:37:31 crc kubenswrapper[4706]: I1125 11:37:31.617189 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:37:31Z","lastTransitionTime":"2025-11-25T11:37:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:37:31 crc kubenswrapper[4706]: I1125 11:37:31.719872 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:37:31 crc kubenswrapper[4706]: I1125 11:37:31.719923 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:37:31 crc kubenswrapper[4706]: I1125 11:37:31.719933 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:37:31 crc kubenswrapper[4706]: I1125 11:37:31.719951 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:37:31 crc kubenswrapper[4706]: I1125 11:37:31.719964 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:37:31Z","lastTransitionTime":"2025-11-25T11:37:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:37:31 crc kubenswrapper[4706]: I1125 11:37:31.823032 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:37:31 crc kubenswrapper[4706]: I1125 11:37:31.823086 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:37:31 crc kubenswrapper[4706]: I1125 11:37:31.823099 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:37:31 crc kubenswrapper[4706]: I1125 11:37:31.823121 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:37:31 crc kubenswrapper[4706]: I1125 11:37:31.823137 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:37:31Z","lastTransitionTime":"2025-11-25T11:37:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 11:37:31 crc kubenswrapper[4706]: I1125 11:37:31.921599 4706 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 11:37:31 crc kubenswrapper[4706]: I1125 11:37:31.921620 4706 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 11:37:31 crc kubenswrapper[4706]: I1125 11:37:31.921678 4706 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 11:37:31 crc kubenswrapper[4706]: I1125 11:37:31.921630 4706 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-l99rd" Nov 25 11:37:31 crc kubenswrapper[4706]: E1125 11:37:31.921759 4706 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 25 11:37:31 crc kubenswrapper[4706]: E1125 11:37:31.921881 4706 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-l99rd" podUID="14d69237-a4b7-43ea-ac81-f165eb532669" Nov 25 11:37:31 crc kubenswrapper[4706]: E1125 11:37:31.921966 4706 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 25 11:37:31 crc kubenswrapper[4706]: E1125 11:37:31.922024 4706 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 25 11:37:31 crc kubenswrapper[4706]: I1125 11:37:31.927279 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:37:31 crc kubenswrapper[4706]: I1125 11:37:31.927348 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:37:31 crc kubenswrapper[4706]: I1125 11:37:31.927361 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:37:31 crc kubenswrapper[4706]: I1125 11:37:31.927384 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:37:31 crc kubenswrapper[4706]: I1125 11:37:31.927401 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:37:31Z","lastTransitionTime":"2025-11-25T11:37:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:37:31 crc kubenswrapper[4706]: I1125 11:37:31.935292 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-dhfpm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0930887a-320c-4506-8c9c-f94d6d64516a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://736e37ff944f81ac9808ff8a76d36837aeabc76a4c08bbeba3f707616e1f0884\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"}
,{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g7sgt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://86f4bfd310c27ea3b77c2f58c91e153db5f1794871a3fbeb5711cc119aa81e38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g7sgt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T11:36:53Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-dhfpm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:37:31Z is after 2025-08-24T17:21:41Z" Nov 25 11:37:31 crc kubenswrapper[4706]: I1125 11:37:31.948939 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-nh9sc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7813e79d-885d-4cf1-ac27-039e998473b7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea634334242536d35bf36e9078539cad4658b161b61e6051d9bb6d8544e71f5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g9gvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T11:36:53Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-nh9sc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:37:31Z is after 2025-08-24T17:21:41Z" Nov 25 11:37:31 crc kubenswrapper[4706]: I1125 11:37:31.960156 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-qkkfz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dc09de93-57e8-4697-8ce8-70bfc1b693e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:37:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:37:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:37:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:37:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6daff2070c60f609fd06be9589e3cd8d304d131f7b9669c7be4b8e9178df8f8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d7
93426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:37:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hmrl8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://39eec3aac772cc9463505277d6b3f7cf2eb7621e4add4f14e53110e3db8c4cdc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:37:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hmrl8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T11:37:06Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-qkkfz\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:37:31Z is after 2025-08-24T17:21:41Z" Nov 25 11:37:31 crc kubenswrapper[4706]: I1125 11:37:31.976397 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ce0e2e75-834b-46fb-bc84-229e60f904b1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:37:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:37:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://86001c3abc077d36ed1fa0c37bb6163896fb9cde28b58affd2f67fb8a024165b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubern
etes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://24c326f147def477e6dd794576cbdc9aed69f799cc18984f475496748b05eb32\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c65af8b438f57256d8c22cb34f68922d628338e384ca97d694b0dbf2d41a5e27\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://db08dd21321e0e49c2bcec934b9c4ca65e93ed3eff5d3d110b0137d37ebe255e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cl
uster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://333951d9a31cf3e7c1e98d27f636e2425f87cd082a8a5acae66533a76f5ad206\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-25T11:36:51Z\\\",\\\"message\\\":\\\" shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI1125 11:36:51.292762 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI1125 11:36:51.292767 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI1125 11:36:51.292853 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI1125 11:36:51.292876 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI1125 11:36:51.293041 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-1230105117/tls.crt::/tmp/serving-cert-1230105117/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1764070595\\\\\\\\\\\\\\\" (2025-11-25 11:36:34 +0000 UTC to 2025-12-25 11:36:35 +0000 UTC (now=2025-11-25 11:36:51.29301304 +0000 UTC))\\\\\\\"\\\\nI1125 11:36:51.293171 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1230105117/tls.crt::/tmp/serving-cert-1230105117/tls.key\\\\\\\"\\\\nI1125 
11:36:51.293210 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1764070605\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1764070605\\\\\\\\\\\\\\\" (2025-11-25 10:36:45 +0000 UTC to 2026-11-25 10:36:45 +0000 UTC (now=2025-11-25 11:36:51.293188774 +0000 UTC))\\\\\\\"\\\\nI1125 11:36:51.293233 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI1125 11:36:51.293259 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI1125 11:36:51.293279 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nF1125 11:36:51.293378 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T11:36:34Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe85a38abd8df52ad0fbd3dd6b048b8c42390b6064d3601996727dadb3fcbe69\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:34Z\\\"}}}],\\\"host
IP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea87e7399f4267877f5e967eb62bd500b646e88ee8ee20a71c3bf7c0941f02a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ea87e7399f4267877f5e967eb62bd500b646e88ee8ee20a71c3bf7c0941f02a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T11:36:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T11:36:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T11:36:32Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:37:31Z is after 2025-08-24T17:21:41Z" Nov 25 11:37:31 crc kubenswrapper[4706]: I1125 11:37:31.989285 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-lpc7s" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3ec2e656-a68d-4339-92d5-0c157f7f7783\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c3a1481dd8cb88b79d8addfbfd40caf18850769e4492c2af316105b7f6779f9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w54mf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T11:36:56Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lpc7s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:37:31Z is after 2025-08-24T17:21:41Z" Nov 25 11:37:32 crc kubenswrapper[4706]: I1125 11:37:32.006949 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad79bed891e80837fc120b01cb2b41a16493f2f5281c83a6bb489cc17c6da995\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T1
1:36:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:37:32Z is after 2025-08-24T17:21:41Z" Nov 25 11:37:32 crc kubenswrapper[4706]: I1125 11:37:32.024073 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://998291d5af3be798ff4e2f00d043f615e086fef44e541071bbaf781983955ce6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-11-25T11:37:32Z is after 2025-08-24T17:21:41Z" Nov 25 11:37:32 crc kubenswrapper[4706]: I1125 11:37:32.029808 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:37:32 crc kubenswrapper[4706]: I1125 11:37:32.029861 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:37:32 crc kubenswrapper[4706]: I1125 11:37:32.029873 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:37:32 crc kubenswrapper[4706]: I1125 11:37:32.029895 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:37:32 crc kubenswrapper[4706]: I1125 11:37:32.029910 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:37:32Z","lastTransitionTime":"2025-11-25T11:37:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:37:32 crc kubenswrapper[4706]: I1125 11:37:32.040369 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:51Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:37:32Z is after 2025-08-24T17:21:41Z" Nov 25 11:37:32 crc kubenswrapper[4706]: I1125 11:37:32.057461 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:51Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:37:32Z is after 2025-08-24T17:21:41Z" Nov 25 11:37:32 crc kubenswrapper[4706]: I1125 11:37:32.071240 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:51Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:37:32Z is after 2025-08-24T17:21:41Z" Nov 25 11:37:32 crc kubenswrapper[4706]: I1125 11:37:32.084074 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-s47nr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9912058e-28f5-4cec-9eeb-03e37e0dc5c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d03353478b53d9441951702b66365bb3a08ad9c509347472bbb31049851435a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wfqx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T11:36:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-s47nr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:37:32Z is after 2025-08-24T17:21:41Z" Nov 25 11:37:32 crc kubenswrapper[4706]: I1125 11:37:32.106494 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-q9rpr" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f1218bae-4153-4490-8847-ab2d07ca0ab6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:53Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:53Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da5cea02464a703174faaa2a8a7dc6ba3c26bca96be0219f7304d81aba5be54e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b55sf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e92e9ade6889e5400b3c3ddff066aa544d425cf0637b75071678b8c63f8e35f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b55sf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca28080773ed8c026159b2309297e1c8ccd7cf79c4c19e3a62d89bc5a95851fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b55sf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://86d79d5837993b0bfb40c7114fd69f45a9bfd2e956b5b0fe062706e920fecd48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:55Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b55sf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7df3bf6c507e0fd5fb0f32a8785d67c96f47255fdc5d2aafb8838260ac334d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b55sf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://96aa7fcebdc88f01d2260f95d255244e28c30d422f954da2222a5b7c17d05b96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b55sf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://67aac9b1fc77bcf7bb71812ee95214930edbb62bf5efb82d5128c53fd392a346\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://67aac9b1fc77bcf7bb71812ee95214930edbb62bf5efb82d5128c53fd392a346\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-25T11:37:16Z\\\",\\\"message\\\":\\\"e Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:04 10.217.0.4]} options:{GoMap:map[iface-id-ver:3b6479f0-333b-4a96-9adf-2099afdc2447 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:04 10.217.0.4]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == 
{61897e97-c771-4738-8709-09636387cb00}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI1125 11:37:16.268126 6342 model_client.go:398] Mutate operations generated as: [{Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:61897e97-c771-4738-8709-09636387cb00}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {7e8bb06a-06a5-45bc-a752-26a17d322811}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF1125 11:37:16.268101 6342 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T11:37:15Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-q9rpr_openshift-ovn-kubernetes(f1218bae-4153-4490-8847-ab2d07ca0ab6)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b55sf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://62c923d955013808a55d99cb73f4239900fc83a2f53e1e8cceff3e9bc5768188\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b55sf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56474d5374e1047078d38c60dcbd00f4495bcc0d651a9e75fa70d64e34b10baa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://56474d5374e1047078
d38c60dcbd00f4495bcc0d651a9e75fa70d64e34b10baa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T11:36:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T11:36:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b55sf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T11:36:53Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-q9rpr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:37:32Z is after 2025-08-24T17:21:41Z" Nov 25 11:37:32 crc kubenswrapper[4706]: I1125 11:37:32.120605 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"363ff191-6229-47e9-a7d0-1c72f21e7c61\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://71b496da1a81efbb50a84766e610a6b03e032a4e2cb5a71191395ffb85f6b1f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://83b1d9c60793e3e0b5943d7cccd50656df78c4655b84e12c8dd1ba7d99a7990d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab8621c83015577b9039ac2ba9ce46f8b29f66d77da31a02d179132d923741bd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4d0ce4e175dd8da8d15b26e60ced87ee11dc8079ce730cfbdce1b3f4f08b1d2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-11-25T11:36:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T11:36:32Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:37:32Z is after 2025-08-24T17:21:41Z" Nov 25 11:37:32 crc kubenswrapper[4706]: I1125 11:37:32.132259 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:37:32 crc kubenswrapper[4706]: I1125 11:37:32.132320 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:37:32 crc kubenswrapper[4706]: I1125 11:37:32.132332 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:37:32 crc kubenswrapper[4706]: I1125 11:37:32.132351 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:37:32 crc kubenswrapper[4706]: I1125 11:37:32.132361 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:37:32Z","lastTransitionTime":"2025-11-25T11:37:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns 
error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 11:37:32 crc kubenswrapper[4706]: I1125 11:37:32.140253 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"21277b4b-1e5d-4345-ba2a-39957194f021\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e336808761e1c6c5eaa04fd06cbb4d0c0384a2cbd3dfd4c1b3a877e7e0f0c82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/sta
tic-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cfaf9f13d49eb5c52817b0d082263791cc1dca82a23282452f1393dd693ca27a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://634b7b0df29329562f6ead9641186eee129945efc5a2d784ff6474d213b2baea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b3642576d5ecf314b809b90f8a76244e5ea54178f78729eb6521b09b7daa9c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be
30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b63b9c87fed8e56acef62af3c5b75cf637a058ada9dd8ef5afc317e99e12162\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://adfea6c3d1d62c9ce2656cae203bccf32ab19165305db3731e3db92915dd7d4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\
",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://adfea6c3d1d62c9ce2656cae203bccf32ab19165305db3731e3db92915dd7d4c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T11:36:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T11:36:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://29e8fd847ab683471a692fda5b7c7d6105db11c5aecf09bb23080c58cd97c06a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://29e8fd847ab683471a692fda5b7c7d6105db11c5aecf09bb23080c58cd97c06a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T11:36:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T11:36:34Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://a4b1f85fd0291c239854093c0b26f0802291cbe4c4bab384fc69a5d21165343a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a4b1f85fd0291c239854093c0b26f0802291cbe4c4bab384fc69a5d21165343a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T11:36:
35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T11:36:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T11:36:32Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:37:32Z is after 2025-08-24T17:21:41Z" Nov 25 11:37:32 crc kubenswrapper[4706]: I1125 11:37:32.153727 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://23abd4bcc68d2a090882edb55d0e8569032affe5f4ebf05279e18ba3e9f9d8db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a068e34d29a7f39157ffd6e364ce643f5280f5184c13a281043247117d451364\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:37:32Z is after 2025-08-24T17:21:41Z" Nov 25 11:37:32 crc kubenswrapper[4706]: I1125 11:37:32.169490 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-cjmvf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"150b96fa-570a-4b32-a82a-3275127d5b51\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:37:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:37:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:37:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de18c07bf8490d7495947e9a271e3e7273b9ffdcc43afd2a0468394af0ae0b0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:37:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2ml6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f9f9981b5f064aa5b007f4b2a2ecdc7f783e1a33e73b9e8b157eccfc54e93ff6\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f9f9981b5f064aa5b007f4b2a2ecdc7f783e1a33e73b9e8b157eccfc54e93ff6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T11:36:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T11:36:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2ml6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e1e9db3e634932b935a1eb04923d02faf743f2831039edeba41d172ea6d8c52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e1e9db3e634932b935a1eb04923d02faf743f2831039edeba41d172ea6d8c52\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T11:36:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T11:36:57Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2ml6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0cee50b6983d9c650efbb5959311b6c33c2e0e2ff504fceadc8ff807f368c36e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0cee50b6983d9c650efbb5959311b6c33c2e0e2ff504fceadc8ff807f368c36e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T11:36:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T11:36:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2ml6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://29281
b46d740a7e527313a667c3896430eb51ba2c50c5e406fb94d8959dbe855\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://29281b46d740a7e527313a667c3896430eb51ba2c50c5e406fb94d8959dbe855\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T11:36:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T11:36:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2ml6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0ff2d1408b3b635ada726fc15a15472d3fd7c61e21ffe0379d137fdd543c436\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b0ff2d1408b3b635ada726fc15a15472d3fd7c61e21ffe0379d137fdd543c436\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T11:37:01Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2025-11-25T11:37:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2ml6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c3b94746fe10e0f9375491a41d10973d2576eb69f0883cef3ef0132efb0e8fc9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c3b94746fe10e0f9375491a41d10973d2576eb69f0883cef3ef0132efb0e8fc9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T11:37:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T11:37:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2ml6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T11:36:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-cjmvf\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:37:32Z is after 2025-08-24T17:21:41Z" Nov 25 11:37:32 crc kubenswrapper[4706]: I1125 11:37:32.183841 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-l99rd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"14d69237-a4b7-43ea-ac81-f165eb532669\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:37:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:37:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:37:07Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:37:07Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mmr9l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mmr9l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T11:37:07Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-l99rd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:37:32Z is after 2025-08-24T17:21:41Z" Nov 25 11:37:32 crc 
kubenswrapper[4706]: I1125 11:37:32.198059 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b156f76-9878-4527-95c5-27adfffbcd87\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:37:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:37:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b50a8135a692a512f05f3a902977e8b7a505d8346fb6e96c26ffc58d075e902c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7224a1c52df964a792e6197a4f97313b139ffbd6d65820d93e36561e817ddc20\\\",\\\"image\\\":\\\"
quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://78068d04cf52a463ca3595227c44918d360266c71afc97c1792e48b004bebe42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0299d89c1a2ea9c2a4bb46691aecd2d86618d3620e7406e1af57e1c03ce50b94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6d
e2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0299d89c1a2ea9c2a4bb46691aecd2d86618d3620e7406e1af57e1c03ce50b94\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T11:36:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T11:36:33Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T11:36:32Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:37:32Z is after 2025-08-24T17:21:41Z" Nov 25 11:37:32 crc kubenswrapper[4706]: I1125 11:37:32.235181 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:37:32 crc kubenswrapper[4706]: I1125 11:37:32.235227 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:37:32 crc kubenswrapper[4706]: I1125 11:37:32.235237 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:37:32 crc kubenswrapper[4706]: I1125 11:37:32.235256 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:37:32 crc kubenswrapper[4706]: I1125 11:37:32.235266 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:37:32Z","lastTransitionTime":"2025-11-25T11:37:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 11:37:32 crc kubenswrapper[4706]: I1125 11:37:32.337614 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:37:32 crc kubenswrapper[4706]: I1125 11:37:32.337660 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:37:32 crc kubenswrapper[4706]: I1125 11:37:32.337669 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:37:32 crc kubenswrapper[4706]: I1125 11:37:32.337688 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:37:32 crc kubenswrapper[4706]: I1125 11:37:32.337700 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:37:32Z","lastTransitionTime":"2025-11-25T11:37:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:37:32 crc kubenswrapper[4706]: I1125 11:37:32.440097 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:37:32 crc kubenswrapper[4706]: I1125 11:37:32.440154 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:37:32 crc kubenswrapper[4706]: I1125 11:37:32.440170 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:37:32 crc kubenswrapper[4706]: I1125 11:37:32.440191 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:37:32 crc kubenswrapper[4706]: I1125 11:37:32.440206 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:37:32Z","lastTransitionTime":"2025-11-25T11:37:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:37:32 crc kubenswrapper[4706]: I1125 11:37:32.543274 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:37:32 crc kubenswrapper[4706]: I1125 11:37:32.543345 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:37:32 crc kubenswrapper[4706]: I1125 11:37:32.543357 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:37:32 crc kubenswrapper[4706]: I1125 11:37:32.543375 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:37:32 crc kubenswrapper[4706]: I1125 11:37:32.543387 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:37:32Z","lastTransitionTime":"2025-11-25T11:37:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:37:32 crc kubenswrapper[4706]: I1125 11:37:32.646479 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:37:32 crc kubenswrapper[4706]: I1125 11:37:32.646539 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:37:32 crc kubenswrapper[4706]: I1125 11:37:32.646548 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:37:32 crc kubenswrapper[4706]: I1125 11:37:32.646567 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:37:32 crc kubenswrapper[4706]: I1125 11:37:32.646583 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:37:32Z","lastTransitionTime":"2025-11-25T11:37:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:37:32 crc kubenswrapper[4706]: I1125 11:37:32.749990 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:37:32 crc kubenswrapper[4706]: I1125 11:37:32.750470 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:37:32 crc kubenswrapper[4706]: I1125 11:37:32.750486 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:37:32 crc kubenswrapper[4706]: I1125 11:37:32.750508 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:37:32 crc kubenswrapper[4706]: I1125 11:37:32.750520 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:37:32Z","lastTransitionTime":"2025-11-25T11:37:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:37:32 crc kubenswrapper[4706]: I1125 11:37:32.852932 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:37:32 crc kubenswrapper[4706]: I1125 11:37:32.852988 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:37:32 crc kubenswrapper[4706]: I1125 11:37:32.852997 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:37:32 crc kubenswrapper[4706]: I1125 11:37:32.853015 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:37:32 crc kubenswrapper[4706]: I1125 11:37:32.853026 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:37:32Z","lastTransitionTime":"2025-11-25T11:37:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:37:32 crc kubenswrapper[4706]: I1125 11:37:32.956609 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:37:32 crc kubenswrapper[4706]: I1125 11:37:32.956651 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:37:32 crc kubenswrapper[4706]: I1125 11:37:32.956661 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:37:32 crc kubenswrapper[4706]: I1125 11:37:32.956681 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:37:32 crc kubenswrapper[4706]: I1125 11:37:32.956690 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:37:32Z","lastTransitionTime":"2025-11-25T11:37:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:37:33 crc kubenswrapper[4706]: I1125 11:37:33.059140 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:37:33 crc kubenswrapper[4706]: I1125 11:37:33.059235 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:37:33 crc kubenswrapper[4706]: I1125 11:37:33.059248 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:37:33 crc kubenswrapper[4706]: I1125 11:37:33.059268 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:37:33 crc kubenswrapper[4706]: I1125 11:37:33.059280 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:37:33Z","lastTransitionTime":"2025-11-25T11:37:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:37:33 crc kubenswrapper[4706]: I1125 11:37:33.161796 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:37:33 crc kubenswrapper[4706]: I1125 11:37:33.161837 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:37:33 crc kubenswrapper[4706]: I1125 11:37:33.161849 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:37:33 crc kubenswrapper[4706]: I1125 11:37:33.161893 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:37:33 crc kubenswrapper[4706]: I1125 11:37:33.161910 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:37:33Z","lastTransitionTime":"2025-11-25T11:37:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:37:33 crc kubenswrapper[4706]: I1125 11:37:33.264071 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:37:33 crc kubenswrapper[4706]: I1125 11:37:33.264133 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:37:33 crc kubenswrapper[4706]: I1125 11:37:33.264147 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:37:33 crc kubenswrapper[4706]: I1125 11:37:33.264170 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:37:33 crc kubenswrapper[4706]: I1125 11:37:33.264187 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:37:33Z","lastTransitionTime":"2025-11-25T11:37:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:37:33 crc kubenswrapper[4706]: I1125 11:37:33.366734 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:37:33 crc kubenswrapper[4706]: I1125 11:37:33.366767 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:37:33 crc kubenswrapper[4706]: I1125 11:37:33.366792 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:37:33 crc kubenswrapper[4706]: I1125 11:37:33.366807 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:37:33 crc kubenswrapper[4706]: I1125 11:37:33.366819 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:37:33Z","lastTransitionTime":"2025-11-25T11:37:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:37:33 crc kubenswrapper[4706]: I1125 11:37:33.469959 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:37:33 crc kubenswrapper[4706]: I1125 11:37:33.470000 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:37:33 crc kubenswrapper[4706]: I1125 11:37:33.470011 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:37:33 crc kubenswrapper[4706]: I1125 11:37:33.470029 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:37:33 crc kubenswrapper[4706]: I1125 11:37:33.470042 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:37:33Z","lastTransitionTime":"2025-11-25T11:37:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:37:33 crc kubenswrapper[4706]: I1125 11:37:33.573387 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:37:33 crc kubenswrapper[4706]: I1125 11:37:33.573432 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:37:33 crc kubenswrapper[4706]: I1125 11:37:33.573457 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:37:33 crc kubenswrapper[4706]: I1125 11:37:33.573475 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:37:33 crc kubenswrapper[4706]: I1125 11:37:33.573487 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:37:33Z","lastTransitionTime":"2025-11-25T11:37:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:37:33 crc kubenswrapper[4706]: I1125 11:37:33.676428 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:37:33 crc kubenswrapper[4706]: I1125 11:37:33.676482 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:37:33 crc kubenswrapper[4706]: I1125 11:37:33.676492 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:37:33 crc kubenswrapper[4706]: I1125 11:37:33.676508 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:37:33 crc kubenswrapper[4706]: I1125 11:37:33.676518 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:37:33Z","lastTransitionTime":"2025-11-25T11:37:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:37:33 crc kubenswrapper[4706]: I1125 11:37:33.782375 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:37:33 crc kubenswrapper[4706]: I1125 11:37:33.782484 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:37:33 crc kubenswrapper[4706]: I1125 11:37:33.782498 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:37:33 crc kubenswrapper[4706]: I1125 11:37:33.782519 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:37:33 crc kubenswrapper[4706]: I1125 11:37:33.782533 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:37:33Z","lastTransitionTime":"2025-11-25T11:37:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:37:33 crc kubenswrapper[4706]: I1125 11:37:33.884945 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:37:33 crc kubenswrapper[4706]: I1125 11:37:33.885283 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:37:33 crc kubenswrapper[4706]: I1125 11:37:33.885544 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:37:33 crc kubenswrapper[4706]: I1125 11:37:33.885703 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:37:33 crc kubenswrapper[4706]: I1125 11:37:33.885884 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:37:33Z","lastTransitionTime":"2025-11-25T11:37:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 11:37:33 crc kubenswrapper[4706]: I1125 11:37:33.923205 4706 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 11:37:33 crc kubenswrapper[4706]: I1125 11:37:33.923373 4706 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 11:37:33 crc kubenswrapper[4706]: E1125 11:37:33.923518 4706 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 25 11:37:33 crc kubenswrapper[4706]: I1125 11:37:33.923628 4706 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 11:37:33 crc kubenswrapper[4706]: I1125 11:37:33.923705 4706 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-l99rd" Nov 25 11:37:33 crc kubenswrapper[4706]: E1125 11:37:33.923702 4706 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 25 11:37:33 crc kubenswrapper[4706]: E1125 11:37:33.923809 4706 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 25 11:37:33 crc kubenswrapper[4706]: E1125 11:37:33.923899 4706 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-l99rd" podUID="14d69237-a4b7-43ea-ac81-f165eb532669" Nov 25 11:37:33 crc kubenswrapper[4706]: I1125 11:37:33.988162 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:37:33 crc kubenswrapper[4706]: I1125 11:37:33.988192 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:37:33 crc kubenswrapper[4706]: I1125 11:37:33.988200 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:37:33 crc kubenswrapper[4706]: I1125 11:37:33.988217 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:37:33 crc kubenswrapper[4706]: I1125 11:37:33.988227 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:37:33Z","lastTransitionTime":"2025-11-25T11:37:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:37:34 crc kubenswrapper[4706]: I1125 11:37:34.091529 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:37:34 crc kubenswrapper[4706]: I1125 11:37:34.091586 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:37:34 crc kubenswrapper[4706]: I1125 11:37:34.091601 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:37:34 crc kubenswrapper[4706]: I1125 11:37:34.091621 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:37:34 crc kubenswrapper[4706]: I1125 11:37:34.091635 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:37:34Z","lastTransitionTime":"2025-11-25T11:37:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:37:34 crc kubenswrapper[4706]: I1125 11:37:34.194911 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:37:34 crc kubenswrapper[4706]: I1125 11:37:34.194975 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:37:34 crc kubenswrapper[4706]: I1125 11:37:34.194987 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:37:34 crc kubenswrapper[4706]: I1125 11:37:34.195011 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:37:34 crc kubenswrapper[4706]: I1125 11:37:34.195027 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:37:34Z","lastTransitionTime":"2025-11-25T11:37:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:37:34 crc kubenswrapper[4706]: I1125 11:37:34.297505 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:37:34 crc kubenswrapper[4706]: I1125 11:37:34.297782 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:37:34 crc kubenswrapper[4706]: I1125 11:37:34.297916 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:37:34 crc kubenswrapper[4706]: I1125 11:37:34.298021 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:37:34 crc kubenswrapper[4706]: I1125 11:37:34.298110 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:37:34Z","lastTransitionTime":"2025-11-25T11:37:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:37:34 crc kubenswrapper[4706]: I1125 11:37:34.400522 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:37:34 crc kubenswrapper[4706]: I1125 11:37:34.401017 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:37:34 crc kubenswrapper[4706]: I1125 11:37:34.401158 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:37:34 crc kubenswrapper[4706]: I1125 11:37:34.401263 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:37:34 crc kubenswrapper[4706]: I1125 11:37:34.401397 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:37:34Z","lastTransitionTime":"2025-11-25T11:37:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:37:34 crc kubenswrapper[4706]: I1125 11:37:34.504344 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:37:34 crc kubenswrapper[4706]: I1125 11:37:34.504678 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:37:34 crc kubenswrapper[4706]: I1125 11:37:34.504750 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:37:34 crc kubenswrapper[4706]: I1125 11:37:34.504820 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:37:34 crc kubenswrapper[4706]: I1125 11:37:34.504878 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:37:34Z","lastTransitionTime":"2025-11-25T11:37:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:37:34 crc kubenswrapper[4706]: I1125 11:37:34.606884 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:37:34 crc kubenswrapper[4706]: I1125 11:37:34.606931 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:37:34 crc kubenswrapper[4706]: I1125 11:37:34.606946 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:37:34 crc kubenswrapper[4706]: I1125 11:37:34.606967 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:37:34 crc kubenswrapper[4706]: I1125 11:37:34.606981 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:37:34Z","lastTransitionTime":"2025-11-25T11:37:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:37:34 crc kubenswrapper[4706]: I1125 11:37:34.709475 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:37:34 crc kubenswrapper[4706]: I1125 11:37:34.709876 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:37:34 crc kubenswrapper[4706]: I1125 11:37:34.709999 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:37:34 crc kubenswrapper[4706]: I1125 11:37:34.710120 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:37:34 crc kubenswrapper[4706]: I1125 11:37:34.710218 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:37:34Z","lastTransitionTime":"2025-11-25T11:37:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:37:34 crc kubenswrapper[4706]: I1125 11:37:34.812914 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:37:34 crc kubenswrapper[4706]: I1125 11:37:34.813152 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:37:34 crc kubenswrapper[4706]: I1125 11:37:34.813211 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:37:34 crc kubenswrapper[4706]: I1125 11:37:34.813349 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:37:34 crc kubenswrapper[4706]: I1125 11:37:34.813451 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:37:34Z","lastTransitionTime":"2025-11-25T11:37:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:37:34 crc kubenswrapper[4706]: I1125 11:37:34.915853 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:37:34 crc kubenswrapper[4706]: I1125 11:37:34.915885 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:37:34 crc kubenswrapper[4706]: I1125 11:37:34.915894 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:37:34 crc kubenswrapper[4706]: I1125 11:37:34.915909 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:37:34 crc kubenswrapper[4706]: I1125 11:37:34.915919 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:37:34Z","lastTransitionTime":"2025-11-25T11:37:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:37:35 crc kubenswrapper[4706]: I1125 11:37:35.018295 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:37:35 crc kubenswrapper[4706]: I1125 11:37:35.018910 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:37:35 crc kubenswrapper[4706]: I1125 11:37:35.018998 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:37:35 crc kubenswrapper[4706]: I1125 11:37:35.019181 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:37:35 crc kubenswrapper[4706]: I1125 11:37:35.019285 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:37:35Z","lastTransitionTime":"2025-11-25T11:37:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:37:35 crc kubenswrapper[4706]: I1125 11:37:35.122613 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:37:35 crc kubenswrapper[4706]: I1125 11:37:35.122662 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:37:35 crc kubenswrapper[4706]: I1125 11:37:35.122672 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:37:35 crc kubenswrapper[4706]: I1125 11:37:35.122691 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:37:35 crc kubenswrapper[4706]: I1125 11:37:35.122703 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:37:35Z","lastTransitionTime":"2025-11-25T11:37:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:37:35 crc kubenswrapper[4706]: I1125 11:37:35.225450 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:37:35 crc kubenswrapper[4706]: I1125 11:37:35.225820 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:37:35 crc kubenswrapper[4706]: I1125 11:37:35.225997 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:37:35 crc kubenswrapper[4706]: I1125 11:37:35.226134 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:37:35 crc kubenswrapper[4706]: I1125 11:37:35.226220 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:37:35Z","lastTransitionTime":"2025-11-25T11:37:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:37:35 crc kubenswrapper[4706]: I1125 11:37:35.329417 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:37:35 crc kubenswrapper[4706]: I1125 11:37:35.329741 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:37:35 crc kubenswrapper[4706]: I1125 11:37:35.329813 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:37:35 crc kubenswrapper[4706]: I1125 11:37:35.329880 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:37:35 crc kubenswrapper[4706]: I1125 11:37:35.329946 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:37:35Z","lastTransitionTime":"2025-11-25T11:37:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:37:35 crc kubenswrapper[4706]: I1125 11:37:35.432374 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:37:35 crc kubenswrapper[4706]: I1125 11:37:35.432694 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:37:35 crc kubenswrapper[4706]: I1125 11:37:35.432784 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:37:35 crc kubenswrapper[4706]: I1125 11:37:35.432897 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:37:35 crc kubenswrapper[4706]: I1125 11:37:35.432982 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:37:35Z","lastTransitionTime":"2025-11-25T11:37:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:37:35 crc kubenswrapper[4706]: I1125 11:37:35.536119 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:37:35 crc kubenswrapper[4706]: I1125 11:37:35.536418 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:37:35 crc kubenswrapper[4706]: I1125 11:37:35.536526 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:37:35 crc kubenswrapper[4706]: I1125 11:37:35.536677 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:37:35 crc kubenswrapper[4706]: I1125 11:37:35.536769 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:37:35Z","lastTransitionTime":"2025-11-25T11:37:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:37:35 crc kubenswrapper[4706]: I1125 11:37:35.639182 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:37:35 crc kubenswrapper[4706]: I1125 11:37:35.639248 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:37:35 crc kubenswrapper[4706]: I1125 11:37:35.639259 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:37:35 crc kubenswrapper[4706]: I1125 11:37:35.639282 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:37:35 crc kubenswrapper[4706]: I1125 11:37:35.639294 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:37:35Z","lastTransitionTime":"2025-11-25T11:37:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:37:35 crc kubenswrapper[4706]: I1125 11:37:35.742167 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:37:35 crc kubenswrapper[4706]: I1125 11:37:35.742211 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:37:35 crc kubenswrapper[4706]: I1125 11:37:35.742222 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:37:35 crc kubenswrapper[4706]: I1125 11:37:35.742248 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:37:35 crc kubenswrapper[4706]: I1125 11:37:35.742262 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:37:35Z","lastTransitionTime":"2025-11-25T11:37:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:37:35 crc kubenswrapper[4706]: I1125 11:37:35.844793 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:37:35 crc kubenswrapper[4706]: I1125 11:37:35.844836 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:37:35 crc kubenswrapper[4706]: I1125 11:37:35.844845 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:37:35 crc kubenswrapper[4706]: I1125 11:37:35.844863 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:37:35 crc kubenswrapper[4706]: I1125 11:37:35.844876 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:37:35Z","lastTransitionTime":"2025-11-25T11:37:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 11:37:35 crc kubenswrapper[4706]: I1125 11:37:35.921175 4706 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-l99rd" Nov 25 11:37:35 crc kubenswrapper[4706]: I1125 11:37:35.921208 4706 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 11:37:35 crc kubenswrapper[4706]: I1125 11:37:35.921325 4706 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 11:37:35 crc kubenswrapper[4706]: E1125 11:37:35.921376 4706 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-l99rd" podUID="14d69237-a4b7-43ea-ac81-f165eb532669" Nov 25 11:37:35 crc kubenswrapper[4706]: I1125 11:37:35.921594 4706 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 11:37:35 crc kubenswrapper[4706]: E1125 11:37:35.921727 4706 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 25 11:37:35 crc kubenswrapper[4706]: E1125 11:37:35.921949 4706 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 25 11:37:35 crc kubenswrapper[4706]: E1125 11:37:35.922049 4706 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 25 11:37:35 crc kubenswrapper[4706]: I1125 11:37:35.947834 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:37:35 crc kubenswrapper[4706]: I1125 11:37:35.947893 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:37:35 crc kubenswrapper[4706]: I1125 11:37:35.947905 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:37:35 crc kubenswrapper[4706]: I1125 11:37:35.947927 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:37:35 crc kubenswrapper[4706]: I1125 11:37:35.947940 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:37:35Z","lastTransitionTime":"2025-11-25T11:37:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:37:36 crc kubenswrapper[4706]: I1125 11:37:36.050526 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:37:36 crc kubenswrapper[4706]: I1125 11:37:36.050586 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:37:36 crc kubenswrapper[4706]: I1125 11:37:36.050597 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:37:36 crc kubenswrapper[4706]: I1125 11:37:36.050618 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:37:36 crc kubenswrapper[4706]: I1125 11:37:36.050633 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:37:36Z","lastTransitionTime":"2025-11-25T11:37:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:37:36 crc kubenswrapper[4706]: I1125 11:37:36.154449 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:37:36 crc kubenswrapper[4706]: I1125 11:37:36.154497 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:37:36 crc kubenswrapper[4706]: I1125 11:37:36.154509 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:37:36 crc kubenswrapper[4706]: I1125 11:37:36.154532 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:37:36 crc kubenswrapper[4706]: I1125 11:37:36.154549 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:37:36Z","lastTransitionTime":"2025-11-25T11:37:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:37:36 crc kubenswrapper[4706]: I1125 11:37:36.257518 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:37:36 crc kubenswrapper[4706]: I1125 11:37:36.257574 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:37:36 crc kubenswrapper[4706]: I1125 11:37:36.257587 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:37:36 crc kubenswrapper[4706]: I1125 11:37:36.257610 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:37:36 crc kubenswrapper[4706]: I1125 11:37:36.257624 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:37:36Z","lastTransitionTime":"2025-11-25T11:37:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:37:36 crc kubenswrapper[4706]: I1125 11:37:36.359980 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:37:36 crc kubenswrapper[4706]: I1125 11:37:36.360020 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:37:36 crc kubenswrapper[4706]: I1125 11:37:36.360037 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:37:36 crc kubenswrapper[4706]: I1125 11:37:36.360058 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:37:36 crc kubenswrapper[4706]: I1125 11:37:36.360069 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:37:36Z","lastTransitionTime":"2025-11-25T11:37:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:37:36 crc kubenswrapper[4706]: I1125 11:37:36.464156 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:37:36 crc kubenswrapper[4706]: I1125 11:37:36.464215 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:37:36 crc kubenswrapper[4706]: I1125 11:37:36.464228 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:37:36 crc kubenswrapper[4706]: I1125 11:37:36.464249 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:37:36 crc kubenswrapper[4706]: I1125 11:37:36.464263 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:37:36Z","lastTransitionTime":"2025-11-25T11:37:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:37:36 crc kubenswrapper[4706]: I1125 11:37:36.567127 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:37:36 crc kubenswrapper[4706]: I1125 11:37:36.567171 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:37:36 crc kubenswrapper[4706]: I1125 11:37:36.567181 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:37:36 crc kubenswrapper[4706]: I1125 11:37:36.567203 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:37:36 crc kubenswrapper[4706]: I1125 11:37:36.567214 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:37:36Z","lastTransitionTime":"2025-11-25T11:37:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:37:36 crc kubenswrapper[4706]: I1125 11:37:36.669726 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:37:36 crc kubenswrapper[4706]: I1125 11:37:36.669787 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:37:36 crc kubenswrapper[4706]: I1125 11:37:36.669803 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:37:36 crc kubenswrapper[4706]: I1125 11:37:36.669824 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:37:36 crc kubenswrapper[4706]: I1125 11:37:36.669834 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:37:36Z","lastTransitionTime":"2025-11-25T11:37:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:37:36 crc kubenswrapper[4706]: I1125 11:37:36.773094 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:37:36 crc kubenswrapper[4706]: I1125 11:37:36.773150 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:37:36 crc kubenswrapper[4706]: I1125 11:37:36.773166 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:37:36 crc kubenswrapper[4706]: I1125 11:37:36.773335 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:37:36 crc kubenswrapper[4706]: I1125 11:37:36.773392 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:37:36Z","lastTransitionTime":"2025-11-25T11:37:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:37:36 crc kubenswrapper[4706]: I1125 11:37:36.876096 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:37:36 crc kubenswrapper[4706]: I1125 11:37:36.876135 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:37:36 crc kubenswrapper[4706]: I1125 11:37:36.876149 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:37:36 crc kubenswrapper[4706]: I1125 11:37:36.876168 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:37:36 crc kubenswrapper[4706]: I1125 11:37:36.876181 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:37:36Z","lastTransitionTime":"2025-11-25T11:37:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:37:36 crc kubenswrapper[4706]: I1125 11:37:36.979104 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:37:36 crc kubenswrapper[4706]: I1125 11:37:36.979149 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:37:36 crc kubenswrapper[4706]: I1125 11:37:36.979160 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:37:36 crc kubenswrapper[4706]: I1125 11:37:36.979178 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:37:36 crc kubenswrapper[4706]: I1125 11:37:36.979192 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:37:36Z","lastTransitionTime":"2025-11-25T11:37:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:37:37 crc kubenswrapper[4706]: I1125 11:37:37.081608 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:37:37 crc kubenswrapper[4706]: I1125 11:37:37.081649 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:37:37 crc kubenswrapper[4706]: I1125 11:37:37.081658 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:37:37 crc kubenswrapper[4706]: I1125 11:37:37.081674 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:37:37 crc kubenswrapper[4706]: I1125 11:37:37.081683 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:37:37Z","lastTransitionTime":"2025-11-25T11:37:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:37:37 crc kubenswrapper[4706]: I1125 11:37:37.184259 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:37:37 crc kubenswrapper[4706]: I1125 11:37:37.184319 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:37:37 crc kubenswrapper[4706]: I1125 11:37:37.184328 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:37:37 crc kubenswrapper[4706]: I1125 11:37:37.184345 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:37:37 crc kubenswrapper[4706]: I1125 11:37:37.184356 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:37:37Z","lastTransitionTime":"2025-11-25T11:37:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:37:37 crc kubenswrapper[4706]: I1125 11:37:37.286576 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:37:37 crc kubenswrapper[4706]: I1125 11:37:37.286721 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:37:37 crc kubenswrapper[4706]: I1125 11:37:37.286734 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:37:37 crc kubenswrapper[4706]: I1125 11:37:37.286752 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:37:37 crc kubenswrapper[4706]: I1125 11:37:37.286765 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:37:37Z","lastTransitionTime":"2025-11-25T11:37:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:37:37 crc kubenswrapper[4706]: I1125 11:37:37.389607 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:37:37 crc kubenswrapper[4706]: I1125 11:37:37.389638 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:37:37 crc kubenswrapper[4706]: I1125 11:37:37.389648 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:37:37 crc kubenswrapper[4706]: I1125 11:37:37.389665 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:37:37 crc kubenswrapper[4706]: I1125 11:37:37.389675 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:37:37Z","lastTransitionTime":"2025-11-25T11:37:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:37:37 crc kubenswrapper[4706]: I1125 11:37:37.492526 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:37:37 crc kubenswrapper[4706]: I1125 11:37:37.492571 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:37:37 crc kubenswrapper[4706]: I1125 11:37:37.492589 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:37:37 crc kubenswrapper[4706]: I1125 11:37:37.492612 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:37:37 crc kubenswrapper[4706]: I1125 11:37:37.492628 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:37:37Z","lastTransitionTime":"2025-11-25T11:37:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:37:37 crc kubenswrapper[4706]: I1125 11:37:37.595690 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:37:37 crc kubenswrapper[4706]: I1125 11:37:37.595729 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:37:37 crc kubenswrapper[4706]: I1125 11:37:37.595738 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:37:37 crc kubenswrapper[4706]: I1125 11:37:37.595753 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:37:37 crc kubenswrapper[4706]: I1125 11:37:37.595765 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:37:37Z","lastTransitionTime":"2025-11-25T11:37:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:37:37 crc kubenswrapper[4706]: I1125 11:37:37.698116 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:37:37 crc kubenswrapper[4706]: I1125 11:37:37.698168 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:37:37 crc kubenswrapper[4706]: I1125 11:37:37.698181 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:37:37 crc kubenswrapper[4706]: I1125 11:37:37.698206 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:37:37 crc kubenswrapper[4706]: I1125 11:37:37.698224 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:37:37Z","lastTransitionTime":"2025-11-25T11:37:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:37:37 crc kubenswrapper[4706]: I1125 11:37:37.801312 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:37:37 crc kubenswrapper[4706]: I1125 11:37:37.801357 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:37:37 crc kubenswrapper[4706]: I1125 11:37:37.801368 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:37:37 crc kubenswrapper[4706]: I1125 11:37:37.801385 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:37:37 crc kubenswrapper[4706]: I1125 11:37:37.801395 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:37:37Z","lastTransitionTime":"2025-11-25T11:37:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:37:37 crc kubenswrapper[4706]: I1125 11:37:37.904450 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:37:37 crc kubenswrapper[4706]: I1125 11:37:37.904501 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:37:37 crc kubenswrapper[4706]: I1125 11:37:37.904514 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:37:37 crc kubenswrapper[4706]: I1125 11:37:37.904534 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:37:37 crc kubenswrapper[4706]: I1125 11:37:37.904546 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:37:37Z","lastTransitionTime":"2025-11-25T11:37:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 11:37:37 crc kubenswrapper[4706]: I1125 11:37:37.922025 4706 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 11:37:37 crc kubenswrapper[4706]: I1125 11:37:37.922079 4706 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-l99rd" Nov 25 11:37:37 crc kubenswrapper[4706]: I1125 11:37:37.922099 4706 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 11:37:37 crc kubenswrapper[4706]: E1125 11:37:37.922174 4706 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 25 11:37:37 crc kubenswrapper[4706]: I1125 11:37:37.922292 4706 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 11:37:37 crc kubenswrapper[4706]: E1125 11:37:37.922280 4706 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-l99rd" podUID="14d69237-a4b7-43ea-ac81-f165eb532669" Nov 25 11:37:37 crc kubenswrapper[4706]: E1125 11:37:37.922373 4706 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 25 11:37:37 crc kubenswrapper[4706]: E1125 11:37:37.922443 4706 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 25 11:37:38 crc kubenswrapper[4706]: I1125 11:37:38.007260 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:37:38 crc kubenswrapper[4706]: I1125 11:37:38.007365 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:37:38 crc kubenswrapper[4706]: I1125 11:37:38.007377 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:37:38 crc kubenswrapper[4706]: I1125 11:37:38.007398 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:37:38 crc kubenswrapper[4706]: I1125 11:37:38.007409 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:37:38Z","lastTransitionTime":"2025-11-25T11:37:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:37:38 crc kubenswrapper[4706]: I1125 11:37:38.110483 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:37:38 crc kubenswrapper[4706]: I1125 11:37:38.110530 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:37:38 crc kubenswrapper[4706]: I1125 11:37:38.110542 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:37:38 crc kubenswrapper[4706]: I1125 11:37:38.110562 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:37:38 crc kubenswrapper[4706]: I1125 11:37:38.110574 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:37:38Z","lastTransitionTime":"2025-11-25T11:37:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:37:38 crc kubenswrapper[4706]: I1125 11:37:38.213219 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:37:38 crc kubenswrapper[4706]: I1125 11:37:38.213278 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:37:38 crc kubenswrapper[4706]: I1125 11:37:38.213291 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:37:38 crc kubenswrapper[4706]: I1125 11:37:38.213332 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:37:38 crc kubenswrapper[4706]: I1125 11:37:38.213349 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:37:38Z","lastTransitionTime":"2025-11-25T11:37:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:37:38 crc kubenswrapper[4706]: I1125 11:37:38.315664 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:37:38 crc kubenswrapper[4706]: I1125 11:37:38.315926 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:37:38 crc kubenswrapper[4706]: I1125 11:37:38.316009 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:37:38 crc kubenswrapper[4706]: I1125 11:37:38.316109 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:37:38 crc kubenswrapper[4706]: I1125 11:37:38.316195 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:37:38Z","lastTransitionTime":"2025-11-25T11:37:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:37:38 crc kubenswrapper[4706]: I1125 11:37:38.422479 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:37:38 crc kubenswrapper[4706]: I1125 11:37:38.422535 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:37:38 crc kubenswrapper[4706]: I1125 11:37:38.422549 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:37:38 crc kubenswrapper[4706]: I1125 11:37:38.422593 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:37:38 crc kubenswrapper[4706]: I1125 11:37:38.422607 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:37:38Z","lastTransitionTime":"2025-11-25T11:37:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:37:38 crc kubenswrapper[4706]: I1125 11:37:38.525474 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:37:38 crc kubenswrapper[4706]: I1125 11:37:38.525527 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:37:38 crc kubenswrapper[4706]: I1125 11:37:38.525541 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:37:38 crc kubenswrapper[4706]: I1125 11:37:38.525562 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:37:38 crc kubenswrapper[4706]: I1125 11:37:38.525578 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:37:38Z","lastTransitionTime":"2025-11-25T11:37:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:37:38 crc kubenswrapper[4706]: I1125 11:37:38.628146 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:37:38 crc kubenswrapper[4706]: I1125 11:37:38.628448 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:37:38 crc kubenswrapper[4706]: I1125 11:37:38.628733 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:37:38 crc kubenswrapper[4706]: I1125 11:37:38.628901 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:37:38 crc kubenswrapper[4706]: I1125 11:37:38.629079 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:37:38Z","lastTransitionTime":"2025-11-25T11:37:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:37:38 crc kubenswrapper[4706]: I1125 11:37:38.678257 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:37:38 crc kubenswrapper[4706]: I1125 11:37:38.678333 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:37:38 crc kubenswrapper[4706]: I1125 11:37:38.678344 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:37:38 crc kubenswrapper[4706]: I1125 11:37:38.678365 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:37:38 crc kubenswrapper[4706]: I1125 11:37:38.678378 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:37:38Z","lastTransitionTime":"2025-11-25T11:37:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:37:38 crc kubenswrapper[4706]: E1125 11:37:38.691765 4706 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T11:37:38Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T11:37:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T11:37:38Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T11:37:38Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T11:37:38Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T11:37:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T11:37:38Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T11:37:38Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"30198dc8-e58c-4847-a541-041da1924c5c\\\",\\\"systemUUID\\\":\\\"7dac62ec-3979-4862-b1af-b63212907795\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:37:38Z is after 2025-08-24T17:21:41Z" Nov 25 11:37:38 crc kubenswrapper[4706]: I1125 11:37:38.695738 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:37:38 crc kubenswrapper[4706]: I1125 11:37:38.695874 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:37:38 crc kubenswrapper[4706]: I1125 11:37:38.695949 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:37:38 crc kubenswrapper[4706]: I1125 11:37:38.696025 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:37:38 crc kubenswrapper[4706]: I1125 11:37:38.696114 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:37:38Z","lastTransitionTime":"2025-11-25T11:37:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:37:38 crc kubenswrapper[4706]: E1125 11:37:38.709675 4706 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T11:37:38Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T11:37:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T11:37:38Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T11:37:38Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T11:37:38Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T11:37:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T11:37:38Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T11:37:38Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"30198dc8-e58c-4847-a541-041da1924c5c\\\",\\\"systemUUID\\\":\\\"7dac62ec-3979-4862-b1af-b63212907795\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:37:38Z is after 2025-08-24T17:21:41Z" Nov 25 11:37:38 crc kubenswrapper[4706]: I1125 11:37:38.713522 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:37:38 crc kubenswrapper[4706]: I1125 11:37:38.713577 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:37:38 crc kubenswrapper[4706]: I1125 11:37:38.713591 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:37:38 crc kubenswrapper[4706]: I1125 11:37:38.713613 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:37:38 crc kubenswrapper[4706]: I1125 11:37:38.713627 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:37:38Z","lastTransitionTime":"2025-11-25T11:37:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:37:38 crc kubenswrapper[4706]: E1125 11:37:38.761804 4706 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T11:37:38Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T11:37:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T11:37:38Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T11:37:38Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T11:37:38Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T11:37:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T11:37:38Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T11:37:38Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"30198dc8-e58c-4847-a541-041da1924c5c\\\",\\\"systemUUID\\\":\\\"7dac62ec-3979-4862-b1af-b63212907795\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:37:38Z is after 2025-08-24T17:21:41Z" Nov 25 11:37:38 crc kubenswrapper[4706]: E1125 11:37:38.762009 4706 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Nov 25 11:37:38 crc kubenswrapper[4706]: I1125 11:37:38.763996 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:37:38 crc kubenswrapper[4706]: I1125 11:37:38.764045 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:37:38 crc kubenswrapper[4706]: I1125 11:37:38.764055 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:37:38 crc kubenswrapper[4706]: I1125 11:37:38.764072 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:37:38 crc kubenswrapper[4706]: I1125 11:37:38.764084 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:37:38Z","lastTransitionTime":"2025-11-25T11:37:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:37:38 crc kubenswrapper[4706]: I1125 11:37:38.867924 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:37:38 crc kubenswrapper[4706]: I1125 11:37:38.867987 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:37:38 crc kubenswrapper[4706]: I1125 11:37:38.868001 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:37:38 crc kubenswrapper[4706]: I1125 11:37:38.868022 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:37:38 crc kubenswrapper[4706]: I1125 11:37:38.868036 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:37:38Z","lastTransitionTime":"2025-11-25T11:37:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:37:38 crc kubenswrapper[4706]: I1125 11:37:38.969814 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:37:38 crc kubenswrapper[4706]: I1125 11:37:38.969851 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:37:38 crc kubenswrapper[4706]: I1125 11:37:38.969862 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:37:38 crc kubenswrapper[4706]: I1125 11:37:38.969881 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:37:38 crc kubenswrapper[4706]: I1125 11:37:38.969893 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:37:38Z","lastTransitionTime":"2025-11-25T11:37:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:37:39 crc kubenswrapper[4706]: I1125 11:37:39.072348 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:37:39 crc kubenswrapper[4706]: I1125 11:37:39.072397 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:37:39 crc kubenswrapper[4706]: I1125 11:37:39.072408 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:37:39 crc kubenswrapper[4706]: I1125 11:37:39.072425 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:37:39 crc kubenswrapper[4706]: I1125 11:37:39.072445 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:37:39Z","lastTransitionTime":"2025-11-25T11:37:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:37:39 crc kubenswrapper[4706]: I1125 11:37:39.175226 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:37:39 crc kubenswrapper[4706]: I1125 11:37:39.175289 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:37:39 crc kubenswrapper[4706]: I1125 11:37:39.175330 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:37:39 crc kubenswrapper[4706]: I1125 11:37:39.175347 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:37:39 crc kubenswrapper[4706]: I1125 11:37:39.175359 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:37:39Z","lastTransitionTime":"2025-11-25T11:37:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:37:39 crc kubenswrapper[4706]: I1125 11:37:39.277283 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:37:39 crc kubenswrapper[4706]: I1125 11:37:39.277359 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:37:39 crc kubenswrapper[4706]: I1125 11:37:39.277369 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:37:39 crc kubenswrapper[4706]: I1125 11:37:39.277387 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:37:39 crc kubenswrapper[4706]: I1125 11:37:39.277398 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:37:39Z","lastTransitionTime":"2025-11-25T11:37:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:37:39 crc kubenswrapper[4706]: I1125 11:37:39.380100 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:37:39 crc kubenswrapper[4706]: I1125 11:37:39.380453 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:37:39 crc kubenswrapper[4706]: I1125 11:37:39.380557 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:37:39 crc kubenswrapper[4706]: I1125 11:37:39.380652 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:37:39 crc kubenswrapper[4706]: I1125 11:37:39.380731 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:37:39Z","lastTransitionTime":"2025-11-25T11:37:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:37:39 crc kubenswrapper[4706]: I1125 11:37:39.483407 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:37:39 crc kubenswrapper[4706]: I1125 11:37:39.483464 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:37:39 crc kubenswrapper[4706]: I1125 11:37:39.483473 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:37:39 crc kubenswrapper[4706]: I1125 11:37:39.483490 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:37:39 crc kubenswrapper[4706]: I1125 11:37:39.483501 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:37:39Z","lastTransitionTime":"2025-11-25T11:37:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:37:39 crc kubenswrapper[4706]: I1125 11:37:39.586565 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:37:39 crc kubenswrapper[4706]: I1125 11:37:39.586613 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:37:39 crc kubenswrapper[4706]: I1125 11:37:39.586625 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:37:39 crc kubenswrapper[4706]: I1125 11:37:39.586647 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:37:39 crc kubenswrapper[4706]: I1125 11:37:39.586662 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:37:39Z","lastTransitionTime":"2025-11-25T11:37:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:37:39 crc kubenswrapper[4706]: I1125 11:37:39.632839 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/14d69237-a4b7-43ea-ac81-f165eb532669-metrics-certs\") pod \"network-metrics-daemon-l99rd\" (UID: \"14d69237-a4b7-43ea-ac81-f165eb532669\") " pod="openshift-multus/network-metrics-daemon-l99rd" Nov 25 11:37:39 crc kubenswrapper[4706]: E1125 11:37:39.633147 4706 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Nov 25 11:37:39 crc kubenswrapper[4706]: E1125 11:37:39.633269 4706 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/14d69237-a4b7-43ea-ac81-f165eb532669-metrics-certs podName:14d69237-a4b7-43ea-ac81-f165eb532669 nodeName:}" failed. No retries permitted until 2025-11-25 11:38:11.633242033 +0000 UTC m=+100.547799584 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/14d69237-a4b7-43ea-ac81-f165eb532669-metrics-certs") pod "network-metrics-daemon-l99rd" (UID: "14d69237-a4b7-43ea-ac81-f165eb532669") : object "openshift-multus"/"metrics-daemon-secret" not registered Nov 25 11:37:39 crc kubenswrapper[4706]: I1125 11:37:39.689643 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:37:39 crc kubenswrapper[4706]: I1125 11:37:39.690135 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:37:39 crc kubenswrapper[4706]: I1125 11:37:39.690207 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:37:39 crc kubenswrapper[4706]: I1125 11:37:39.690278 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:37:39 crc kubenswrapper[4706]: I1125 11:37:39.690370 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:37:39Z","lastTransitionTime":"2025-11-25T11:37:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:37:39 crc kubenswrapper[4706]: I1125 11:37:39.793615 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:37:39 crc kubenswrapper[4706]: I1125 11:37:39.793679 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:37:39 crc kubenswrapper[4706]: I1125 11:37:39.793688 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:37:39 crc kubenswrapper[4706]: I1125 11:37:39.793705 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:37:39 crc kubenswrapper[4706]: I1125 11:37:39.793719 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:37:39Z","lastTransitionTime":"2025-11-25T11:37:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:37:39 crc kubenswrapper[4706]: I1125 11:37:39.896201 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:37:39 crc kubenswrapper[4706]: I1125 11:37:39.896512 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:37:39 crc kubenswrapper[4706]: I1125 11:37:39.896589 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:37:39 crc kubenswrapper[4706]: I1125 11:37:39.896674 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:37:39 crc kubenswrapper[4706]: I1125 11:37:39.896731 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:37:39Z","lastTransitionTime":"2025-11-25T11:37:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 11:37:39 crc kubenswrapper[4706]: I1125 11:37:39.922005 4706 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 11:37:39 crc kubenswrapper[4706]: E1125 11:37:39.922234 4706 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 25 11:37:39 crc kubenswrapper[4706]: I1125 11:37:39.922026 4706 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 11:37:39 crc kubenswrapper[4706]: E1125 11:37:39.922366 4706 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 25 11:37:39 crc kubenswrapper[4706]: I1125 11:37:39.922016 4706 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 11:37:39 crc kubenswrapper[4706]: I1125 11:37:39.922034 4706 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-l99rd" Nov 25 11:37:39 crc kubenswrapper[4706]: E1125 11:37:39.922445 4706 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 25 11:37:39 crc kubenswrapper[4706]: E1125 11:37:39.922561 4706 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-l99rd" podUID="14d69237-a4b7-43ea-ac81-f165eb532669" Nov 25 11:37:39 crc kubenswrapper[4706]: I1125 11:37:39.999547 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:37:39 crc kubenswrapper[4706]: I1125 11:37:39.999928 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:37:40 crc kubenswrapper[4706]: I1125 11:37:40.000178 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:37:40 crc kubenswrapper[4706]: I1125 11:37:40.000405 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:37:40 crc kubenswrapper[4706]: I1125 11:37:40.000605 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:37:40Z","lastTransitionTime":"2025-11-25T11:37:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:37:40 crc kubenswrapper[4706]: I1125 11:37:40.103658 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:37:40 crc kubenswrapper[4706]: I1125 11:37:40.103705 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:37:40 crc kubenswrapper[4706]: I1125 11:37:40.103718 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:37:40 crc kubenswrapper[4706]: I1125 11:37:40.103738 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:37:40 crc kubenswrapper[4706]: I1125 11:37:40.103751 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:37:40Z","lastTransitionTime":"2025-11-25T11:37:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:37:40 crc kubenswrapper[4706]: I1125 11:37:40.206617 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:37:40 crc kubenswrapper[4706]: I1125 11:37:40.206673 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:37:40 crc kubenswrapper[4706]: I1125 11:37:40.206685 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:37:40 crc kubenswrapper[4706]: I1125 11:37:40.206707 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:37:40 crc kubenswrapper[4706]: I1125 11:37:40.206719 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:37:40Z","lastTransitionTime":"2025-11-25T11:37:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:37:40 crc kubenswrapper[4706]: I1125 11:37:40.309378 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:37:40 crc kubenswrapper[4706]: I1125 11:37:40.309683 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:37:40 crc kubenswrapper[4706]: I1125 11:37:40.309755 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:37:40 crc kubenswrapper[4706]: I1125 11:37:40.309836 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:37:40 crc kubenswrapper[4706]: I1125 11:37:40.309923 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:37:40Z","lastTransitionTime":"2025-11-25T11:37:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:37:40 crc kubenswrapper[4706]: I1125 11:37:40.413540 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:37:40 crc kubenswrapper[4706]: I1125 11:37:40.414027 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:37:40 crc kubenswrapper[4706]: I1125 11:37:40.414154 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:37:40 crc kubenswrapper[4706]: I1125 11:37:40.414256 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:37:40 crc kubenswrapper[4706]: I1125 11:37:40.414375 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:37:40Z","lastTransitionTime":"2025-11-25T11:37:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:37:40 crc kubenswrapper[4706]: I1125 11:37:40.518333 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:37:40 crc kubenswrapper[4706]: I1125 11:37:40.518388 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:37:40 crc kubenswrapper[4706]: I1125 11:37:40.518401 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:37:40 crc kubenswrapper[4706]: I1125 11:37:40.518422 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:37:40 crc kubenswrapper[4706]: I1125 11:37:40.518435 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:37:40Z","lastTransitionTime":"2025-11-25T11:37:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:37:40 crc kubenswrapper[4706]: I1125 11:37:40.622149 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:37:40 crc kubenswrapper[4706]: I1125 11:37:40.622207 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:37:40 crc kubenswrapper[4706]: I1125 11:37:40.622219 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:37:40 crc kubenswrapper[4706]: I1125 11:37:40.622285 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:37:40 crc kubenswrapper[4706]: I1125 11:37:40.622320 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:37:40Z","lastTransitionTime":"2025-11-25T11:37:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:37:40 crc kubenswrapper[4706]: I1125 11:37:40.726521 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:37:40 crc kubenswrapper[4706]: I1125 11:37:40.726569 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:37:40 crc kubenswrapper[4706]: I1125 11:37:40.726581 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:37:40 crc kubenswrapper[4706]: I1125 11:37:40.726598 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:37:40 crc kubenswrapper[4706]: I1125 11:37:40.726610 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:37:40Z","lastTransitionTime":"2025-11-25T11:37:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:37:40 crc kubenswrapper[4706]: I1125 11:37:40.829544 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:37:40 crc kubenswrapper[4706]: I1125 11:37:40.829605 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:37:40 crc kubenswrapper[4706]: I1125 11:37:40.829619 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:37:40 crc kubenswrapper[4706]: I1125 11:37:40.829640 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:37:40 crc kubenswrapper[4706]: I1125 11:37:40.829653 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:37:40Z","lastTransitionTime":"2025-11-25T11:37:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:37:40 crc kubenswrapper[4706]: I1125 11:37:40.932566 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:37:40 crc kubenswrapper[4706]: I1125 11:37:40.932611 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:37:40 crc kubenswrapper[4706]: I1125 11:37:40.932623 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:37:40 crc kubenswrapper[4706]: I1125 11:37:40.932642 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:37:40 crc kubenswrapper[4706]: I1125 11:37:40.932654 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:37:40Z","lastTransitionTime":"2025-11-25T11:37:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:37:41 crc kubenswrapper[4706]: I1125 11:37:41.035427 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:37:41 crc kubenswrapper[4706]: I1125 11:37:41.035471 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:37:41 crc kubenswrapper[4706]: I1125 11:37:41.035481 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:37:41 crc kubenswrapper[4706]: I1125 11:37:41.035502 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:37:41 crc kubenswrapper[4706]: I1125 11:37:41.035513 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:37:41Z","lastTransitionTime":"2025-11-25T11:37:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:37:41 crc kubenswrapper[4706]: I1125 11:37:41.138053 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:37:41 crc kubenswrapper[4706]: I1125 11:37:41.138098 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:37:41 crc kubenswrapper[4706]: I1125 11:37:41.138110 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:37:41 crc kubenswrapper[4706]: I1125 11:37:41.138130 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:37:41 crc kubenswrapper[4706]: I1125 11:37:41.138142 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:37:41Z","lastTransitionTime":"2025-11-25T11:37:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:37:41 crc kubenswrapper[4706]: I1125 11:37:41.240925 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:37:41 crc kubenswrapper[4706]: I1125 11:37:41.240967 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:37:41 crc kubenswrapper[4706]: I1125 11:37:41.240978 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:37:41 crc kubenswrapper[4706]: I1125 11:37:41.240996 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:37:41 crc kubenswrapper[4706]: I1125 11:37:41.241008 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:37:41Z","lastTransitionTime":"2025-11-25T11:37:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:37:41 crc kubenswrapper[4706]: I1125 11:37:41.343394 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:37:41 crc kubenswrapper[4706]: I1125 11:37:41.343436 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:37:41 crc kubenswrapper[4706]: I1125 11:37:41.343449 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:37:41 crc kubenswrapper[4706]: I1125 11:37:41.343466 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:37:41 crc kubenswrapper[4706]: I1125 11:37:41.343478 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:37:41Z","lastTransitionTime":"2025-11-25T11:37:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:37:41 crc kubenswrapper[4706]: I1125 11:37:41.446109 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:37:41 crc kubenswrapper[4706]: I1125 11:37:41.446149 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:37:41 crc kubenswrapper[4706]: I1125 11:37:41.446161 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:37:41 crc kubenswrapper[4706]: I1125 11:37:41.446179 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:37:41 crc kubenswrapper[4706]: I1125 11:37:41.446194 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:37:41Z","lastTransitionTime":"2025-11-25T11:37:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:37:41 crc kubenswrapper[4706]: I1125 11:37:41.549091 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:37:41 crc kubenswrapper[4706]: I1125 11:37:41.549128 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:37:41 crc kubenswrapper[4706]: I1125 11:37:41.549149 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:37:41 crc kubenswrapper[4706]: I1125 11:37:41.549173 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:37:41 crc kubenswrapper[4706]: I1125 11:37:41.549189 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:37:41Z","lastTransitionTime":"2025-11-25T11:37:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:37:41 crc kubenswrapper[4706]: I1125 11:37:41.651964 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:37:41 crc kubenswrapper[4706]: I1125 11:37:41.652040 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:37:41 crc kubenswrapper[4706]: I1125 11:37:41.652049 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:37:41 crc kubenswrapper[4706]: I1125 11:37:41.652068 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:37:41 crc kubenswrapper[4706]: I1125 11:37:41.652080 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:37:41Z","lastTransitionTime":"2025-11-25T11:37:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:37:41 crc kubenswrapper[4706]: I1125 11:37:41.755004 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:37:41 crc kubenswrapper[4706]: I1125 11:37:41.755048 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:37:41 crc kubenswrapper[4706]: I1125 11:37:41.755057 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:37:41 crc kubenswrapper[4706]: I1125 11:37:41.755077 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:37:41 crc kubenswrapper[4706]: I1125 11:37:41.755130 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:37:41Z","lastTransitionTime":"2025-11-25T11:37:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:37:41 crc kubenswrapper[4706]: I1125 11:37:41.858204 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:37:41 crc kubenswrapper[4706]: I1125 11:37:41.858291 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:37:41 crc kubenswrapper[4706]: I1125 11:37:41.858330 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:37:41 crc kubenswrapper[4706]: I1125 11:37:41.858356 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:37:41 crc kubenswrapper[4706]: I1125 11:37:41.858373 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:37:41Z","lastTransitionTime":"2025-11-25T11:37:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 11:37:41 crc kubenswrapper[4706]: I1125 11:37:41.921255 4706 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 11:37:41 crc kubenswrapper[4706]: I1125 11:37:41.921319 4706 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 11:37:41 crc kubenswrapper[4706]: I1125 11:37:41.921319 4706 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 11:37:41 crc kubenswrapper[4706]: E1125 11:37:41.921467 4706 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 25 11:37:41 crc kubenswrapper[4706]: E1125 11:37:41.921585 4706 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 25 11:37:41 crc kubenswrapper[4706]: E1125 11:37:41.921740 4706 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 25 11:37:41 crc kubenswrapper[4706]: I1125 11:37:41.922020 4706 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-l99rd" Nov 25 11:37:41 crc kubenswrapper[4706]: E1125 11:37:41.922760 4706 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-l99rd" podUID="14d69237-a4b7-43ea-ac81-f165eb532669" Nov 25 11:37:41 crc kubenswrapper[4706]: I1125 11:37:41.934102 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad79bed891e80837fc120b01cb2b41a16493f2f5281c83a6bb489cc17c6da995\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25
T11:36:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:37:41Z is after 2025-08-24T17:21:41Z" Nov 25 11:37:41 crc kubenswrapper[4706]: I1125 11:37:41.949844 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-lpc7s" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3ec2e656-a68d-4339-92d5-0c157f7f7783\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c3a1481dd8cb88b79d8addfbfd40caf18850769e4492c2af316105b7f6779f9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w54mf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T11:36:56Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lpc7s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:37:41Z is after 2025-08-24T17:21:41Z" Nov 25 11:37:41 crc kubenswrapper[4706]: I1125 11:37:41.960390 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:37:41 crc kubenswrapper[4706]: I1125 11:37:41.960431 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:37:41 crc kubenswrapper[4706]: I1125 11:37:41.960443 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:37:41 crc kubenswrapper[4706]: I1125 11:37:41.960464 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:37:41 crc kubenswrapper[4706]: I1125 11:37:41.960476 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:37:41Z","lastTransitionTime":"2025-11-25T11:37:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:37:41 crc kubenswrapper[4706]: I1125 11:37:41.965810 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:51Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:37:41Z is after 2025-08-24T17:21:41Z" Nov 25 11:37:41 crc kubenswrapper[4706]: I1125 11:37:41.981082 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-s47nr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9912058e-28f5-4cec-9eeb-03e37e0dc5c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d03353478b53d9441951702b66365bb3a08ad9c509347472bbb31049851435a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wfqx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T11:36:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-s47nr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:37:41Z is after 2025-08-24T17:21:41Z" Nov 25 11:37:42 crc kubenswrapper[4706]: I1125 11:37:42.008097 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-q9rpr" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f1218bae-4153-4490-8847-ab2d07ca0ab6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:53Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:53Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da5cea02464a703174faaa2a8a7dc6ba3c26bca96be0219f7304d81aba5be54e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b55sf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e92e9ade6889e5400b3c3ddff066aa544d425cf0637b75071678b8c63f8e35f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b55sf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca28080773ed8c026159b2309297e1c8ccd7cf79c4c19e3a62d89bc5a95851fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b55sf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://86d79d5837993b0bfb40c7114fd69f45a9bfd2e956b5b0fe062706e920fecd48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:55Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b55sf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7df3bf6c507e0fd5fb0f32a8785d67c96f47255fdc5d2aafb8838260ac334d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b55sf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://96aa7fcebdc88f01d2260f95d255244e28c30d422f954da2222a5b7c17d05b96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b55sf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://67aac9b1fc77bcf7bb71812ee95214930edbb62bf5efb82d5128c53fd392a346\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://67aac9b1fc77bcf7bb71812ee95214930edbb62bf5efb82d5128c53fd392a346\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-25T11:37:16Z\\\",\\\"message\\\":\\\"e Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:04 10.217.0.4]} options:{GoMap:map[iface-id-ver:3b6479f0-333b-4a96-9adf-2099afdc2447 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:04 10.217.0.4]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == 
{61897e97-c771-4738-8709-09636387cb00}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI1125 11:37:16.268126 6342 model_client.go:398] Mutate operations generated as: [{Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:61897e97-c771-4738-8709-09636387cb00}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {7e8bb06a-06a5-45bc-a752-26a17d322811}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF1125 11:37:16.268101 6342 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T11:37:15Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-q9rpr_openshift-ovn-kubernetes(f1218bae-4153-4490-8847-ab2d07ca0ab6)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b55sf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://62c923d955013808a55d99cb73f4239900fc83a2f53e1e8cceff3e9bc5768188\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b55sf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56474d5374e1047078d38c60dcbd00f4495bcc0d651a9e75fa70d64e34b10baa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://56474d5374e1047078
d38c60dcbd00f4495bcc0d651a9e75fa70d64e34b10baa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T11:36:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T11:36:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b55sf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T11:36:53Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-q9rpr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:37:42Z is after 2025-08-24T17:21:41Z" Nov 25 11:37:42 crc kubenswrapper[4706]: I1125 11:37:42.028922 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"363ff191-6229-47e9-a7d0-1c72f21e7c61\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://71b496da1a81efbb50a84766e610a6b03e032a4e2cb5a71191395ffb85f6b1f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://83b1d9c60793e3e0b5943d7cccd50656df78c4655b84e12c8dd1ba7d99a7990d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab8621c83015577b9039ac2ba9ce46f8b29f66d77da31a02d179132d923741bd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4d0ce4e175dd8da8d15b26e60ced87ee11dc8079ce730cfbdce1b3f4f08b1d2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-11-25T11:36:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T11:36:32Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:37:42Z is after 2025-08-24T17:21:41Z" Nov 25 11:37:42 crc kubenswrapper[4706]: I1125 11:37:42.045736 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://998291d5af3be798ff4e2f00d043f615e086fef44e541071bbaf781983955ce6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-11-25T11:37:42Z is after 2025-08-24T17:21:41Z" Nov 25 11:37:42 crc kubenswrapper[4706]: I1125 11:37:42.060558 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:51Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:37:42Z is after 2025-08-24T17:21:41Z" Nov 25 11:37:42 crc kubenswrapper[4706]: I1125 11:37:42.062721 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:37:42 crc kubenswrapper[4706]: I1125 11:37:42.062761 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:37:42 crc kubenswrapper[4706]: I1125 11:37:42.062774 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:37:42 crc kubenswrapper[4706]: I1125 11:37:42.062793 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:37:42 crc kubenswrapper[4706]: I1125 11:37:42.062804 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:37:42Z","lastTransitionTime":"2025-11-25T11:37:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 11:37:42 crc kubenswrapper[4706]: I1125 11:37:42.075051 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:37:42Z is after 2025-08-24T17:21:41Z" Nov 25 11:37:42 crc kubenswrapper[4706]: I1125 11:37:42.086923 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b156f76-9878-4527-95c5-27adfffbcd87\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:37:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:37:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b50a8135a692a512f05f3a902977e8b7a505d8346fb6e96c26ffc58d075e902c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7224a1c52df964a792e6197a4f97313b139ffbd6d65820d93e36561e817ddc20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://78068d04cf52a463ca3595227c44918d360266c71afc97c1792e48b004bebe42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0299d89c1a2ea9c2a4bb46691aecd2d86618d3620e7406e1af57e1c03ce50b94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://0299d89c1a2ea9c2a4bb46691aecd2d86618d3620e7406e1af57e1c03ce50b94\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T11:36:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T11:36:33Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T11:36:32Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:37:42Z is after 2025-08-24T17:21:41Z" Nov 25 11:37:42 crc kubenswrapper[4706]: I1125 11:37:42.109469 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"21277b4b-1e5d-4345-ba2a-39957194f021\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e336808761e1c6c5eaa04fd06cbb4d0c0384a2cbd3dfd4c1b3a877e7e0f0c82\\\",\\\"image\\\":\\\"quay.io/op
enshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cfaf9f13d49eb5c52817b0d082263791cc1dca82a23282452f1393dd693ca27a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://634b7b0df29329562f6ead9641186eee129945efc5a2d784ff6474d213b2baea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b3642576d5ecf314b809b90f8a76244e5ea54178f78729eb6521b09b7daa9c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b63b9c87fed8e56acef62af3c5b75cf637a058ada9dd8ef5afc317e99e12162\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\
\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://adfea6c3d1d62c9ce2656cae203bccf32ab19165305db3731e3db92915dd7d4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://adfea6c3d1d62c9ce2656cae203bccf32ab19165305db3731e3db92915dd7d4c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T11:36:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T11:36:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://29e8fd847ab683471a692fda5b7c7d6105db11c5aecf09bb23080c58cd97c06a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://29e8fd847ab683471a692fda5b7c7d6105db11c5aecf09bb23080c58cd97c06a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T11:36:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T11:36:34Z\\\"}}},{\\\"co
ntainerID\\\":\\\"cri-o://a4b1f85fd0291c239854093c0b26f0802291cbe4c4bab384fc69a5d21165343a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a4b1f85fd0291c239854093c0b26f0802291cbe4c4bab384fc69a5d21165343a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T11:36:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T11:36:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T11:36:32Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:37:42Z is after 2025-08-24T17:21:41Z" Nov 25 11:37:42 crc kubenswrapper[4706]: I1125 11:37:42.125287 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://23abd4bcc68d2a090882edb55d0e8569032affe5f4ebf05279e18ba3e9f9d8db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a068e34d29a7f39157ffd6e364ce643f5280f5184c13a281043247117d451364\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:37:42Z is after 2025-08-24T17:21:41Z" Nov 25 11:37:42 crc kubenswrapper[4706]: I1125 11:37:42.139690 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-cjmvf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"150b96fa-570a-4b32-a82a-3275127d5b51\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:37:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:37:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:37:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de18c07bf8490d7495947e9a271e3e7273b9ffdcc43afd2a0468394af0ae0b0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:37:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2ml6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f9f9981b5f064aa5b007f4b2a2ecdc7f783e1a33e73b9e8b157eccfc54e93ff6\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f9f9981b5f064aa5b007f4b2a2ecdc7f783e1a33e73b9e8b157eccfc54e93ff6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T11:36:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T11:36:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2ml6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e1e9db3e634932b935a1eb04923d02faf743f2831039edeba41d172ea6d8c52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e1e9db3e634932b935a1eb04923d02faf743f2831039edeba41d172ea6d8c52\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T11:36:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T11:36:57Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2ml6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0cee50b6983d9c650efbb5959311b6c33c2e0e2ff504fceadc8ff807f368c36e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0cee50b6983d9c650efbb5959311b6c33c2e0e2ff504fceadc8ff807f368c36e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T11:36:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T11:36:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2ml6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://29281
b46d740a7e527313a667c3896430eb51ba2c50c5e406fb94d8959dbe855\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://29281b46d740a7e527313a667c3896430eb51ba2c50c5e406fb94d8959dbe855\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T11:36:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T11:36:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2ml6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0ff2d1408b3b635ada726fc15a15472d3fd7c61e21ffe0379d137fdd543c436\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b0ff2d1408b3b635ada726fc15a15472d3fd7c61e21ffe0379d137fdd543c436\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T11:37:01Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2025-11-25T11:37:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2ml6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c3b94746fe10e0f9375491a41d10973d2576eb69f0883cef3ef0132efb0e8fc9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c3b94746fe10e0f9375491a41d10973d2576eb69f0883cef3ef0132efb0e8fc9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T11:37:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T11:37:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2ml6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T11:36:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-cjmvf\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:37:42Z is after 2025-08-24T17:21:41Z" Nov 25 11:37:42 crc kubenswrapper[4706]: I1125 11:37:42.154150 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-l99rd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"14d69237-a4b7-43ea-ac81-f165eb532669\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:37:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:37:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:37:07Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:37:07Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mmr9l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mmr9l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T11:37:07Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-l99rd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:37:42Z is after 2025-08-24T17:21:41Z" Nov 25 11:37:42 crc 
kubenswrapper[4706]: I1125 11:37:42.165318 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:37:42 crc kubenswrapper[4706]: I1125 11:37:42.165366 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:37:42 crc kubenswrapper[4706]: I1125 11:37:42.165380 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:37:42 crc kubenswrapper[4706]: I1125 11:37:42.165400 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:37:42 crc kubenswrapper[4706]: I1125 11:37:42.165413 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:37:42Z","lastTransitionTime":"2025-11-25T11:37:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:37:42 crc kubenswrapper[4706]: I1125 11:37:42.170791 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ce0e2e75-834b-46fb-bc84-229e60f904b1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:37:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:37:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://86001c3abc077d36ed1fa0c37bb6163896fb9cde28b58affd2f67fb8a024165b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-d
ir\\\"}]},{\\\"containerID\\\":\\\"cri-o://24c326f147def477e6dd794576cbdc9aed69f799cc18984f475496748b05eb32\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c65af8b438f57256d8c22cb34f68922d628338e384ca97d694b0dbf2d41a5e27\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://db08dd21321e0e49c2bcec934b9c4ca65e93ed3eff5d3d110b0137d37ebe255e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945
c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://333951d9a31cf3e7c1e98d27f636e2425f87cd082a8a5acae66533a76f5ad206\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-25T11:36:51Z\\\",\\\"message\\\":\\\" shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI1125 11:36:51.292762 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI1125 11:36:51.292767 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI1125 11:36:51.292853 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI1125 11:36:51.292876 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI1125 11:36:51.293041 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-1230105117/tls.crt::/tmp/serving-cert-1230105117/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1764070595\\\\\\\\\\\\\\\" (2025-11-25 11:36:34 +0000 UTC to 2025-12-25 11:36:35 +0000 UTC (now=2025-11-25 11:36:51.29301304 +0000 UTC))\\\\\\\"\\\\nI1125 11:36:51.293171 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1230105117/tls.crt::/tmp/serving-cert-1230105117/tls.key\\\\\\\"\\\\nI1125 11:36:51.293210 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1764070605\\\\\\\\\\\\\\\" [serving] 
validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1764070605\\\\\\\\\\\\\\\" (2025-11-25 10:36:45 +0000 UTC to 2026-11-25 10:36:45 +0000 UTC (now=2025-11-25 11:36:51.293188774 +0000 UTC))\\\\\\\"\\\\nI1125 11:36:51.293233 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI1125 11:36:51.293259 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI1125 11:36:51.293279 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nF1125 11:36:51.293378 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T11:36:34Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe85a38abd8df52ad0fbd3dd6b048b8c42390b6064d3601996727dadb3fcbe69\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:34Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea87e7399f4267877f5e967eb62bd500b646e88ee8ee20a71c3bf7c0941f02a8\\\",\\\"image\\
\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ea87e7399f4267877f5e967eb62bd500b646e88ee8ee20a71c3bf7c0941f02a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T11:36:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T11:36:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T11:36:32Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:37:42Z is after 2025-08-24T17:21:41Z" Nov 25 11:37:42 crc kubenswrapper[4706]: I1125 11:37:42.184873 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-dhfpm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0930887a-320c-4506-8c9c-f94d6d64516a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://736e37ff944f81ac9808ff8a76d36837aeabc76a4c08bbeba3f707616e1f0884\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g7sgt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://86f4bfd310c27ea3b77c2f58c91e153db5f17948
71a3fbeb5711cc119aa81e38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g7sgt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T11:36:53Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-dhfpm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:37:42Z is after 2025-08-24T17:21:41Z" Nov 25 11:37:42 crc kubenswrapper[4706]: I1125 11:37:42.198621 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-nh9sc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7813e79d-885d-4cf1-ac27-039e998473b7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea634334242536d35bf36e9078539cad4658b161b61e6051d9bb6d8544e71f5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g9gvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T11:36:53Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-nh9sc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:37:42Z is after 2025-08-24T17:21:41Z" Nov 25 11:37:42 crc kubenswrapper[4706]: I1125 11:37:42.212633 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-qkkfz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dc09de93-57e8-4697-8ce8-70bfc1b693e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:37:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:37:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:37:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:37:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6daff2070c60f609fd06be9589e3cd8d304d131f7b9669c7be4b8e9178df8f8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d7
93426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:37:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hmrl8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://39eec3aac772cc9463505277d6b3f7cf2eb7621e4add4f14e53110e3db8c4cdc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:37:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hmrl8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T11:37:06Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-qkkfz\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:37:42Z is after 2025-08-24T17:21:41Z" Nov 25 11:37:42 crc kubenswrapper[4706]: I1125 11:37:42.268025 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:37:42 crc kubenswrapper[4706]: I1125 11:37:42.268072 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:37:42 crc kubenswrapper[4706]: I1125 11:37:42.268083 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:37:42 crc kubenswrapper[4706]: I1125 11:37:42.268101 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:37:42 crc kubenswrapper[4706]: I1125 11:37:42.268115 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:37:42Z","lastTransitionTime":"2025-11-25T11:37:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:37:42 crc kubenswrapper[4706]: I1125 11:37:42.370104 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:37:42 crc kubenswrapper[4706]: I1125 11:37:42.370158 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:37:42 crc kubenswrapper[4706]: I1125 11:37:42.370166 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:37:42 crc kubenswrapper[4706]: I1125 11:37:42.370181 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:37:42 crc kubenswrapper[4706]: I1125 11:37:42.370206 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:37:42Z","lastTransitionTime":"2025-11-25T11:37:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:37:42 crc kubenswrapper[4706]: I1125 11:37:42.472773 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:37:42 crc kubenswrapper[4706]: I1125 11:37:42.472830 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:37:42 crc kubenswrapper[4706]: I1125 11:37:42.472839 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:37:42 crc kubenswrapper[4706]: I1125 11:37:42.472859 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:37:42 crc kubenswrapper[4706]: I1125 11:37:42.472873 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:37:42Z","lastTransitionTime":"2025-11-25T11:37:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:37:42 crc kubenswrapper[4706]: I1125 11:37:42.575396 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:37:42 crc kubenswrapper[4706]: I1125 11:37:42.575452 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:37:42 crc kubenswrapper[4706]: I1125 11:37:42.575463 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:37:42 crc kubenswrapper[4706]: I1125 11:37:42.575479 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:37:42 crc kubenswrapper[4706]: I1125 11:37:42.575488 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:37:42Z","lastTransitionTime":"2025-11-25T11:37:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:37:42 crc kubenswrapper[4706]: I1125 11:37:42.677721 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:37:42 crc kubenswrapper[4706]: I1125 11:37:42.677763 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:37:42 crc kubenswrapper[4706]: I1125 11:37:42.677771 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:37:42 crc kubenswrapper[4706]: I1125 11:37:42.677791 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:37:42 crc kubenswrapper[4706]: I1125 11:37:42.677804 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:37:42Z","lastTransitionTime":"2025-11-25T11:37:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:37:42 crc kubenswrapper[4706]: I1125 11:37:42.779954 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:37:42 crc kubenswrapper[4706]: I1125 11:37:42.780035 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:37:42 crc kubenswrapper[4706]: I1125 11:37:42.780047 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:37:42 crc kubenswrapper[4706]: I1125 11:37:42.780064 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:37:42 crc kubenswrapper[4706]: I1125 11:37:42.780075 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:37:42Z","lastTransitionTime":"2025-11-25T11:37:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:37:42 crc kubenswrapper[4706]: I1125 11:37:42.883252 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:37:42 crc kubenswrapper[4706]: I1125 11:37:42.883324 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:37:42 crc kubenswrapper[4706]: I1125 11:37:42.883336 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:37:42 crc kubenswrapper[4706]: I1125 11:37:42.883356 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:37:42 crc kubenswrapper[4706]: I1125 11:37:42.883369 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:37:42Z","lastTransitionTime":"2025-11-25T11:37:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:37:42 crc kubenswrapper[4706]: I1125 11:37:42.986279 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:37:42 crc kubenswrapper[4706]: I1125 11:37:42.986362 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:37:42 crc kubenswrapper[4706]: I1125 11:37:42.986376 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:37:42 crc kubenswrapper[4706]: I1125 11:37:42.986396 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:37:42 crc kubenswrapper[4706]: I1125 11:37:42.986410 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:37:42Z","lastTransitionTime":"2025-11-25T11:37:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:37:43 crc kubenswrapper[4706]: I1125 11:37:43.089909 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:37:43 crc kubenswrapper[4706]: I1125 11:37:43.089962 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:37:43 crc kubenswrapper[4706]: I1125 11:37:43.089974 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:37:43 crc kubenswrapper[4706]: I1125 11:37:43.089992 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:37:43 crc kubenswrapper[4706]: I1125 11:37:43.090006 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:37:43Z","lastTransitionTime":"2025-11-25T11:37:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:37:43 crc kubenswrapper[4706]: I1125 11:37:43.193741 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:37:43 crc kubenswrapper[4706]: I1125 11:37:43.193794 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:37:43 crc kubenswrapper[4706]: I1125 11:37:43.193803 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:37:43 crc kubenswrapper[4706]: I1125 11:37:43.193828 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:37:43 crc kubenswrapper[4706]: I1125 11:37:43.193848 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:37:43Z","lastTransitionTime":"2025-11-25T11:37:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:37:43 crc kubenswrapper[4706]: I1125 11:37:43.287911 4706 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-s47nr_9912058e-28f5-4cec-9eeb-03e37e0dc5c1/kube-multus/0.log" Nov 25 11:37:43 crc kubenswrapper[4706]: I1125 11:37:43.287974 4706 generic.go:334] "Generic (PLEG): container finished" podID="9912058e-28f5-4cec-9eeb-03e37e0dc5c1" containerID="d03353478b53d9441951702b66365bb3a08ad9c509347472bbb31049851435a4" exitCode=1 Nov 25 11:37:43 crc kubenswrapper[4706]: I1125 11:37:43.288015 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-s47nr" event={"ID":"9912058e-28f5-4cec-9eeb-03e37e0dc5c1","Type":"ContainerDied","Data":"d03353478b53d9441951702b66365bb3a08ad9c509347472bbb31049851435a4"} Nov 25 11:37:43 crc kubenswrapper[4706]: I1125 11:37:43.288553 4706 scope.go:117] "RemoveContainer" containerID="d03353478b53d9441951702b66365bb3a08ad9c509347472bbb31049851435a4" Nov 25 11:37:43 crc kubenswrapper[4706]: I1125 11:37:43.297567 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:37:43 crc kubenswrapper[4706]: I1125 11:37:43.297640 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:37:43 crc kubenswrapper[4706]: I1125 11:37:43.297655 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:37:43 crc kubenswrapper[4706]: I1125 11:37:43.297679 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:37:43 crc kubenswrapper[4706]: I1125 11:37:43.297701 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:37:43Z","lastTransitionTime":"2025-11-25T11:37:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 11:37:43 crc kubenswrapper[4706]: I1125 11:37:43.305469 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ce0e2e75-834b-46fb-bc84-229e60f904b1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:37:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:37:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://86001c3abc077d36ed1fa0c37bb6163896fb9cde28b58affd2f67fb8a024165b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"
/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://24c326f147def477e6dd794576cbdc9aed69f799cc18984f475496748b05eb32\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c65af8b438f57256d8c22cb34f68922d628338e384ca97d694b0dbf2d41a5e27\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://db08dd21321e0e49c2bcec934b9c4ca65e93ed3eff5d3d110b0137d37ebe255e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b
8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://333951d9a31cf3e7c1e98d27f636e2425f87cd082a8a5acae66533a76f5ad206\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-25T11:36:51Z\\\",\\\"message\\\":\\\" shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI1125 11:36:51.292762 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI1125 11:36:51.292767 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI1125 11:36:51.292853 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI1125 11:36:51.292876 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI1125 11:36:51.293041 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-1230105117/tls.crt::/tmp/serving-cert-1230105117/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1764070595\\\\\\\\\\\\\\\" (2025-11-25 11:36:34 +0000 UTC to 2025-12-25 11:36:35 +0000 UTC (now=2025-11-25 11:36:51.29301304 +0000 UTC))\\\\\\\"\\\\nI1125 11:36:51.293171 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1230105117/tls.crt::/tmp/serving-cert-1230105117/tls.key\\\\\\\"\\\\nI1125 11:36:51.293210 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 
certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1764070605\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1764070605\\\\\\\\\\\\\\\" (2025-11-25 10:36:45 +0000 UTC to 2026-11-25 10:36:45 +0000 UTC (now=2025-11-25 11:36:51.293188774 +0000 UTC))\\\\\\\"\\\\nI1125 11:36:51.293233 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI1125 11:36:51.293259 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI1125 11:36:51.293279 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nF1125 11:36:51.293378 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T11:36:34Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe85a38abd8df52ad0fbd3dd6b048b8c42390b6064d3601996727dadb3fcbe69\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:34Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],
\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea87e7399f4267877f5e967eb62bd500b646e88ee8ee20a71c3bf7c0941f02a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ea87e7399f4267877f5e967eb62bd500b646e88ee8ee20a71c3bf7c0941f02a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T11:36:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T11:36:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T11:36:32Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:37:43Z is after 2025-08-24T17:21:41Z" Nov 25 11:37:43 crc kubenswrapper[4706]: I1125 11:37:43.323022 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-dhfpm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0930887a-320c-4506-8c9c-f94d6d64516a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://736e37ff944f81ac9808ff8a76d36837aeabc76a4c08bbeba3f707616e1f0884\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g7sgt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://86f4bfd310c27ea3b77c2f58c91e153db5f17948
71a3fbeb5711cc119aa81e38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g7sgt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T11:36:53Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-dhfpm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:37:43Z is after 2025-08-24T17:21:41Z" Nov 25 11:37:43 crc kubenswrapper[4706]: I1125 11:37:43.336126 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-nh9sc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7813e79d-885d-4cf1-ac27-039e998473b7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea634334242536d35bf36e9078539cad4658b161b61e6051d9bb6d8544e71f5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g9gvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T11:36:53Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-nh9sc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:37:43Z is after 2025-08-24T17:21:41Z" Nov 25 11:37:43 crc kubenswrapper[4706]: I1125 11:37:43.351780 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-qkkfz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dc09de93-57e8-4697-8ce8-70bfc1b693e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:37:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:37:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:37:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:37:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6daff2070c60f609fd06be9589e3cd8d304d131f7b9669c7be4b8e9178df8f8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d7
93426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:37:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hmrl8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://39eec3aac772cc9463505277d6b3f7cf2eb7621e4add4f14e53110e3db8c4cdc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:37:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hmrl8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T11:37:06Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-qkkfz\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:37:43Z is after 2025-08-24T17:21:41Z" Nov 25 11:37:43 crc kubenswrapper[4706]: I1125 11:37:43.364036 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad79bed891e80837fc120b01cb2b41a16493f2f5281c83a6bb489cc17c6da995\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:37:43Z is after 2025-08-24T17:21:41Z" Nov 25 11:37:43 crc kubenswrapper[4706]: I1125 11:37:43.373635 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-lpc7s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3ec2e656-a68d-4339-92d5-0c157f7f7783\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c3a1481dd8cb88b79d8addfbfd40caf18850769e4492c2af316105b7f6779f9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/op
enshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w54mf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T11:36:56Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lpc7s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:37:43Z is after 2025-08-24T17:21:41Z" Nov 25 11:37:43 crc kubenswrapper[4706]: I1125 11:37:43.388228 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:51Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:37:43Z is after 2025-08-24T17:21:41Z" Nov 25 11:37:43 crc kubenswrapper[4706]: I1125 11:37:43.400361 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:37:43 crc kubenswrapper[4706]: I1125 11:37:43.400402 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:37:43 crc kubenswrapper[4706]: I1125 11:37:43.400411 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:37:43 crc kubenswrapper[4706]: I1125 11:37:43.400429 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:37:43 crc kubenswrapper[4706]: I1125 11:37:43.400476 4706 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:37:43Z","lastTransitionTime":"2025-11-25T11:37:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 11:37:43 crc kubenswrapper[4706]: I1125 11:37:43.403893 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-s47nr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9912058e-28f5-4cec-9eeb-03e37e0dc5c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:37:43Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:37:43Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d03353478b53d9441951702b66365bb3a08ad9c509347472bbb31049851435a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d03353478b53d9441951702b66365bb3a08ad9c509347472bbb31049851435a4\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-25T11:37:43Z\\\",\\\"message\\\":\\\"2025-11-25T11:36:57+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_64de4bb2-4e36-445e-91b1-9f500f3480d1\\\\n2025-11-25T11:36:57+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_64de4bb2-4e36-445e-91b1-9f500f3480d1 to /host/opt/cni/bin/\\\\n2025-11-25T11:36:58Z [verbose] multus-daemon started\\\\n2025-11-25T11:36:58Z [verbose] Readiness Indicator file check\\\\n2025-11-25T11:37:43Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T11:36:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wfqx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\
\":\\\"2025-11-25T11:36:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-s47nr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:37:43Z is after 2025-08-24T17:21:41Z" Nov 25 11:37:43 crc kubenswrapper[4706]: I1125 11:37:43.428622 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-q9rpr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f1218bae-4153-4490-8847-ab2d07ca0ab6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:53Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:53Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da5cea02464a703174faaa2a8a7dc6ba3c26bca96be0219f7304d81aba5be54e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b55sf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e92e9ade6889e5400b3c3ddff066aa544d425cf0637b75071678b8c63f8e35f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b55sf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca28080773ed8c026159b2309297e1c8ccd7cf79c4c19e3a62d89bc5a95851fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b55sf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://86d79d5837993b0bfb40c7114fd69f45a9bfd2e956b5b0fe062706e920fecd48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:55Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b55sf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7df3bf6c507e0fd5fb0f32a8785d67c96f47255fdc5d2aafb8838260ac334d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b55sf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://96aa7fcebdc88f01d2260f95d255244e28c30d422f954da2222a5b7c17d05b96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b55sf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://67aac9b1fc77bcf7bb71812ee95214930edbb62bf5efb82d5128c53fd392a346\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://67aac9b1fc77bcf7bb71812ee95214930edbb62bf5efb82d5128c53fd392a346\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-25T11:37:16Z\\\",\\\"message\\\":\\\"e Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:04 10.217.0.4]} options:{GoMap:map[iface-id-ver:3b6479f0-333b-4a96-9adf-2099afdc2447 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:04 10.217.0.4]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == 
{61897e97-c771-4738-8709-09636387cb00}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI1125 11:37:16.268126 6342 model_client.go:398] Mutate operations generated as: [{Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:61897e97-c771-4738-8709-09636387cb00}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {7e8bb06a-06a5-45bc-a752-26a17d322811}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF1125 11:37:16.268101 6342 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T11:37:15Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-q9rpr_openshift-ovn-kubernetes(f1218bae-4153-4490-8847-ab2d07ca0ab6)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b55sf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://62c923d955013808a55d99cb73f4239900fc83a2f53e1e8cceff3e9bc5768188\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b55sf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56474d5374e1047078d38c60dcbd00f4495bcc0d651a9e75fa70d64e34b10baa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://56474d5374e1047078
d38c60dcbd00f4495bcc0d651a9e75fa70d64e34b10baa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T11:36:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T11:36:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b55sf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T11:36:53Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-q9rpr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:37:43Z is after 2025-08-24T17:21:41Z" Nov 25 11:37:43 crc kubenswrapper[4706]: I1125 11:37:43.442969 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"363ff191-6229-47e9-a7d0-1c72f21e7c61\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://71b496da1a81efbb50a84766e610a6b03e032a4e2cb5a71191395ffb85f6b1f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://83b1d9c60793e3e0b5943d7cccd50656df78c4655b84e12c8dd1ba7d99a7990d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab8621c83015577b9039ac2ba9ce46f8b29f66d77da31a02d179132d923741bd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4d0ce4e175dd8da8d15b26e60ced87ee11dc8079ce730cfbdce1b3f4f08b1d2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-11-25T11:36:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T11:36:32Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:37:43Z is after 2025-08-24T17:21:41Z" Nov 25 11:37:43 crc kubenswrapper[4706]: I1125 11:37:43.458094 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://998291d5af3be798ff4e2f00d043f615e086fef44e541071bbaf781983955ce6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-11-25T11:37:43Z is after 2025-08-24T17:21:41Z" Nov 25 11:37:43 crc kubenswrapper[4706]: I1125 11:37:43.473189 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:51Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:37:43Z is after 2025-08-24T17:21:41Z" Nov 25 11:37:43 crc kubenswrapper[4706]: I1125 11:37:43.487339 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:51Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:37:43Z is after 2025-08-24T17:21:41Z" Nov 25 11:37:43 crc kubenswrapper[4706]: I1125 11:37:43.501741 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b156f76-9878-4527-95c5-27adfffbcd87\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:37:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:37:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b50a8135a692a512f05f3a902977e8b7a505d8346fb6e96c26ffc58d075e902c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7224a1c52df964a792e6197a4f97313b139ffbd6d65820d93e36561e817ddc20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://78068d04cf52a463ca3595227c44918d360266c71afc97c1792e48b004bebe42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0299d89c1a2ea9c2a4bb46691aecd2d86618d3620e7406e1af57e1c03ce50b94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://0299d89c1a2ea9c2a4bb46691aecd2d86618d3620e7406e1af57e1c03ce50b94\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T11:36:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T11:36:33Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T11:36:32Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:37:43Z is after 2025-08-24T17:21:41Z" Nov 25 11:37:43 crc kubenswrapper[4706]: I1125 11:37:43.503662 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:37:43 crc kubenswrapper[4706]: I1125 11:37:43.503706 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:37:43 crc kubenswrapper[4706]: I1125 11:37:43.503718 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:37:43 crc kubenswrapper[4706]: I1125 11:37:43.503760 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:37:43 crc kubenswrapper[4706]: I1125 11:37:43.503773 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:37:43Z","lastTransitionTime":"2025-11-25T11:37:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:37:43 crc kubenswrapper[4706]: I1125 11:37:43.524119 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"21277b4b-1e5d-4345-ba2a-39957194f021\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e336808761e1c6c5eaa04fd06cbb4d0c0384a2cbd3dfd4c1b3a877e7e0f0c82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\
\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cfaf9f13d49eb5c52817b0d082263791cc1dca82a23282452f1393dd693ca27a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://634b7b0df29329562f6ead9641186eee129945efc5a2d784ff6474d213b2baea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b3642576d5ecf314b809b90f8a76244e5ea54178f78729eb6521b09b7daa9c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-
v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b63b9c87fed8e56acef62af3c5b75cf637a058ada9dd8ef5afc317e99e12162\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://adfea6c3d1d62c9ce2656cae203bccf32ab19165305db3731e3db92915dd7d4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":
true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://adfea6c3d1d62c9ce2656cae203bccf32ab19165305db3731e3db92915dd7d4c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T11:36:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T11:36:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://29e8fd847ab683471a692fda5b7c7d6105db11c5aecf09bb23080c58cd97c06a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://29e8fd847ab683471a692fda5b7c7d6105db11c5aecf09bb23080c58cd97c06a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T11:36:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T11:36:34Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://a4b1f85fd0291c239854093c0b26f0802291cbe4c4bab384fc69a5d21165343a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a4b1f85fd0291c239854093c0b26f0802291cbe4c4bab384fc69a5d21165343a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T11:36:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2
025-11-25T11:36:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T11:36:32Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:37:43Z is after 2025-08-24T17:21:41Z" Nov 25 11:37:43 crc kubenswrapper[4706]: I1125 11:37:43.539509 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://23abd4bcc68d2a090882edb55d0e8569032affe5f4ebf05279e18ba3e9f9d8db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a068e34d29a7f39157ffd6e364ce643f5280f5184c13a281043247117d451364\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:37:43Z is after 2025-08-24T17:21:41Z" Nov 25 11:37:43 crc kubenswrapper[4706]: I1125 11:37:43.558981 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-cjmvf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"150b96fa-570a-4b32-a82a-3275127d5b51\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:37:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:37:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:37:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de18c07bf8490d7495947e9a271e3e7273b9ffdcc43afd2a0468394af0ae0b0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:37:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2ml6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f9f9981b5f064aa5b007f4b2a2ecdc7f783e1a33e73b9e8b157eccfc54e93ff6\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f9f9981b5f064aa5b007f4b2a2ecdc7f783e1a33e73b9e8b157eccfc54e93ff6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T11:36:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T11:36:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2ml6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e1e9db3e634932b935a1eb04923d02faf743f2831039edeba41d172ea6d8c52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e1e9db3e634932b935a1eb04923d02faf743f2831039edeba41d172ea6d8c52\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T11:36:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T11:36:57Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2ml6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0cee50b6983d9c650efbb5959311b6c33c2e0e2ff504fceadc8ff807f368c36e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0cee50b6983d9c650efbb5959311b6c33c2e0e2ff504fceadc8ff807f368c36e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T11:36:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T11:36:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2ml6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://29281
b46d740a7e527313a667c3896430eb51ba2c50c5e406fb94d8959dbe855\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://29281b46d740a7e527313a667c3896430eb51ba2c50c5e406fb94d8959dbe855\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T11:36:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T11:36:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2ml6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0ff2d1408b3b635ada726fc15a15472d3fd7c61e21ffe0379d137fdd543c436\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b0ff2d1408b3b635ada726fc15a15472d3fd7c61e21ffe0379d137fdd543c436\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T11:37:01Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2025-11-25T11:37:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2ml6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c3b94746fe10e0f9375491a41d10973d2576eb69f0883cef3ef0132efb0e8fc9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c3b94746fe10e0f9375491a41d10973d2576eb69f0883cef3ef0132efb0e8fc9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T11:37:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T11:37:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2ml6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T11:36:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-cjmvf\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:37:43Z is after 2025-08-24T17:21:41Z" Nov 25 11:37:43 crc kubenswrapper[4706]: I1125 11:37:43.575049 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-l99rd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"14d69237-a4b7-43ea-ac81-f165eb532669\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:37:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:37:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:37:07Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:37:07Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mmr9l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mmr9l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T11:37:07Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-l99rd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:37:43Z is after 2025-08-24T17:21:41Z" Nov 25 11:37:43 crc 
kubenswrapper[4706]: I1125 11:37:43.606934 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:37:43 crc kubenswrapper[4706]: I1125 11:37:43.606988 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:37:43 crc kubenswrapper[4706]: I1125 11:37:43.607000 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:37:43 crc kubenswrapper[4706]: I1125 11:37:43.607020 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:37:43 crc kubenswrapper[4706]: I1125 11:37:43.607034 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:37:43Z","lastTransitionTime":"2025-11-25T11:37:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:37:43 crc kubenswrapper[4706]: I1125 11:37:43.710568 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:37:43 crc kubenswrapper[4706]: I1125 11:37:43.710633 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:37:43 crc kubenswrapper[4706]: I1125 11:37:43.710650 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:37:43 crc kubenswrapper[4706]: I1125 11:37:43.710674 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:37:43 crc kubenswrapper[4706]: I1125 11:37:43.710692 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:37:43Z","lastTransitionTime":"2025-11-25T11:37:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:37:43 crc kubenswrapper[4706]: I1125 11:37:43.813975 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:37:43 crc kubenswrapper[4706]: I1125 11:37:43.814034 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:37:43 crc kubenswrapper[4706]: I1125 11:37:43.814046 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:37:43 crc kubenswrapper[4706]: I1125 11:37:43.814066 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:37:43 crc kubenswrapper[4706]: I1125 11:37:43.814084 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:37:43Z","lastTransitionTime":"2025-11-25T11:37:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:37:43 crc kubenswrapper[4706]: I1125 11:37:43.917529 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:37:43 crc kubenswrapper[4706]: I1125 11:37:43.917583 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:37:43 crc kubenswrapper[4706]: I1125 11:37:43.917596 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:37:43 crc kubenswrapper[4706]: I1125 11:37:43.917614 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:37:43 crc kubenswrapper[4706]: I1125 11:37:43.917628 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:37:43Z","lastTransitionTime":"2025-11-25T11:37:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 11:37:43 crc kubenswrapper[4706]: I1125 11:37:43.922174 4706 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 11:37:43 crc kubenswrapper[4706]: I1125 11:37:43.922221 4706 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 11:37:43 crc kubenswrapper[4706]: I1125 11:37:43.922219 4706 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 11:37:43 crc kubenswrapper[4706]: I1125 11:37:43.922197 4706 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-l99rd" Nov 25 11:37:43 crc kubenswrapper[4706]: E1125 11:37:43.922424 4706 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 25 11:37:43 crc kubenswrapper[4706]: E1125 11:37:43.922512 4706 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-l99rd" podUID="14d69237-a4b7-43ea-ac81-f165eb532669" Nov 25 11:37:43 crc kubenswrapper[4706]: E1125 11:37:43.922617 4706 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 25 11:37:43 crc kubenswrapper[4706]: E1125 11:37:43.922724 4706 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 25 11:37:44 crc kubenswrapper[4706]: I1125 11:37:44.019951 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:37:44 crc kubenswrapper[4706]: I1125 11:37:44.020012 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:37:44 crc kubenswrapper[4706]: I1125 11:37:44.020024 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:37:44 crc kubenswrapper[4706]: I1125 11:37:44.020044 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:37:44 crc kubenswrapper[4706]: I1125 11:37:44.020061 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:37:44Z","lastTransitionTime":"2025-11-25T11:37:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:37:44 crc kubenswrapper[4706]: I1125 11:37:44.122703 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:37:44 crc kubenswrapper[4706]: I1125 11:37:44.122773 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:37:44 crc kubenswrapper[4706]: I1125 11:37:44.122786 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:37:44 crc kubenswrapper[4706]: I1125 11:37:44.122804 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:37:44 crc kubenswrapper[4706]: I1125 11:37:44.122819 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:37:44Z","lastTransitionTime":"2025-11-25T11:37:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:37:44 crc kubenswrapper[4706]: I1125 11:37:44.225245 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:37:44 crc kubenswrapper[4706]: I1125 11:37:44.225325 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:37:44 crc kubenswrapper[4706]: I1125 11:37:44.225341 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:37:44 crc kubenswrapper[4706]: I1125 11:37:44.225362 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:37:44 crc kubenswrapper[4706]: I1125 11:37:44.225378 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:37:44Z","lastTransitionTime":"2025-11-25T11:37:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:37:44 crc kubenswrapper[4706]: I1125 11:37:44.293557 4706 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-s47nr_9912058e-28f5-4cec-9eeb-03e37e0dc5c1/kube-multus/0.log" Nov 25 11:37:44 crc kubenswrapper[4706]: I1125 11:37:44.293621 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-s47nr" event={"ID":"9912058e-28f5-4cec-9eeb-03e37e0dc5c1","Type":"ContainerStarted","Data":"8831e77983548cfffd56f81ff9f25b90d70dfb71b47b545af370b0a813fa19a9"} Nov 25 11:37:44 crc kubenswrapper[4706]: I1125 11:37:44.312228 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ce0e2e75-834b-46fb-bc84-229e60f904b1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:37:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:37:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://86001c3abc077d36ed1fa0c37bb6163896fb9cde28b58affd2f67fb8a024165b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8
b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://24c326f147def477e6dd794576cbdc9aed69f799cc18984f475496748b05eb32\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c65af8b438f57256d8c22cb34f68922d628338e384ca97d694b0dbf2d41a5e27\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resou
rces\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://db08dd21321e0e49c2bcec934b9c4ca65e93ed3eff5d3d110b0137d37ebe255e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://333951d9a31cf3e7c1e98d27f636e2425f87cd082a8a5acae66533a76f5ad206\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-25T11:36:51Z\\\",\\\"message\\\":\\\" shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI1125 11:36:51.292762 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI1125 11:36:51.292767 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI1125 11:36:51.292853 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI1125 11:36:51.292876 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI1125 11:36:51.293041 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-1230105117/tls.crt::/tmp/serving-cert-1230105117/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1764070595\\\\\\\\\\\\\\\" (2025-11-25 11:36:34 +0000 UTC to 2025-12-25 11:36:35 
+0000 UTC (now=2025-11-25 11:36:51.29301304 +0000 UTC))\\\\\\\"\\\\nI1125 11:36:51.293171 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1230105117/tls.crt::/tmp/serving-cert-1230105117/tls.key\\\\\\\"\\\\nI1125 11:36:51.293210 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1764070605\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1764070605\\\\\\\\\\\\\\\" (2025-11-25 10:36:45 +0000 UTC to 2026-11-25 10:36:45 +0000 UTC (now=2025-11-25 11:36:51.293188774 +0000 UTC))\\\\\\\"\\\\nI1125 11:36:51.293233 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI1125 11:36:51.293259 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI1125 11:36:51.293279 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nF1125 11:36:51.293378 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T11:36:34Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe85a38abd8df52ad0fbd3dd6b048b8c42390b6064d3601996727dadb3fcbe69\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:34Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea87e7399f4267877f5e967eb62bd500b646e88ee8ee20a71c3bf7c0941f02a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ea87e7399f4267877f5e967eb62bd500b646e88ee8ee20a71c3bf7c0941f02a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T11:36:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"sta
rtedAt\\\":\\\"2025-11-25T11:36:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T11:36:32Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:37:44Z is after 2025-08-24T17:21:41Z" Nov 25 11:37:44 crc kubenswrapper[4706]: I1125 11:37:44.327338 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:37:44 crc kubenswrapper[4706]: I1125 11:37:44.327390 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:37:44 crc kubenswrapper[4706]: I1125 11:37:44.327405 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:37:44 crc kubenswrapper[4706]: I1125 11:37:44.327427 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:37:44 crc kubenswrapper[4706]: I1125 11:37:44.327440 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:37:44Z","lastTransitionTime":"2025-11-25T11:37:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:37:44 crc kubenswrapper[4706]: I1125 11:37:44.327758 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-dhfpm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0930887a-320c-4506-8c9c-f94d6d64516a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://736e37ff944f81ac9808ff8a76d36837aeabc76a4c08bbeba3f707616e1f0884\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"}
,{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g7sgt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://86f4bfd310c27ea3b77c2f58c91e153db5f1794871a3fbeb5711cc119aa81e38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g7sgt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T11:36:53Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-dhfpm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:37:44Z is after 2025-08-24T17:21:41Z" Nov 25 11:37:44 crc kubenswrapper[4706]: I1125 11:37:44.340559 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-nh9sc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7813e79d-885d-4cf1-ac27-039e998473b7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea634334242536d35bf36e9078539cad4658b161b61e6051d9bb6d8544e71f5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g9gvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T11:36:53Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-nh9sc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:37:44Z is after 2025-08-24T17:21:41Z" Nov 25 11:37:44 crc kubenswrapper[4706]: I1125 11:37:44.357134 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-qkkfz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dc09de93-57e8-4697-8ce8-70bfc1b693e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:37:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:37:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:37:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:37:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6daff2070c60f609fd06be9589e3cd8d304d131f7b9669c7be4b8e9178df8f8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d7
93426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:37:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hmrl8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://39eec3aac772cc9463505277d6b3f7cf2eb7621e4add4f14e53110e3db8c4cdc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:37:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hmrl8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T11:37:06Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-qkkfz\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:37:44Z is after 2025-08-24T17:21:41Z" Nov 25 11:37:44 crc kubenswrapper[4706]: I1125 11:37:44.375341 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad79bed891e80837fc120b01cb2b41a16493f2f5281c83a6bb489cc17c6da995\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:37:44Z is after 2025-08-24T17:21:41Z" Nov 25 11:37:44 crc kubenswrapper[4706]: I1125 11:37:44.387431 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-lpc7s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3ec2e656-a68d-4339-92d5-0c157f7f7783\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c3a1481dd8cb88b79d8addfbfd40caf18850769e4492c2af316105b7f6779f9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/op
enshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w54mf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T11:36:56Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lpc7s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:37:44Z is after 2025-08-24T17:21:41Z" Nov 25 11:37:44 crc kubenswrapper[4706]: I1125 11:37:44.403464 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"363ff191-6229-47e9-a7d0-1c72f21e7c61\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://71b496da1a81efbb50a84766e610a6b03e032a4e2cb5a71191395ffb85f6b1f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://83b1d9c60793e3e0b5943d7cccd50656df78c4655b84e12c8dd1ba7d99a7990d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab8621c83015577b9039ac2ba9ce46f8b29f66d77da31a02d179132d923741bd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4d0ce4e175dd8da8d15b26e60ced87ee11dc8079ce730cfbdce1b3f4f08b1d2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-11-25T11:36:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T11:36:32Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:37:44Z is after 2025-08-24T17:21:41Z" Nov 25 11:37:44 crc kubenswrapper[4706]: I1125 11:37:44.419807 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://998291d5af3be798ff4e2f00d043f615e086fef44e541071bbaf781983955ce6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-11-25T11:37:44Z is after 2025-08-24T17:21:41Z" Nov 25 11:37:44 crc kubenswrapper[4706]: I1125 11:37:44.430967 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:37:44 crc kubenswrapper[4706]: I1125 11:37:44.431022 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:37:44 crc kubenswrapper[4706]: I1125 11:37:44.431039 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:37:44 crc kubenswrapper[4706]: I1125 11:37:44.431065 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:37:44 crc kubenswrapper[4706]: I1125 11:37:44.431081 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:37:44Z","lastTransitionTime":"2025-11-25T11:37:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:37:44 crc kubenswrapper[4706]: I1125 11:37:44.436996 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:51Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:37:44Z is after 2025-08-24T17:21:41Z" Nov 25 11:37:44 crc kubenswrapper[4706]: I1125 11:37:44.451335 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:51Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:37:44Z is after 2025-08-24T17:21:41Z" Nov 25 11:37:44 crc kubenswrapper[4706]: I1125 11:37:44.467814 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:51Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:37:44Z is after 2025-08-24T17:21:41Z" Nov 25 11:37:44 crc kubenswrapper[4706]: I1125 11:37:44.483604 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-s47nr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9912058e-28f5-4cec-9eeb-03e37e0dc5c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:37:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:37:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8831e77983548cfffd56f81ff9f25b90d70dfb71b47b545af370b0a813fa19a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d03353478b53d9441951702b66365bb3a08ad9c509347472bbb31049851435a4\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-25T11:37:43Z\\\",\\\"message\\\":\\\"2025-11-25T11:36:57+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_64de4bb2-4e36-445e-91b1-9f500f3480d1\\\\n2025-11-25T11:36:57+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_64de4bb2-4e36-445e-91b1-9f500f3480d1 to /host/opt/cni/bin/\\\\n2025-11-25T11:36:58Z [verbose] multus-daemon started\\\\n2025-11-25T11:36:58Z [verbose] 
Readiness Indicator file check\\\\n2025-11-25T11:37:43Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T11:36:55Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:37:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wfqx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T11:36:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-s47nr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:37:44Z is after 2025-08-24T17:21:41Z" Nov 25 11:37:44 crc kubenswrapper[4706]: I1125 11:37:44.506568 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-q9rpr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f1218bae-4153-4490-8847-ab2d07ca0ab6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:53Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:53Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da5cea02464a703174faaa2a8a7dc6ba3c26bca96be0219f7304d81aba5be54e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b55sf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e92e9ade6889e5400b3c3ddff066aa544d425cf0637b75071678b8c63f8e35f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b55sf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca28080773ed8c026159b2309297e1c8ccd7cf79c4c19e3a62d89bc5a95851fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b55sf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://86d79d5837993b0bfb40c7114fd69f45a9bfd2e956b5b0fe062706e920fecd48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b55sf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7df3bf6c507e0fd5fb0f32a8785d67c96f47255fdc5d2aafb8838260ac334d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b55sf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://96aa7fcebdc88f01d2260f95d255244e28c30d422f954da2222a5b7c17d05b96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b55sf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://67aac9b1fc77bcf7bb71812ee95214930edbb62bf5efb82d5128c53fd392a346\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://67aac9b1fc77bcf7bb71812ee95214930edbb62bf5efb82d5128c53fd392a346\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-25T11:37:16Z\\\",\\\"message\\\":\\\"e Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:04 10.217.0.4]} 
options:{GoMap:map[iface-id-ver:3b6479f0-333b-4a96-9adf-2099afdc2447 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:04 10.217.0.4]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {61897e97-c771-4738-8709-09636387cb00}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI1125 11:37:16.268126 6342 model_client.go:398] Mutate operations generated as: [{Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:61897e97-c771-4738-8709-09636387cb00}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {7e8bb06a-06a5-45bc-a752-26a17d322811}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF1125 11:37:16.268101 6342 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T11:37:15Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-q9rpr_openshift-ovn-kubernetes(f1218bae-4153-4490-8847-ab2d07ca0ab6)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b55sf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://62c923d955013808a55d99cb73f4239900fc83a2f53e1e8cceff3e9bc5768188\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b55sf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56474d5374e1047078d38c60dcbd00f4495bcc0d651a9e75fa70d64e34b10baa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://56474d5374e1047078
d38c60dcbd00f4495bcc0d651a9e75fa70d64e34b10baa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T11:36:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T11:36:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b55sf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T11:36:53Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-q9rpr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:37:44Z is after 2025-08-24T17:21:41Z" Nov 25 11:37:44 crc kubenswrapper[4706]: I1125 11:37:44.522635 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b156f76-9878-4527-95c5-27adfffbcd87\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:37:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:37:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b50a8135a692a512f05f3a902977e8b7a505d8346fb6e96c26ffc58d075e902c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7224a1c52df964a792e6197a4f97313b139ffbd6d65820d93e36561e817ddc20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://78068d04cf52a463ca3595227c44918d360266c71afc97c1792e48b004bebe42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0299d89c1a2ea9c2a4bb46691aecd2d86618d3620e7406e1af57e1c03ce50b94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://0299d89c1a2ea9c2a4bb46691aecd2d86618d3620e7406e1af57e1c03ce50b94\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T11:36:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T11:36:33Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T11:36:32Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:37:44Z is after 2025-08-24T17:21:41Z" Nov 25 11:37:44 crc kubenswrapper[4706]: I1125 11:37:44.534177 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:37:44 crc kubenswrapper[4706]: I1125 11:37:44.534211 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:37:44 crc kubenswrapper[4706]: I1125 11:37:44.534221 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:37:44 crc kubenswrapper[4706]: I1125 11:37:44.534235 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:37:44 crc kubenswrapper[4706]: I1125 11:37:44.534245 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:37:44Z","lastTransitionTime":"2025-11-25T11:37:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:37:44 crc kubenswrapper[4706]: I1125 11:37:44.545111 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"21277b4b-1e5d-4345-ba2a-39957194f021\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e336808761e1c6c5eaa04fd06cbb4d0c0384a2cbd3dfd4c1b3a877e7e0f0c82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\
\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cfaf9f13d49eb5c52817b0d082263791cc1dca82a23282452f1393dd693ca27a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://634b7b0df29329562f6ead9641186eee129945efc5a2d784ff6474d213b2baea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b3642576d5ecf314b809b90f8a76244e5ea54178f78729eb6521b09b7daa9c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-
v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b63b9c87fed8e56acef62af3c5b75cf637a058ada9dd8ef5afc317e99e12162\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://adfea6c3d1d62c9ce2656cae203bccf32ab19165305db3731e3db92915dd7d4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":
true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://adfea6c3d1d62c9ce2656cae203bccf32ab19165305db3731e3db92915dd7d4c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T11:36:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T11:36:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://29e8fd847ab683471a692fda5b7c7d6105db11c5aecf09bb23080c58cd97c06a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://29e8fd847ab683471a692fda5b7c7d6105db11c5aecf09bb23080c58cd97c06a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T11:36:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T11:36:34Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://a4b1f85fd0291c239854093c0b26f0802291cbe4c4bab384fc69a5d21165343a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a4b1f85fd0291c239854093c0b26f0802291cbe4c4bab384fc69a5d21165343a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T11:36:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2
025-11-25T11:36:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T11:36:32Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:37:44Z is after 2025-08-24T17:21:41Z" Nov 25 11:37:44 crc kubenswrapper[4706]: I1125 11:37:44.561615 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://23abd4bcc68d2a090882edb55d0e8569032affe5f4ebf05279e18ba3e9f9d8db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a068e34d29a7f39157ffd6e364ce643f5280f5184c13a281043247117d451364\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:37:44Z is after 2025-08-24T17:21:41Z" Nov 25 11:37:44 crc kubenswrapper[4706]: I1125 11:37:44.578438 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-cjmvf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"150b96fa-570a-4b32-a82a-3275127d5b51\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:37:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:37:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:37:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de18c07bf8490d7495947e9a271e3e7273b9ffdcc43afd2a0468394af0ae0b0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:37:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2ml6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f9f9981b5f064aa5b007f4b2a2ecdc7f783e1a33e73b9e8b157eccfc54e93ff6\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f9f9981b5f064aa5b007f4b2a2ecdc7f783e1a33e73b9e8b157eccfc54e93ff6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T11:36:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T11:36:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2ml6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e1e9db3e634932b935a1eb04923d02faf743f2831039edeba41d172ea6d8c52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e1e9db3e634932b935a1eb04923d02faf743f2831039edeba41d172ea6d8c52\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T11:36:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T11:36:57Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2ml6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0cee50b6983d9c650efbb5959311b6c33c2e0e2ff504fceadc8ff807f368c36e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0cee50b6983d9c650efbb5959311b6c33c2e0e2ff504fceadc8ff807f368c36e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T11:36:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T11:36:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2ml6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://29281
b46d740a7e527313a667c3896430eb51ba2c50c5e406fb94d8959dbe855\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://29281b46d740a7e527313a667c3896430eb51ba2c50c5e406fb94d8959dbe855\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T11:36:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T11:36:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2ml6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0ff2d1408b3b635ada726fc15a15472d3fd7c61e21ffe0379d137fdd543c436\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b0ff2d1408b3b635ada726fc15a15472d3fd7c61e21ffe0379d137fdd543c436\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T11:37:01Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2025-11-25T11:37:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2ml6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c3b94746fe10e0f9375491a41d10973d2576eb69f0883cef3ef0132efb0e8fc9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c3b94746fe10e0f9375491a41d10973d2576eb69f0883cef3ef0132efb0e8fc9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T11:37:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T11:37:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2ml6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T11:36:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-cjmvf\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:37:44Z is after 2025-08-24T17:21:41Z" Nov 25 11:37:44 crc kubenswrapper[4706]: I1125 11:37:44.593372 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-l99rd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"14d69237-a4b7-43ea-ac81-f165eb532669\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:37:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:37:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:37:07Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:37:07Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mmr9l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mmr9l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T11:37:07Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-l99rd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:37:44Z is after 2025-08-24T17:21:41Z" Nov 25 11:37:44 crc 
kubenswrapper[4706]: I1125 11:37:44.636786 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:37:44 crc kubenswrapper[4706]: I1125 11:37:44.636819 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:37:44 crc kubenswrapper[4706]: I1125 11:37:44.636827 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:37:44 crc kubenswrapper[4706]: I1125 11:37:44.636841 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:37:44 crc kubenswrapper[4706]: I1125 11:37:44.636851 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:37:44Z","lastTransitionTime":"2025-11-25T11:37:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:37:44 crc kubenswrapper[4706]: I1125 11:37:44.740970 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:37:44 crc kubenswrapper[4706]: I1125 11:37:44.741033 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:37:44 crc kubenswrapper[4706]: I1125 11:37:44.741044 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:37:44 crc kubenswrapper[4706]: I1125 11:37:44.741065 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:37:44 crc kubenswrapper[4706]: I1125 11:37:44.741079 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:37:44Z","lastTransitionTime":"2025-11-25T11:37:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:37:44 crc kubenswrapper[4706]: I1125 11:37:44.843517 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:37:44 crc kubenswrapper[4706]: I1125 11:37:44.843552 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:37:44 crc kubenswrapper[4706]: I1125 11:37:44.843561 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:37:44 crc kubenswrapper[4706]: I1125 11:37:44.843577 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:37:44 crc kubenswrapper[4706]: I1125 11:37:44.843587 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:37:44Z","lastTransitionTime":"2025-11-25T11:37:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:37:44 crc kubenswrapper[4706]: I1125 11:37:44.946538 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:37:44 crc kubenswrapper[4706]: I1125 11:37:44.946588 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:37:44 crc kubenswrapper[4706]: I1125 11:37:44.946609 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:37:44 crc kubenswrapper[4706]: I1125 11:37:44.946635 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:37:44 crc kubenswrapper[4706]: I1125 11:37:44.946649 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:37:44Z","lastTransitionTime":"2025-11-25T11:37:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:37:45 crc kubenswrapper[4706]: I1125 11:37:45.049685 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:37:45 crc kubenswrapper[4706]: I1125 11:37:45.049768 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:37:45 crc kubenswrapper[4706]: I1125 11:37:45.049784 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:37:45 crc kubenswrapper[4706]: I1125 11:37:45.049810 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:37:45 crc kubenswrapper[4706]: I1125 11:37:45.049821 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:37:45Z","lastTransitionTime":"2025-11-25T11:37:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:37:45 crc kubenswrapper[4706]: I1125 11:37:45.153280 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:37:45 crc kubenswrapper[4706]: I1125 11:37:45.153368 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:37:45 crc kubenswrapper[4706]: I1125 11:37:45.153383 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:37:45 crc kubenswrapper[4706]: I1125 11:37:45.153402 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:37:45 crc kubenswrapper[4706]: I1125 11:37:45.153413 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:37:45Z","lastTransitionTime":"2025-11-25T11:37:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:37:45 crc kubenswrapper[4706]: I1125 11:37:45.255473 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:37:45 crc kubenswrapper[4706]: I1125 11:37:45.255509 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:37:45 crc kubenswrapper[4706]: I1125 11:37:45.255519 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:37:45 crc kubenswrapper[4706]: I1125 11:37:45.255534 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:37:45 crc kubenswrapper[4706]: I1125 11:37:45.255544 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:37:45Z","lastTransitionTime":"2025-11-25T11:37:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:37:45 crc kubenswrapper[4706]: I1125 11:37:45.358487 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:37:45 crc kubenswrapper[4706]: I1125 11:37:45.359096 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:37:45 crc kubenswrapper[4706]: I1125 11:37:45.359177 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:37:45 crc kubenswrapper[4706]: I1125 11:37:45.359333 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:37:45 crc kubenswrapper[4706]: I1125 11:37:45.359423 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:37:45Z","lastTransitionTime":"2025-11-25T11:37:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:37:45 crc kubenswrapper[4706]: I1125 11:37:45.463136 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:37:45 crc kubenswrapper[4706]: I1125 11:37:45.463182 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:37:45 crc kubenswrapper[4706]: I1125 11:37:45.463193 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:37:45 crc kubenswrapper[4706]: I1125 11:37:45.463210 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:37:45 crc kubenswrapper[4706]: I1125 11:37:45.463221 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:37:45Z","lastTransitionTime":"2025-11-25T11:37:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:37:45 crc kubenswrapper[4706]: I1125 11:37:45.566689 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:37:45 crc kubenswrapper[4706]: I1125 11:37:45.566757 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:37:45 crc kubenswrapper[4706]: I1125 11:37:45.566774 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:37:45 crc kubenswrapper[4706]: I1125 11:37:45.566798 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:37:45 crc kubenswrapper[4706]: I1125 11:37:45.566813 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:37:45Z","lastTransitionTime":"2025-11-25T11:37:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:37:45 crc kubenswrapper[4706]: I1125 11:37:45.670111 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:37:45 crc kubenswrapper[4706]: I1125 11:37:45.670429 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:37:45 crc kubenswrapper[4706]: I1125 11:37:45.670569 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:37:45 crc kubenswrapper[4706]: I1125 11:37:45.670669 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:37:45 crc kubenswrapper[4706]: I1125 11:37:45.670756 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:37:45Z","lastTransitionTime":"2025-11-25T11:37:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:37:45 crc kubenswrapper[4706]: I1125 11:37:45.773355 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:37:45 crc kubenswrapper[4706]: I1125 11:37:45.773407 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:37:45 crc kubenswrapper[4706]: I1125 11:37:45.773418 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:37:45 crc kubenswrapper[4706]: I1125 11:37:45.773436 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:37:45 crc kubenswrapper[4706]: I1125 11:37:45.773448 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:37:45Z","lastTransitionTime":"2025-11-25T11:37:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:37:45 crc kubenswrapper[4706]: I1125 11:37:45.876031 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:37:45 crc kubenswrapper[4706]: I1125 11:37:45.876493 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:37:45 crc kubenswrapper[4706]: I1125 11:37:45.876577 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:37:45 crc kubenswrapper[4706]: I1125 11:37:45.876651 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:37:45 crc kubenswrapper[4706]: I1125 11:37:45.876725 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:37:45Z","lastTransitionTime":"2025-11-25T11:37:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 11:37:45 crc kubenswrapper[4706]: I1125 11:37:45.922024 4706 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 11:37:45 crc kubenswrapper[4706]: I1125 11:37:45.922051 4706 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 11:37:45 crc kubenswrapper[4706]: I1125 11:37:45.922107 4706 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-l99rd" Nov 25 11:37:45 crc kubenswrapper[4706]: I1125 11:37:45.922126 4706 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 11:37:45 crc kubenswrapper[4706]: E1125 11:37:45.922197 4706 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 25 11:37:45 crc kubenswrapper[4706]: E1125 11:37:45.922378 4706 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-l99rd" podUID="14d69237-a4b7-43ea-ac81-f165eb532669" Nov 25 11:37:45 crc kubenswrapper[4706]: E1125 11:37:45.922487 4706 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 25 11:37:45 crc kubenswrapper[4706]: E1125 11:37:45.922561 4706 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 25 11:37:45 crc kubenswrapper[4706]: I1125 11:37:45.927730 4706 scope.go:117] "RemoveContainer" containerID="67aac9b1fc77bcf7bb71812ee95214930edbb62bf5efb82d5128c53fd392a346" Nov 25 11:37:45 crc kubenswrapper[4706]: I1125 11:37:45.979879 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:37:45 crc kubenswrapper[4706]: I1125 11:37:45.980143 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:37:45 crc kubenswrapper[4706]: I1125 11:37:45.980340 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:37:45 crc kubenswrapper[4706]: I1125 11:37:45.980475 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:37:45 crc kubenswrapper[4706]: I1125 11:37:45.980709 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:37:45Z","lastTransitionTime":"2025-11-25T11:37:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:37:46 crc kubenswrapper[4706]: I1125 11:37:46.083897 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:37:46 crc kubenswrapper[4706]: I1125 11:37:46.083932 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:37:46 crc kubenswrapper[4706]: I1125 11:37:46.083946 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:37:46 crc kubenswrapper[4706]: I1125 11:37:46.083973 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:37:46 crc kubenswrapper[4706]: I1125 11:37:46.083990 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:37:46Z","lastTransitionTime":"2025-11-25T11:37:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:37:46 crc kubenswrapper[4706]: I1125 11:37:46.186726 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:37:46 crc kubenswrapper[4706]: I1125 11:37:46.186774 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:37:46 crc kubenswrapper[4706]: I1125 11:37:46.186783 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:37:46 crc kubenswrapper[4706]: I1125 11:37:46.186798 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:37:46 crc kubenswrapper[4706]: I1125 11:37:46.186809 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:37:46Z","lastTransitionTime":"2025-11-25T11:37:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:37:46 crc kubenswrapper[4706]: I1125 11:37:46.297614 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:37:46 crc kubenswrapper[4706]: I1125 11:37:46.297678 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:37:46 crc kubenswrapper[4706]: I1125 11:37:46.297690 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:37:46 crc kubenswrapper[4706]: I1125 11:37:46.297708 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:37:46 crc kubenswrapper[4706]: I1125 11:37:46.297722 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:37:46Z","lastTransitionTime":"2025-11-25T11:37:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:37:46 crc kubenswrapper[4706]: I1125 11:37:46.310368 4706 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-q9rpr_f1218bae-4153-4490-8847-ab2d07ca0ab6/ovnkube-controller/2.log" Nov 25 11:37:46 crc kubenswrapper[4706]: I1125 11:37:46.313161 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-q9rpr" event={"ID":"f1218bae-4153-4490-8847-ab2d07ca0ab6","Type":"ContainerStarted","Data":"a1dfdc34e2de4aa061b93f1227bc4e3076853848aa13d8122c69d84f2a3c9bb5"} Nov 25 11:37:46 crc kubenswrapper[4706]: I1125 11:37:46.313667 4706 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-q9rpr" Nov 25 11:37:46 crc kubenswrapper[4706]: I1125 11:37:46.333230 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://998291d5af3be798ff4e2f00d043f615e086fef44e541071bbaf781983955ce6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa388
11c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:37:46Z is after 2025-08-24T17:21:41Z" Nov 25 11:37:46 crc kubenswrapper[4706]: I1125 11:37:46.355751 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:51Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:37:46Z is after 2025-08-24T17:21:41Z" Nov 25 11:37:46 crc kubenswrapper[4706]: I1125 11:37:46.371773 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:37:46Z is after 2025-08-24T17:21:41Z" Nov 25 11:37:46 crc kubenswrapper[4706]: I1125 11:37:46.395533 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:51Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:37:46Z is after 2025-08-24T17:21:41Z" Nov 25 11:37:46 crc kubenswrapper[4706]: I1125 11:37:46.400452 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:37:46 crc kubenswrapper[4706]: I1125 11:37:46.400513 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:37:46 crc kubenswrapper[4706]: I1125 11:37:46.400526 4706 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:37:46 crc kubenswrapper[4706]: I1125 11:37:46.400544 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:37:46 crc kubenswrapper[4706]: I1125 11:37:46.400576 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:37:46Z","lastTransitionTime":"2025-11-25T11:37:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 11:37:46 crc kubenswrapper[4706]: I1125 11:37:46.419326 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-s47nr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9912058e-28f5-4cec-9eeb-03e37e0dc5c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:37:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:37:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8831e77983548cfffd56f81ff9f25b90d70dfb71b47
b545af370b0a813fa19a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d03353478b53d9441951702b66365bb3a08ad9c509347472bbb31049851435a4\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-25T11:37:43Z\\\",\\\"message\\\":\\\"2025-11-25T11:36:57+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_64de4bb2-4e36-445e-91b1-9f500f3480d1\\\\n2025-11-25T11:36:57+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_64de4bb2-4e36-445e-91b1-9f500f3480d1 to /host/opt/cni/bin/\\\\n2025-11-25T11:36:58Z [verbose] multus-daemon started\\\\n2025-11-25T11:36:58Z [verbose] Readiness Indicator file check\\\\n2025-11-25T11:37:43Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T11:36:55Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:37:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wfqx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\
\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T11:36:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-s47nr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:37:46Z is after 2025-08-24T17:21:41Z" Nov 25 11:37:46 crc kubenswrapper[4706]: I1125 11:37:46.441012 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-q9rpr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f1218bae-4153-4490-8847-ab2d07ca0ab6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:53Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:53Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da5cea02464a703174faaa2a8a7dc6ba3c26bca96be0219f7304d81aba5be54e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b55sf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e92e9ade6889e5400b3c3ddff066aa544d425cf0637b75071678b8c63f8e35f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b55sf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca28080773ed8c026159b2309297e1c8ccd7cf79c4c19e3a62d89bc5a95851fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b55sf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://86d79d5837993b0bfb40c7114fd69f45a9bfd2e956b5b0fe062706e920fecd48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:55Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b55sf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7df3bf6c507e0fd5fb0f32a8785d67c96f47255fdc5d2aafb8838260ac334d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b55sf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://96aa7fcebdc88f01d2260f95d255244e28c30d422f954da2222a5b7c17d05b96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b55sf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1dfdc34e2de4aa061b93f1227bc4e3076853848aa13d8122c69d84f2a3c9bb5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://67aac9b1fc77bcf7bb71812ee95214930edbb62bf5efb82d5128c53fd392a346\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-25T11:37:16Z\\\",\\\"message\\\":\\\"e Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:04 10.217.0.4]} options:{GoMap:map[iface-id-ver:3b6479f0-333b-4a96-9adf-2099afdc2447 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:04 10.217.0.4]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == 
{61897e97-c771-4738-8709-09636387cb00}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI1125 11:37:16.268126 6342 model_client.go:398] Mutate operations generated as: [{Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:61897e97-c771-4738-8709-09636387cb00}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {7e8bb06a-06a5-45bc-a752-26a17d322811}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF1125 11:37:16.268101 6342 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error 
occurred\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T11:37:15Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:37:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"n
ame\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b55sf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://62c923d955013808a55d99cb73f4239900fc83a2f53e1e8cceff3e9bc5768188\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b55sf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56474d5374e1047078d38c60dcbd00f4495bcc0d651a9e75fa70d64e34b10baa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"r
eady\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://56474d5374e1047078d38c60dcbd00f4495bcc0d651a9e75fa70d64e34b10baa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T11:36:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T11:36:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b55sf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T11:36:53Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-q9rpr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:37:46Z is after 2025-08-24T17:21:41Z" Nov 25 11:37:46 crc kubenswrapper[4706]: I1125 11:37:46.461506 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"363ff191-6229-47e9-a7d0-1c72f21e7c61\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://71b496da1a81efbb50a84766e610a6b03e032a4e2cb5a71191395ffb85f6b1f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://83b1d9c60793e3e0b5943d7cccd50656df78c4655b84e12c8dd1ba7d99a7990d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab8621c83015577b9039ac2ba9ce46f8b29f66d77da31a02d179132d923741bd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4d0ce4e175dd8da8d15b26e60ced87ee11dc8079ce730cfbdce1b3f4f08b1d2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-11-25T11:36:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T11:36:32Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:37:46Z is after 2025-08-24T17:21:41Z" Nov 25 11:37:46 crc kubenswrapper[4706]: I1125 11:37:46.478115 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://23abd4bcc68d2a090882edb55d0e8569032affe5f4ebf05279e18ba3e9f9d8db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a068e34d29a7f39157ffd6e364ce643f5280f5184c13a281043247117d451364\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:37:46Z is after 2025-08-24T17:21:41Z" Nov 25 11:37:46 crc kubenswrapper[4706]: I1125 11:37:46.496237 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-cjmvf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"150b96fa-570a-4b32-a82a-3275127d5b51\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:37:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:37:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:37:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de18c07bf8490d7495947e9a271e3e7273b9ffdcc43afd2a0468394af0ae0b0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:37:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2ml6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f9f9981b5f064aa5b007f4b2a2ecdc7f783e1a33e73b9e8b157eccfc54e93ff6\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f9f9981b5f064aa5b007f4b2a2ecdc7f783e1a33e73b9e8b157eccfc54e93ff6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T11:36:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T11:36:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2ml6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e1e9db3e634932b935a1eb04923d02faf743f2831039edeba41d172ea6d8c52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e1e9db3e634932b935a1eb04923d02faf743f2831039edeba41d172ea6d8c52\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T11:36:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T11:36:57Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2ml6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0cee50b6983d9c650efbb5959311b6c33c2e0e2ff504fceadc8ff807f368c36e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0cee50b6983d9c650efbb5959311b6c33c2e0e2ff504fceadc8ff807f368c36e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T11:36:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T11:36:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2ml6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://29281
b46d740a7e527313a667c3896430eb51ba2c50c5e406fb94d8959dbe855\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://29281b46d740a7e527313a667c3896430eb51ba2c50c5e406fb94d8959dbe855\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T11:36:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T11:36:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2ml6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0ff2d1408b3b635ada726fc15a15472d3fd7c61e21ffe0379d137fdd543c436\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b0ff2d1408b3b635ada726fc15a15472d3fd7c61e21ffe0379d137fdd543c436\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T11:37:01Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2025-11-25T11:37:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2ml6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c3b94746fe10e0f9375491a41d10973d2576eb69f0883cef3ef0132efb0e8fc9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c3b94746fe10e0f9375491a41d10973d2576eb69f0883cef3ef0132efb0e8fc9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T11:37:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T11:37:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2ml6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T11:36:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-cjmvf\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:37:46Z is after 2025-08-24T17:21:41Z" Nov 25 11:37:46 crc kubenswrapper[4706]: I1125 11:37:46.503987 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:37:46 crc kubenswrapper[4706]: I1125 11:37:46.504064 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:37:46 crc kubenswrapper[4706]: I1125 11:37:46.504141 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:37:46 crc kubenswrapper[4706]: I1125 11:37:46.504164 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:37:46 crc kubenswrapper[4706]: I1125 11:37:46.504175 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:37:46Z","lastTransitionTime":"2025-11-25T11:37:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:37:46 crc kubenswrapper[4706]: I1125 11:37:46.513838 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-l99rd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"14d69237-a4b7-43ea-ac81-f165eb532669\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:37:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:37:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:37:07Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:37:07Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mmr9l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mmr9l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T11:37:07Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-l99rd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:37:46Z is after 2025-08-24T17:21:41Z" Nov 25 11:37:46 crc 
kubenswrapper[4706]: I1125 11:37:46.530204 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b156f76-9878-4527-95c5-27adfffbcd87\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:37:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:37:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b50a8135a692a512f05f3a902977e8b7a505d8346fb6e96c26ffc58d075e902c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7224a1c52df964a792e6197a4f97313b139ffbd6d65820d93e36561e817ddc20\\\",\\\"image\\\":\\\"
quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://78068d04cf52a463ca3595227c44918d360266c71afc97c1792e48b004bebe42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0299d89c1a2ea9c2a4bb46691aecd2d86618d3620e7406e1af57e1c03ce50b94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6d
e2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0299d89c1a2ea9c2a4bb46691aecd2d86618d3620e7406e1af57e1c03ce50b94\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T11:36:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T11:36:33Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T11:36:32Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:37:46Z is after 2025-08-24T17:21:41Z" Nov 25 11:37:46 crc kubenswrapper[4706]: I1125 11:37:46.554340 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"21277b4b-1e5d-4345-ba2a-39957194f021\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e336808761e1c6c5eaa04fd06cbb4d0c0384a2cbd3dfd4c1b3a877e7e0f0c82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cfaf9f13d49eb5c52817b0d082263791cc1dca82a23282452f1393dd693ca27a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://634b7b0df29329562f6ead9641186eee129945efc5a2d784ff6474d213b2baea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b3642576d5ecf314b809b90f8a76244e5ea54178f78729eb6521b09b7daa9c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b63b9c87fed8e56acef62af3c5b75cf637a058ada9dd8ef5afc317e99e12162\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://adfea6c3d1d62c9ce2656cae203bccf32ab19165305db3731e3db92915dd7d4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://adfea6c3d1d62c9ce2656cae203bccf32ab19165305db3731e3db92915dd7d4c\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2025-11-25T11:36:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T11:36:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://29e8fd847ab683471a692fda5b7c7d6105db11c5aecf09bb23080c58cd97c06a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://29e8fd847ab683471a692fda5b7c7d6105db11c5aecf09bb23080c58cd97c06a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T11:36:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T11:36:34Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://a4b1f85fd0291c239854093c0b26f0802291cbe4c4bab384fc69a5d21165343a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a4b1f85fd0291c239854093c0b26f0802291cbe4c4bab384fc69a5d21165343a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T11:36:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T11:36:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T11:36:32Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:37:46Z is after 2025-08-24T17:21:41Z" Nov 25 11:37:46 crc kubenswrapper[4706]: I1125 11:37:46.568923 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-nh9sc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7813e79d-885d-4cf1-ac27-039e998473b7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea634334242536d35bf36e9078539cad4658b161b61e6051d9bb6d8544e71f5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1
b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g9gvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T11:36:53Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-nh9sc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:37:46Z is after 2025-08-24T17:21:41Z" Nov 25 11:37:46 crc kubenswrapper[4706]: I1125 11:37:46.583346 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-qkkfz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dc09de93-57e8-4697-8ce8-70bfc1b693e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:37:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:37:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:37:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:37:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6daff2070c60f609fd06be9589e3cd8d304d131f7b9669c7be4b8e9178df8f8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:37:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hmrl8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://39eec3aac772cc9463505277d6b3f7cf2eb76
21e4add4f14e53110e3db8c4cdc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:37:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hmrl8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T11:37:06Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-qkkfz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:37:46Z is after 2025-08-24T17:21:41Z" Nov 25 11:37:46 crc kubenswrapper[4706]: I1125 11:37:46.598683 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ce0e2e75-834b-46fb-bc84-229e60f904b1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:37:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:37:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://86001c3abc077d36ed1fa0c37bb6163896fb9cde28b58affd2f67fb8a024165b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://24c326f147def477e6dd794576cbdc9aed69f799cc18984f475496748b05eb32\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c65af8b438f57256d8c22cb34f68922d628338e384ca97d694b0dbf2d41a5e27\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://db08dd21321e0e49c2bcec934b9c4ca65e93ed3eff5d3d110b0137d37ebe255e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://333951d9a31cf3e7c1e98d27f636e2425f87cd082a8a5acae66533a76f5ad206\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-25T11:36:51Z\\\"
,\\\"message\\\":\\\" shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI1125 11:36:51.292762 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI1125 11:36:51.292767 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI1125 11:36:51.292853 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI1125 11:36:51.292876 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI1125 11:36:51.293041 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-1230105117/tls.crt::/tmp/serving-cert-1230105117/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1764070595\\\\\\\\\\\\\\\" (2025-11-25 11:36:34 +0000 UTC to 2025-12-25 11:36:35 +0000 UTC (now=2025-11-25 11:36:51.29301304 +0000 UTC))\\\\\\\"\\\\nI1125 11:36:51.293171 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1230105117/tls.crt::/tmp/serving-cert-1230105117/tls.key\\\\\\\"\\\\nI1125 11:36:51.293210 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1764070605\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1764070605\\\\\\\\\\\\\\\" (2025-11-25 10:36:45 +0000 UTC to 2026-11-25 10:36:45 +0000 UTC (now=2025-11-25 11:36:51.293188774 +0000 
UTC))\\\\\\\"\\\\nI1125 11:36:51.293233 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI1125 11:36:51.293259 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI1125 11:36:51.293279 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nF1125 11:36:51.293378 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T11:36:34Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe85a38abd8df52ad0fbd3dd6b048b8c42390b6064d3601996727dadb3fcbe69\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:34Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea87e7399f4267877f5e967eb62bd500b646e88ee8ee20a71c3bf7c0941f02a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d3472
0243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ea87e7399f4267877f5e967eb62bd500b646e88ee8ee20a71c3bf7c0941f02a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T11:36:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T11:36:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T11:36:32Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:37:46Z is after 2025-08-24T17:21:41Z" Nov 25 11:37:46 crc kubenswrapper[4706]: I1125 11:37:46.607620 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:37:46 crc kubenswrapper[4706]: I1125 11:37:46.607693 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:37:46 crc kubenswrapper[4706]: I1125 11:37:46.607707 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:37:46 crc kubenswrapper[4706]: I1125 11:37:46.607732 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:37:46 crc kubenswrapper[4706]: I1125 11:37:46.607744 4706 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:37:46Z","lastTransitionTime":"2025-11-25T11:37:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 11:37:46 crc kubenswrapper[4706]: I1125 11:37:46.617545 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-dhfpm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0930887a-320c-4506-8c9c-f94d6d64516a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://736e37ff944f81ac9808ff8a76d36837aeabc76a4c08bbeba3f707616e1f0884\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-
rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g7sgt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://86f4bfd310c27ea3b77c2f58c91e153db5f1794871a3fbeb5711cc119aa81e38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g7sgt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T11:36:53Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-dhfpm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current 
time 2025-11-25T11:37:46Z is after 2025-08-24T17:21:41Z" Nov 25 11:37:46 crc kubenswrapper[4706]: I1125 11:37:46.631032 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad79bed891e80837fc120b01cb2b41a16493f2f5281c83a6bb489cc17c6da995\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:37:46Z is after 2025-08-24T17:21:41Z" Nov 25 11:37:46 crc kubenswrapper[4706]: I1125 11:37:46.643930 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-lpc7s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3ec2e656-a68d-4339-92d5-0c157f7f7783\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c3a1481dd8cb88b79d8addfbfd40caf18850769e4492c2af316105b7f6779f9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"res
tartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w54mf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T11:36:56Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lpc7s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:37:46Z is after 2025-08-24T17:21:41Z" Nov 25 11:37:46 crc kubenswrapper[4706]: I1125 11:37:46.710806 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:37:46 crc kubenswrapper[4706]: I1125 11:37:46.710887 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:37:46 crc kubenswrapper[4706]: I1125 11:37:46.710898 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:37:46 crc kubenswrapper[4706]: I1125 11:37:46.710935 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:37:46 crc kubenswrapper[4706]: I1125 11:37:46.710953 4706 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:37:46Z","lastTransitionTime":"2025-11-25T11:37:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 11:37:46 crc kubenswrapper[4706]: I1125 11:37:46.814600 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:37:46 crc kubenswrapper[4706]: I1125 11:37:46.814654 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:37:46 crc kubenswrapper[4706]: I1125 11:37:46.814667 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:37:46 crc kubenswrapper[4706]: I1125 11:37:46.814685 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:37:46 crc kubenswrapper[4706]: I1125 11:37:46.814698 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:37:46Z","lastTransitionTime":"2025-11-25T11:37:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:37:46 crc kubenswrapper[4706]: I1125 11:37:46.924019 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:37:46 crc kubenswrapper[4706]: I1125 11:37:46.924123 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:37:46 crc kubenswrapper[4706]: I1125 11:37:46.924137 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:37:46 crc kubenswrapper[4706]: I1125 11:37:46.924154 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:37:46 crc kubenswrapper[4706]: I1125 11:37:46.924167 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:37:46Z","lastTransitionTime":"2025-11-25T11:37:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:37:47 crc kubenswrapper[4706]: I1125 11:37:47.027201 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:37:47 crc kubenswrapper[4706]: I1125 11:37:47.027252 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:37:47 crc kubenswrapper[4706]: I1125 11:37:47.027262 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:37:47 crc kubenswrapper[4706]: I1125 11:37:47.027282 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:37:47 crc kubenswrapper[4706]: I1125 11:37:47.027293 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:37:47Z","lastTransitionTime":"2025-11-25T11:37:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:37:47 crc kubenswrapper[4706]: I1125 11:37:47.130518 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:37:47 crc kubenswrapper[4706]: I1125 11:37:47.130578 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:37:47 crc kubenswrapper[4706]: I1125 11:37:47.130589 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:37:47 crc kubenswrapper[4706]: I1125 11:37:47.130607 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:37:47 crc kubenswrapper[4706]: I1125 11:37:47.130619 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:37:47Z","lastTransitionTime":"2025-11-25T11:37:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:37:47 crc kubenswrapper[4706]: I1125 11:37:47.233672 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:37:47 crc kubenswrapper[4706]: I1125 11:37:47.233729 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:37:47 crc kubenswrapper[4706]: I1125 11:37:47.233740 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:37:47 crc kubenswrapper[4706]: I1125 11:37:47.233763 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:37:47 crc kubenswrapper[4706]: I1125 11:37:47.233777 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:37:47Z","lastTransitionTime":"2025-11-25T11:37:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:37:47 crc kubenswrapper[4706]: I1125 11:37:47.319021 4706 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-q9rpr_f1218bae-4153-4490-8847-ab2d07ca0ab6/ovnkube-controller/3.log" Nov 25 11:37:47 crc kubenswrapper[4706]: I1125 11:37:47.319542 4706 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-q9rpr_f1218bae-4153-4490-8847-ab2d07ca0ab6/ovnkube-controller/2.log" Nov 25 11:37:47 crc kubenswrapper[4706]: I1125 11:37:47.321938 4706 generic.go:334] "Generic (PLEG): container finished" podID="f1218bae-4153-4490-8847-ab2d07ca0ab6" containerID="a1dfdc34e2de4aa061b93f1227bc4e3076853848aa13d8122c69d84f2a3c9bb5" exitCode=1 Nov 25 11:37:47 crc kubenswrapper[4706]: I1125 11:37:47.321997 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-q9rpr" event={"ID":"f1218bae-4153-4490-8847-ab2d07ca0ab6","Type":"ContainerDied","Data":"a1dfdc34e2de4aa061b93f1227bc4e3076853848aa13d8122c69d84f2a3c9bb5"} Nov 25 11:37:47 crc kubenswrapper[4706]: I1125 11:37:47.322059 4706 scope.go:117] "RemoveContainer" containerID="67aac9b1fc77bcf7bb71812ee95214930edbb62bf5efb82d5128c53fd392a346" Nov 25 11:37:47 crc kubenswrapper[4706]: I1125 11:37:47.322621 4706 scope.go:117] "RemoveContainer" containerID="a1dfdc34e2de4aa061b93f1227bc4e3076853848aa13d8122c69d84f2a3c9bb5" Nov 25 11:37:47 crc kubenswrapper[4706]: E1125 11:37:47.322810 4706 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-q9rpr_openshift-ovn-kubernetes(f1218bae-4153-4490-8847-ab2d07ca0ab6)\"" pod="openshift-ovn-kubernetes/ovnkube-node-q9rpr" podUID="f1218bae-4153-4490-8847-ab2d07ca0ab6" Nov 25 11:37:47 crc kubenswrapper[4706]: I1125 11:37:47.336101 4706 kubelet_node_status.go:724] "Recording event 
message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:37:47 crc kubenswrapper[4706]: I1125 11:37:47.336144 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:37:47 crc kubenswrapper[4706]: I1125 11:37:47.336155 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:37:47 crc kubenswrapper[4706]: I1125 11:37:47.336176 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:37:47 crc kubenswrapper[4706]: I1125 11:37:47.336191 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:37:47Z","lastTransitionTime":"2025-11-25T11:37:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:37:47 crc kubenswrapper[4706]: I1125 11:37:47.338320 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://998291d5af3be798ff4e2f00d043f615e086fef44e541071bbaf781983955ce6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:37:47Z is after 2025-08-24T17:21:41Z" Nov 25 11:37:47 crc kubenswrapper[4706]: I1125 11:37:47.351401 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:51Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:37:47Z is after 2025-08-24T17:21:41Z" Nov 25 11:37:47 crc kubenswrapper[4706]: I1125 11:37:47.364125 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:51Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:37:47Z is after 2025-08-24T17:21:41Z" Nov 25 11:37:47 crc kubenswrapper[4706]: I1125 11:37:47.376747 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:51Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:37:47Z is after 2025-08-24T17:21:41Z" Nov 25 11:37:47 crc kubenswrapper[4706]: I1125 11:37:47.388874 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-s47nr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9912058e-28f5-4cec-9eeb-03e37e0dc5c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:37:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:37:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8831e77983548cfffd56f81ff9f25b90d70dfb71b47b545af370b0a813fa19a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d03353478b53d9441951702b66365bb3a08ad9c509347472bbb31049851435a4\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-25T11:37:43Z\\\",\\\"message\\\":\\\"2025-11-25T11:36:57+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_64de4bb2-4e36-445e-91b1-9f500f3480d1\\\\n2025-11-25T11:36:57+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_64de4bb2-4e36-445e-91b1-9f500f3480d1 to /host/opt/cni/bin/\\\\n2025-11-25T11:36:58Z [verbose] multus-daemon started\\\\n2025-11-25T11:36:58Z [verbose] 
Readiness Indicator file check\\\\n2025-11-25T11:37:43Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T11:36:55Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:37:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wfqx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T11:36:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-s47nr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:37:47Z is after 2025-08-24T17:21:41Z" Nov 25 11:37:47 crc kubenswrapper[4706]: I1125 11:37:47.408670 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-q9rpr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f1218bae-4153-4490-8847-ab2d07ca0ab6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:53Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:53Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da5cea02464a703174faaa2a8a7dc6ba3c26bca96be0219f7304d81aba5be54e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b55sf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e92e9ade6889e5400b3c3ddff066aa544d425cf0637b75071678b8c63f8e35f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b55sf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca28080773ed8c026159b2309297e1c8ccd7cf79c4c19e3a62d89bc5a95851fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b55sf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://86d79d5837993b0bfb40c7114fd69f45a9bfd2e956b5b0fe062706e920fecd48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b55sf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7df3bf6c507e0fd5fb0f32a8785d67c96f47255fdc5d2aafb8838260ac334d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b55sf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://96aa7fcebdc88f01d2260f95d255244e28c30d422f954da2222a5b7c17d05b96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b55sf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1dfdc34e2de4aa061b93f1227bc4e3076853848aa13d8122c69d84f2a3c9bb5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://67aac9b1fc77bcf7bb71812ee95214930edbb62bf5efb82d5128c53fd392a346\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-25T11:37:16Z\\\",\\\"message\\\":\\\"e Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:04 10.217.0.4]} 
options:{GoMap:map[iface-id-ver:3b6479f0-333b-4a96-9adf-2099afdc2447 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:04 10.217.0.4]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {61897e97-c771-4738-8709-09636387cb00}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI1125 11:37:16.268126 6342 model_client.go:398] Mutate operations generated as: [{Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:61897e97-c771-4738-8709-09636387cb00}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {7e8bb06a-06a5-45bc-a752-26a17d322811}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF1125 11:37:16.268101 6342 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error 
occurred\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T11:37:15Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a1dfdc34e2de4aa061b93f1227bc4e3076853848aa13d8122c69d84f2a3c9bb5\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-25T11:37:46Z\\\",\\\"message\\\":\\\"licy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.217.4.176],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,TrafficDistribution:nil,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},}\\\\nI1125 11:37:46.833085 6714 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-kube-scheduler-operator/metrics]} name:Service_openshift-kube-scheduler-operator/metrics_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.233:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {1dc899db-4498-4b7a-8437-861940b962e7}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF1125 11:37:46.833121 6714 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, 
handle\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T11:37:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\
\\"kube-api-access-b55sf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://62c923d955013808a55d99cb73f4239900fc83a2f53e1e8cceff3e9bc5768188\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b55sf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56474d5374e1047078d38c60dcbd00f4495bcc0d651a9e75fa70d64e34b10baa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://56474d5374e1047078d38c60dcbd00f4495bcc0d651a9e75fa70d64e34b10
baa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T11:36:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T11:36:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b55sf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T11:36:53Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-q9rpr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:37:47Z is after 2025-08-24T17:21:41Z" Nov 25 11:37:47 crc kubenswrapper[4706]: I1125 11:37:47.424159 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"363ff191-6229-47e9-a7d0-1c72f21e7c61\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://71b496da1a81efbb50a84766e610a6b03e032a4e2cb5a71191395ffb85f6b1f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://83b1d9c60793e3e0b5943d7cccd50656df78c4655b84e12c8dd1ba7d99a7990d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab8621c83015577b9039ac2ba9ce46f8b29f66d77da31a02d179132d923741bd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4d0ce4e175dd8da8d15b26e60ced87ee11dc8079ce730cfbdce1b3f4f08b1d2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-11-25T11:36:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T11:36:32Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:37:47Z is after 2025-08-24T17:21:41Z" Nov 25 11:37:47 crc kubenswrapper[4706]: I1125 11:37:47.438739 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:37:47 crc kubenswrapper[4706]: I1125 11:37:47.438781 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:37:47 crc kubenswrapper[4706]: I1125 11:37:47.438792 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:37:47 crc kubenswrapper[4706]: I1125 11:37:47.438811 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:37:47 crc kubenswrapper[4706]: I1125 11:37:47.438823 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:37:47Z","lastTransitionTime":"2025-11-25T11:37:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns 
error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 11:37:47 crc kubenswrapper[4706]: I1125 11:37:47.446865 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"21277b4b-1e5d-4345-ba2a-39957194f021\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e336808761e1c6c5eaa04fd06cbb4d0c0384a2cbd3dfd4c1b3a877e7e0f0c82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/sta
tic-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cfaf9f13d49eb5c52817b0d082263791cc1dca82a23282452f1393dd693ca27a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://634b7b0df29329562f6ead9641186eee129945efc5a2d784ff6474d213b2baea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b3642576d5ecf314b809b90f8a76244e5ea54178f78729eb6521b09b7daa9c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be
30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b63b9c87fed8e56acef62af3c5b75cf637a058ada9dd8ef5afc317e99e12162\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://adfea6c3d1d62c9ce2656cae203bccf32ab19165305db3731e3db92915dd7d4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\
",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://adfea6c3d1d62c9ce2656cae203bccf32ab19165305db3731e3db92915dd7d4c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T11:36:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T11:36:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://29e8fd847ab683471a692fda5b7c7d6105db11c5aecf09bb23080c58cd97c06a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://29e8fd847ab683471a692fda5b7c7d6105db11c5aecf09bb23080c58cd97c06a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T11:36:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T11:36:34Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://a4b1f85fd0291c239854093c0b26f0802291cbe4c4bab384fc69a5d21165343a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a4b1f85fd0291c239854093c0b26f0802291cbe4c4bab384fc69a5d21165343a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T11:36:
35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T11:36:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T11:36:32Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:37:47Z is after 2025-08-24T17:21:41Z" Nov 25 11:37:47 crc kubenswrapper[4706]: I1125 11:37:47.460253 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://23abd4bcc68d2a090882edb55d0e8569032affe5f4ebf05279e18ba3e9f9d8db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a068e34d29a7f39157ffd6e364ce643f5280f5184c13a281043247117d451364\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:37:47Z is after 2025-08-24T17:21:41Z" Nov 25 11:37:47 crc kubenswrapper[4706]: I1125 11:37:47.472512 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-cjmvf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"150b96fa-570a-4b32-a82a-3275127d5b51\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:37:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:37:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:37:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de18c07bf8490d7495947e9a271e3e7273b9ffdcc43afd2a0468394af0ae0b0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:37:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2ml6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f9f9981b5f064aa5b007f4b2a2ecdc7f783e1a33e73b9e8b157eccfc54e93ff6\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f9f9981b5f064aa5b007f4b2a2ecdc7f783e1a33e73b9e8b157eccfc54e93ff6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T11:36:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T11:36:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2ml6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e1e9db3e634932b935a1eb04923d02faf743f2831039edeba41d172ea6d8c52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e1e9db3e634932b935a1eb04923d02faf743f2831039edeba41d172ea6d8c52\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T11:36:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T11:36:57Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2ml6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0cee50b6983d9c650efbb5959311b6c33c2e0e2ff504fceadc8ff807f368c36e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0cee50b6983d9c650efbb5959311b6c33c2e0e2ff504fceadc8ff807f368c36e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T11:36:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T11:36:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2ml6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://29281
b46d740a7e527313a667c3896430eb51ba2c50c5e406fb94d8959dbe855\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://29281b46d740a7e527313a667c3896430eb51ba2c50c5e406fb94d8959dbe855\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T11:36:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T11:36:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2ml6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0ff2d1408b3b635ada726fc15a15472d3fd7c61e21ffe0379d137fdd543c436\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b0ff2d1408b3b635ada726fc15a15472d3fd7c61e21ffe0379d137fdd543c436\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T11:37:01Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2025-11-25T11:37:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2ml6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c3b94746fe10e0f9375491a41d10973d2576eb69f0883cef3ef0132efb0e8fc9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c3b94746fe10e0f9375491a41d10973d2576eb69f0883cef3ef0132efb0e8fc9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T11:37:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T11:37:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2ml6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T11:36:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-cjmvf\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:37:47Z is after 2025-08-24T17:21:41Z" Nov 25 11:37:47 crc kubenswrapper[4706]: I1125 11:37:47.482976 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-l99rd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"14d69237-a4b7-43ea-ac81-f165eb532669\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:37:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:37:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:37:07Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:37:07Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mmr9l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mmr9l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T11:37:07Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-l99rd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:37:47Z is after 2025-08-24T17:21:41Z" Nov 25 11:37:47 crc 
kubenswrapper[4706]: I1125 11:37:47.493958 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b156f76-9878-4527-95c5-27adfffbcd87\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:37:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:37:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b50a8135a692a512f05f3a902977e8b7a505d8346fb6e96c26ffc58d075e902c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7224a1c52df964a792e6197a4f97313b139ffbd6d65820d93e36561e817ddc20\\\",\\\"image\\\":\\\"
quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://78068d04cf52a463ca3595227c44918d360266c71afc97c1792e48b004bebe42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0299d89c1a2ea9c2a4bb46691aecd2d86618d3620e7406e1af57e1c03ce50b94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6d
e2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0299d89c1a2ea9c2a4bb46691aecd2d86618d3620e7406e1af57e1c03ce50b94\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T11:36:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T11:36:33Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T11:36:32Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:37:47Z is after 2025-08-24T17:21:41Z" Nov 25 11:37:47 crc kubenswrapper[4706]: I1125 11:37:47.505601 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-dhfpm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0930887a-320c-4506-8c9c-f94d6d64516a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://736e37ff944f81ac9808ff8a76d36837aeabc76a4c08bbeba3f707616e1f0884\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g7sgt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://86f4bfd310c27ea3b77c2f58c91e153db5f17948
71a3fbeb5711cc119aa81e38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g7sgt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T11:36:53Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-dhfpm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:37:47Z is after 2025-08-24T17:21:41Z" Nov 25 11:37:47 crc kubenswrapper[4706]: I1125 11:37:47.515039 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-nh9sc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7813e79d-885d-4cf1-ac27-039e998473b7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea634334242536d35bf36e9078539cad4658b161b61e6051d9bb6d8544e71f5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g9gvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T11:36:53Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-nh9sc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:37:47Z is after 2025-08-24T17:21:41Z" Nov 25 11:37:47 crc kubenswrapper[4706]: I1125 11:37:47.526083 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-qkkfz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dc09de93-57e8-4697-8ce8-70bfc1b693e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:37:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:37:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:37:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:37:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6daff2070c60f609fd06be9589e3cd8d304d131f7b9669c7be4b8e9178df8f8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d7
93426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:37:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hmrl8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://39eec3aac772cc9463505277d6b3f7cf2eb7621e4add4f14e53110e3db8c4cdc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:37:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hmrl8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T11:37:06Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-qkkfz\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:37:47Z is after 2025-08-24T17:21:41Z" Nov 25 11:37:47 crc kubenswrapper[4706]: I1125 11:37:47.541103 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:37:47 crc kubenswrapper[4706]: I1125 11:37:47.541157 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:37:47 crc kubenswrapper[4706]: I1125 11:37:47.541166 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:37:47 crc kubenswrapper[4706]: I1125 11:37:47.541186 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:37:47 crc kubenswrapper[4706]: I1125 11:37:47.541198 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:37:47Z","lastTransitionTime":"2025-11-25T11:37:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:37:47 crc kubenswrapper[4706]: I1125 11:37:47.542078 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ce0e2e75-834b-46fb-bc84-229e60f904b1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:37:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:37:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://86001c3abc077d36ed1fa0c37bb6163896fb9cde28b58affd2f67fb8a024165b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-d
ir\\\"}]},{\\\"containerID\\\":\\\"cri-o://24c326f147def477e6dd794576cbdc9aed69f799cc18984f475496748b05eb32\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c65af8b438f57256d8c22cb34f68922d628338e384ca97d694b0dbf2d41a5e27\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://db08dd21321e0e49c2bcec934b9c4ca65e93ed3eff5d3d110b0137d37ebe255e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945
c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://333951d9a31cf3e7c1e98d27f636e2425f87cd082a8a5acae66533a76f5ad206\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-25T11:36:51Z\\\",\\\"message\\\":\\\" shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI1125 11:36:51.292762 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI1125 11:36:51.292767 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI1125 11:36:51.292853 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI1125 11:36:51.292876 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI1125 11:36:51.293041 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-1230105117/tls.crt::/tmp/serving-cert-1230105117/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1764070595\\\\\\\\\\\\\\\" (2025-11-25 11:36:34 +0000 UTC to 2025-12-25 11:36:35 +0000 UTC (now=2025-11-25 11:36:51.29301304 +0000 UTC))\\\\\\\"\\\\nI1125 11:36:51.293171 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1230105117/tls.crt::/tmp/serving-cert-1230105117/tls.key\\\\\\\"\\\\nI1125 11:36:51.293210 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1764070605\\\\\\\\\\\\\\\" [serving] 
validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1764070605\\\\\\\\\\\\\\\" (2025-11-25 10:36:45 +0000 UTC to 2026-11-25 10:36:45 +0000 UTC (now=2025-11-25 11:36:51.293188774 +0000 UTC))\\\\\\\"\\\\nI1125 11:36:51.293233 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI1125 11:36:51.293259 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI1125 11:36:51.293279 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nF1125 11:36:51.293378 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T11:36:34Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe85a38abd8df52ad0fbd3dd6b048b8c42390b6064d3601996727dadb3fcbe69\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:34Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea87e7399f4267877f5e967eb62bd500b646e88ee8ee20a71c3bf7c0941f02a8\\\",\\\"image\\
\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ea87e7399f4267877f5e967eb62bd500b646e88ee8ee20a71c3bf7c0941f02a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T11:36:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T11:36:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T11:36:32Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:37:47Z is after 2025-08-24T17:21:41Z" Nov 25 11:37:47 crc kubenswrapper[4706]: I1125 11:37:47.553341 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-lpc7s" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3ec2e656-a68d-4339-92d5-0c157f7f7783\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c3a1481dd8cb88b79d8addfbfd40caf18850769e4492c2af316105b7f6779f9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w54mf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T11:36:56Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lpc7s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:37:47Z is after 2025-08-24T17:21:41Z" Nov 25 11:37:47 crc kubenswrapper[4706]: I1125 11:37:47.565903 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad79bed891e80837fc120b01cb2b41a16493f2f5281c83a6bb489cc17c6da995\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T1
1:36:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:37:47Z is after 2025-08-24T17:21:41Z" Nov 25 11:37:47 crc kubenswrapper[4706]: I1125 11:37:47.643396 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:37:47 crc kubenswrapper[4706]: I1125 11:37:47.643433 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:37:47 crc kubenswrapper[4706]: I1125 11:37:47.643442 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:37:47 crc kubenswrapper[4706]: I1125 11:37:47.643457 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:37:47 crc kubenswrapper[4706]: I1125 11:37:47.643467 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:37:47Z","lastTransitionTime":"2025-11-25T11:37:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:37:47 crc kubenswrapper[4706]: I1125 11:37:47.747111 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:37:47 crc kubenswrapper[4706]: I1125 11:37:47.747160 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:37:47 crc kubenswrapper[4706]: I1125 11:37:47.747174 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:37:47 crc kubenswrapper[4706]: I1125 11:37:47.747192 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:37:47 crc kubenswrapper[4706]: I1125 11:37:47.747202 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:37:47Z","lastTransitionTime":"2025-11-25T11:37:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:37:47 crc kubenswrapper[4706]: I1125 11:37:47.850503 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:37:47 crc kubenswrapper[4706]: I1125 11:37:47.850551 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:37:47 crc kubenswrapper[4706]: I1125 11:37:47.850561 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:37:47 crc kubenswrapper[4706]: I1125 11:37:47.850582 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:37:47 crc kubenswrapper[4706]: I1125 11:37:47.850593 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:37:47Z","lastTransitionTime":"2025-11-25T11:37:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 11:37:47 crc kubenswrapper[4706]: I1125 11:37:47.921968 4706 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-l99rd" Nov 25 11:37:47 crc kubenswrapper[4706]: I1125 11:37:47.922032 4706 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 11:37:47 crc kubenswrapper[4706]: I1125 11:37:47.921989 4706 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 11:37:47 crc kubenswrapper[4706]: I1125 11:37:47.921989 4706 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 11:37:47 crc kubenswrapper[4706]: E1125 11:37:47.922179 4706 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-l99rd" podUID="14d69237-a4b7-43ea-ac81-f165eb532669" Nov 25 11:37:47 crc kubenswrapper[4706]: E1125 11:37:47.922288 4706 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 25 11:37:47 crc kubenswrapper[4706]: E1125 11:37:47.922427 4706 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 25 11:37:47 crc kubenswrapper[4706]: E1125 11:37:47.922508 4706 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 25 11:37:47 crc kubenswrapper[4706]: I1125 11:37:47.952740 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:37:47 crc kubenswrapper[4706]: I1125 11:37:47.952782 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:37:47 crc kubenswrapper[4706]: I1125 11:37:47.952792 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:37:47 crc kubenswrapper[4706]: I1125 11:37:47.952810 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:37:47 crc kubenswrapper[4706]: I1125 11:37:47.952823 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:37:47Z","lastTransitionTime":"2025-11-25T11:37:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:37:48 crc kubenswrapper[4706]: I1125 11:37:48.055803 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:37:48 crc kubenswrapper[4706]: I1125 11:37:48.055864 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:37:48 crc kubenswrapper[4706]: I1125 11:37:48.055908 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:37:48 crc kubenswrapper[4706]: I1125 11:37:48.055934 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:37:48 crc kubenswrapper[4706]: I1125 11:37:48.055952 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:37:48Z","lastTransitionTime":"2025-11-25T11:37:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:37:48 crc kubenswrapper[4706]: I1125 11:37:48.158414 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:37:48 crc kubenswrapper[4706]: I1125 11:37:48.158455 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:37:48 crc kubenswrapper[4706]: I1125 11:37:48.158465 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:37:48 crc kubenswrapper[4706]: I1125 11:37:48.158485 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:37:48 crc kubenswrapper[4706]: I1125 11:37:48.158501 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:37:48Z","lastTransitionTime":"2025-11-25T11:37:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:37:48 crc kubenswrapper[4706]: I1125 11:37:48.261967 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:37:48 crc kubenswrapper[4706]: I1125 11:37:48.262023 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:37:48 crc kubenswrapper[4706]: I1125 11:37:48.262033 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:37:48 crc kubenswrapper[4706]: I1125 11:37:48.262057 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:37:48 crc kubenswrapper[4706]: I1125 11:37:48.262070 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:37:48Z","lastTransitionTime":"2025-11-25T11:37:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:37:48 crc kubenswrapper[4706]: I1125 11:37:48.326764 4706 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-q9rpr_f1218bae-4153-4490-8847-ab2d07ca0ab6/ovnkube-controller/3.log" Nov 25 11:37:48 crc kubenswrapper[4706]: I1125 11:37:48.331041 4706 scope.go:117] "RemoveContainer" containerID="a1dfdc34e2de4aa061b93f1227bc4e3076853848aa13d8122c69d84f2a3c9bb5" Nov 25 11:37:48 crc kubenswrapper[4706]: E1125 11:37:48.331465 4706 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-q9rpr_openshift-ovn-kubernetes(f1218bae-4153-4490-8847-ab2d07ca0ab6)\"" pod="openshift-ovn-kubernetes/ovnkube-node-q9rpr" podUID="f1218bae-4153-4490-8847-ab2d07ca0ab6" Nov 25 11:37:48 crc kubenswrapper[4706]: I1125 11:37:48.347790 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://998291d5af3be798ff4e2f00d043f615e086fef44e541071bbaf781983955ce6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-11-25T11:37:48Z is after 2025-08-24T17:21:41Z" Nov 25 11:37:48 crc kubenswrapper[4706]: I1125 11:37:48.361894 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:51Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:37:48Z is after 2025-08-24T17:21:41Z" Nov 25 11:37:48 crc kubenswrapper[4706]: I1125 11:37:48.365119 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:37:48 crc kubenswrapper[4706]: I1125 11:37:48.365165 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:37:48 crc kubenswrapper[4706]: I1125 11:37:48.365176 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:37:48 crc kubenswrapper[4706]: I1125 11:37:48.365195 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:37:48 crc kubenswrapper[4706]: I1125 11:37:48.365206 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:37:48Z","lastTransitionTime":"2025-11-25T11:37:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 11:37:48 crc kubenswrapper[4706]: I1125 11:37:48.376856 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:37:48Z is after 2025-08-24T17:21:41Z" Nov 25 11:37:48 crc kubenswrapper[4706]: I1125 11:37:48.392100 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:51Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:37:48Z is after 2025-08-24T17:21:41Z" Nov 25 11:37:48 crc kubenswrapper[4706]: I1125 11:37:48.405385 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-s47nr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9912058e-28f5-4cec-9eeb-03e37e0dc5c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:37:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:37:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8831e77983548cfffd56f81ff9f25b90d70dfb71b47b545af370b0a813fa19a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d03353478b53d9441951702b66365bb3a08ad9c509347472bbb31049851435a4\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-25T11:37:43Z\\\",\\\"message\\\":\\\"2025-11-25T11:36:57+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_64de4bb2-4e36-445e-91b1-9f500f3480d1\\\\n2025-11-25T11:36:57+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_64de4bb2-4e36-445e-91b1-9f500f3480d1 to /host/opt/cni/bin/\\\\n2025-11-25T11:36:58Z [verbose] multus-daemon started\\\\n2025-11-25T11:36:58Z [verbose] 
Readiness Indicator file check\\\\n2025-11-25T11:37:43Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T11:36:55Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:37:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wfqx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T11:36:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-s47nr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:37:48Z is after 2025-08-24T17:21:41Z" Nov 25 11:37:48 crc kubenswrapper[4706]: I1125 11:37:48.423753 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-q9rpr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f1218bae-4153-4490-8847-ab2d07ca0ab6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:53Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:53Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da5cea02464a703174faaa2a8a7dc6ba3c26bca96be0219f7304d81aba5be54e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b55sf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e92e9ade6889e5400b3c3ddff066aa544d425cf0637b75071678b8c63f8e35f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b55sf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca28080773ed8c026159b2309297e1c8ccd7cf79c4c19e3a62d89bc5a95851fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b55sf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://86d79d5837993b0bfb40c7114fd69f45a9bfd2e956b5b0fe062706e920fecd48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b55sf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7df3bf6c507e0fd5fb0f32a8785d67c96f47255fdc5d2aafb8838260ac334d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b55sf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://96aa7fcebdc88f01d2260f95d255244e28c30d422f954da2222a5b7c17d05b96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b55sf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1dfdc34e2de4aa061b93f1227bc4e3076853848aa13d8122c69d84f2a3c9bb5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a1dfdc34e2de4aa061b93f1227bc4e3076853848aa13d8122c69d84f2a3c9bb5\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-25T11:37:46Z\\\",\\\"message\\\":\\\"licy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFam
ilyPolicy:*SingleStack,ClusterIPs:[10.217.4.176],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,TrafficDistribution:nil,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},}\\\\nI1125 11:37:46.833085 6714 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-kube-scheduler-operator/metrics]} name:Service_openshift-kube-scheduler-operator/metrics_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.233:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {1dc899db-4498-4b7a-8437-861940b962e7}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF1125 11:37:46.833121 6714 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handle\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T11:37:46Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-q9rpr_openshift-ovn-kubernetes(f1218bae-4153-4490-8847-ab2d07ca0ab6)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b55sf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://62c923d955013808a55d99cb73f4239900fc83a2f53e1e8cceff3e9bc5768188\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b55sf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56474d5374e1047078d38c60dcbd00f4495bcc0d651a9e75fa70d64e34b10baa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://56474d5374e1047078
d38c60dcbd00f4495bcc0d651a9e75fa70d64e34b10baa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T11:36:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T11:36:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b55sf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T11:36:53Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-q9rpr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:37:48Z is after 2025-08-24T17:21:41Z" Nov 25 11:37:48 crc kubenswrapper[4706]: I1125 11:37:48.436962 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"363ff191-6229-47e9-a7d0-1c72f21e7c61\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://71b496da1a81efbb50a84766e610a6b03e032a4e2cb5a71191395ffb85f6b1f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://83b1d9c60793e3e0b5943d7cccd50656df78c4655b84e12c8dd1ba7d99a7990d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab8621c83015577b9039ac2ba9ce46f8b29f66d77da31a02d179132d923741bd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4d0ce4e175dd8da8d15b26e60ced87ee11dc8079ce730cfbdce1b3f4f08b1d2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-11-25T11:36:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T11:36:32Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:37:48Z is after 2025-08-24T17:21:41Z" Nov 25 11:37:48 crc kubenswrapper[4706]: I1125 11:37:48.454504 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"21277b4b-1e5d-4345-ba2a-39957194f021\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e336808761e1c6c5eaa04fd06cbb4d0c0384a2cbd3dfd4c1b3a877e7e0f0c82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cfaf9f13d49eb5c52817b0d082263791cc1dca82a23282452f1393dd693ca27a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://634b7b0df29329562f6ead9641186eee129945efc5a2d784ff6474d213b2baea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b3642576d5ecf314b809b90f8a76244e5ea54178f78729eb6521b09b7daa9c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b63b9c87fed8e56acef62af3c5b75cf637a058ada9dd8ef5afc317e99e12162\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://adfea6c3d1d62c9ce2656cae203bccf32ab19165305db3731e3db92915dd7d4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://adfea6c3d1d62c9ce2656cae203bccf32ab19165305db3731e3db92915dd7d4c\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2025-11-25T11:36:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T11:36:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://29e8fd847ab683471a692fda5b7c7d6105db11c5aecf09bb23080c58cd97c06a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://29e8fd847ab683471a692fda5b7c7d6105db11c5aecf09bb23080c58cd97c06a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T11:36:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T11:36:34Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://a4b1f85fd0291c239854093c0b26f0802291cbe4c4bab384fc69a5d21165343a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a4b1f85fd0291c239854093c0b26f0802291cbe4c4bab384fc69a5d21165343a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T11:36:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T11:36:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T11:36:32Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:37:48Z is after 2025-08-24T17:21:41Z" Nov 25 11:37:48 crc kubenswrapper[4706]: I1125 11:37:48.466450 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://23abd4bcc68d2a090882edb55d0e8569032affe5f4ebf05279e18ba3e9f9d8db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount
\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a068e34d29a7f39157ffd6e364ce643f5280f5184c13a281043247117d451364\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:37:48Z is after 2025-08-24T17:21:41Z" Nov 25 11:37:48 crc kubenswrapper[4706]: I1125 11:37:48.467322 4706 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:37:48 crc kubenswrapper[4706]: I1125 11:37:48.467354 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:37:48 crc kubenswrapper[4706]: I1125 11:37:48.467365 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:37:48 crc kubenswrapper[4706]: I1125 11:37:48.467384 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:37:48 crc kubenswrapper[4706]: I1125 11:37:48.467396 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:37:48Z","lastTransitionTime":"2025-11-25T11:37:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:37:48 crc kubenswrapper[4706]: I1125 11:37:48.482580 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-cjmvf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"150b96fa-570a-4b32-a82a-3275127d5b51\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:37:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:37:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:37:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de18c07bf8490d7495947e9a271e3e7273b9ffdcc43afd2a0468394af0ae0b0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:37:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2ml6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f9f9981b5f064aa5b007f4b2a2ecdc7f783e1a33e73b9e8b157eccfc54e93ff6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f9f9981b5f064aa5b007f4b2a2ecdc7f783e1a33e73b9e8b157eccfc54e93ff6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T11:36:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T11:36:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2ml6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e1e9db3e634932b935a1eb04923d02faf743f2831039edeba41d172ea6d8c52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"
containerID\\\":\\\"cri-o://4e1e9db3e634932b935a1eb04923d02faf743f2831039edeba41d172ea6d8c52\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T11:36:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T11:36:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2ml6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0cee50b6983d9c650efbb5959311b6c33c2e0e2ff504fceadc8ff807f368c36e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0cee50b6983d9c650efbb5959311b6c33c2e0e2ff504fceadc8ff807f368c36e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T11:36:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T11:36:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveRead
Only\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2ml6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://29281b46d740a7e527313a667c3896430eb51ba2c50c5e406fb94d8959dbe855\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://29281b46d740a7e527313a667c3896430eb51ba2c50c5e406fb94d8959dbe855\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T11:36:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T11:36:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2ml6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0ff2d1408b3b635ada726fc15a15472d3fd7c61e21ffe0379d137fdd543c436\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\"
:0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b0ff2d1408b3b635ada726fc15a15472d3fd7c61e21ffe0379d137fdd543c436\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T11:37:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T11:37:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2ml6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c3b94746fe10e0f9375491a41d10973d2576eb69f0883cef3ef0132efb0e8fc9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c3b94746fe10e0f9375491a41d10973d2576eb69f0883cef3ef0132efb0e8fc9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T11:37:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T11:37:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2ml6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\"
,\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T11:36:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-cjmvf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:37:48Z is after 2025-08-24T17:21:41Z" Nov 25 11:37:48 crc kubenswrapper[4706]: I1125 11:37:48.495662 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-l99rd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"14d69237-a4b7-43ea-ac81-f165eb532669\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:37:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:37:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:37:07Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:37:07Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mmr9l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mmr9l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T11:37:07Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-l99rd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:37:48Z is after 2025-08-24T17:21:41Z" Nov 25 11:37:48 crc 
kubenswrapper[4706]: I1125 11:37:48.511796 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b156f76-9878-4527-95c5-27adfffbcd87\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:37:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:37:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b50a8135a692a512f05f3a902977e8b7a505d8346fb6e96c26ffc58d075e902c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7224a1c52df964a792e6197a4f97313b139ffbd6d65820d93e36561e817ddc20\\\",\\\"image\\\":\\\"
quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://78068d04cf52a463ca3595227c44918d360266c71afc97c1792e48b004bebe42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0299d89c1a2ea9c2a4bb46691aecd2d86618d3620e7406e1af57e1c03ce50b94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6d
e2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0299d89c1a2ea9c2a4bb46691aecd2d86618d3620e7406e1af57e1c03ce50b94\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T11:36:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T11:36:33Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T11:36:32Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:37:48Z is after 2025-08-24T17:21:41Z" Nov 25 11:37:48 crc kubenswrapper[4706]: I1125 11:37:48.525764 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-dhfpm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0930887a-320c-4506-8c9c-f94d6d64516a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://736e37ff944f81ac9808ff8a76d36837aeabc76a4c08bbeba3f707616e1f0884\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g7sgt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://86f4bfd310c27ea3b77c2f58c91e153db5f17948
71a3fbeb5711cc119aa81e38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g7sgt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T11:36:53Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-dhfpm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:37:48Z is after 2025-08-24T17:21:41Z" Nov 25 11:37:48 crc kubenswrapper[4706]: I1125 11:37:48.539028 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-nh9sc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7813e79d-885d-4cf1-ac27-039e998473b7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea634334242536d35bf36e9078539cad4658b161b61e6051d9bb6d8544e71f5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g9gvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T11:36:53Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-nh9sc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:37:48Z is after 2025-08-24T17:21:41Z" Nov 25 11:37:48 crc kubenswrapper[4706]: I1125 11:37:48.555694 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-qkkfz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dc09de93-57e8-4697-8ce8-70bfc1b693e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:37:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:37:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:37:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:37:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6daff2070c60f609fd06be9589e3cd8d304d131f7b9669c7be4b8e9178df8f8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d7
93426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:37:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hmrl8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://39eec3aac772cc9463505277d6b3f7cf2eb7621e4add4f14e53110e3db8c4cdc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:37:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hmrl8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T11:37:06Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-qkkfz\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:37:48Z is after 2025-08-24T17:21:41Z" Nov 25 11:37:48 crc kubenswrapper[4706]: I1125 11:37:48.570965 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:37:48 crc kubenswrapper[4706]: I1125 11:37:48.571035 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:37:48 crc kubenswrapper[4706]: I1125 11:37:48.571048 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:37:48 crc kubenswrapper[4706]: I1125 11:37:48.571071 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:37:48 crc kubenswrapper[4706]: I1125 11:37:48.571102 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:37:48Z","lastTransitionTime":"2025-11-25T11:37:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:37:48 crc kubenswrapper[4706]: I1125 11:37:48.574215 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ce0e2e75-834b-46fb-bc84-229e60f904b1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:37:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:37:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://86001c3abc077d36ed1fa0c37bb6163896fb9cde28b58affd2f67fb8a024165b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-d
ir\\\"}]},{\\\"containerID\\\":\\\"cri-o://24c326f147def477e6dd794576cbdc9aed69f799cc18984f475496748b05eb32\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c65af8b438f57256d8c22cb34f68922d628338e384ca97d694b0dbf2d41a5e27\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://db08dd21321e0e49c2bcec934b9c4ca65e93ed3eff5d3d110b0137d37ebe255e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945
c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://333951d9a31cf3e7c1e98d27f636e2425f87cd082a8a5acae66533a76f5ad206\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-25T11:36:51Z\\\",\\\"message\\\":\\\" shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI1125 11:36:51.292762 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI1125 11:36:51.292767 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI1125 11:36:51.292853 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI1125 11:36:51.292876 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI1125 11:36:51.293041 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-1230105117/tls.crt::/tmp/serving-cert-1230105117/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1764070595\\\\\\\\\\\\\\\" (2025-11-25 11:36:34 +0000 UTC to 2025-12-25 11:36:35 +0000 UTC (now=2025-11-25 11:36:51.29301304 +0000 UTC))\\\\\\\"\\\\nI1125 11:36:51.293171 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1230105117/tls.crt::/tmp/serving-cert-1230105117/tls.key\\\\\\\"\\\\nI1125 11:36:51.293210 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1764070605\\\\\\\\\\\\\\\" [serving] 
validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1764070605\\\\\\\\\\\\\\\" (2025-11-25 10:36:45 +0000 UTC to 2026-11-25 10:36:45 +0000 UTC (now=2025-11-25 11:36:51.293188774 +0000 UTC))\\\\\\\"\\\\nI1125 11:36:51.293233 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI1125 11:36:51.293259 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI1125 11:36:51.293279 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nF1125 11:36:51.293378 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T11:36:34Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe85a38abd8df52ad0fbd3dd6b048b8c42390b6064d3601996727dadb3fcbe69\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:34Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea87e7399f4267877f5e967eb62bd500b646e88ee8ee20a71c3bf7c0941f02a8\\\",\\\"image\\
\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ea87e7399f4267877f5e967eb62bd500b646e88ee8ee20a71c3bf7c0941f02a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T11:36:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T11:36:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T11:36:32Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:37:48Z is after 2025-08-24T17:21:41Z" Nov 25 11:37:48 crc kubenswrapper[4706]: I1125 11:37:48.586906 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-lpc7s" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3ec2e656-a68d-4339-92d5-0c157f7f7783\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c3a1481dd8cb88b79d8addfbfd40caf18850769e4492c2af316105b7f6779f9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w54mf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T11:36:56Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lpc7s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:37:48Z is after 2025-08-24T17:21:41Z" Nov 25 11:37:48 crc kubenswrapper[4706]: I1125 11:37:48.600971 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad79bed891e80837fc120b01cb2b41a16493f2f5281c83a6bb489cc17c6da995\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T1
1:36:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:37:48Z is after 2025-08-24T17:21:41Z" Nov 25 11:37:48 crc kubenswrapper[4706]: I1125 11:37:48.673531 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:37:48 crc kubenswrapper[4706]: I1125 11:37:48.673574 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:37:48 crc kubenswrapper[4706]: I1125 11:37:48.673587 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:37:48 crc kubenswrapper[4706]: I1125 11:37:48.673607 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:37:48 crc kubenswrapper[4706]: I1125 11:37:48.673619 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:37:48Z","lastTransitionTime":"2025-11-25T11:37:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:37:48 crc kubenswrapper[4706]: I1125 11:37:48.775956 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:37:48 crc kubenswrapper[4706]: I1125 11:37:48.776010 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:37:48 crc kubenswrapper[4706]: I1125 11:37:48.776020 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:37:48 crc kubenswrapper[4706]: I1125 11:37:48.776036 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:37:48 crc kubenswrapper[4706]: I1125 11:37:48.776048 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:37:48Z","lastTransitionTime":"2025-11-25T11:37:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:37:48 crc kubenswrapper[4706]: I1125 11:37:48.791006 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:37:48 crc kubenswrapper[4706]: I1125 11:37:48.791061 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:37:48 crc kubenswrapper[4706]: I1125 11:37:48.791073 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:37:48 crc kubenswrapper[4706]: I1125 11:37:48.791095 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:37:48 crc kubenswrapper[4706]: I1125 11:37:48.791110 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:37:48Z","lastTransitionTime":"2025-11-25T11:37:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:37:48 crc kubenswrapper[4706]: E1125 11:37:48.807445 4706 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T11:37:48Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T11:37:48Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T11:37:48Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T11:37:48Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T11:37:48Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T11:37:48Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T11:37:48Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T11:37:48Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"30198dc8-e58c-4847-a541-041da1924c5c\\\",\\\"systemUUID\\\":\\\"7dac62ec-3979-4862-b1af-b63212907795\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:37:48Z is after 2025-08-24T17:21:41Z" Nov 25 11:37:48 crc kubenswrapper[4706]: I1125 11:37:48.813474 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:37:48 crc kubenswrapper[4706]: I1125 11:37:48.813822 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:37:48 crc kubenswrapper[4706]: I1125 11:37:48.813938 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:37:48 crc kubenswrapper[4706]: I1125 11:37:48.814050 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:37:48 crc kubenswrapper[4706]: I1125 11:37:48.814113 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:37:48Z","lastTransitionTime":"2025-11-25T11:37:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:37:48 crc kubenswrapper[4706]: E1125 11:37:48.836808 4706 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T11:37:48Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T11:37:48Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T11:37:48Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T11:37:48Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T11:37:48Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T11:37:48Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T11:37:48Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T11:37:48Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"30198dc8-e58c-4847-a541-041da1924c5c\\\",\\\"systemUUID\\\":\\\"7dac62ec-3979-4862-b1af-b63212907795\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:37:48Z is after 2025-08-24T17:21:41Z" Nov 25 11:37:48 crc kubenswrapper[4706]: I1125 11:37:48.842663 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:37:48 crc kubenswrapper[4706]: I1125 11:37:48.842950 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:37:48 crc kubenswrapper[4706]: I1125 11:37:48.843128 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:37:48 crc kubenswrapper[4706]: I1125 11:37:48.843366 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:37:48 crc kubenswrapper[4706]: I1125 11:37:48.843551 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:37:48Z","lastTransitionTime":"2025-11-25T11:37:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:37:48 crc kubenswrapper[4706]: E1125 11:37:48.858536 4706 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T11:37:48Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T11:37:48Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T11:37:48Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T11:37:48Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T11:37:48Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T11:37:48Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T11:37:48Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T11:37:48Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"30198dc8-e58c-4847-a541-041da1924c5c\\\",\\\"systemUUID\\\":\\\"7dac62ec-3979-4862-b1af-b63212907795\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:37:48Z is after 2025-08-24T17:21:41Z" Nov 25 11:37:48 crc kubenswrapper[4706]: I1125 11:37:48.863693 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:37:48 crc kubenswrapper[4706]: I1125 11:37:48.863742 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:37:48 crc kubenswrapper[4706]: I1125 11:37:48.863754 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:37:48 crc kubenswrapper[4706]: I1125 11:37:48.863770 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:37:48 crc kubenswrapper[4706]: I1125 11:37:48.863806 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:37:48Z","lastTransitionTime":"2025-11-25T11:37:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:37:48 crc kubenswrapper[4706]: E1125 11:37:48.878057 4706 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T11:37:48Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T11:37:48Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T11:37:48Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T11:37:48Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T11:37:48Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T11:37:48Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T11:37:48Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T11:37:48Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"30198dc8-e58c-4847-a541-041da1924c5c\\\",\\\"systemUUID\\\":\\\"7dac62ec-3979-4862-b1af-b63212907795\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:37:48Z is after 2025-08-24T17:21:41Z" Nov 25 11:37:48 crc kubenswrapper[4706]: I1125 11:37:48.882296 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:37:48 crc kubenswrapper[4706]: I1125 11:37:48.882508 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:37:48 crc kubenswrapper[4706]: I1125 11:37:48.882605 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:37:48 crc kubenswrapper[4706]: I1125 11:37:48.882690 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:37:48 crc kubenswrapper[4706]: I1125 11:37:48.882777 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:37:48Z","lastTransitionTime":"2025-11-25T11:37:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:37:48 crc kubenswrapper[4706]: E1125 11:37:48.897129 4706 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status [... status patch payload identical to the preceding attempt, elided ...] for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:37:48Z is after 2025-08-24T17:21:41Z" Nov 25 11:37:48 crc kubenswrapper[4706]: E1125 11:37:48.897257 4706 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Nov 25 11:37:48 crc kubenswrapper[4706]: I1125 11:37:48.899283 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:37:48 crc kubenswrapper[4706]: I1125 11:37:48.899460 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:37:48 crc kubenswrapper[4706]: I1125 11:37:48.899519 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:37:48 crc kubenswrapper[4706]: I1125 11:37:48.899581 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:37:48 crc kubenswrapper[4706]: I1125 11:37:48.899637 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:37:48Z","lastTransitionTime":"2025-11-25T11:37:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:37:49 crc kubenswrapper[4706]: I1125 11:37:49.002678 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:37:49 crc kubenswrapper[4706]: I1125 11:37:49.002975 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:37:49 crc kubenswrapper[4706]: I1125 11:37:49.003103 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:37:49 crc kubenswrapper[4706]: I1125 11:37:49.003232 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:37:49 crc kubenswrapper[4706]: I1125 11:37:49.003381 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:37:49Z","lastTransitionTime":"2025-11-25T11:37:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:37:49 crc kubenswrapper[4706]: I1125 11:37:49.106614 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:37:49 crc kubenswrapper[4706]: I1125 11:37:49.106669 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:37:49 crc kubenswrapper[4706]: I1125 11:37:49.106683 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:37:49 crc kubenswrapper[4706]: I1125 11:37:49.106703 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:37:49 crc kubenswrapper[4706]: I1125 11:37:49.106714 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:37:49Z","lastTransitionTime":"2025-11-25T11:37:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:37:49 crc kubenswrapper[4706]: I1125 11:37:49.209335 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:37:49 crc kubenswrapper[4706]: I1125 11:37:49.209620 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:37:49 crc kubenswrapper[4706]: I1125 11:37:49.209745 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:37:49 crc kubenswrapper[4706]: I1125 11:37:49.209832 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:37:49 crc kubenswrapper[4706]: I1125 11:37:49.209909 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:37:49Z","lastTransitionTime":"2025-11-25T11:37:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:37:49 crc kubenswrapper[4706]: I1125 11:37:49.312263 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:37:49 crc kubenswrapper[4706]: I1125 11:37:49.312347 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:37:49 crc kubenswrapper[4706]: I1125 11:37:49.312364 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:37:49 crc kubenswrapper[4706]: I1125 11:37:49.312386 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:37:49 crc kubenswrapper[4706]: I1125 11:37:49.312400 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:37:49Z","lastTransitionTime":"2025-11-25T11:37:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:37:49 crc kubenswrapper[4706]: I1125 11:37:49.415466 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:37:49 crc kubenswrapper[4706]: I1125 11:37:49.415517 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:37:49 crc kubenswrapper[4706]: I1125 11:37:49.415530 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:37:49 crc kubenswrapper[4706]: I1125 11:37:49.415546 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:37:49 crc kubenswrapper[4706]: I1125 11:37:49.415557 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:37:49Z","lastTransitionTime":"2025-11-25T11:37:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:37:49 crc kubenswrapper[4706]: I1125 11:37:49.518571 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:37:49 crc kubenswrapper[4706]: I1125 11:37:49.518619 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:37:49 crc kubenswrapper[4706]: I1125 11:37:49.518631 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:37:49 crc kubenswrapper[4706]: I1125 11:37:49.518651 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:37:49 crc kubenswrapper[4706]: I1125 11:37:49.518666 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:37:49Z","lastTransitionTime":"2025-11-25T11:37:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:37:49 crc kubenswrapper[4706]: I1125 11:37:49.621578 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:37:49 crc kubenswrapper[4706]: I1125 11:37:49.621624 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:37:49 crc kubenswrapper[4706]: I1125 11:37:49.621644 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:37:49 crc kubenswrapper[4706]: I1125 11:37:49.621663 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:37:49 crc kubenswrapper[4706]: I1125 11:37:49.621673 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:37:49Z","lastTransitionTime":"2025-11-25T11:37:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:37:49 crc kubenswrapper[4706]: I1125 11:37:49.724739 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:37:49 crc kubenswrapper[4706]: I1125 11:37:49.724788 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:37:49 crc kubenswrapper[4706]: I1125 11:37:49.724799 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:37:49 crc kubenswrapper[4706]: I1125 11:37:49.724821 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:37:49 crc kubenswrapper[4706]: I1125 11:37:49.724834 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:37:49Z","lastTransitionTime":"2025-11-25T11:37:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:37:49 crc kubenswrapper[4706]: I1125 11:37:49.827707 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:37:49 crc kubenswrapper[4706]: I1125 11:37:49.827781 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:37:49 crc kubenswrapper[4706]: I1125 11:37:49.827796 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:37:49 crc kubenswrapper[4706]: I1125 11:37:49.827817 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:37:49 crc kubenswrapper[4706]: I1125 11:37:49.827831 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:37:49Z","lastTransitionTime":"2025-11-25T11:37:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 11:37:49 crc kubenswrapper[4706]: I1125 11:37:49.922285 4706 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 11:37:49 crc kubenswrapper[4706]: I1125 11:37:49.922430 4706 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 11:37:49 crc kubenswrapper[4706]: I1125 11:37:49.922469 4706 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-l99rd" Nov 25 11:37:49 crc kubenswrapper[4706]: E1125 11:37:49.922521 4706 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 25 11:37:49 crc kubenswrapper[4706]: I1125 11:37:49.922643 4706 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 11:37:49 crc kubenswrapper[4706]: E1125 11:37:49.922635 4706 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 25 11:37:49 crc kubenswrapper[4706]: E1125 11:37:49.922776 4706 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 25 11:37:49 crc kubenswrapper[4706]: E1125 11:37:49.923018 4706 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-l99rd" podUID="14d69237-a4b7-43ea-ac81-f165eb532669" Nov 25 11:37:49 crc kubenswrapper[4706]: I1125 11:37:49.930280 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:37:49 crc kubenswrapper[4706]: I1125 11:37:49.930322 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:37:49 crc kubenswrapper[4706]: I1125 11:37:49.930332 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:37:49 crc kubenswrapper[4706]: I1125 11:37:49.930344 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:37:49 crc kubenswrapper[4706]: I1125 11:37:49.930354 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:37:49Z","lastTransitionTime":"2025-11-25T11:37:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:37:49 crc kubenswrapper[4706]: I1125 11:37:49.936220 4706 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc"] Nov 25 11:37:50 crc kubenswrapper[4706]: I1125 11:37:50.033469 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:37:50 crc kubenswrapper[4706]: I1125 11:37:50.033504 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:37:50 crc kubenswrapper[4706]: I1125 11:37:50.033512 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:37:50 crc kubenswrapper[4706]: I1125 11:37:50.033527 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:37:50 crc kubenswrapper[4706]: I1125 11:37:50.033538 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:37:50Z","lastTransitionTime":"2025-11-25T11:37:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:37:50 crc kubenswrapper[4706]: I1125 11:37:50.136375 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:37:50 crc kubenswrapper[4706]: I1125 11:37:50.136417 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:37:50 crc kubenswrapper[4706]: I1125 11:37:50.136428 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:37:50 crc kubenswrapper[4706]: I1125 11:37:50.136445 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:37:50 crc kubenswrapper[4706]: I1125 11:37:50.136455 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:37:50Z","lastTransitionTime":"2025-11-25T11:37:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:37:50 crc kubenswrapper[4706]: I1125 11:37:50.239096 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:37:50 crc kubenswrapper[4706]: I1125 11:37:50.239581 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:37:50 crc kubenswrapper[4706]: I1125 11:37:50.239740 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:37:50 crc kubenswrapper[4706]: I1125 11:37:50.239875 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:37:50 crc kubenswrapper[4706]: I1125 11:37:50.239976 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:37:50Z","lastTransitionTime":"2025-11-25T11:37:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:37:50 crc kubenswrapper[4706]: I1125 11:37:50.343120 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:37:50 crc kubenswrapper[4706]: I1125 11:37:50.343155 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:37:50 crc kubenswrapper[4706]: I1125 11:37:50.343163 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:37:50 crc kubenswrapper[4706]: I1125 11:37:50.343178 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:37:50 crc kubenswrapper[4706]: I1125 11:37:50.343188 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:37:50Z","lastTransitionTime":"2025-11-25T11:37:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:37:50 crc kubenswrapper[4706]: I1125 11:37:50.446066 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:37:50 crc kubenswrapper[4706]: I1125 11:37:50.446122 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:37:50 crc kubenswrapper[4706]: I1125 11:37:50.446136 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:37:50 crc kubenswrapper[4706]: I1125 11:37:50.446153 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:37:50 crc kubenswrapper[4706]: I1125 11:37:50.446165 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:37:50Z","lastTransitionTime":"2025-11-25T11:37:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:37:50 crc kubenswrapper[4706]: I1125 11:37:50.549355 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:37:50 crc kubenswrapper[4706]: I1125 11:37:50.549408 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:37:50 crc kubenswrapper[4706]: I1125 11:37:50.549421 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:37:50 crc kubenswrapper[4706]: I1125 11:37:50.549443 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:37:50 crc kubenswrapper[4706]: I1125 11:37:50.549461 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:37:50Z","lastTransitionTime":"2025-11-25T11:37:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:37:50 crc kubenswrapper[4706]: I1125 11:37:50.652161 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:37:50 crc kubenswrapper[4706]: I1125 11:37:50.652231 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:37:50 crc kubenswrapper[4706]: I1125 11:37:50.652244 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:37:50 crc kubenswrapper[4706]: I1125 11:37:50.652265 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:37:50 crc kubenswrapper[4706]: I1125 11:37:50.652278 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:37:50Z","lastTransitionTime":"2025-11-25T11:37:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:37:50 crc kubenswrapper[4706]: I1125 11:37:50.756433 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:37:50 crc kubenswrapper[4706]: I1125 11:37:50.756490 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:37:50 crc kubenswrapper[4706]: I1125 11:37:50.756502 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:37:50 crc kubenswrapper[4706]: I1125 11:37:50.756522 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:37:50 crc kubenswrapper[4706]: I1125 11:37:50.756541 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:37:50Z","lastTransitionTime":"2025-11-25T11:37:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:37:50 crc kubenswrapper[4706]: I1125 11:37:50.863750 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:37:50 crc kubenswrapper[4706]: I1125 11:37:50.863823 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:37:50 crc kubenswrapper[4706]: I1125 11:37:50.863842 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:37:50 crc kubenswrapper[4706]: I1125 11:37:50.863865 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:37:50 crc kubenswrapper[4706]: I1125 11:37:50.863880 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:37:50Z","lastTransitionTime":"2025-11-25T11:37:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:37:50 crc kubenswrapper[4706]: I1125 11:37:50.967243 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:37:50 crc kubenswrapper[4706]: I1125 11:37:50.967328 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:37:50 crc kubenswrapper[4706]: I1125 11:37:50.967340 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:37:50 crc kubenswrapper[4706]: I1125 11:37:50.967359 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:37:50 crc kubenswrapper[4706]: I1125 11:37:50.967370 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:37:50Z","lastTransitionTime":"2025-11-25T11:37:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:37:51 crc kubenswrapper[4706]: I1125 11:37:51.070825 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:37:51 crc kubenswrapper[4706]: I1125 11:37:51.070888 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:37:51 crc kubenswrapper[4706]: I1125 11:37:51.070903 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:37:51 crc kubenswrapper[4706]: I1125 11:37:51.070923 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:37:51 crc kubenswrapper[4706]: I1125 11:37:51.070936 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:37:51Z","lastTransitionTime":"2025-11-25T11:37:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:37:51 crc kubenswrapper[4706]: I1125 11:37:51.173664 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:37:51 crc kubenswrapper[4706]: I1125 11:37:51.173728 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:37:51 crc kubenswrapper[4706]: I1125 11:37:51.173766 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:37:51 crc kubenswrapper[4706]: I1125 11:37:51.173807 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:37:51 crc kubenswrapper[4706]: I1125 11:37:51.173822 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:37:51Z","lastTransitionTime":"2025-11-25T11:37:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:37:51 crc kubenswrapper[4706]: I1125 11:37:51.277403 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:37:51 crc kubenswrapper[4706]: I1125 11:37:51.277469 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:37:51 crc kubenswrapper[4706]: I1125 11:37:51.277485 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:37:51 crc kubenswrapper[4706]: I1125 11:37:51.277505 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:37:51 crc kubenswrapper[4706]: I1125 11:37:51.277518 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:37:51Z","lastTransitionTime":"2025-11-25T11:37:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:37:51 crc kubenswrapper[4706]: I1125 11:37:51.380996 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:37:51 crc kubenswrapper[4706]: I1125 11:37:51.381032 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:37:51 crc kubenswrapper[4706]: I1125 11:37:51.381077 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:37:51 crc kubenswrapper[4706]: I1125 11:37:51.381099 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:37:51 crc kubenswrapper[4706]: I1125 11:37:51.381112 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:37:51Z","lastTransitionTime":"2025-11-25T11:37:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:37:51 crc kubenswrapper[4706]: I1125 11:37:51.483631 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:37:51 crc kubenswrapper[4706]: I1125 11:37:51.483729 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:37:51 crc kubenswrapper[4706]: I1125 11:37:51.483745 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:37:51 crc kubenswrapper[4706]: I1125 11:37:51.483769 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:37:51 crc kubenswrapper[4706]: I1125 11:37:51.483786 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:37:51Z","lastTransitionTime":"2025-11-25T11:37:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:37:51 crc kubenswrapper[4706]: I1125 11:37:51.586824 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:37:51 crc kubenswrapper[4706]: I1125 11:37:51.586883 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:37:51 crc kubenswrapper[4706]: I1125 11:37:51.586895 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:37:51 crc kubenswrapper[4706]: I1125 11:37:51.586915 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:37:51 crc kubenswrapper[4706]: I1125 11:37:51.586928 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:37:51Z","lastTransitionTime":"2025-11-25T11:37:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:37:51 crc kubenswrapper[4706]: I1125 11:37:51.689950 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:37:51 crc kubenswrapper[4706]: I1125 11:37:51.690069 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:37:51 crc kubenswrapper[4706]: I1125 11:37:51.690084 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:37:51 crc kubenswrapper[4706]: I1125 11:37:51.690108 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:37:51 crc kubenswrapper[4706]: I1125 11:37:51.690120 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:37:51Z","lastTransitionTime":"2025-11-25T11:37:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:37:51 crc kubenswrapper[4706]: I1125 11:37:51.793606 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:37:51 crc kubenswrapper[4706]: I1125 11:37:51.793672 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:37:51 crc kubenswrapper[4706]: I1125 11:37:51.793681 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:37:51 crc kubenswrapper[4706]: I1125 11:37:51.793703 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:37:51 crc kubenswrapper[4706]: I1125 11:37:51.793714 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:37:51Z","lastTransitionTime":"2025-11-25T11:37:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:37:51 crc kubenswrapper[4706]: I1125 11:37:51.896253 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:37:51 crc kubenswrapper[4706]: I1125 11:37:51.896319 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:37:51 crc kubenswrapper[4706]: I1125 11:37:51.896333 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:37:51 crc kubenswrapper[4706]: I1125 11:37:51.896350 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:37:51 crc kubenswrapper[4706]: I1125 11:37:51.896360 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:37:51Z","lastTransitionTime":"2025-11-25T11:37:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 11:37:51 crc kubenswrapper[4706]: I1125 11:37:51.921999 4706 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 11:37:51 crc kubenswrapper[4706]: I1125 11:37:51.922068 4706 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 11:37:51 crc kubenswrapper[4706]: I1125 11:37:51.922022 4706 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-l99rd" Nov 25 11:37:51 crc kubenswrapper[4706]: I1125 11:37:51.921999 4706 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 11:37:51 crc kubenswrapper[4706]: E1125 11:37:51.922179 4706 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 25 11:37:51 crc kubenswrapper[4706]: E1125 11:37:51.922232 4706 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 25 11:37:51 crc kubenswrapper[4706]: E1125 11:37:51.922332 4706 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 25 11:37:51 crc kubenswrapper[4706]: E1125 11:37:51.922408 4706 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-l99rd" podUID="14d69237-a4b7-43ea-ac81-f165eb532669" Nov 25 11:37:51 crc kubenswrapper[4706]: I1125 11:37:51.936448 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad79bed891e80837fc120b01cb2b41a16493f2f5281c83a6bb489cc17c6da995\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\
\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:37:51Z is after 2025-08-24T17:21:41Z" Nov 25 11:37:51 crc kubenswrapper[4706]: I1125 11:37:51.948198 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-lpc7s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3ec2e656-a68d-4339-92d5-0c157f7f7783\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c3a1481dd8cb88b79d8addfbfd40caf18850769e4492c2af316105b7f6779f9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"rea
dy\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w54mf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T11:36:56Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lpc7s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:37:51Z is after 2025-08-24T17:21:41Z" Nov 25 11:37:51 crc kubenswrapper[4706]: I1125 11:37:51.963326 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:51Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:37:51Z is after 2025-08-24T17:21:41Z" Nov 25 11:37:51 crc kubenswrapper[4706]: I1125 11:37:51.980190 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:37:51Z is after 2025-08-24T17:21:41Z" Nov 25 11:37:51 crc kubenswrapper[4706]: I1125 11:37:51.995914 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:51Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:37:51Z is after 2025-08-24T17:21:41Z" Nov 25 11:37:51 crc kubenswrapper[4706]: I1125 11:37:51.999835 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:37:52 crc kubenswrapper[4706]: I1125 11:37:51.999881 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:37:52 crc kubenswrapper[4706]: I1125 11:37:51.999892 4706 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:37:52 crc kubenswrapper[4706]: I1125 11:37:51.999912 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:37:52 crc kubenswrapper[4706]: I1125 11:37:51.999922 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:37:51Z","lastTransitionTime":"2025-11-25T11:37:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 11:37:52 crc kubenswrapper[4706]: I1125 11:37:52.013001 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-s47nr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9912058e-28f5-4cec-9eeb-03e37e0dc5c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:37:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:37:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8831e77983548cfffd56f81ff9f25b90d70dfb71b47
b545af370b0a813fa19a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d03353478b53d9441951702b66365bb3a08ad9c509347472bbb31049851435a4\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-25T11:37:43Z\\\",\\\"message\\\":\\\"2025-11-25T11:36:57+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_64de4bb2-4e36-445e-91b1-9f500f3480d1\\\\n2025-11-25T11:36:57+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_64de4bb2-4e36-445e-91b1-9f500f3480d1 to /host/opt/cni/bin/\\\\n2025-11-25T11:36:58Z [verbose] multus-daemon started\\\\n2025-11-25T11:36:58Z [verbose] Readiness Indicator file check\\\\n2025-11-25T11:37:43Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T11:36:55Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:37:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wfqx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\
\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T11:36:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-s47nr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:37:52Z is after 2025-08-24T17:21:41Z" Nov 25 11:37:52 crc kubenswrapper[4706]: I1125 11:37:52.034011 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-q9rpr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f1218bae-4153-4490-8847-ab2d07ca0ab6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:53Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:53Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da5cea02464a703174faaa2a8a7dc6ba3c26bca96be0219f7304d81aba5be54e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b55sf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e92e9ade6889e5400b3c3ddff066aa544d425cf0637b75071678b8c63f8e35f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b55sf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca28080773ed8c026159b2309297e1c8ccd7cf79c4c19e3a62d89bc5a95851fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b55sf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://86d79d5837993b0bfb40c7114fd69f45a9bfd2e956b5b0fe062706e920fecd48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:55Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b55sf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7df3bf6c507e0fd5fb0f32a8785d67c96f47255fdc5d2aafb8838260ac334d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b55sf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://96aa7fcebdc88f01d2260f95d255244e28c30d422f954da2222a5b7c17d05b96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b55sf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1dfdc34e2de4aa061b93f1227bc4e3076853848aa13d8122c69d84f2a3c9bb5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a1dfdc34e2de4aa061b93f1227bc4e3076853848aa13d8122c69d84f2a3c9bb5\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-25T11:37:46Z\\\",\\\"message\\\":\\\"licy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.217.4.176],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,TrafficDistribution:nil,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngre
ss{},},Conditions:[]Condition{},},}\\\\nI1125 11:37:46.833085 6714 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-kube-scheduler-operator/metrics]} name:Service_openshift-kube-scheduler-operator/metrics_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.233:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {1dc899db-4498-4b7a-8437-861940b962e7}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF1125 11:37:46.833121 6714 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handle\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T11:37:46Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-q9rpr_openshift-ovn-kubernetes(f1218bae-4153-4490-8847-ab2d07ca0ab6)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b55sf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://62c923d955013808a55d99cb73f4239900fc83a2f53e1e8cceff3e9bc5768188\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b55sf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56474d5374e1047078d38c60dcbd00f4495bcc0d651a9e75fa70d64e34b10baa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://56474d5374e1047078
d38c60dcbd00f4495bcc0d651a9e75fa70d64e34b10baa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T11:36:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T11:36:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b55sf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T11:36:53Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-q9rpr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:37:52Z is after 2025-08-24T17:21:41Z" Nov 25 11:37:52 crc kubenswrapper[4706]: I1125 11:37:52.046962 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"363ff191-6229-47e9-a7d0-1c72f21e7c61\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://71b496da1a81efbb50a84766e610a6b03e032a4e2cb5a71191395ffb85f6b1f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://83b1d9c60793e3e0b5943d7cccd50656df78c4655b84e12c8dd1ba7d99a7990d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab8621c83015577b9039ac2ba9ce46f8b29f66d77da31a02d179132d923741bd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4d0ce4e175dd8da8d15b26e60ced87ee11dc8079ce730cfbdce1b3f4f08b1d2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-11-25T11:36:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T11:36:32Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:37:52Z is after 2025-08-24T17:21:41Z" Nov 25 11:37:52 crc kubenswrapper[4706]: I1125 11:37:52.058456 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"27ae65a2-2109-4ce8-a927-ad8b8cff1aae\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://44f97c784f83c5f2d1cfce3f39f43a832fa8da73add257ae9c39f001bbfe3999\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a03748c4ae77a0195537510fbf39f425fb59b820b719972a26c1cbaa4e1faa0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3a03748c4ae77a0195537510fbf39f425fb59b820b719972a26c1cbaa4e1faa0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T11:36:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T11:36:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T11:36:32Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:37:52Z is after 2025-08-24T17:21:41Z" Nov 25 11:37:52 crc kubenswrapper[4706]: I1125 11:37:52.070397 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://998291d5af3be798ff4e2f00d043f615e086fef44e541071bbaf781983955ce6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-11-25T11:37:52Z is after 2025-08-24T17:21:41Z" Nov 25 11:37:52 crc kubenswrapper[4706]: I1125 11:37:52.084579 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-cjmvf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"150b96fa-570a-4b32-a82a-3275127d5b51\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:37:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:37:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:37:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de18c07bf8490d7495947e9a271e3e7273b9ffdcc43afd2a0468394af0ae0b0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:37:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"n
ame\\\":\\\"kube-api-access-d2ml6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f9f9981b5f064aa5b007f4b2a2ecdc7f783e1a33e73b9e8b157eccfc54e93ff6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f9f9981b5f064aa5b007f4b2a2ecdc7f783e1a33e73b9e8b157eccfc54e93ff6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T11:36:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T11:36:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2ml6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e1e9db3e634932b935a1eb04923d02faf743f2831039edeba41d172ea6d8c52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"r
estartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e1e9db3e634932b935a1eb04923d02faf743f2831039edeba41d172ea6d8c52\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T11:36:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T11:36:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2ml6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0cee50b6983d9c650efbb5959311b6c33c2e0e2ff504fceadc8ff807f368c36e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0cee50b6983d9c650efbb5959311b6c33c2e0e2ff504fceadc8ff807f368c36e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T11:36:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T11:36:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-rele
ase\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2ml6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://29281b46d740a7e527313a667c3896430eb51ba2c50c5e406fb94d8959dbe855\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://29281b46d740a7e527313a667c3896430eb51ba2c50c5e406fb94d8959dbe855\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T11:36:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T11:36:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2ml6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0ff2d1408b3b635ada726fc15a15472d3fd7c61e21ffe0379d137fdd543c436\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"n
ame\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b0ff2d1408b3b635ada726fc15a15472d3fd7c61e21ffe0379d137fdd543c436\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T11:37:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T11:37:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2ml6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c3b94746fe10e0f9375491a41d10973d2576eb69f0883cef3ef0132efb0e8fc9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c3b94746fe10e0f9375491a41d10973d2576eb69f0883cef3ef0132efb0e8fc9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T11:37:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T11:37:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2ml6\\\",\\\"readOnly\\\":tr
ue,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T11:36:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-cjmvf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:37:52Z is after 2025-08-24T17:21:41Z" Nov 25 11:37:52 crc kubenswrapper[4706]: I1125 11:37:52.097284 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-l99rd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"14d69237-a4b7-43ea-ac81-f165eb532669\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:37:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:37:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:37:07Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:37:07Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mmr9l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mmr9l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T11:37:07Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-l99rd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:37:52Z is after 2025-08-24T17:21:41Z" Nov 25 11:37:52 crc 
kubenswrapper[4706]: I1125 11:37:52.103043 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:37:52 crc kubenswrapper[4706]: I1125 11:37:52.103093 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:37:52 crc kubenswrapper[4706]: I1125 11:37:52.103105 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:37:52 crc kubenswrapper[4706]: I1125 11:37:52.103123 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:37:52 crc kubenswrapper[4706]: I1125 11:37:52.103137 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:37:52Z","lastTransitionTime":"2025-11-25T11:37:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:37:52 crc kubenswrapper[4706]: I1125 11:37:52.110567 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b156f76-9878-4527-95c5-27adfffbcd87\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:37:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:37:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b50a8135a692a512f05f3a902977e8b7a505d8346fb6e96c26ffc58d075e902c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7224a1c52df964a792e6197a4f9731
3b139ffbd6d65820d93e36561e817ddc20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://78068d04cf52a463ca3595227c44918d360266c71afc97c1792e48b004bebe42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0299d89c1a2ea9c2a4bb46691aecd2d86618d3620e7406e1af57e1c03ce50b94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art
-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0299d89c1a2ea9c2a4bb46691aecd2d86618d3620e7406e1af57e1c03ce50b94\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T11:36:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T11:36:33Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T11:36:32Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:37:52Z is after 2025-08-24T17:21:41Z" Nov 25 11:37:52 crc kubenswrapper[4706]: I1125 11:37:52.132257 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"21277b4b-1e5d-4345-ba2a-39957194f021\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e336808761e1c6c5eaa04fd06cbb4d0c0384a2cbd3dfd4c1b3a877e7e0f0c82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cfaf9f13d49eb5c52817b0d082263791cc1dca82a23282452f1393dd693ca27a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://634b7b0df29329562f6ead9641186eee129945efc5a2d784ff6474d213b2baea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b3642576d5ecf314b809b90f8a76244e5ea54178f78729eb6521b09b7daa9c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b63b9c87fed8e56acef62af3c5b75cf637a058ada9dd8ef5afc317e99e12162\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://adfea6c3d1d62c9ce2656cae203bccf32ab19165305db3731e3db92915dd7d4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://adfea6c3d1d62c9ce2656cae203bccf32ab19165305db3731e3db92915dd7d4c\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2025-11-25T11:36:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T11:36:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://29e8fd847ab683471a692fda5b7c7d6105db11c5aecf09bb23080c58cd97c06a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://29e8fd847ab683471a692fda5b7c7d6105db11c5aecf09bb23080c58cd97c06a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T11:36:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T11:36:34Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://a4b1f85fd0291c239854093c0b26f0802291cbe4c4bab384fc69a5d21165343a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a4b1f85fd0291c239854093c0b26f0802291cbe4c4bab384fc69a5d21165343a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T11:36:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T11:36:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T11:36:32Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:37:52Z is after 2025-08-24T17:21:41Z" Nov 25 11:37:52 crc kubenswrapper[4706]: I1125 11:37:52.144975 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://23abd4bcc68d2a090882edb55d0e8569032affe5f4ebf05279e18ba3e9f9d8db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount
\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a068e34d29a7f39157ffd6e364ce643f5280f5184c13a281043247117d451364\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:37:52Z is after 2025-08-24T17:21:41Z" Nov 25 11:37:52 crc kubenswrapper[4706]: I1125 11:37:52.156843 4706 
status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-qkkfz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dc09de93-57e8-4697-8ce8-70bfc1b693e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:37:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:37:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:37:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:37:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6daff2070c60f609fd06be9589e3cd8d304d131f7b9669c7be4b8e9178df8f8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:37:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-
access-hmrl8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://39eec3aac772cc9463505277d6b3f7cf2eb7621e4add4f14e53110e3db8c4cdc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:37:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hmrl8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T11:37:06Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-qkkfz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:37:52Z is after 2025-08-24T17:21:41Z" Nov 25 11:37:52 crc kubenswrapper[4706]: I1125 11:37:52.169961 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ce0e2e75-834b-46fb-bc84-229e60f904b1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:37:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:37:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://86001c3abc077d36ed1fa0c37bb6163896fb9cde28b58affd2f67fb8a024165b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://24c326f147def477e6dd794576cbdc9aed69f799cc18984f475496748b05eb32\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c65af8b438f57256d8c22cb34f68922d628338e384ca97d694b0dbf2d41a5e27\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://db08dd21321e0e49c2bcec934b9c4ca65e93ed3eff5d3d110b0137d37ebe255e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://333951d9a31cf3e7c1e98d27f636e2425f87cd082a8a5acae66533a76f5ad206\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-25T11:36:51Z\\\"
,\\\"message\\\":\\\" shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI1125 11:36:51.292762 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI1125 11:36:51.292767 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI1125 11:36:51.292853 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI1125 11:36:51.292876 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI1125 11:36:51.293041 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-1230105117/tls.crt::/tmp/serving-cert-1230105117/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1764070595\\\\\\\\\\\\\\\" (2025-11-25 11:36:34 +0000 UTC to 2025-12-25 11:36:35 +0000 UTC (now=2025-11-25 11:36:51.29301304 +0000 UTC))\\\\\\\"\\\\nI1125 11:36:51.293171 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1230105117/tls.crt::/tmp/serving-cert-1230105117/tls.key\\\\\\\"\\\\nI1125 11:36:51.293210 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1764070605\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1764070605\\\\\\\\\\\\\\\" (2025-11-25 10:36:45 +0000 UTC to 2026-11-25 10:36:45 +0000 UTC (now=2025-11-25 11:36:51.293188774 +0000 
UTC))\\\\\\\"\\\\nI1125 11:36:51.293233 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI1125 11:36:51.293259 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI1125 11:36:51.293279 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nF1125 11:36:51.293378 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T11:36:34Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe85a38abd8df52ad0fbd3dd6b048b8c42390b6064d3601996727dadb3fcbe69\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:34Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea87e7399f4267877f5e967eb62bd500b646e88ee8ee20a71c3bf7c0941f02a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d3472
0243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ea87e7399f4267877f5e967eb62bd500b646e88ee8ee20a71c3bf7c0941f02a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T11:36:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T11:36:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T11:36:32Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:37:52Z is after 2025-08-24T17:21:41Z" Nov 25 11:37:52 crc kubenswrapper[4706]: I1125 11:37:52.185051 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-dhfpm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0930887a-320c-4506-8c9c-f94d6d64516a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://736e37ff944f81ac9808ff8a76d36837aeabc76a4c08bbeba3f707616e1f0884\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g7sgt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://86f4bfd310c27ea3b77c2f58c91e153db5f17948
71a3fbeb5711cc119aa81e38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g7sgt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T11:36:53Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-dhfpm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:37:52Z is after 2025-08-24T17:21:41Z" Nov 25 11:37:52 crc kubenswrapper[4706]: I1125 11:37:52.194046 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-nh9sc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7813e79d-885d-4cf1-ac27-039e998473b7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea634334242536d35bf36e9078539cad4658b161b61e6051d9bb6d8544e71f5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g9gvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T11:36:53Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-nh9sc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:37:52Z is after 2025-08-24T17:21:41Z" Nov 25 11:37:52 crc kubenswrapper[4706]: I1125 11:37:52.205481 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:37:52 crc kubenswrapper[4706]: I1125 11:37:52.205676 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:37:52 crc kubenswrapper[4706]: I1125 11:37:52.205771 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:37:52 crc kubenswrapper[4706]: I1125 11:37:52.205887 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:37:52 crc kubenswrapper[4706]: I1125 11:37:52.205977 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:37:52Z","lastTransitionTime":"2025-11-25T11:37:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:37:52 crc kubenswrapper[4706]: I1125 11:37:52.308795 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:37:52 crc kubenswrapper[4706]: I1125 11:37:52.308849 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:37:52 crc kubenswrapper[4706]: I1125 11:37:52.308865 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:37:52 crc kubenswrapper[4706]: I1125 11:37:52.308886 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:37:52 crc kubenswrapper[4706]: I1125 11:37:52.308900 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:37:52Z","lastTransitionTime":"2025-11-25T11:37:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:37:52 crc kubenswrapper[4706]: I1125 11:37:52.412574 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:37:52 crc kubenswrapper[4706]: I1125 11:37:52.412654 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:37:52 crc kubenswrapper[4706]: I1125 11:37:52.412668 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:37:52 crc kubenswrapper[4706]: I1125 11:37:52.412693 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:37:52 crc kubenswrapper[4706]: I1125 11:37:52.412708 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:37:52Z","lastTransitionTime":"2025-11-25T11:37:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:37:52 crc kubenswrapper[4706]: I1125 11:37:52.515452 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:37:52 crc kubenswrapper[4706]: I1125 11:37:52.515990 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:37:52 crc kubenswrapper[4706]: I1125 11:37:52.516063 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:37:52 crc kubenswrapper[4706]: I1125 11:37:52.516141 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:37:52 crc kubenswrapper[4706]: I1125 11:37:52.516204 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:37:52Z","lastTransitionTime":"2025-11-25T11:37:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:37:52 crc kubenswrapper[4706]: I1125 11:37:52.619520 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:37:52 crc kubenswrapper[4706]: I1125 11:37:52.619874 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:37:52 crc kubenswrapper[4706]: I1125 11:37:52.619963 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:37:52 crc kubenswrapper[4706]: I1125 11:37:52.620054 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:37:52 crc kubenswrapper[4706]: I1125 11:37:52.620131 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:37:52Z","lastTransitionTime":"2025-11-25T11:37:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:37:52 crc kubenswrapper[4706]: I1125 11:37:52.723241 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:37:52 crc kubenswrapper[4706]: I1125 11:37:52.723343 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:37:52 crc kubenswrapper[4706]: I1125 11:37:52.723359 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:37:52 crc kubenswrapper[4706]: I1125 11:37:52.723384 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:37:52 crc kubenswrapper[4706]: I1125 11:37:52.723398 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:37:52Z","lastTransitionTime":"2025-11-25T11:37:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:37:52 crc kubenswrapper[4706]: I1125 11:37:52.825736 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:37:52 crc kubenswrapper[4706]: I1125 11:37:52.825773 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:37:52 crc kubenswrapper[4706]: I1125 11:37:52.825786 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:37:52 crc kubenswrapper[4706]: I1125 11:37:52.825803 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:37:52 crc kubenswrapper[4706]: I1125 11:37:52.825817 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:37:52Z","lastTransitionTime":"2025-11-25T11:37:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:37:52 crc kubenswrapper[4706]: I1125 11:37:52.928867 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:37:52 crc kubenswrapper[4706]: I1125 11:37:52.928915 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:37:52 crc kubenswrapper[4706]: I1125 11:37:52.928926 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:37:52 crc kubenswrapper[4706]: I1125 11:37:52.928944 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:37:52 crc kubenswrapper[4706]: I1125 11:37:52.928960 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:37:52Z","lastTransitionTime":"2025-11-25T11:37:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:37:53 crc kubenswrapper[4706]: I1125 11:37:53.031831 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:37:53 crc kubenswrapper[4706]: I1125 11:37:53.031895 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:37:53 crc kubenswrapper[4706]: I1125 11:37:53.031908 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:37:53 crc kubenswrapper[4706]: I1125 11:37:53.031932 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:37:53 crc kubenswrapper[4706]: I1125 11:37:53.031946 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:37:53Z","lastTransitionTime":"2025-11-25T11:37:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:37:53 crc kubenswrapper[4706]: I1125 11:37:53.134919 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:37:53 crc kubenswrapper[4706]: I1125 11:37:53.134977 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:37:53 crc kubenswrapper[4706]: I1125 11:37:53.134988 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:37:53 crc kubenswrapper[4706]: I1125 11:37:53.135009 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:37:53 crc kubenswrapper[4706]: I1125 11:37:53.135021 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:37:53Z","lastTransitionTime":"2025-11-25T11:37:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:37:53 crc kubenswrapper[4706]: I1125 11:37:53.238556 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:37:53 crc kubenswrapper[4706]: I1125 11:37:53.238800 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:37:53 crc kubenswrapper[4706]: I1125 11:37:53.238812 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:37:53 crc kubenswrapper[4706]: I1125 11:37:53.238832 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:37:53 crc kubenswrapper[4706]: I1125 11:37:53.238846 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:37:53Z","lastTransitionTime":"2025-11-25T11:37:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:37:53 crc kubenswrapper[4706]: I1125 11:37:53.341439 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:37:53 crc kubenswrapper[4706]: I1125 11:37:53.341489 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:37:53 crc kubenswrapper[4706]: I1125 11:37:53.341500 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:37:53 crc kubenswrapper[4706]: I1125 11:37:53.341519 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:37:53 crc kubenswrapper[4706]: I1125 11:37:53.341530 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:37:53Z","lastTransitionTime":"2025-11-25T11:37:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:37:53 crc kubenswrapper[4706]: I1125 11:37:53.443576 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:37:53 crc kubenswrapper[4706]: I1125 11:37:53.443616 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:37:53 crc kubenswrapper[4706]: I1125 11:37:53.443624 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:37:53 crc kubenswrapper[4706]: I1125 11:37:53.443639 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:37:53 crc kubenswrapper[4706]: I1125 11:37:53.443648 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:37:53Z","lastTransitionTime":"2025-11-25T11:37:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:37:53 crc kubenswrapper[4706]: I1125 11:37:53.546276 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:37:53 crc kubenswrapper[4706]: I1125 11:37:53.546449 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:37:53 crc kubenswrapper[4706]: I1125 11:37:53.546484 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:37:53 crc kubenswrapper[4706]: I1125 11:37:53.546507 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:37:53 crc kubenswrapper[4706]: I1125 11:37:53.546521 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:37:53Z","lastTransitionTime":"2025-11-25T11:37:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:37:53 crc kubenswrapper[4706]: I1125 11:37:53.649857 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:37:53 crc kubenswrapper[4706]: I1125 11:37:53.649900 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:37:53 crc kubenswrapper[4706]: I1125 11:37:53.649925 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:37:53 crc kubenswrapper[4706]: I1125 11:37:53.649941 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:37:53 crc kubenswrapper[4706]: I1125 11:37:53.649951 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:37:53Z","lastTransitionTime":"2025-11-25T11:37:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:37:53 crc kubenswrapper[4706]: I1125 11:37:53.752646 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:37:53 crc kubenswrapper[4706]: I1125 11:37:53.752686 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:37:53 crc kubenswrapper[4706]: I1125 11:37:53.752730 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:37:53 crc kubenswrapper[4706]: I1125 11:37:53.752750 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:37:53 crc kubenswrapper[4706]: I1125 11:37:53.752760 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:37:53Z","lastTransitionTime":"2025-11-25T11:37:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:37:53 crc kubenswrapper[4706]: I1125 11:37:53.854869 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:37:53 crc kubenswrapper[4706]: I1125 11:37:53.854915 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:37:53 crc kubenswrapper[4706]: I1125 11:37:53.854926 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:37:53 crc kubenswrapper[4706]: I1125 11:37:53.854946 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:37:53 crc kubenswrapper[4706]: I1125 11:37:53.854958 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:37:53Z","lastTransitionTime":"2025-11-25T11:37:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 11:37:53 crc kubenswrapper[4706]: I1125 11:37:53.921680 4706 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 11:37:53 crc kubenswrapper[4706]: I1125 11:37:53.921784 4706 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 11:37:53 crc kubenswrapper[4706]: I1125 11:37:53.921852 4706 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-l99rd" Nov 25 11:37:53 crc kubenswrapper[4706]: E1125 11:37:53.921863 4706 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 25 11:37:53 crc kubenswrapper[4706]: I1125 11:37:53.921892 4706 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 11:37:53 crc kubenswrapper[4706]: E1125 11:37:53.921943 4706 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 25 11:37:53 crc kubenswrapper[4706]: E1125 11:37:53.922060 4706 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-l99rd" podUID="14d69237-a4b7-43ea-ac81-f165eb532669" Nov 25 11:37:53 crc kubenswrapper[4706]: E1125 11:37:53.922114 4706 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 25 11:37:53 crc kubenswrapper[4706]: I1125 11:37:53.957219 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:37:53 crc kubenswrapper[4706]: I1125 11:37:53.957266 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:37:53 crc kubenswrapper[4706]: I1125 11:37:53.957276 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:37:53 crc kubenswrapper[4706]: I1125 11:37:53.957318 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:37:53 crc kubenswrapper[4706]: I1125 11:37:53.957333 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:37:53Z","lastTransitionTime":"2025-11-25T11:37:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:37:54 crc kubenswrapper[4706]: I1125 11:37:54.060439 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:37:54 crc kubenswrapper[4706]: I1125 11:37:54.060502 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:37:54 crc kubenswrapper[4706]: I1125 11:37:54.060512 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:37:54 crc kubenswrapper[4706]: I1125 11:37:54.060530 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:37:54 crc kubenswrapper[4706]: I1125 11:37:54.060543 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:37:54Z","lastTransitionTime":"2025-11-25T11:37:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:37:54 crc kubenswrapper[4706]: I1125 11:37:54.163185 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:37:54 crc kubenswrapper[4706]: I1125 11:37:54.163232 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:37:54 crc kubenswrapper[4706]: I1125 11:37:54.163244 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:37:54 crc kubenswrapper[4706]: I1125 11:37:54.163264 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:37:54 crc kubenswrapper[4706]: I1125 11:37:54.163276 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:37:54Z","lastTransitionTime":"2025-11-25T11:37:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:37:54 crc kubenswrapper[4706]: I1125 11:37:54.266242 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:37:54 crc kubenswrapper[4706]: I1125 11:37:54.266663 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:37:54 crc kubenswrapper[4706]: I1125 11:37:54.266673 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:37:54 crc kubenswrapper[4706]: I1125 11:37:54.266694 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:37:54 crc kubenswrapper[4706]: I1125 11:37:54.266708 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:37:54Z","lastTransitionTime":"2025-11-25T11:37:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:37:54 crc kubenswrapper[4706]: I1125 11:37:54.368921 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:37:54 crc kubenswrapper[4706]: I1125 11:37:54.368968 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:37:54 crc kubenswrapper[4706]: I1125 11:37:54.368985 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:37:54 crc kubenswrapper[4706]: I1125 11:37:54.369004 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:37:54 crc kubenswrapper[4706]: I1125 11:37:54.369014 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:37:54Z","lastTransitionTime":"2025-11-25T11:37:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:37:54 crc kubenswrapper[4706]: I1125 11:37:54.471189 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:37:54 crc kubenswrapper[4706]: I1125 11:37:54.471233 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:37:54 crc kubenswrapper[4706]: I1125 11:37:54.471248 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:37:54 crc kubenswrapper[4706]: I1125 11:37:54.471267 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:37:54 crc kubenswrapper[4706]: I1125 11:37:54.471277 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:37:54Z","lastTransitionTime":"2025-11-25T11:37:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:37:54 crc kubenswrapper[4706]: I1125 11:37:54.573587 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:37:54 crc kubenswrapper[4706]: I1125 11:37:54.573634 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:37:54 crc kubenswrapper[4706]: I1125 11:37:54.573647 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:37:54 crc kubenswrapper[4706]: I1125 11:37:54.573666 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:37:54 crc kubenswrapper[4706]: I1125 11:37:54.573678 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:37:54Z","lastTransitionTime":"2025-11-25T11:37:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:37:54 crc kubenswrapper[4706]: I1125 11:37:54.676824 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:37:54 crc kubenswrapper[4706]: I1125 11:37:54.676878 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:37:54 crc kubenswrapper[4706]: I1125 11:37:54.676890 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:37:54 crc kubenswrapper[4706]: I1125 11:37:54.676910 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:37:54 crc kubenswrapper[4706]: I1125 11:37:54.676922 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:37:54Z","lastTransitionTime":"2025-11-25T11:37:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:37:54 crc kubenswrapper[4706]: I1125 11:37:54.783581 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:37:54 crc kubenswrapper[4706]: I1125 11:37:54.783644 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:37:54 crc kubenswrapper[4706]: I1125 11:37:54.783657 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:37:54 crc kubenswrapper[4706]: I1125 11:37:54.783678 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:37:54 crc kubenswrapper[4706]: I1125 11:37:54.783883 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:37:54Z","lastTransitionTime":"2025-11-25T11:37:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:37:54 crc kubenswrapper[4706]: I1125 11:37:54.886381 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:37:54 crc kubenswrapper[4706]: I1125 11:37:54.886435 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:37:54 crc kubenswrapper[4706]: I1125 11:37:54.886446 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:37:54 crc kubenswrapper[4706]: I1125 11:37:54.886464 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:37:54 crc kubenswrapper[4706]: I1125 11:37:54.886477 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:37:54Z","lastTransitionTime":"2025-11-25T11:37:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:37:54 crc kubenswrapper[4706]: I1125 11:37:54.989037 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:37:54 crc kubenswrapper[4706]: I1125 11:37:54.989078 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:37:54 crc kubenswrapper[4706]: I1125 11:37:54.989087 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:37:54 crc kubenswrapper[4706]: I1125 11:37:54.989106 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:37:54 crc kubenswrapper[4706]: I1125 11:37:54.989115 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:37:54Z","lastTransitionTime":"2025-11-25T11:37:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:37:55 crc kubenswrapper[4706]: I1125 11:37:55.092139 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:37:55 crc kubenswrapper[4706]: I1125 11:37:55.092192 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:37:55 crc kubenswrapper[4706]: I1125 11:37:55.092207 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:37:55 crc kubenswrapper[4706]: I1125 11:37:55.092226 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:37:55 crc kubenswrapper[4706]: I1125 11:37:55.092243 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:37:55Z","lastTransitionTime":"2025-11-25T11:37:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:37:55 crc kubenswrapper[4706]: I1125 11:37:55.194483 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:37:55 crc kubenswrapper[4706]: I1125 11:37:55.194532 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:37:55 crc kubenswrapper[4706]: I1125 11:37:55.194542 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:37:55 crc kubenswrapper[4706]: I1125 11:37:55.194561 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:37:55 crc kubenswrapper[4706]: I1125 11:37:55.194572 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:37:55Z","lastTransitionTime":"2025-11-25T11:37:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:37:55 crc kubenswrapper[4706]: I1125 11:37:55.297567 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:37:55 crc kubenswrapper[4706]: I1125 11:37:55.297619 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:37:55 crc kubenswrapper[4706]: I1125 11:37:55.297630 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:37:55 crc kubenswrapper[4706]: I1125 11:37:55.297647 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:37:55 crc kubenswrapper[4706]: I1125 11:37:55.297663 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:37:55Z","lastTransitionTime":"2025-11-25T11:37:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:37:55 crc kubenswrapper[4706]: I1125 11:37:55.399895 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:37:55 crc kubenswrapper[4706]: I1125 11:37:55.399934 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:37:55 crc kubenswrapper[4706]: I1125 11:37:55.399942 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:37:55 crc kubenswrapper[4706]: I1125 11:37:55.399958 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:37:55 crc kubenswrapper[4706]: I1125 11:37:55.399968 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:37:55Z","lastTransitionTime":"2025-11-25T11:37:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:37:55 crc kubenswrapper[4706]: I1125 11:37:55.503040 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:37:55 crc kubenswrapper[4706]: I1125 11:37:55.503095 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:37:55 crc kubenswrapper[4706]: I1125 11:37:55.503109 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:37:55 crc kubenswrapper[4706]: I1125 11:37:55.503132 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:37:55 crc kubenswrapper[4706]: I1125 11:37:55.503146 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:37:55Z","lastTransitionTime":"2025-11-25T11:37:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:37:55 crc kubenswrapper[4706]: I1125 11:37:55.606444 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:37:55 crc kubenswrapper[4706]: I1125 11:37:55.606523 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:37:55 crc kubenswrapper[4706]: I1125 11:37:55.606550 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:37:55 crc kubenswrapper[4706]: I1125 11:37:55.606596 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:37:55 crc kubenswrapper[4706]: I1125 11:37:55.606622 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:37:55Z","lastTransitionTime":"2025-11-25T11:37:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:37:55 crc kubenswrapper[4706]: I1125 11:37:55.709711 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:37:55 crc kubenswrapper[4706]: I1125 11:37:55.709769 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:37:55 crc kubenswrapper[4706]: I1125 11:37:55.709779 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:37:55 crc kubenswrapper[4706]: I1125 11:37:55.709797 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:37:55 crc kubenswrapper[4706]: I1125 11:37:55.709812 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:37:55Z","lastTransitionTime":"2025-11-25T11:37:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:37:55 crc kubenswrapper[4706]: I1125 11:37:55.812122 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:37:55 crc kubenswrapper[4706]: I1125 11:37:55.812176 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:37:55 crc kubenswrapper[4706]: I1125 11:37:55.812190 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:37:55 crc kubenswrapper[4706]: I1125 11:37:55.812211 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:37:55 crc kubenswrapper[4706]: I1125 11:37:55.812229 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:37:55Z","lastTransitionTime":"2025-11-25T11:37:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:37:55 crc kubenswrapper[4706]: I1125 11:37:55.818638 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 11:37:55 crc kubenswrapper[4706]: I1125 11:37:55.818710 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 11:37:55 crc kubenswrapper[4706]: E1125 11:37:55.818741 4706 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 11:38:59.818720133 +0000 UTC m=+148.733277514 (durationBeforeRetry 1m4s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 11:37:55 crc kubenswrapper[4706]: I1125 11:37:55.818788 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 11:37:55 crc kubenswrapper[4706]: E1125 11:37:55.818810 4706 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 25 11:37:55 crc kubenswrapper[4706]: I1125 11:37:55.818827 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 11:37:55 crc kubenswrapper[4706]: E1125 11:37:55.818855 4706 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-25 11:38:59.818843407 +0000 UTC m=+148.733400788 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 25 11:37:55 crc kubenswrapper[4706]: E1125 11:37:55.818955 4706 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 25 11:37:55 crc kubenswrapper[4706]: E1125 11:37:55.818975 4706 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 25 11:37:55 crc kubenswrapper[4706]: E1125 11:37:55.819000 4706 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 25 11:37:55 crc kubenswrapper[4706]: E1125 11:37:55.819055 4706 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-11-25 11:38:59.819046535 +0000 UTC m=+148.733603916 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 25 11:37:55 crc kubenswrapper[4706]: E1125 11:37:55.819087 4706 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Nov 25 11:37:55 crc kubenswrapper[4706]: E1125 11:37:55.819254 4706 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-25 11:38:59.819223161 +0000 UTC m=+148.733780702 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Nov 25 11:37:55 crc kubenswrapper[4706]: I1125 11:37:55.915411 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:37:55 crc kubenswrapper[4706]: I1125 11:37:55.915453 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:37:55 crc kubenswrapper[4706]: I1125 11:37:55.915463 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:37:55 crc kubenswrapper[4706]: I1125 11:37:55.915480 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:37:55 crc kubenswrapper[4706]: I1125 11:37:55.915490 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:37:55Z","lastTransitionTime":"2025-11-25T11:37:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:37:55 crc kubenswrapper[4706]: I1125 11:37:55.919992 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 11:37:55 crc kubenswrapper[4706]: E1125 11:37:55.920152 4706 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 25 11:37:55 crc kubenswrapper[4706]: E1125 11:37:55.920181 4706 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 25 11:37:55 crc kubenswrapper[4706]: E1125 11:37:55.920193 4706 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 25 11:37:55 crc kubenswrapper[4706]: E1125 11:37:55.920254 4706 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-11-25 11:38:59.920236573 +0000 UTC m=+148.834793954 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 25 11:37:55 crc kubenswrapper[4706]: I1125 11:37:55.922365 4706 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 11:37:55 crc kubenswrapper[4706]: I1125 11:37:55.922383 4706 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 11:37:55 crc kubenswrapper[4706]: E1125 11:37:55.922474 4706 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 25 11:37:55 crc kubenswrapper[4706]: I1125 11:37:55.922505 4706 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-l99rd" Nov 25 11:37:55 crc kubenswrapper[4706]: E1125 11:37:55.922571 4706 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-l99rd" podUID="14d69237-a4b7-43ea-ac81-f165eb532669" Nov 25 11:37:55 crc kubenswrapper[4706]: I1125 11:37:55.922577 4706 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 11:37:55 crc kubenswrapper[4706]: E1125 11:37:55.922765 4706 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 25 11:37:55 crc kubenswrapper[4706]: E1125 11:37:55.922794 4706 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 25 11:37:56 crc kubenswrapper[4706]: I1125 11:37:56.018048 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:37:56 crc kubenswrapper[4706]: I1125 11:37:56.018099 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:37:56 crc kubenswrapper[4706]: I1125 11:37:56.018110 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:37:56 crc kubenswrapper[4706]: I1125 11:37:56.018127 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:37:56 crc kubenswrapper[4706]: I1125 11:37:56.018140 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:37:56Z","lastTransitionTime":"2025-11-25T11:37:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:37:56 crc kubenswrapper[4706]: I1125 11:37:56.120814 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:37:56 crc kubenswrapper[4706]: I1125 11:37:56.120871 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:37:56 crc kubenswrapper[4706]: I1125 11:37:56.120883 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:37:56 crc kubenswrapper[4706]: I1125 11:37:56.120902 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:37:56 crc kubenswrapper[4706]: I1125 11:37:56.120914 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:37:56Z","lastTransitionTime":"2025-11-25T11:37:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:37:56 crc kubenswrapper[4706]: I1125 11:37:56.224424 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:37:56 crc kubenswrapper[4706]: I1125 11:37:56.224488 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:37:56 crc kubenswrapper[4706]: I1125 11:37:56.224502 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:37:56 crc kubenswrapper[4706]: I1125 11:37:56.224527 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:37:56 crc kubenswrapper[4706]: I1125 11:37:56.224543 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:37:56Z","lastTransitionTime":"2025-11-25T11:37:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:37:56 crc kubenswrapper[4706]: I1125 11:37:56.326966 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:37:56 crc kubenswrapper[4706]: I1125 11:37:56.327000 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:37:56 crc kubenswrapper[4706]: I1125 11:37:56.327011 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:37:56 crc kubenswrapper[4706]: I1125 11:37:56.327027 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:37:56 crc kubenswrapper[4706]: I1125 11:37:56.327037 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:37:56Z","lastTransitionTime":"2025-11-25T11:37:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:37:56 crc kubenswrapper[4706]: I1125 11:37:56.429268 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:37:56 crc kubenswrapper[4706]: I1125 11:37:56.429329 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:37:56 crc kubenswrapper[4706]: I1125 11:37:56.429349 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:37:56 crc kubenswrapper[4706]: I1125 11:37:56.429365 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:37:56 crc kubenswrapper[4706]: I1125 11:37:56.429374 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:37:56Z","lastTransitionTime":"2025-11-25T11:37:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:37:56 crc kubenswrapper[4706]: I1125 11:37:56.532009 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:37:56 crc kubenswrapper[4706]: I1125 11:37:56.532064 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:37:56 crc kubenswrapper[4706]: I1125 11:37:56.532080 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:37:56 crc kubenswrapper[4706]: I1125 11:37:56.532105 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:37:56 crc kubenswrapper[4706]: I1125 11:37:56.532124 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:37:56Z","lastTransitionTime":"2025-11-25T11:37:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:37:56 crc kubenswrapper[4706]: I1125 11:37:56.635074 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:37:56 crc kubenswrapper[4706]: I1125 11:37:56.635144 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:37:56 crc kubenswrapper[4706]: I1125 11:37:56.635157 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:37:56 crc kubenswrapper[4706]: I1125 11:37:56.635178 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:37:56 crc kubenswrapper[4706]: I1125 11:37:56.635191 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:37:56Z","lastTransitionTime":"2025-11-25T11:37:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:37:56 crc kubenswrapper[4706]: I1125 11:37:56.738569 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:37:56 crc kubenswrapper[4706]: I1125 11:37:56.738613 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:37:56 crc kubenswrapper[4706]: I1125 11:37:56.738622 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:37:56 crc kubenswrapper[4706]: I1125 11:37:56.738640 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:37:56 crc kubenswrapper[4706]: I1125 11:37:56.738650 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:37:56Z","lastTransitionTime":"2025-11-25T11:37:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:37:56 crc kubenswrapper[4706]: I1125 11:37:56.842071 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:37:56 crc kubenswrapper[4706]: I1125 11:37:56.842110 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:37:56 crc kubenswrapper[4706]: I1125 11:37:56.842118 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:37:56 crc kubenswrapper[4706]: I1125 11:37:56.842136 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:37:56 crc kubenswrapper[4706]: I1125 11:37:56.842155 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:37:56Z","lastTransitionTime":"2025-11-25T11:37:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:37:56 crc kubenswrapper[4706]: I1125 11:37:56.944347 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:37:56 crc kubenswrapper[4706]: I1125 11:37:56.944411 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:37:56 crc kubenswrapper[4706]: I1125 11:37:56.944421 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:37:56 crc kubenswrapper[4706]: I1125 11:37:56.944438 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:37:56 crc kubenswrapper[4706]: I1125 11:37:56.944472 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:37:56Z","lastTransitionTime":"2025-11-25T11:37:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:37:57 crc kubenswrapper[4706]: I1125 11:37:57.050211 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:37:57 crc kubenswrapper[4706]: I1125 11:37:57.050249 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:37:57 crc kubenswrapper[4706]: I1125 11:37:57.050260 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:37:57 crc kubenswrapper[4706]: I1125 11:37:57.050277 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:37:57 crc kubenswrapper[4706]: I1125 11:37:57.050289 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:37:57Z","lastTransitionTime":"2025-11-25T11:37:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:37:57 crc kubenswrapper[4706]: I1125 11:37:57.153013 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:37:57 crc kubenswrapper[4706]: I1125 11:37:57.153058 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:37:57 crc kubenswrapper[4706]: I1125 11:37:57.153068 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:37:57 crc kubenswrapper[4706]: I1125 11:37:57.153086 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:37:57 crc kubenswrapper[4706]: I1125 11:37:57.153097 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:37:57Z","lastTransitionTime":"2025-11-25T11:37:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:37:57 crc kubenswrapper[4706]: I1125 11:37:57.256337 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:37:57 crc kubenswrapper[4706]: I1125 11:37:57.256378 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:37:57 crc kubenswrapper[4706]: I1125 11:37:57.256390 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:37:57 crc kubenswrapper[4706]: I1125 11:37:57.256412 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:37:57 crc kubenswrapper[4706]: I1125 11:37:57.256425 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:37:57Z","lastTransitionTime":"2025-11-25T11:37:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:37:57 crc kubenswrapper[4706]: I1125 11:37:57.358433 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:37:57 crc kubenswrapper[4706]: I1125 11:37:57.358493 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:37:57 crc kubenswrapper[4706]: I1125 11:37:57.358503 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:37:57 crc kubenswrapper[4706]: I1125 11:37:57.358526 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:37:57 crc kubenswrapper[4706]: I1125 11:37:57.358537 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:37:57Z","lastTransitionTime":"2025-11-25T11:37:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:37:57 crc kubenswrapper[4706]: I1125 11:37:57.461772 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:37:57 crc kubenswrapper[4706]: I1125 11:37:57.461818 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:37:57 crc kubenswrapper[4706]: I1125 11:37:57.461826 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:37:57 crc kubenswrapper[4706]: I1125 11:37:57.461852 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:37:57 crc kubenswrapper[4706]: I1125 11:37:57.461870 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:37:57Z","lastTransitionTime":"2025-11-25T11:37:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:37:57 crc kubenswrapper[4706]: I1125 11:37:57.563776 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:37:57 crc kubenswrapper[4706]: I1125 11:37:57.563831 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:37:57 crc kubenswrapper[4706]: I1125 11:37:57.563843 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:37:57 crc kubenswrapper[4706]: I1125 11:37:57.563860 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:37:57 crc kubenswrapper[4706]: I1125 11:37:57.563871 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:37:57Z","lastTransitionTime":"2025-11-25T11:37:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:37:57 crc kubenswrapper[4706]: I1125 11:37:57.666140 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:37:57 crc kubenswrapper[4706]: I1125 11:37:57.666193 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:37:57 crc kubenswrapper[4706]: I1125 11:37:57.666207 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:37:57 crc kubenswrapper[4706]: I1125 11:37:57.666229 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:37:57 crc kubenswrapper[4706]: I1125 11:37:57.666244 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:37:57Z","lastTransitionTime":"2025-11-25T11:37:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:37:57 crc kubenswrapper[4706]: I1125 11:37:57.768891 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:37:57 crc kubenswrapper[4706]: I1125 11:37:57.768956 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:37:57 crc kubenswrapper[4706]: I1125 11:37:57.768970 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:37:57 crc kubenswrapper[4706]: I1125 11:37:57.768989 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:37:57 crc kubenswrapper[4706]: I1125 11:37:57.769009 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:37:57Z","lastTransitionTime":"2025-11-25T11:37:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:37:57 crc kubenswrapper[4706]: I1125 11:37:57.871099 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:37:57 crc kubenswrapper[4706]: I1125 11:37:57.871149 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:37:57 crc kubenswrapper[4706]: I1125 11:37:57.871161 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:37:57 crc kubenswrapper[4706]: I1125 11:37:57.871182 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:37:57 crc kubenswrapper[4706]: I1125 11:37:57.871193 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:37:57Z","lastTransitionTime":"2025-11-25T11:37:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 11:37:57 crc kubenswrapper[4706]: I1125 11:37:57.922030 4706 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-l99rd" Nov 25 11:37:57 crc kubenswrapper[4706]: I1125 11:37:57.922130 4706 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 11:37:57 crc kubenswrapper[4706]: I1125 11:37:57.922052 4706 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 11:37:57 crc kubenswrapper[4706]: I1125 11:37:57.922168 4706 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 11:37:57 crc kubenswrapper[4706]: E1125 11:37:57.922241 4706 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-l99rd" podUID="14d69237-a4b7-43ea-ac81-f165eb532669" Nov 25 11:37:57 crc kubenswrapper[4706]: E1125 11:37:57.922365 4706 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 25 11:37:57 crc kubenswrapper[4706]: E1125 11:37:57.922485 4706 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 25 11:37:57 crc kubenswrapper[4706]: E1125 11:37:57.922754 4706 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 25 11:37:57 crc kubenswrapper[4706]: I1125 11:37:57.973608 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:37:57 crc kubenswrapper[4706]: I1125 11:37:57.973664 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:37:57 crc kubenswrapper[4706]: I1125 11:37:57.973673 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:37:57 crc kubenswrapper[4706]: I1125 11:37:57.973690 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:37:57 crc kubenswrapper[4706]: I1125 11:37:57.973702 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:37:57Z","lastTransitionTime":"2025-11-25T11:37:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:37:58 crc kubenswrapper[4706]: I1125 11:37:58.076477 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:37:58 crc kubenswrapper[4706]: I1125 11:37:58.076523 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:37:58 crc kubenswrapper[4706]: I1125 11:37:58.076534 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:37:58 crc kubenswrapper[4706]: I1125 11:37:58.076552 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:37:58 crc kubenswrapper[4706]: I1125 11:37:58.076565 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:37:58Z","lastTransitionTime":"2025-11-25T11:37:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:37:58 crc kubenswrapper[4706]: I1125 11:37:58.179110 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:37:58 crc kubenswrapper[4706]: I1125 11:37:58.179157 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:37:58 crc kubenswrapper[4706]: I1125 11:37:58.179166 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:37:58 crc kubenswrapper[4706]: I1125 11:37:58.179185 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:37:58 crc kubenswrapper[4706]: I1125 11:37:58.179199 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:37:58Z","lastTransitionTime":"2025-11-25T11:37:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:37:58 crc kubenswrapper[4706]: I1125 11:37:58.282420 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:37:58 crc kubenswrapper[4706]: I1125 11:37:58.282478 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:37:58 crc kubenswrapper[4706]: I1125 11:37:58.282496 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:37:58 crc kubenswrapper[4706]: I1125 11:37:58.282522 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:37:58 crc kubenswrapper[4706]: I1125 11:37:58.282539 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:37:58Z","lastTransitionTime":"2025-11-25T11:37:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:37:58 crc kubenswrapper[4706]: I1125 11:37:58.385890 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:37:58 crc kubenswrapper[4706]: I1125 11:37:58.385939 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:37:58 crc kubenswrapper[4706]: I1125 11:37:58.385948 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:37:58 crc kubenswrapper[4706]: I1125 11:37:58.385987 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:37:58 crc kubenswrapper[4706]: I1125 11:37:58.386001 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:37:58Z","lastTransitionTime":"2025-11-25T11:37:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:37:58 crc kubenswrapper[4706]: I1125 11:37:58.488630 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:37:58 crc kubenswrapper[4706]: I1125 11:37:58.489070 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:37:58 crc kubenswrapper[4706]: I1125 11:37:58.489079 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:37:58 crc kubenswrapper[4706]: I1125 11:37:58.489096 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:37:58 crc kubenswrapper[4706]: I1125 11:37:58.489106 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:37:58Z","lastTransitionTime":"2025-11-25T11:37:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:37:58 crc kubenswrapper[4706]: I1125 11:37:58.591867 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:37:58 crc kubenswrapper[4706]: I1125 11:37:58.591927 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:37:58 crc kubenswrapper[4706]: I1125 11:37:58.591939 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:37:58 crc kubenswrapper[4706]: I1125 11:37:58.591959 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:37:58 crc kubenswrapper[4706]: I1125 11:37:58.591972 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:37:58Z","lastTransitionTime":"2025-11-25T11:37:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:37:58 crc kubenswrapper[4706]: I1125 11:37:58.695245 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:37:58 crc kubenswrapper[4706]: I1125 11:37:58.695331 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:37:58 crc kubenswrapper[4706]: I1125 11:37:58.695352 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:37:58 crc kubenswrapper[4706]: I1125 11:37:58.695380 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:37:58 crc kubenswrapper[4706]: I1125 11:37:58.695399 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:37:58Z","lastTransitionTime":"2025-11-25T11:37:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:37:58 crc kubenswrapper[4706]: I1125 11:37:58.798104 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:37:58 crc kubenswrapper[4706]: I1125 11:37:58.798153 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:37:58 crc kubenswrapper[4706]: I1125 11:37:58.798171 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:37:58 crc kubenswrapper[4706]: I1125 11:37:58.798189 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:37:58 crc kubenswrapper[4706]: I1125 11:37:58.798199 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:37:58Z","lastTransitionTime":"2025-11-25T11:37:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:37:58 crc kubenswrapper[4706]: I1125 11:37:58.900799 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:37:58 crc kubenswrapper[4706]: I1125 11:37:58.900852 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:37:58 crc kubenswrapper[4706]: I1125 11:37:58.900864 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:37:58 crc kubenswrapper[4706]: I1125 11:37:58.900884 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:37:58 crc kubenswrapper[4706]: I1125 11:37:58.900899 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:37:58Z","lastTransitionTime":"2025-11-25T11:37:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:37:59 crc kubenswrapper[4706]: I1125 11:37:59.003719 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:37:59 crc kubenswrapper[4706]: I1125 11:37:59.003762 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:37:59 crc kubenswrapper[4706]: I1125 11:37:59.003771 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:37:59 crc kubenswrapper[4706]: I1125 11:37:59.003789 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:37:59 crc kubenswrapper[4706]: I1125 11:37:59.003799 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:37:59Z","lastTransitionTime":"2025-11-25T11:37:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:37:59 crc kubenswrapper[4706]: I1125 11:37:59.041383 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:37:59 crc kubenswrapper[4706]: I1125 11:37:59.041430 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:37:59 crc kubenswrapper[4706]: I1125 11:37:59.041446 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:37:59 crc kubenswrapper[4706]: I1125 11:37:59.041465 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:37:59 crc kubenswrapper[4706]: I1125 11:37:59.041477 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:37:59Z","lastTransitionTime":"2025-11-25T11:37:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:37:59 crc kubenswrapper[4706]: E1125 11:37:59.054917 4706 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T11:37:59Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T11:37:59Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T11:37:59Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T11:37:59Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T11:37:59Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T11:37:59Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T11:37:59Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T11:37:59Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"30198dc8-e58c-4847-a541-041da1924c5c\\\",\\\"systemUUID\\\":\\\"7dac62ec-3979-4862-b1af-b63212907795\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:37:59Z is after 2025-08-24T17:21:41Z" Nov 25 11:37:59 crc kubenswrapper[4706]: I1125 11:37:59.059029 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:37:59 crc kubenswrapper[4706]: I1125 11:37:59.059085 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:37:59 crc kubenswrapper[4706]: I1125 11:37:59.059096 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:37:59 crc kubenswrapper[4706]: I1125 11:37:59.059116 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:37:59 crc kubenswrapper[4706]: I1125 11:37:59.059131 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:37:59Z","lastTransitionTime":"2025-11-25T11:37:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:37:59 crc kubenswrapper[4706]: E1125 11:37:59.071262 4706 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T11:37:59Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T11:37:59Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T11:37:59Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T11:37:59Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T11:37:59Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T11:37:59Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T11:37:59Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T11:37:59Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"30198dc8-e58c-4847-a541-041da1924c5c\\\",\\\"systemUUID\\\":\\\"7dac62ec-3979-4862-b1af-b63212907795\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:37:59Z is after 2025-08-24T17:21:41Z" Nov 25 11:37:59 crc kubenswrapper[4706]: I1125 11:37:59.078076 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:37:59 crc kubenswrapper[4706]: I1125 11:37:59.078127 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:37:59 crc kubenswrapper[4706]: I1125 11:37:59.078136 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:37:59 crc kubenswrapper[4706]: I1125 11:37:59.078154 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:37:59 crc kubenswrapper[4706]: I1125 11:37:59.078168 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:37:59Z","lastTransitionTime":"2025-11-25T11:37:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:37:59 crc kubenswrapper[4706]: E1125 11:37:59.091791 4706 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T11:37:59Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T11:37:59Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T11:37:59Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T11:37:59Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T11:37:59Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T11:37:59Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T11:37:59Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T11:37:59Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"30198dc8-e58c-4847-a541-041da1924c5c\\\",\\\"systemUUID\\\":\\\"7dac62ec-3979-4862-b1af-b63212907795\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:37:59Z is after 2025-08-24T17:21:41Z" Nov 25 11:37:59 crc kubenswrapper[4706]: I1125 11:37:59.095281 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:37:59 crc kubenswrapper[4706]: I1125 11:37:59.095337 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:37:59 crc kubenswrapper[4706]: I1125 11:37:59.095349 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:37:59 crc kubenswrapper[4706]: I1125 11:37:59.095369 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:37:59 crc kubenswrapper[4706]: I1125 11:37:59.095381 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:37:59Z","lastTransitionTime":"2025-11-25T11:37:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:37:59 crc kubenswrapper[4706]: E1125 11:37:59.108169 4706 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T11:37:59Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T11:37:59Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T11:37:59Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T11:37:59Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T11:37:59Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T11:37:59Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T11:37:59Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T11:37:59Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"30198dc8-e58c-4847-a541-041da1924c5c\\\",\\\"systemUUID\\\":\\\"7dac62ec-3979-4862-b1af-b63212907795\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:37:59Z is after 2025-08-24T17:21:41Z" Nov 25 11:37:59 crc kubenswrapper[4706]: I1125 11:37:59.111668 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:37:59 crc kubenswrapper[4706]: I1125 11:37:59.111701 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:37:59 crc kubenswrapper[4706]: I1125 11:37:59.111709 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:37:59 crc kubenswrapper[4706]: I1125 11:37:59.111723 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:37:59 crc kubenswrapper[4706]: I1125 11:37:59.111736 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:37:59Z","lastTransitionTime":"2025-11-25T11:37:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:37:59 crc kubenswrapper[4706]: E1125 11:37:59.125042 4706 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T11:37:59Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T11:37:59Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T11:37:59Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T11:37:59Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T11:37:59Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T11:37:59Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T11:37:59Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T11:37:59Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"30198dc8-e58c-4847-a541-041da1924c5c\\\",\\\"systemUUID\\\":\\\"7dac62ec-3979-4862-b1af-b63212907795\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:37:59Z is after 2025-08-24T17:21:41Z" Nov 25 11:37:59 crc kubenswrapper[4706]: E1125 11:37:59.125211 4706 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Nov 25 11:37:59 crc kubenswrapper[4706]: I1125 11:37:59.127059 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:37:59 crc kubenswrapper[4706]: I1125 11:37:59.127102 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:37:59 crc kubenswrapper[4706]: I1125 11:37:59.127111 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:37:59 crc kubenswrapper[4706]: I1125 11:37:59.127129 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:37:59 crc kubenswrapper[4706]: I1125 11:37:59.127139 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:37:59Z","lastTransitionTime":"2025-11-25T11:37:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 11:37:59 crc kubenswrapper[4706]: I1125 11:37:59.229819 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:37:59 crc kubenswrapper[4706]: I1125 11:37:59.229854 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:37:59 crc kubenswrapper[4706]: I1125 11:37:59.229863 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:37:59 crc kubenswrapper[4706]: I1125 11:37:59.229879 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:37:59 crc kubenswrapper[4706]: I1125 11:37:59.229889 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:37:59Z","lastTransitionTime":"2025-11-25T11:37:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:37:59 crc kubenswrapper[4706]: I1125 11:37:59.332468 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:37:59 crc kubenswrapper[4706]: I1125 11:37:59.332507 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:37:59 crc kubenswrapper[4706]: I1125 11:37:59.332517 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:37:59 crc kubenswrapper[4706]: I1125 11:37:59.332532 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:37:59 crc kubenswrapper[4706]: I1125 11:37:59.332542 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:37:59Z","lastTransitionTime":"2025-11-25T11:37:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:37:59 crc kubenswrapper[4706]: I1125 11:37:59.434883 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:37:59 crc kubenswrapper[4706]: I1125 11:37:59.434920 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:37:59 crc kubenswrapper[4706]: I1125 11:37:59.434929 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:37:59 crc kubenswrapper[4706]: I1125 11:37:59.434945 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:37:59 crc kubenswrapper[4706]: I1125 11:37:59.434961 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:37:59Z","lastTransitionTime":"2025-11-25T11:37:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:37:59 crc kubenswrapper[4706]: I1125 11:37:59.537591 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:37:59 crc kubenswrapper[4706]: I1125 11:37:59.537658 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:37:59 crc kubenswrapper[4706]: I1125 11:37:59.537682 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:37:59 crc kubenswrapper[4706]: I1125 11:37:59.537711 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:37:59 crc kubenswrapper[4706]: I1125 11:37:59.537731 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:37:59Z","lastTransitionTime":"2025-11-25T11:37:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:37:59 crc kubenswrapper[4706]: I1125 11:37:59.640993 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:37:59 crc kubenswrapper[4706]: I1125 11:37:59.641060 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:37:59 crc kubenswrapper[4706]: I1125 11:37:59.641071 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:37:59 crc kubenswrapper[4706]: I1125 11:37:59.641092 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:37:59 crc kubenswrapper[4706]: I1125 11:37:59.641107 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:37:59Z","lastTransitionTime":"2025-11-25T11:37:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:37:59 crc kubenswrapper[4706]: I1125 11:37:59.743514 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:37:59 crc kubenswrapper[4706]: I1125 11:37:59.743573 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:37:59 crc kubenswrapper[4706]: I1125 11:37:59.743586 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:37:59 crc kubenswrapper[4706]: I1125 11:37:59.743610 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:37:59 crc kubenswrapper[4706]: I1125 11:37:59.743624 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:37:59Z","lastTransitionTime":"2025-11-25T11:37:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:37:59 crc kubenswrapper[4706]: I1125 11:37:59.846169 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:37:59 crc kubenswrapper[4706]: I1125 11:37:59.846209 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:37:59 crc kubenswrapper[4706]: I1125 11:37:59.846218 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:37:59 crc kubenswrapper[4706]: I1125 11:37:59.846233 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:37:59 crc kubenswrapper[4706]: I1125 11:37:59.846242 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:37:59Z","lastTransitionTime":"2025-11-25T11:37:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 11:37:59 crc kubenswrapper[4706]: I1125 11:37:59.922277 4706 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 11:37:59 crc kubenswrapper[4706]: I1125 11:37:59.922429 4706 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 11:37:59 crc kubenswrapper[4706]: I1125 11:37:59.922478 4706 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 11:37:59 crc kubenswrapper[4706]: I1125 11:37:59.922504 4706 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-l99rd" Nov 25 11:37:59 crc kubenswrapper[4706]: E1125 11:37:59.922736 4706 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 25 11:37:59 crc kubenswrapper[4706]: E1125 11:37:59.922795 4706 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 25 11:37:59 crc kubenswrapper[4706]: E1125 11:37:59.922881 4706 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 25 11:37:59 crc kubenswrapper[4706]: E1125 11:37:59.923010 4706 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-l99rd" podUID="14d69237-a4b7-43ea-ac81-f165eb532669" Nov 25 11:37:59 crc kubenswrapper[4706]: I1125 11:37:59.949168 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:37:59 crc kubenswrapper[4706]: I1125 11:37:59.949225 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:37:59 crc kubenswrapper[4706]: I1125 11:37:59.949236 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:37:59 crc kubenswrapper[4706]: I1125 11:37:59.949254 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:37:59 crc kubenswrapper[4706]: I1125 11:37:59.949265 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:37:59Z","lastTransitionTime":"2025-11-25T11:37:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:38:00 crc kubenswrapper[4706]: I1125 11:38:00.052408 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:38:00 crc kubenswrapper[4706]: I1125 11:38:00.052460 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:38:00 crc kubenswrapper[4706]: I1125 11:38:00.052469 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:38:00 crc kubenswrapper[4706]: I1125 11:38:00.052492 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:38:00 crc kubenswrapper[4706]: I1125 11:38:00.052503 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:38:00Z","lastTransitionTime":"2025-11-25T11:38:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:38:00 crc kubenswrapper[4706]: I1125 11:38:00.155010 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:38:00 crc kubenswrapper[4706]: I1125 11:38:00.155063 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:38:00 crc kubenswrapper[4706]: I1125 11:38:00.155073 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:38:00 crc kubenswrapper[4706]: I1125 11:38:00.155095 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:38:00 crc kubenswrapper[4706]: I1125 11:38:00.155115 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:38:00Z","lastTransitionTime":"2025-11-25T11:38:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:38:00 crc kubenswrapper[4706]: I1125 11:38:00.257037 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:38:00 crc kubenswrapper[4706]: I1125 11:38:00.257089 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:38:00 crc kubenswrapper[4706]: I1125 11:38:00.257097 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:38:00 crc kubenswrapper[4706]: I1125 11:38:00.257113 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:38:00 crc kubenswrapper[4706]: I1125 11:38:00.257126 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:38:00Z","lastTransitionTime":"2025-11-25T11:38:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:38:00 crc kubenswrapper[4706]: I1125 11:38:00.359771 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:38:00 crc kubenswrapper[4706]: I1125 11:38:00.359846 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:38:00 crc kubenswrapper[4706]: I1125 11:38:00.359857 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:38:00 crc kubenswrapper[4706]: I1125 11:38:00.359875 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:38:00 crc kubenswrapper[4706]: I1125 11:38:00.359886 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:38:00Z","lastTransitionTime":"2025-11-25T11:38:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:38:00 crc kubenswrapper[4706]: I1125 11:38:00.462071 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:38:00 crc kubenswrapper[4706]: I1125 11:38:00.462148 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:38:00 crc kubenswrapper[4706]: I1125 11:38:00.462168 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:38:00 crc kubenswrapper[4706]: I1125 11:38:00.462195 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:38:00 crc kubenswrapper[4706]: I1125 11:38:00.462214 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:38:00Z","lastTransitionTime":"2025-11-25T11:38:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:38:00 crc kubenswrapper[4706]: I1125 11:38:00.565873 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:38:00 crc kubenswrapper[4706]: I1125 11:38:00.565938 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:38:00 crc kubenswrapper[4706]: I1125 11:38:00.565950 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:38:00 crc kubenswrapper[4706]: I1125 11:38:00.565971 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:38:00 crc kubenswrapper[4706]: I1125 11:38:00.565985 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:38:00Z","lastTransitionTime":"2025-11-25T11:38:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:38:00 crc kubenswrapper[4706]: I1125 11:38:00.668632 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:38:00 crc kubenswrapper[4706]: I1125 11:38:00.668679 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:38:00 crc kubenswrapper[4706]: I1125 11:38:00.668696 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:38:00 crc kubenswrapper[4706]: I1125 11:38:00.668712 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:38:00 crc kubenswrapper[4706]: I1125 11:38:00.668722 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:38:00Z","lastTransitionTime":"2025-11-25T11:38:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:38:00 crc kubenswrapper[4706]: I1125 11:38:00.772041 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:38:00 crc kubenswrapper[4706]: I1125 11:38:00.772089 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:38:00 crc kubenswrapper[4706]: I1125 11:38:00.772097 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:38:00 crc kubenswrapper[4706]: I1125 11:38:00.772114 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:38:00 crc kubenswrapper[4706]: I1125 11:38:00.772126 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:38:00Z","lastTransitionTime":"2025-11-25T11:38:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:38:00 crc kubenswrapper[4706]: I1125 11:38:00.875073 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:38:00 crc kubenswrapper[4706]: I1125 11:38:00.875126 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:38:00 crc kubenswrapper[4706]: I1125 11:38:00.875141 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:38:00 crc kubenswrapper[4706]: I1125 11:38:00.875159 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:38:00 crc kubenswrapper[4706]: I1125 11:38:00.875172 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:38:00Z","lastTransitionTime":"2025-11-25T11:38:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:38:00 crc kubenswrapper[4706]: I1125 11:38:00.978346 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:38:00 crc kubenswrapper[4706]: I1125 11:38:00.978404 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:38:00 crc kubenswrapper[4706]: I1125 11:38:00.978418 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:38:00 crc kubenswrapper[4706]: I1125 11:38:00.978439 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:38:00 crc kubenswrapper[4706]: I1125 11:38:00.978452 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:38:00Z","lastTransitionTime":"2025-11-25T11:38:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:38:01 crc kubenswrapper[4706]: I1125 11:38:01.082834 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:38:01 crc kubenswrapper[4706]: I1125 11:38:01.082879 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:38:01 crc kubenswrapper[4706]: I1125 11:38:01.082893 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:38:01 crc kubenswrapper[4706]: I1125 11:38:01.082916 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:38:01 crc kubenswrapper[4706]: I1125 11:38:01.082929 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:38:01Z","lastTransitionTime":"2025-11-25T11:38:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:38:01 crc kubenswrapper[4706]: I1125 11:38:01.185507 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:38:01 crc kubenswrapper[4706]: I1125 11:38:01.185945 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:38:01 crc kubenswrapper[4706]: I1125 11:38:01.186043 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:38:01 crc kubenswrapper[4706]: I1125 11:38:01.186138 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:38:01 crc kubenswrapper[4706]: I1125 11:38:01.186217 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:38:01Z","lastTransitionTime":"2025-11-25T11:38:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:38:01 crc kubenswrapper[4706]: I1125 11:38:01.289701 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:38:01 crc kubenswrapper[4706]: I1125 11:38:01.290158 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:38:01 crc kubenswrapper[4706]: I1125 11:38:01.290242 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:38:01 crc kubenswrapper[4706]: I1125 11:38:01.290357 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:38:01 crc kubenswrapper[4706]: I1125 11:38:01.290422 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:38:01Z","lastTransitionTime":"2025-11-25T11:38:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:38:01 crc kubenswrapper[4706]: I1125 11:38:01.393475 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:38:01 crc kubenswrapper[4706]: I1125 11:38:01.393875 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:38:01 crc kubenswrapper[4706]: I1125 11:38:01.393950 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:38:01 crc kubenswrapper[4706]: I1125 11:38:01.394017 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:38:01 crc kubenswrapper[4706]: I1125 11:38:01.394075 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:38:01Z","lastTransitionTime":"2025-11-25T11:38:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:38:01 crc kubenswrapper[4706]: I1125 11:38:01.496845 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:38:01 crc kubenswrapper[4706]: I1125 11:38:01.496898 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:38:01 crc kubenswrapper[4706]: I1125 11:38:01.496916 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:38:01 crc kubenswrapper[4706]: I1125 11:38:01.496935 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:38:01 crc kubenswrapper[4706]: I1125 11:38:01.496948 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:38:01Z","lastTransitionTime":"2025-11-25T11:38:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:38:01 crc kubenswrapper[4706]: I1125 11:38:01.600180 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:38:01 crc kubenswrapper[4706]: I1125 11:38:01.600242 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:38:01 crc kubenswrapper[4706]: I1125 11:38:01.600253 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:38:01 crc kubenswrapper[4706]: I1125 11:38:01.600272 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:38:01 crc kubenswrapper[4706]: I1125 11:38:01.600285 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:38:01Z","lastTransitionTime":"2025-11-25T11:38:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:38:01 crc kubenswrapper[4706]: I1125 11:38:01.703480 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:38:01 crc kubenswrapper[4706]: I1125 11:38:01.703519 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:38:01 crc kubenswrapper[4706]: I1125 11:38:01.703530 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:38:01 crc kubenswrapper[4706]: I1125 11:38:01.703545 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:38:01 crc kubenswrapper[4706]: I1125 11:38:01.703554 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:38:01Z","lastTransitionTime":"2025-11-25T11:38:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:38:01 crc kubenswrapper[4706]: I1125 11:38:01.806086 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:38:01 crc kubenswrapper[4706]: I1125 11:38:01.806163 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:38:01 crc kubenswrapper[4706]: I1125 11:38:01.806176 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:38:01 crc kubenswrapper[4706]: I1125 11:38:01.806201 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:38:01 crc kubenswrapper[4706]: I1125 11:38:01.806214 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:38:01Z","lastTransitionTime":"2025-11-25T11:38:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:38:01 crc kubenswrapper[4706]: I1125 11:38:01.909001 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:38:01 crc kubenswrapper[4706]: I1125 11:38:01.909051 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:38:01 crc kubenswrapper[4706]: I1125 11:38:01.909062 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:38:01 crc kubenswrapper[4706]: I1125 11:38:01.909080 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:38:01 crc kubenswrapper[4706]: I1125 11:38:01.909092 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:38:01Z","lastTransitionTime":"2025-11-25T11:38:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 11:38:01 crc kubenswrapper[4706]: I1125 11:38:01.922063 4706 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 11:38:01 crc kubenswrapper[4706]: I1125 11:38:01.922209 4706 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 11:38:01 crc kubenswrapper[4706]: I1125 11:38:01.922411 4706 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-l99rd" Nov 25 11:38:01 crc kubenswrapper[4706]: E1125 11:38:01.922409 4706 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 25 11:38:01 crc kubenswrapper[4706]: I1125 11:38:01.922648 4706 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 11:38:01 crc kubenswrapper[4706]: E1125 11:38:01.922632 4706 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 25 11:38:01 crc kubenswrapper[4706]: E1125 11:38:01.922774 4706 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-l99rd" podUID="14d69237-a4b7-43ea-ac81-f165eb532669" Nov 25 11:38:01 crc kubenswrapper[4706]: E1125 11:38:01.922841 4706 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 25 11:38:01 crc kubenswrapper[4706]: I1125 11:38:01.937950 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b156f76-9878-4527-95c5-27adfffbcd87\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:37:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:37:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b50a8135a692a512f05f3a902977e8b7a505d8346fb6e96c26ffc58d075e902c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc
35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7224a1c52df964a792e6197a4f97313b139ffbd6d65820d93e36561e817ddc20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://78068d04cf52a463ca3595227c44918d360266c71afc97c1792e48b004bebe42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\
\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0299d89c1a2ea9c2a4bb46691aecd2d86618d3620e7406e1af57e1c03ce50b94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0299d89c1a2ea9c2a4bb46691aecd2d86618d3620e7406e1af57e1c03ce50b94\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T11:36:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T11:36:33Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T11:36:32Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:38:01Z is after 2025-08-24T17:21:41Z" Nov 25 11:38:01 crc kubenswrapper[4706]: I1125 11:38:01.959838 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"21277b4b-1e5d-4345-ba2a-39957194f021\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e336808761e1c6c5eaa04fd06cbb4d0c0384a2cbd3dfd4c1b3a877e7e0f0c82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cfaf9f13d49eb5c52817b0d082263791cc1dca82a23282452f1393dd693ca27a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://634b7b0df29329562f6ead9641186eee129945efc5a2d784ff6474d213b2baea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b3642576d5ecf314b809b90f8a76244e5ea54178f78729eb6521b09b7daa9c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b63b9c87fed8e56acef62af3c5b75cf637a058ada9dd8ef5afc317e99e12162\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://adfea6c3d1d62c9ce2656cae203bccf32ab19165305db3731e3db92915dd7d4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://adfea6c3d1d62c9ce2656cae203bccf32ab19165305db3731e3db92915dd7d4c\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2025-11-25T11:36:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T11:36:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://29e8fd847ab683471a692fda5b7c7d6105db11c5aecf09bb23080c58cd97c06a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://29e8fd847ab683471a692fda5b7c7d6105db11c5aecf09bb23080c58cd97c06a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T11:36:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T11:36:34Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://a4b1f85fd0291c239854093c0b26f0802291cbe4c4bab384fc69a5d21165343a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a4b1f85fd0291c239854093c0b26f0802291cbe4c4bab384fc69a5d21165343a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T11:36:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T11:36:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T11:36:32Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:38:01Z is after 2025-08-24T17:21:41Z" Nov 25 11:38:01 crc kubenswrapper[4706]: I1125 11:38:01.975757 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://23abd4bcc68d2a090882edb55d0e8569032affe5f4ebf05279e18ba3e9f9d8db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount
\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a068e34d29a7f39157ffd6e364ce643f5280f5184c13a281043247117d451364\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:38:01Z is after 2025-08-24T17:21:41Z" Nov 25 11:38:01 crc kubenswrapper[4706]: I1125 11:38:01.993703 4706 
status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-cjmvf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"150b96fa-570a-4b32-a82a-3275127d5b51\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:37:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:37:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:37:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de18c07bf8490d7495947e9a271e3e7273b9ffdcc43afd2a0468394af0ae0b0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:37:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2ml6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.16
8.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f9f9981b5f064aa5b007f4b2a2ecdc7f783e1a33e73b9e8b157eccfc54e93ff6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f9f9981b5f064aa5b007f4b2a2ecdc7f783e1a33e73b9e8b157eccfc54e93ff6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T11:36:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T11:36:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2ml6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e1e9db3e634932b935a1eb04923d02faf743f2831039edeba41d172ea6d8c52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e1e9db3e634932b935a1eb04923d02faf743f2831039edeba41d172ea6d8c52\\\",\\\"exit
Code\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T11:36:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T11:36:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2ml6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0cee50b6983d9c650efbb5959311b6c33c2e0e2ff504fceadc8ff807f368c36e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0cee50b6983d9c650efbb5959311b6c33c2e0e2ff504fceadc8ff807f368c36e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T11:36:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T11:36:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"n
ame\\\":\\\"kube-api-access-d2ml6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://29281b46d740a7e527313a667c3896430eb51ba2c50c5e406fb94d8959dbe855\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://29281b46d740a7e527313a667c3896430eb51ba2c50c5e406fb94d8959dbe855\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T11:36:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T11:36:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2ml6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0ff2d1408b3b635ada726fc15a15472d3fd7c61e21ffe0379d137fdd543c436\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b0ff2d1408b3b
635ada726fc15a15472d3fd7c61e21ffe0379d137fdd543c436\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T11:37:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T11:37:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2ml6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c3b94746fe10e0f9375491a41d10973d2576eb69f0883cef3ef0132efb0e8fc9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c3b94746fe10e0f9375491a41d10973d2576eb69f0883cef3ef0132efb0e8fc9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T11:37:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T11:37:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2ml6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\
\\":\\\"2025-11-25T11:36:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-cjmvf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:38:01Z is after 2025-08-24T17:21:41Z" Nov 25 11:38:02 crc kubenswrapper[4706]: I1125 11:38:02.010171 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-l99rd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"14d69237-a4b7-43ea-ac81-f165eb532669\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:37:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:37:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:37:07Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:37:07Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mmr9l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mmr9l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T11:37:07Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-l99rd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:38:02Z is after 2025-08-24T17:21:41Z" Nov 25 11:38:02 crc 
kubenswrapper[4706]: I1125 11:38:02.010699 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:38:02 crc kubenswrapper[4706]: I1125 11:38:02.010868 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:38:02 crc kubenswrapper[4706]: I1125 11:38:02.010891 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:38:02 crc kubenswrapper[4706]: I1125 11:38:02.010912 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:38:02 crc kubenswrapper[4706]: I1125 11:38:02.010928 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:38:02Z","lastTransitionTime":"2025-11-25T11:38:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:38:02 crc kubenswrapper[4706]: I1125 11:38:02.027218 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ce0e2e75-834b-46fb-bc84-229e60f904b1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:37:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:37:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://86001c3abc077d36ed1fa0c37bb6163896fb9cde28b58affd2f67fb8a024165b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-d
ir\\\"}]},{\\\"containerID\\\":\\\"cri-o://24c326f147def477e6dd794576cbdc9aed69f799cc18984f475496748b05eb32\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c65af8b438f57256d8c22cb34f68922d628338e384ca97d694b0dbf2d41a5e27\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://db08dd21321e0e49c2bcec934b9c4ca65e93ed3eff5d3d110b0137d37ebe255e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945
c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://333951d9a31cf3e7c1e98d27f636e2425f87cd082a8a5acae66533a76f5ad206\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-25T11:36:51Z\\\",\\\"message\\\":\\\" shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI1125 11:36:51.292762 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI1125 11:36:51.292767 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI1125 11:36:51.292853 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI1125 11:36:51.292876 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI1125 11:36:51.293041 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-1230105117/tls.crt::/tmp/serving-cert-1230105117/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1764070595\\\\\\\\\\\\\\\" (2025-11-25 11:36:34 +0000 UTC to 2025-12-25 11:36:35 +0000 UTC (now=2025-11-25 11:36:51.29301304 +0000 UTC))\\\\\\\"\\\\nI1125 11:36:51.293171 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1230105117/tls.crt::/tmp/serving-cert-1230105117/tls.key\\\\\\\"\\\\nI1125 11:36:51.293210 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1764070605\\\\\\\\\\\\\\\" [serving] 
validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1764070605\\\\\\\\\\\\\\\" (2025-11-25 10:36:45 +0000 UTC to 2026-11-25 10:36:45 +0000 UTC (now=2025-11-25 11:36:51.293188774 +0000 UTC))\\\\\\\"\\\\nI1125 11:36:51.293233 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI1125 11:36:51.293259 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI1125 11:36:51.293279 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nF1125 11:36:51.293378 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T11:36:34Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe85a38abd8df52ad0fbd3dd6b048b8c42390b6064d3601996727dadb3fcbe69\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:34Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea87e7399f4267877f5e967eb62bd500b646e88ee8ee20a71c3bf7c0941f02a8\\\",\\\"image\\
\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ea87e7399f4267877f5e967eb62bd500b646e88ee8ee20a71c3bf7c0941f02a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T11:36:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T11:36:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T11:36:32Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:38:02Z is after 2025-08-24T17:21:41Z" Nov 25 11:38:02 crc kubenswrapper[4706]: I1125 11:38:02.043017 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-dhfpm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0930887a-320c-4506-8c9c-f94d6d64516a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://736e37ff944f81ac9808ff8a76d36837aeabc76a4c08bbeba3f707616e1f0884\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g7sgt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://86f4bfd310c27ea3b77c2f58c91e153db5f17948
71a3fbeb5711cc119aa81e38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g7sgt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T11:36:53Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-dhfpm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:38:02Z is after 2025-08-24T17:21:41Z" Nov 25 11:38:02 crc kubenswrapper[4706]: I1125 11:38:02.059061 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-nh9sc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7813e79d-885d-4cf1-ac27-039e998473b7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea634334242536d35bf36e9078539cad4658b161b61e6051d9bb6d8544e71f5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g9gvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T11:36:53Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-nh9sc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:38:02Z is after 2025-08-24T17:21:41Z" Nov 25 11:38:02 crc kubenswrapper[4706]: I1125 11:38:02.075023 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-qkkfz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dc09de93-57e8-4697-8ce8-70bfc1b693e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:37:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:37:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:37:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:37:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6daff2070c60f609fd06be9589e3cd8d304d131f7b9669c7be4b8e9178df8f8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d7
93426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:37:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hmrl8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://39eec3aac772cc9463505277d6b3f7cf2eb7621e4add4f14e53110e3db8c4cdc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:37:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hmrl8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T11:37:06Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-qkkfz\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:38:02Z is after 2025-08-24T17:21:41Z" Nov 25 11:38:02 crc kubenswrapper[4706]: I1125 11:38:02.092128 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad79bed891e80837fc120b01cb2b41a16493f2f5281c83a6bb489cc17c6da995\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:38:02Z is after 2025-08-24T17:21:41Z" Nov 25 11:38:02 crc kubenswrapper[4706]: I1125 11:38:02.106746 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-lpc7s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3ec2e656-a68d-4339-92d5-0c157f7f7783\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c3a1481dd8cb88b79d8addfbfd40caf18850769e4492c2af316105b7f6779f9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/op
enshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w54mf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T11:36:56Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lpc7s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:38:02Z is after 2025-08-24T17:21:41Z" Nov 25 11:38:02 crc kubenswrapper[4706]: I1125 11:38:02.113565 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:38:02 crc kubenswrapper[4706]: I1125 11:38:02.113628 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:38:02 crc kubenswrapper[4706]: I1125 11:38:02.113641 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:38:02 crc kubenswrapper[4706]: I1125 11:38:02.113662 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:38:02 crc 
kubenswrapper[4706]: I1125 11:38:02.113678 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:38:02Z","lastTransitionTime":"2025-11-25T11:38:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 11:38:02 crc kubenswrapper[4706]: I1125 11:38:02.128121 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-q9rpr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f1218bae-4153-4490-8847-ab2d07ca0ab6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:53Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:53Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da5cea02464a703174faaa2a8a7dc6ba3c26bca96be0219f7304d81aba5be54e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b55sf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e92e9ade6889e5400b3c3ddff066aa544d425cf0637b75071678b8c63f8e35f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b55sf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca28080773ed8c026159b2309297e1c8ccd7cf79c4c19e3a62d89bc5a95851fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b55sf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://86d79d5837993b0bfb40c7114fd69f45a9bfd2e956b5b0fe062706e920fecd48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:55Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b55sf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7df3bf6c507e0fd5fb0f32a8785d67c96f47255fdc5d2aafb8838260ac334d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b55sf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://96aa7fcebdc88f01d2260f95d255244e28c30d422f954da2222a5b7c17d05b96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b55sf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1dfdc34e2de4aa061b93f1227bc4e3076853848aa13d8122c69d84f2a3c9bb5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a1dfdc34e2de4aa061b93f1227bc4e3076853848aa13d8122c69d84f2a3c9bb5\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-25T11:37:46Z\\\",\\\"message\\\":\\\"licy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.217.4.176],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,TrafficDistribution:nil,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngre
ss{},},Conditions:[]Condition{},},}\\\\nI1125 11:37:46.833085 6714 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-kube-scheduler-operator/metrics]} name:Service_openshift-kube-scheduler-operator/metrics_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.233:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {1dc899db-4498-4b7a-8437-861940b962e7}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF1125 11:37:46.833121 6714 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handle\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T11:37:46Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-q9rpr_openshift-ovn-kubernetes(f1218bae-4153-4490-8847-ab2d07ca0ab6)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b55sf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://62c923d955013808a55d99cb73f4239900fc83a2f53e1e8cceff3e9bc5768188\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b55sf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56474d5374e1047078d38c60dcbd00f4495bcc0d651a9e75fa70d64e34b10baa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://56474d5374e1047078
d38c60dcbd00f4495bcc0d651a9e75fa70d64e34b10baa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T11:36:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T11:36:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b55sf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T11:36:53Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-q9rpr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:38:02Z is after 2025-08-24T17:21:41Z" Nov 25 11:38:02 crc kubenswrapper[4706]: I1125 11:38:02.142569 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"363ff191-6229-47e9-a7d0-1c72f21e7c61\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://71b496da1a81efbb50a84766e610a6b03e032a4e2cb5a71191395ffb85f6b1f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://83b1d9c60793e3e0b5943d7cccd50656df78c4655b84e12c8dd1ba7d99a7990d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab8621c83015577b9039ac2ba9ce46f8b29f66d77da31a02d179132d923741bd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4d0ce4e175dd8da8d15b26e60ced87ee11dc8079ce730cfbdce1b3f4f08b1d2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-11-25T11:36:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T11:36:32Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:38:02Z is after 2025-08-24T17:21:41Z" Nov 25 11:38:02 crc kubenswrapper[4706]: I1125 11:38:02.156777 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"27ae65a2-2109-4ce8-a927-ad8b8cff1aae\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://44f97c784f83c5f2d1cfce3f39f43a832fa8da73add257ae9c39f001bbfe3999\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a03748c4ae77a0195537510fbf39f425fb59b820b719972a26c1cbaa4e1faa0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3a03748c4ae77a0195537510fbf39f425fb59b820b719972a26c1cbaa4e1faa0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T11:36:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T11:36:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T11:36:32Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:38:02Z is after 2025-08-24T17:21:41Z" Nov 25 11:38:02 crc kubenswrapper[4706]: I1125 11:38:02.173379 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://998291d5af3be798ff4e2f00d043f615e086fef44e541071bbaf781983955ce6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-11-25T11:38:02Z is after 2025-08-24T17:21:41Z" Nov 25 11:38:02 crc kubenswrapper[4706]: I1125 11:38:02.191738 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:51Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:38:02Z is after 2025-08-24T17:21:41Z" Nov 25 11:38:02 crc kubenswrapper[4706]: I1125 11:38:02.206855 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:51Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:38:02Z is after 2025-08-24T17:21:41Z" Nov 25 11:38:02 crc kubenswrapper[4706]: I1125 11:38:02.216276 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:38:02 crc kubenswrapper[4706]: I1125 11:38:02.216356 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:38:02 crc kubenswrapper[4706]: I1125 11:38:02.216371 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:38:02 crc 
kubenswrapper[4706]: I1125 11:38:02.216394 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:38:02 crc kubenswrapper[4706]: I1125 11:38:02.216412 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:38:02Z","lastTransitionTime":"2025-11-25T11:38:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 11:38:02 crc kubenswrapper[4706]: I1125 11:38:02.222059 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:51Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:38:02Z is after 2025-08-24T17:21:41Z" Nov 25 11:38:02 crc kubenswrapper[4706]: I1125 11:38:02.243059 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-s47nr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9912058e-28f5-4cec-9eeb-03e37e0dc5c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:37:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:37:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8831e77983548cfffd56f81ff9f25b90d70dfb71b47b545af370b0a813fa19a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d03353478b53d9441951702b66365bb3a08ad9c509347472bbb31049851435a4\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-25T11:37:43Z\\\",\\\"message\\\":\\\"2025-11-25T11:36:57+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_64de4bb2-4e36-445e-91b1-9f500f3480d1\\\\n2025-11-25T11:36:57+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_64de4bb2-4e36-445e-91b1-9f500f3480d1 to /host/opt/cni/bin/\\\\n2025-11-25T11:36:58Z [verbose] multus-daemon started\\\\n2025-11-25T11:36:58Z [verbose] 
Readiness Indicator file check\\\\n2025-11-25T11:37:43Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T11:36:55Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:37:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wfqx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T11:36:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-s47nr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:38:02Z is after 2025-08-24T17:21:41Z" Nov 25 11:38:02 crc kubenswrapper[4706]: I1125 11:38:02.319292 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:38:02 crc kubenswrapper[4706]: I1125 11:38:02.319348 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:38:02 crc kubenswrapper[4706]: I1125 11:38:02.319358 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:38:02 crc kubenswrapper[4706]: I1125 11:38:02.319376 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:38:02 crc kubenswrapper[4706]: I1125 11:38:02.319386 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:38:02Z","lastTransitionTime":"2025-11-25T11:38:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:38:02 crc kubenswrapper[4706]: I1125 11:38:02.422150 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:38:02 crc kubenswrapper[4706]: I1125 11:38:02.422217 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:38:02 crc kubenswrapper[4706]: I1125 11:38:02.422230 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:38:02 crc kubenswrapper[4706]: I1125 11:38:02.422249 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:38:02 crc kubenswrapper[4706]: I1125 11:38:02.422261 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:38:02Z","lastTransitionTime":"2025-11-25T11:38:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:38:02 crc kubenswrapper[4706]: I1125 11:38:02.525446 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:38:02 crc kubenswrapper[4706]: I1125 11:38:02.525493 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:38:02 crc kubenswrapper[4706]: I1125 11:38:02.525505 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:38:02 crc kubenswrapper[4706]: I1125 11:38:02.525523 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:38:02 crc kubenswrapper[4706]: I1125 11:38:02.525535 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:38:02Z","lastTransitionTime":"2025-11-25T11:38:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:38:02 crc kubenswrapper[4706]: I1125 11:38:02.629097 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:38:02 crc kubenswrapper[4706]: I1125 11:38:02.629167 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:38:02 crc kubenswrapper[4706]: I1125 11:38:02.629177 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:38:02 crc kubenswrapper[4706]: I1125 11:38:02.629195 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:38:02 crc kubenswrapper[4706]: I1125 11:38:02.629207 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:38:02Z","lastTransitionTime":"2025-11-25T11:38:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:38:02 crc kubenswrapper[4706]: I1125 11:38:02.732886 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:38:02 crc kubenswrapper[4706]: I1125 11:38:02.732949 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:38:02 crc kubenswrapper[4706]: I1125 11:38:02.732958 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:38:02 crc kubenswrapper[4706]: I1125 11:38:02.732979 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:38:02 crc kubenswrapper[4706]: I1125 11:38:02.732994 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:38:02Z","lastTransitionTime":"2025-11-25T11:38:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:38:02 crc kubenswrapper[4706]: I1125 11:38:02.836130 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:38:02 crc kubenswrapper[4706]: I1125 11:38:02.836192 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:38:02 crc kubenswrapper[4706]: I1125 11:38:02.836205 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:38:02 crc kubenswrapper[4706]: I1125 11:38:02.836232 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:38:02 crc kubenswrapper[4706]: I1125 11:38:02.836247 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:38:02Z","lastTransitionTime":"2025-11-25T11:38:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:38:02 crc kubenswrapper[4706]: I1125 11:38:02.923183 4706 scope.go:117] "RemoveContainer" containerID="a1dfdc34e2de4aa061b93f1227bc4e3076853848aa13d8122c69d84f2a3c9bb5" Nov 25 11:38:02 crc kubenswrapper[4706]: E1125 11:38:02.923454 4706 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-q9rpr_openshift-ovn-kubernetes(f1218bae-4153-4490-8847-ab2d07ca0ab6)\"" pod="openshift-ovn-kubernetes/ovnkube-node-q9rpr" podUID="f1218bae-4153-4490-8847-ab2d07ca0ab6" Nov 25 11:38:02 crc kubenswrapper[4706]: I1125 11:38:02.939860 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:38:02 crc kubenswrapper[4706]: I1125 11:38:02.939948 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:38:02 crc kubenswrapper[4706]: I1125 11:38:02.939964 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:38:02 crc kubenswrapper[4706]: I1125 11:38:02.940011 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:38:02 crc kubenswrapper[4706]: I1125 11:38:02.940030 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:38:02Z","lastTransitionTime":"2025-11-25T11:38:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:38:03 crc kubenswrapper[4706]: I1125 11:38:03.043033 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:38:03 crc kubenswrapper[4706]: I1125 11:38:03.043078 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:38:03 crc kubenswrapper[4706]: I1125 11:38:03.043093 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:38:03 crc kubenswrapper[4706]: I1125 11:38:03.043114 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:38:03 crc kubenswrapper[4706]: I1125 11:38:03.043128 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:38:03Z","lastTransitionTime":"2025-11-25T11:38:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:38:03 crc kubenswrapper[4706]: I1125 11:38:03.145702 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:38:03 crc kubenswrapper[4706]: I1125 11:38:03.145755 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:38:03 crc kubenswrapper[4706]: I1125 11:38:03.145768 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:38:03 crc kubenswrapper[4706]: I1125 11:38:03.145787 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:38:03 crc kubenswrapper[4706]: I1125 11:38:03.145800 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:38:03Z","lastTransitionTime":"2025-11-25T11:38:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:38:03 crc kubenswrapper[4706]: I1125 11:38:03.249216 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:38:03 crc kubenswrapper[4706]: I1125 11:38:03.249281 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:38:03 crc kubenswrapper[4706]: I1125 11:38:03.249295 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:38:03 crc kubenswrapper[4706]: I1125 11:38:03.249338 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:38:03 crc kubenswrapper[4706]: I1125 11:38:03.249348 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:38:03Z","lastTransitionTime":"2025-11-25T11:38:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:38:03 crc kubenswrapper[4706]: I1125 11:38:03.352612 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:38:03 crc kubenswrapper[4706]: I1125 11:38:03.352651 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:38:03 crc kubenswrapper[4706]: I1125 11:38:03.352662 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:38:03 crc kubenswrapper[4706]: I1125 11:38:03.352682 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:38:03 crc kubenswrapper[4706]: I1125 11:38:03.352694 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:38:03Z","lastTransitionTime":"2025-11-25T11:38:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:38:03 crc kubenswrapper[4706]: I1125 11:38:03.456793 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:38:03 crc kubenswrapper[4706]: I1125 11:38:03.456835 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:38:03 crc kubenswrapper[4706]: I1125 11:38:03.456846 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:38:03 crc kubenswrapper[4706]: I1125 11:38:03.456867 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:38:03 crc kubenswrapper[4706]: I1125 11:38:03.456879 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:38:03Z","lastTransitionTime":"2025-11-25T11:38:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:38:03 crc kubenswrapper[4706]: I1125 11:38:03.559513 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:38:03 crc kubenswrapper[4706]: I1125 11:38:03.559550 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:38:03 crc kubenswrapper[4706]: I1125 11:38:03.559558 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:38:03 crc kubenswrapper[4706]: I1125 11:38:03.559575 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:38:03 crc kubenswrapper[4706]: I1125 11:38:03.559588 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:38:03Z","lastTransitionTime":"2025-11-25T11:38:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:38:03 crc kubenswrapper[4706]: I1125 11:38:03.662124 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:38:03 crc kubenswrapper[4706]: I1125 11:38:03.662163 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:38:03 crc kubenswrapper[4706]: I1125 11:38:03.662174 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:38:03 crc kubenswrapper[4706]: I1125 11:38:03.662191 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:38:03 crc kubenswrapper[4706]: I1125 11:38:03.662237 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:38:03Z","lastTransitionTime":"2025-11-25T11:38:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:38:03 crc kubenswrapper[4706]: I1125 11:38:03.765912 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:38:03 crc kubenswrapper[4706]: I1125 11:38:03.765967 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:38:03 crc kubenswrapper[4706]: I1125 11:38:03.765979 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:38:03 crc kubenswrapper[4706]: I1125 11:38:03.765999 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:38:03 crc kubenswrapper[4706]: I1125 11:38:03.766015 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:38:03Z","lastTransitionTime":"2025-11-25T11:38:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:38:03 crc kubenswrapper[4706]: I1125 11:38:03.870003 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:38:03 crc kubenswrapper[4706]: I1125 11:38:03.870067 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:38:03 crc kubenswrapper[4706]: I1125 11:38:03.870083 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:38:03 crc kubenswrapper[4706]: I1125 11:38:03.870128 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:38:03 crc kubenswrapper[4706]: I1125 11:38:03.870142 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:38:03Z","lastTransitionTime":"2025-11-25T11:38:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 11:38:03 crc kubenswrapper[4706]: I1125 11:38:03.922266 4706 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 11:38:03 crc kubenswrapper[4706]: I1125 11:38:03.922339 4706 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-l99rd" Nov 25 11:38:03 crc kubenswrapper[4706]: I1125 11:38:03.922278 4706 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 11:38:03 crc kubenswrapper[4706]: I1125 11:38:03.922382 4706 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 11:38:03 crc kubenswrapper[4706]: E1125 11:38:03.922505 4706 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-l99rd" podUID="14d69237-a4b7-43ea-ac81-f165eb532669" Nov 25 11:38:03 crc kubenswrapper[4706]: E1125 11:38:03.922629 4706 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 25 11:38:03 crc kubenswrapper[4706]: E1125 11:38:03.922731 4706 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 25 11:38:03 crc kubenswrapper[4706]: E1125 11:38:03.922786 4706 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 25 11:38:03 crc kubenswrapper[4706]: I1125 11:38:03.972809 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:38:03 crc kubenswrapper[4706]: I1125 11:38:03.972860 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:38:03 crc kubenswrapper[4706]: I1125 11:38:03.972869 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:38:03 crc kubenswrapper[4706]: I1125 11:38:03.972886 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:38:03 crc kubenswrapper[4706]: I1125 11:38:03.972899 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:38:03Z","lastTransitionTime":"2025-11-25T11:38:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:38:04 crc kubenswrapper[4706]: I1125 11:38:04.076445 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:38:04 crc kubenswrapper[4706]: I1125 11:38:04.076712 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:38:04 crc kubenswrapper[4706]: I1125 11:38:04.076800 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:38:04 crc kubenswrapper[4706]: I1125 11:38:04.076914 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:38:04 crc kubenswrapper[4706]: I1125 11:38:04.077012 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:38:04Z","lastTransitionTime":"2025-11-25T11:38:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:38:04 crc kubenswrapper[4706]: I1125 11:38:04.179799 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:38:04 crc kubenswrapper[4706]: I1125 11:38:04.180101 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:38:04 crc kubenswrapper[4706]: I1125 11:38:04.180208 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:38:04 crc kubenswrapper[4706]: I1125 11:38:04.180341 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:38:04 crc kubenswrapper[4706]: I1125 11:38:04.180432 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:38:04Z","lastTransitionTime":"2025-11-25T11:38:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:38:04 crc kubenswrapper[4706]: I1125 11:38:04.283505 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:38:04 crc kubenswrapper[4706]: I1125 11:38:04.283555 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:38:04 crc kubenswrapper[4706]: I1125 11:38:04.283568 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:38:04 crc kubenswrapper[4706]: I1125 11:38:04.283588 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:38:04 crc kubenswrapper[4706]: I1125 11:38:04.283605 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:38:04Z","lastTransitionTime":"2025-11-25T11:38:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:38:04 crc kubenswrapper[4706]: I1125 11:38:04.386134 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:38:04 crc kubenswrapper[4706]: I1125 11:38:04.386201 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:38:04 crc kubenswrapper[4706]: I1125 11:38:04.386214 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:38:04 crc kubenswrapper[4706]: I1125 11:38:04.386234 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:38:04 crc kubenswrapper[4706]: I1125 11:38:04.386246 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:38:04Z","lastTransitionTime":"2025-11-25T11:38:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:38:04 crc kubenswrapper[4706]: I1125 11:38:04.489411 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:38:04 crc kubenswrapper[4706]: I1125 11:38:04.489472 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:38:04 crc kubenswrapper[4706]: I1125 11:38:04.489487 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:38:04 crc kubenswrapper[4706]: I1125 11:38:04.489510 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:38:04 crc kubenswrapper[4706]: I1125 11:38:04.489521 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:38:04Z","lastTransitionTime":"2025-11-25T11:38:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:38:04 crc kubenswrapper[4706]: I1125 11:38:04.592550 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:38:04 crc kubenswrapper[4706]: I1125 11:38:04.592610 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:38:04 crc kubenswrapper[4706]: I1125 11:38:04.592635 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:38:04 crc kubenswrapper[4706]: I1125 11:38:04.592662 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:38:04 crc kubenswrapper[4706]: I1125 11:38:04.592680 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:38:04Z","lastTransitionTime":"2025-11-25T11:38:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:38:04 crc kubenswrapper[4706]: I1125 11:38:04.695881 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:38:04 crc kubenswrapper[4706]: I1125 11:38:04.695942 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:38:04 crc kubenswrapper[4706]: I1125 11:38:04.695953 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:38:04 crc kubenswrapper[4706]: I1125 11:38:04.695973 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:38:04 crc kubenswrapper[4706]: I1125 11:38:04.695984 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:38:04Z","lastTransitionTime":"2025-11-25T11:38:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:38:04 crc kubenswrapper[4706]: I1125 11:38:04.798987 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:38:04 crc kubenswrapper[4706]: I1125 11:38:04.799033 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:38:04 crc kubenswrapper[4706]: I1125 11:38:04.799042 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:38:04 crc kubenswrapper[4706]: I1125 11:38:04.799075 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:38:04 crc kubenswrapper[4706]: I1125 11:38:04.799087 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:38:04Z","lastTransitionTime":"2025-11-25T11:38:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:38:04 crc kubenswrapper[4706]: I1125 11:38:04.901865 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:38:04 crc kubenswrapper[4706]: I1125 11:38:04.901908 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:38:04 crc kubenswrapper[4706]: I1125 11:38:04.901925 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:38:04 crc kubenswrapper[4706]: I1125 11:38:04.901945 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:38:04 crc kubenswrapper[4706]: I1125 11:38:04.901959 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:38:04Z","lastTransitionTime":"2025-11-25T11:38:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:38:05 crc kubenswrapper[4706]: I1125 11:38:05.004411 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:38:05 crc kubenswrapper[4706]: I1125 11:38:05.004473 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:38:05 crc kubenswrapper[4706]: I1125 11:38:05.004483 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:38:05 crc kubenswrapper[4706]: I1125 11:38:05.004506 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:38:05 crc kubenswrapper[4706]: I1125 11:38:05.004525 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:38:05Z","lastTransitionTime":"2025-11-25T11:38:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:38:05 crc kubenswrapper[4706]: I1125 11:38:05.107922 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:38:05 crc kubenswrapper[4706]: I1125 11:38:05.107972 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:38:05 crc kubenswrapper[4706]: I1125 11:38:05.107987 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:38:05 crc kubenswrapper[4706]: I1125 11:38:05.108007 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:38:05 crc kubenswrapper[4706]: I1125 11:38:05.108021 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:38:05Z","lastTransitionTime":"2025-11-25T11:38:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:38:05 crc kubenswrapper[4706]: I1125 11:38:05.210320 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:38:05 crc kubenswrapper[4706]: I1125 11:38:05.210368 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:38:05 crc kubenswrapper[4706]: I1125 11:38:05.210383 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:38:05 crc kubenswrapper[4706]: I1125 11:38:05.210400 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:38:05 crc kubenswrapper[4706]: I1125 11:38:05.210410 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:38:05Z","lastTransitionTime":"2025-11-25T11:38:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:38:05 crc kubenswrapper[4706]: I1125 11:38:05.313182 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:38:05 crc kubenswrapper[4706]: I1125 11:38:05.313234 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:38:05 crc kubenswrapper[4706]: I1125 11:38:05.313247 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:38:05 crc kubenswrapper[4706]: I1125 11:38:05.313267 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:38:05 crc kubenswrapper[4706]: I1125 11:38:05.313279 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:38:05Z","lastTransitionTime":"2025-11-25T11:38:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:38:05 crc kubenswrapper[4706]: I1125 11:38:05.415764 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:38:05 crc kubenswrapper[4706]: I1125 11:38:05.415834 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:38:05 crc kubenswrapper[4706]: I1125 11:38:05.415845 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:38:05 crc kubenswrapper[4706]: I1125 11:38:05.415863 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:38:05 crc kubenswrapper[4706]: I1125 11:38:05.415876 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:38:05Z","lastTransitionTime":"2025-11-25T11:38:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:38:05 crc kubenswrapper[4706]: I1125 11:38:05.519599 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:38:05 crc kubenswrapper[4706]: I1125 11:38:05.519663 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:38:05 crc kubenswrapper[4706]: I1125 11:38:05.519676 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:38:05 crc kubenswrapper[4706]: I1125 11:38:05.519699 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:38:05 crc kubenswrapper[4706]: I1125 11:38:05.519714 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:38:05Z","lastTransitionTime":"2025-11-25T11:38:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:38:05 crc kubenswrapper[4706]: I1125 11:38:05.622744 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:38:05 crc kubenswrapper[4706]: I1125 11:38:05.622788 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:38:05 crc kubenswrapper[4706]: I1125 11:38:05.622798 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:38:05 crc kubenswrapper[4706]: I1125 11:38:05.622814 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:38:05 crc kubenswrapper[4706]: I1125 11:38:05.622824 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:38:05Z","lastTransitionTime":"2025-11-25T11:38:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:38:05 crc kubenswrapper[4706]: I1125 11:38:05.726131 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:38:05 crc kubenswrapper[4706]: I1125 11:38:05.726268 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:38:05 crc kubenswrapper[4706]: I1125 11:38:05.726288 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:38:05 crc kubenswrapper[4706]: I1125 11:38:05.726335 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:38:05 crc kubenswrapper[4706]: I1125 11:38:05.726354 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:38:05Z","lastTransitionTime":"2025-11-25T11:38:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:38:05 crc kubenswrapper[4706]: I1125 11:38:05.829615 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:38:05 crc kubenswrapper[4706]: I1125 11:38:05.829668 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:38:05 crc kubenswrapper[4706]: I1125 11:38:05.829681 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:38:05 crc kubenswrapper[4706]: I1125 11:38:05.829703 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:38:05 crc kubenswrapper[4706]: I1125 11:38:05.829716 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:38:05Z","lastTransitionTime":"2025-11-25T11:38:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 11:38:05 crc kubenswrapper[4706]: I1125 11:38:05.921849 4706 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-l99rd" Nov 25 11:38:05 crc kubenswrapper[4706]: I1125 11:38:05.921967 4706 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 11:38:05 crc kubenswrapper[4706]: I1125 11:38:05.921997 4706 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 11:38:05 crc kubenswrapper[4706]: E1125 11:38:05.922059 4706 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-l99rd" podUID="14d69237-a4b7-43ea-ac81-f165eb532669" Nov 25 11:38:05 crc kubenswrapper[4706]: I1125 11:38:05.922194 4706 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 11:38:05 crc kubenswrapper[4706]: E1125 11:38:05.922360 4706 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 25 11:38:05 crc kubenswrapper[4706]: E1125 11:38:05.922496 4706 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 25 11:38:05 crc kubenswrapper[4706]: E1125 11:38:05.922770 4706 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 25 11:38:05 crc kubenswrapper[4706]: I1125 11:38:05.933627 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:38:05 crc kubenswrapper[4706]: I1125 11:38:05.933698 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:38:05 crc kubenswrapper[4706]: I1125 11:38:05.933714 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:38:05 crc kubenswrapper[4706]: I1125 11:38:05.933740 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:38:05 crc kubenswrapper[4706]: I1125 11:38:05.933755 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:38:05Z","lastTransitionTime":"2025-11-25T11:38:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:38:06 crc kubenswrapper[4706]: I1125 11:38:06.037236 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:38:06 crc kubenswrapper[4706]: I1125 11:38:06.037293 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:38:06 crc kubenswrapper[4706]: I1125 11:38:06.037331 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:38:06 crc kubenswrapper[4706]: I1125 11:38:06.037352 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:38:06 crc kubenswrapper[4706]: I1125 11:38:06.037365 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:38:06Z","lastTransitionTime":"2025-11-25T11:38:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:38:06 crc kubenswrapper[4706]: I1125 11:38:06.139451 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:38:06 crc kubenswrapper[4706]: I1125 11:38:06.139525 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:38:06 crc kubenswrapper[4706]: I1125 11:38:06.139539 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:38:06 crc kubenswrapper[4706]: I1125 11:38:06.139581 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:38:06 crc kubenswrapper[4706]: I1125 11:38:06.139597 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:38:06Z","lastTransitionTime":"2025-11-25T11:38:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:38:06 crc kubenswrapper[4706]: I1125 11:38:06.242315 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:38:06 crc kubenswrapper[4706]: I1125 11:38:06.242375 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:38:06 crc kubenswrapper[4706]: I1125 11:38:06.242384 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:38:06 crc kubenswrapper[4706]: I1125 11:38:06.242406 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:38:06 crc kubenswrapper[4706]: I1125 11:38:06.242417 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:38:06Z","lastTransitionTime":"2025-11-25T11:38:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:38:06 crc kubenswrapper[4706]: I1125 11:38:06.345391 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:38:06 crc kubenswrapper[4706]: I1125 11:38:06.345439 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:38:06 crc kubenswrapper[4706]: I1125 11:38:06.345448 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:38:06 crc kubenswrapper[4706]: I1125 11:38:06.345465 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:38:06 crc kubenswrapper[4706]: I1125 11:38:06.345475 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:38:06Z","lastTransitionTime":"2025-11-25T11:38:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:38:06 crc kubenswrapper[4706]: I1125 11:38:06.449397 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:38:06 crc kubenswrapper[4706]: I1125 11:38:06.449450 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:38:06 crc kubenswrapper[4706]: I1125 11:38:06.449472 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:38:06 crc kubenswrapper[4706]: I1125 11:38:06.449504 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:38:06 crc kubenswrapper[4706]: I1125 11:38:06.449528 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:38:06Z","lastTransitionTime":"2025-11-25T11:38:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:38:06 crc kubenswrapper[4706]: I1125 11:38:06.552606 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:38:06 crc kubenswrapper[4706]: I1125 11:38:06.552674 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:38:06 crc kubenswrapper[4706]: I1125 11:38:06.552687 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:38:06 crc kubenswrapper[4706]: I1125 11:38:06.552710 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:38:06 crc kubenswrapper[4706]: I1125 11:38:06.552724 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:38:06Z","lastTransitionTime":"2025-11-25T11:38:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:38:06 crc kubenswrapper[4706]: I1125 11:38:06.655805 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:38:06 crc kubenswrapper[4706]: I1125 11:38:06.655849 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:38:06 crc kubenswrapper[4706]: I1125 11:38:06.655860 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:38:06 crc kubenswrapper[4706]: I1125 11:38:06.655883 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:38:06 crc kubenswrapper[4706]: I1125 11:38:06.655895 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:38:06Z","lastTransitionTime":"2025-11-25T11:38:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:38:06 crc kubenswrapper[4706]: I1125 11:38:06.758704 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:38:06 crc kubenswrapper[4706]: I1125 11:38:06.758764 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:38:06 crc kubenswrapper[4706]: I1125 11:38:06.758776 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:38:06 crc kubenswrapper[4706]: I1125 11:38:06.758796 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:38:06 crc kubenswrapper[4706]: I1125 11:38:06.758810 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:38:06Z","lastTransitionTime":"2025-11-25T11:38:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:38:06 crc kubenswrapper[4706]: I1125 11:38:06.873744 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:38:06 crc kubenswrapper[4706]: I1125 11:38:06.873790 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:38:06 crc kubenswrapper[4706]: I1125 11:38:06.873801 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:38:06 crc kubenswrapper[4706]: I1125 11:38:06.873820 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:38:06 crc kubenswrapper[4706]: I1125 11:38:06.873834 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:38:06Z","lastTransitionTime":"2025-11-25T11:38:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:38:06 crc kubenswrapper[4706]: I1125 11:38:06.977155 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:38:06 crc kubenswrapper[4706]: I1125 11:38:06.977212 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:38:06 crc kubenswrapper[4706]: I1125 11:38:06.977221 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:38:06 crc kubenswrapper[4706]: I1125 11:38:06.977240 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:38:06 crc kubenswrapper[4706]: I1125 11:38:06.977255 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:38:06Z","lastTransitionTime":"2025-11-25T11:38:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:38:07 crc kubenswrapper[4706]: I1125 11:38:07.080457 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:38:07 crc kubenswrapper[4706]: I1125 11:38:07.080513 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:38:07 crc kubenswrapper[4706]: I1125 11:38:07.080526 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:38:07 crc kubenswrapper[4706]: I1125 11:38:07.080548 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:38:07 crc kubenswrapper[4706]: I1125 11:38:07.080560 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:38:07Z","lastTransitionTime":"2025-11-25T11:38:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:38:07 crc kubenswrapper[4706]: I1125 11:38:07.183801 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:38:07 crc kubenswrapper[4706]: I1125 11:38:07.183862 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:38:07 crc kubenswrapper[4706]: I1125 11:38:07.183876 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:38:07 crc kubenswrapper[4706]: I1125 11:38:07.183898 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:38:07 crc kubenswrapper[4706]: I1125 11:38:07.183914 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:38:07Z","lastTransitionTime":"2025-11-25T11:38:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:38:07 crc kubenswrapper[4706]: I1125 11:38:07.288237 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:38:07 crc kubenswrapper[4706]: I1125 11:38:07.288334 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:38:07 crc kubenswrapper[4706]: I1125 11:38:07.288349 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:38:07 crc kubenswrapper[4706]: I1125 11:38:07.288373 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:38:07 crc kubenswrapper[4706]: I1125 11:38:07.288392 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:38:07Z","lastTransitionTime":"2025-11-25T11:38:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:38:07 crc kubenswrapper[4706]: I1125 11:38:07.390715 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:38:07 crc kubenswrapper[4706]: I1125 11:38:07.390764 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:38:07 crc kubenswrapper[4706]: I1125 11:38:07.390775 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:38:07 crc kubenswrapper[4706]: I1125 11:38:07.390795 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:38:07 crc kubenswrapper[4706]: I1125 11:38:07.390809 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:38:07Z","lastTransitionTime":"2025-11-25T11:38:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:38:07 crc kubenswrapper[4706]: I1125 11:38:07.493792 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:38:07 crc kubenswrapper[4706]: I1125 11:38:07.493863 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:38:07 crc kubenswrapper[4706]: I1125 11:38:07.493875 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:38:07 crc kubenswrapper[4706]: I1125 11:38:07.493894 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:38:07 crc kubenswrapper[4706]: I1125 11:38:07.493908 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:38:07Z","lastTransitionTime":"2025-11-25T11:38:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:38:07 crc kubenswrapper[4706]: I1125 11:38:07.597255 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:38:07 crc kubenswrapper[4706]: I1125 11:38:07.597368 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:38:07 crc kubenswrapper[4706]: I1125 11:38:07.597385 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:38:07 crc kubenswrapper[4706]: I1125 11:38:07.597427 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:38:07 crc kubenswrapper[4706]: I1125 11:38:07.597441 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:38:07Z","lastTransitionTime":"2025-11-25T11:38:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:38:07 crc kubenswrapper[4706]: I1125 11:38:07.700968 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:38:07 crc kubenswrapper[4706]: I1125 11:38:07.701020 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:38:07 crc kubenswrapper[4706]: I1125 11:38:07.701031 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:38:07 crc kubenswrapper[4706]: I1125 11:38:07.701051 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:38:07 crc kubenswrapper[4706]: I1125 11:38:07.701066 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:38:07Z","lastTransitionTime":"2025-11-25T11:38:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:38:07 crc kubenswrapper[4706]: I1125 11:38:07.804707 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:38:07 crc kubenswrapper[4706]: I1125 11:38:07.804750 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:38:07 crc kubenswrapper[4706]: I1125 11:38:07.804759 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:38:07 crc kubenswrapper[4706]: I1125 11:38:07.804774 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:38:07 crc kubenswrapper[4706]: I1125 11:38:07.804783 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:38:07Z","lastTransitionTime":"2025-11-25T11:38:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:38:07 crc kubenswrapper[4706]: I1125 11:38:07.907895 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:38:07 crc kubenswrapper[4706]: I1125 11:38:07.907957 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:38:07 crc kubenswrapper[4706]: I1125 11:38:07.907973 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:38:07 crc kubenswrapper[4706]: I1125 11:38:07.907994 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:38:07 crc kubenswrapper[4706]: I1125 11:38:07.908007 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:38:07Z","lastTransitionTime":"2025-11-25T11:38:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 11:38:07 crc kubenswrapper[4706]: I1125 11:38:07.921469 4706 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-l99rd" Nov 25 11:38:07 crc kubenswrapper[4706]: I1125 11:38:07.921572 4706 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 11:38:07 crc kubenswrapper[4706]: E1125 11:38:07.921641 4706 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-l99rd" podUID="14d69237-a4b7-43ea-ac81-f165eb532669" Nov 25 11:38:07 crc kubenswrapper[4706]: I1125 11:38:07.921469 4706 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 11:38:07 crc kubenswrapper[4706]: I1125 11:38:07.921735 4706 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 11:38:07 crc kubenswrapper[4706]: E1125 11:38:07.921814 4706 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 25 11:38:07 crc kubenswrapper[4706]: E1125 11:38:07.921944 4706 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 25 11:38:07 crc kubenswrapper[4706]: E1125 11:38:07.921990 4706 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 25 11:38:08 crc kubenswrapper[4706]: I1125 11:38:08.011594 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:38:08 crc kubenswrapper[4706]: I1125 11:38:08.011661 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:38:08 crc kubenswrapper[4706]: I1125 11:38:08.011682 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:38:08 crc kubenswrapper[4706]: I1125 11:38:08.011710 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:38:08 crc kubenswrapper[4706]: I1125 11:38:08.011727 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:38:08Z","lastTransitionTime":"2025-11-25T11:38:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:38:08 crc kubenswrapper[4706]: I1125 11:38:08.114512 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:38:08 crc kubenswrapper[4706]: I1125 11:38:08.114563 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:38:08 crc kubenswrapper[4706]: I1125 11:38:08.114575 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:38:08 crc kubenswrapper[4706]: I1125 11:38:08.114596 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:38:08 crc kubenswrapper[4706]: I1125 11:38:08.114609 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:38:08Z","lastTransitionTime":"2025-11-25T11:38:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:38:08 crc kubenswrapper[4706]: I1125 11:38:08.217068 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:38:08 crc kubenswrapper[4706]: I1125 11:38:08.217135 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:38:08 crc kubenswrapper[4706]: I1125 11:38:08.217148 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:38:08 crc kubenswrapper[4706]: I1125 11:38:08.217171 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:38:08 crc kubenswrapper[4706]: I1125 11:38:08.217183 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:38:08Z","lastTransitionTime":"2025-11-25T11:38:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:38:08 crc kubenswrapper[4706]: I1125 11:38:08.321213 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:38:08 crc kubenswrapper[4706]: I1125 11:38:08.321253 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:38:08 crc kubenswrapper[4706]: I1125 11:38:08.321264 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:38:08 crc kubenswrapper[4706]: I1125 11:38:08.321281 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:38:08 crc kubenswrapper[4706]: I1125 11:38:08.321292 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:38:08Z","lastTransitionTime":"2025-11-25T11:38:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:38:08 crc kubenswrapper[4706]: I1125 11:38:08.424622 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:38:08 crc kubenswrapper[4706]: I1125 11:38:08.424678 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:38:08 crc kubenswrapper[4706]: I1125 11:38:08.424696 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:38:08 crc kubenswrapper[4706]: I1125 11:38:08.424726 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:38:08 crc kubenswrapper[4706]: I1125 11:38:08.424744 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:38:08Z","lastTransitionTime":"2025-11-25T11:38:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:38:08 crc kubenswrapper[4706]: I1125 11:38:08.529199 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:38:08 crc kubenswrapper[4706]: I1125 11:38:08.529272 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:38:08 crc kubenswrapper[4706]: I1125 11:38:08.529285 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:38:08 crc kubenswrapper[4706]: I1125 11:38:08.529358 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:38:08 crc kubenswrapper[4706]: I1125 11:38:08.529375 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:38:08Z","lastTransitionTime":"2025-11-25T11:38:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:38:08 crc kubenswrapper[4706]: I1125 11:38:08.632698 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:38:08 crc kubenswrapper[4706]: I1125 11:38:08.632739 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:38:08 crc kubenswrapper[4706]: I1125 11:38:08.632748 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:38:08 crc kubenswrapper[4706]: I1125 11:38:08.632767 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:38:08 crc kubenswrapper[4706]: I1125 11:38:08.632778 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:38:08Z","lastTransitionTime":"2025-11-25T11:38:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:38:08 crc kubenswrapper[4706]: I1125 11:38:08.736100 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:38:08 crc kubenswrapper[4706]: I1125 11:38:08.736189 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:38:08 crc kubenswrapper[4706]: I1125 11:38:08.736203 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:38:08 crc kubenswrapper[4706]: I1125 11:38:08.736223 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:38:08 crc kubenswrapper[4706]: I1125 11:38:08.736238 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:38:08Z","lastTransitionTime":"2025-11-25T11:38:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:38:08 crc kubenswrapper[4706]: I1125 11:38:08.838325 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:38:08 crc kubenswrapper[4706]: I1125 11:38:08.838361 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:38:08 crc kubenswrapper[4706]: I1125 11:38:08.838370 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:38:08 crc kubenswrapper[4706]: I1125 11:38:08.838387 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:38:08 crc kubenswrapper[4706]: I1125 11:38:08.838399 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:38:08Z","lastTransitionTime":"2025-11-25T11:38:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:38:08 crc kubenswrapper[4706]: I1125 11:38:08.941800 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:38:08 crc kubenswrapper[4706]: I1125 11:38:08.941849 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:38:08 crc kubenswrapper[4706]: I1125 11:38:08.941862 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:38:08 crc kubenswrapper[4706]: I1125 11:38:08.941888 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:38:08 crc kubenswrapper[4706]: I1125 11:38:08.941904 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:38:08Z","lastTransitionTime":"2025-11-25T11:38:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:38:09 crc kubenswrapper[4706]: I1125 11:38:09.044267 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:38:09 crc kubenswrapper[4706]: I1125 11:38:09.044320 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:38:09 crc kubenswrapper[4706]: I1125 11:38:09.044331 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:38:09 crc kubenswrapper[4706]: I1125 11:38:09.044349 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:38:09 crc kubenswrapper[4706]: I1125 11:38:09.044360 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:38:09Z","lastTransitionTime":"2025-11-25T11:38:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:38:09 crc kubenswrapper[4706]: I1125 11:38:09.147082 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:38:09 crc kubenswrapper[4706]: I1125 11:38:09.147118 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:38:09 crc kubenswrapper[4706]: I1125 11:38:09.147127 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:38:09 crc kubenswrapper[4706]: I1125 11:38:09.147142 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:38:09 crc kubenswrapper[4706]: I1125 11:38:09.147154 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:38:09Z","lastTransitionTime":"2025-11-25T11:38:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:38:09 crc kubenswrapper[4706]: I1125 11:38:09.174943 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:38:09 crc kubenswrapper[4706]: I1125 11:38:09.174995 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:38:09 crc kubenswrapper[4706]: I1125 11:38:09.175007 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:38:09 crc kubenswrapper[4706]: I1125 11:38:09.175025 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:38:09 crc kubenswrapper[4706]: I1125 11:38:09.175038 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:38:09Z","lastTransitionTime":"2025-11-25T11:38:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:38:09 crc kubenswrapper[4706]: E1125 11:38:09.188082 4706 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T11:38:09Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T11:38:09Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T11:38:09Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T11:38:09Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T11:38:09Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T11:38:09Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T11:38:09Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T11:38:09Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"30198dc8-e58c-4847-a541-041da1924c5c\\\",\\\"systemUUID\\\":\\\"7dac62ec-3979-4862-b1af-b63212907795\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:38:09Z is after 2025-08-24T17:21:41Z" Nov 25 11:38:09 crc kubenswrapper[4706]: I1125 11:38:09.191389 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:38:09 crc kubenswrapper[4706]: I1125 11:38:09.191426 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:38:09 crc kubenswrapper[4706]: I1125 11:38:09.191437 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:38:09 crc kubenswrapper[4706]: I1125 11:38:09.191457 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:38:09 crc kubenswrapper[4706]: I1125 11:38:09.191468 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:38:09Z","lastTransitionTime":"2025-11-25T11:38:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:38:09 crc kubenswrapper[4706]: E1125 11:38:09.203200 4706 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T11:38:09Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T11:38:09Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T11:38:09Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T11:38:09Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T11:38:09Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T11:38:09Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T11:38:09Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T11:38:09Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"30198dc8-e58c-4847-a541-041da1924c5c\\\",\\\"systemUUID\\\":\\\"7dac62ec-3979-4862-b1af-b63212907795\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:38:09Z is after 2025-08-24T17:21:41Z" Nov 25 11:38:09 crc kubenswrapper[4706]: I1125 11:38:09.213122 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:38:09 crc kubenswrapper[4706]: I1125 11:38:09.213190 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:38:09 crc kubenswrapper[4706]: I1125 11:38:09.213202 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:38:09 crc kubenswrapper[4706]: I1125 11:38:09.213221 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:38:09 crc kubenswrapper[4706]: I1125 11:38:09.213672 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:38:09Z","lastTransitionTime":"2025-11-25T11:38:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:38:09 crc kubenswrapper[4706]: E1125 11:38:09.228250 4706 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T11:38:09Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T11:38:09Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T11:38:09Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T11:38:09Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T11:38:09Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T11:38:09Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T11:38:09Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T11:38:09Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"30198dc8-e58c-4847-a541-041da1924c5c\\\",\\\"systemUUID\\\":\\\"7dac62ec-3979-4862-b1af-b63212907795\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:38:09Z is after 2025-08-24T17:21:41Z" Nov 25 11:38:09 crc kubenswrapper[4706]: I1125 11:38:09.231912 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:38:09 crc kubenswrapper[4706]: I1125 11:38:09.231945 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:38:09 crc kubenswrapper[4706]: I1125 11:38:09.231954 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:38:09 crc kubenswrapper[4706]: I1125 11:38:09.231969 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:38:09 crc kubenswrapper[4706]: I1125 11:38:09.231979 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:38:09Z","lastTransitionTime":"2025-11-25T11:38:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:38:09 crc kubenswrapper[4706]: E1125 11:38:09.245270 4706 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T11:38:09Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T11:38:09Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T11:38:09Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T11:38:09Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T11:38:09Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T11:38:09Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T11:38:09Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T11:38:09Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"30198dc8-e58c-4847-a541-041da1924c5c\\\",\\\"systemUUID\\\":\\\"7dac62ec-3979-4862-b1af-b63212907795\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:38:09Z is after 2025-08-24T17:21:41Z" Nov 25 11:38:09 crc kubenswrapper[4706]: I1125 11:38:09.249371 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:38:09 crc kubenswrapper[4706]: I1125 11:38:09.249668 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:38:09 crc kubenswrapper[4706]: I1125 11:38:09.249753 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:38:09 crc kubenswrapper[4706]: I1125 11:38:09.249846 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:38:09 crc kubenswrapper[4706]: I1125 11:38:09.249928 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:38:09Z","lastTransitionTime":"2025-11-25T11:38:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:38:09 crc kubenswrapper[4706]: E1125 11:38:09.264434 4706 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T11:38:09Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T11:38:09Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T11:38:09Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T11:38:09Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T11:38:09Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T11:38:09Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T11:38:09Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T11:38:09Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"30198dc8-e58c-4847-a541-041da1924c5c\\\",\\\"systemUUID\\\":\\\"7dac62ec-3979-4862-b1af-b63212907795\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:38:09Z is after 2025-08-24T17:21:41Z" Nov 25 11:38:09 crc kubenswrapper[4706]: E1125 11:38:09.264596 4706 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Nov 25 11:38:09 crc kubenswrapper[4706]: I1125 11:38:09.266312 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:38:09 crc kubenswrapper[4706]: I1125 11:38:09.266347 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:38:09 crc kubenswrapper[4706]: I1125 11:38:09.266389 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:38:09 crc kubenswrapper[4706]: I1125 11:38:09.266411 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:38:09 crc kubenswrapper[4706]: I1125 11:38:09.266423 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:38:09Z","lastTransitionTime":"2025-11-25T11:38:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:38:09 crc kubenswrapper[4706]: I1125 11:38:09.369085 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:38:09 crc kubenswrapper[4706]: I1125 11:38:09.369144 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:38:09 crc kubenswrapper[4706]: I1125 11:38:09.369157 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:38:09 crc kubenswrapper[4706]: I1125 11:38:09.369179 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:38:09 crc kubenswrapper[4706]: I1125 11:38:09.369195 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:38:09Z","lastTransitionTime":"2025-11-25T11:38:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:38:09 crc kubenswrapper[4706]: I1125 11:38:09.471325 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:38:09 crc kubenswrapper[4706]: I1125 11:38:09.471370 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:38:09 crc kubenswrapper[4706]: I1125 11:38:09.471387 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:38:09 crc kubenswrapper[4706]: I1125 11:38:09.471412 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:38:09 crc kubenswrapper[4706]: I1125 11:38:09.471427 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:38:09Z","lastTransitionTime":"2025-11-25T11:38:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:38:09 crc kubenswrapper[4706]: I1125 11:38:09.573848 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:38:09 crc kubenswrapper[4706]: I1125 11:38:09.573901 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:38:09 crc kubenswrapper[4706]: I1125 11:38:09.573916 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:38:09 crc kubenswrapper[4706]: I1125 11:38:09.573935 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:38:09 crc kubenswrapper[4706]: I1125 11:38:09.573948 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:38:09Z","lastTransitionTime":"2025-11-25T11:38:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:38:09 crc kubenswrapper[4706]: I1125 11:38:09.677371 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:38:09 crc kubenswrapper[4706]: I1125 11:38:09.677439 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:38:09 crc kubenswrapper[4706]: I1125 11:38:09.677452 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:38:09 crc kubenswrapper[4706]: I1125 11:38:09.677480 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:38:09 crc kubenswrapper[4706]: I1125 11:38:09.677494 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:38:09Z","lastTransitionTime":"2025-11-25T11:38:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:38:09 crc kubenswrapper[4706]: I1125 11:38:09.780852 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:38:09 crc kubenswrapper[4706]: I1125 11:38:09.780908 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:38:09 crc kubenswrapper[4706]: I1125 11:38:09.780920 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:38:09 crc kubenswrapper[4706]: I1125 11:38:09.780936 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:38:09 crc kubenswrapper[4706]: I1125 11:38:09.780948 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:38:09Z","lastTransitionTime":"2025-11-25T11:38:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:38:09 crc kubenswrapper[4706]: I1125 11:38:09.883886 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:38:09 crc kubenswrapper[4706]: I1125 11:38:09.883931 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:38:09 crc kubenswrapper[4706]: I1125 11:38:09.883941 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:38:09 crc kubenswrapper[4706]: I1125 11:38:09.883956 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:38:09 crc kubenswrapper[4706]: I1125 11:38:09.883966 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:38:09Z","lastTransitionTime":"2025-11-25T11:38:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 11:38:09 crc kubenswrapper[4706]: I1125 11:38:09.921863 4706 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 11:38:09 crc kubenswrapper[4706]: I1125 11:38:09.921968 4706 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 11:38:09 crc kubenswrapper[4706]: I1125 11:38:09.922015 4706 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 11:38:09 crc kubenswrapper[4706]: I1125 11:38:09.921982 4706 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-l99rd" Nov 25 11:38:09 crc kubenswrapper[4706]: E1125 11:38:09.922141 4706 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 25 11:38:09 crc kubenswrapper[4706]: E1125 11:38:09.922284 4706 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 25 11:38:09 crc kubenswrapper[4706]: E1125 11:38:09.922425 4706 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 25 11:38:09 crc kubenswrapper[4706]: E1125 11:38:09.922565 4706 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-l99rd" podUID="14d69237-a4b7-43ea-ac81-f165eb532669" Nov 25 11:38:09 crc kubenswrapper[4706]: I1125 11:38:09.987270 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:38:09 crc kubenswrapper[4706]: I1125 11:38:09.987388 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:38:09 crc kubenswrapper[4706]: I1125 11:38:09.987403 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:38:09 crc kubenswrapper[4706]: I1125 11:38:09.987425 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:38:09 crc kubenswrapper[4706]: I1125 11:38:09.987441 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:38:09Z","lastTransitionTime":"2025-11-25T11:38:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:38:10 crc kubenswrapper[4706]: I1125 11:38:10.090632 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:38:10 crc kubenswrapper[4706]: I1125 11:38:10.090700 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:38:10 crc kubenswrapper[4706]: I1125 11:38:10.090714 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:38:10 crc kubenswrapper[4706]: I1125 11:38:10.090734 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:38:10 crc kubenswrapper[4706]: I1125 11:38:10.090749 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:38:10Z","lastTransitionTime":"2025-11-25T11:38:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:38:10 crc kubenswrapper[4706]: I1125 11:38:10.193826 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:38:10 crc kubenswrapper[4706]: I1125 11:38:10.193863 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:38:10 crc kubenswrapper[4706]: I1125 11:38:10.193871 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:38:10 crc kubenswrapper[4706]: I1125 11:38:10.193886 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:38:10 crc kubenswrapper[4706]: I1125 11:38:10.193895 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:38:10Z","lastTransitionTime":"2025-11-25T11:38:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:38:10 crc kubenswrapper[4706]: I1125 11:38:10.296918 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:38:10 crc kubenswrapper[4706]: I1125 11:38:10.296963 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:38:10 crc kubenswrapper[4706]: I1125 11:38:10.296972 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:38:10 crc kubenswrapper[4706]: I1125 11:38:10.296991 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:38:10 crc kubenswrapper[4706]: I1125 11:38:10.297004 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:38:10Z","lastTransitionTime":"2025-11-25T11:38:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:38:10 crc kubenswrapper[4706]: I1125 11:38:10.398933 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:38:10 crc kubenswrapper[4706]: I1125 11:38:10.399012 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:38:10 crc kubenswrapper[4706]: I1125 11:38:10.399026 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:38:10 crc kubenswrapper[4706]: I1125 11:38:10.399046 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:38:10 crc kubenswrapper[4706]: I1125 11:38:10.399068 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:38:10Z","lastTransitionTime":"2025-11-25T11:38:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:38:10 crc kubenswrapper[4706]: I1125 11:38:10.502166 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:38:10 crc kubenswrapper[4706]: I1125 11:38:10.502225 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:38:10 crc kubenswrapper[4706]: I1125 11:38:10.502237 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:38:10 crc kubenswrapper[4706]: I1125 11:38:10.502260 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:38:10 crc kubenswrapper[4706]: I1125 11:38:10.502272 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:38:10Z","lastTransitionTime":"2025-11-25T11:38:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:38:10 crc kubenswrapper[4706]: I1125 11:38:10.605962 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:38:10 crc kubenswrapper[4706]: I1125 11:38:10.606036 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:38:10 crc kubenswrapper[4706]: I1125 11:38:10.606049 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:38:10 crc kubenswrapper[4706]: I1125 11:38:10.606079 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:38:10 crc kubenswrapper[4706]: I1125 11:38:10.606101 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:38:10Z","lastTransitionTime":"2025-11-25T11:38:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:38:10 crc kubenswrapper[4706]: I1125 11:38:10.709222 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:38:10 crc kubenswrapper[4706]: I1125 11:38:10.709382 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:38:10 crc kubenswrapper[4706]: I1125 11:38:10.709395 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:38:10 crc kubenswrapper[4706]: I1125 11:38:10.709419 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:38:10 crc kubenswrapper[4706]: I1125 11:38:10.709431 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:38:10Z","lastTransitionTime":"2025-11-25T11:38:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:38:10 crc kubenswrapper[4706]: I1125 11:38:10.812738 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:38:10 crc kubenswrapper[4706]: I1125 11:38:10.812788 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:38:10 crc kubenswrapper[4706]: I1125 11:38:10.812798 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:38:10 crc kubenswrapper[4706]: I1125 11:38:10.812816 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:38:10 crc kubenswrapper[4706]: I1125 11:38:10.812826 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:38:10Z","lastTransitionTime":"2025-11-25T11:38:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:38:10 crc kubenswrapper[4706]: I1125 11:38:10.917096 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:38:10 crc kubenswrapper[4706]: I1125 11:38:10.917166 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:38:10 crc kubenswrapper[4706]: I1125 11:38:10.917177 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:38:10 crc kubenswrapper[4706]: I1125 11:38:10.917197 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:38:10 crc kubenswrapper[4706]: I1125 11:38:10.917213 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:38:10Z","lastTransitionTime":"2025-11-25T11:38:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:38:11 crc kubenswrapper[4706]: I1125 11:38:11.019864 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:38:11 crc kubenswrapper[4706]: I1125 11:38:11.019911 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:38:11 crc kubenswrapper[4706]: I1125 11:38:11.019923 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:38:11 crc kubenswrapper[4706]: I1125 11:38:11.019944 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:38:11 crc kubenswrapper[4706]: I1125 11:38:11.019958 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:38:11Z","lastTransitionTime":"2025-11-25T11:38:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:38:11 crc kubenswrapper[4706]: I1125 11:38:11.123521 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:38:11 crc kubenswrapper[4706]: I1125 11:38:11.123594 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:38:11 crc kubenswrapper[4706]: I1125 11:38:11.123609 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:38:11 crc kubenswrapper[4706]: I1125 11:38:11.123634 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:38:11 crc kubenswrapper[4706]: I1125 11:38:11.123648 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:38:11Z","lastTransitionTime":"2025-11-25T11:38:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:38:11 crc kubenswrapper[4706]: I1125 11:38:11.227037 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:38:11 crc kubenswrapper[4706]: I1125 11:38:11.227083 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:38:11 crc kubenswrapper[4706]: I1125 11:38:11.227097 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:38:11 crc kubenswrapper[4706]: I1125 11:38:11.227116 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:38:11 crc kubenswrapper[4706]: I1125 11:38:11.227128 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:38:11Z","lastTransitionTime":"2025-11-25T11:38:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:38:11 crc kubenswrapper[4706]: I1125 11:38:11.330560 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:38:11 crc kubenswrapper[4706]: I1125 11:38:11.330602 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:38:11 crc kubenswrapper[4706]: I1125 11:38:11.330614 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:38:11 crc kubenswrapper[4706]: I1125 11:38:11.330632 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:38:11 crc kubenswrapper[4706]: I1125 11:38:11.330645 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:38:11Z","lastTransitionTime":"2025-11-25T11:38:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:38:11 crc kubenswrapper[4706]: I1125 11:38:11.433775 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:38:11 crc kubenswrapper[4706]: I1125 11:38:11.433820 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:38:11 crc kubenswrapper[4706]: I1125 11:38:11.433831 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:38:11 crc kubenswrapper[4706]: I1125 11:38:11.433849 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:38:11 crc kubenswrapper[4706]: I1125 11:38:11.433861 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:38:11Z","lastTransitionTime":"2025-11-25T11:38:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:38:11 crc kubenswrapper[4706]: I1125 11:38:11.536574 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:38:11 crc kubenswrapper[4706]: I1125 11:38:11.536837 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:38:11 crc kubenswrapper[4706]: I1125 11:38:11.536849 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:38:11 crc kubenswrapper[4706]: I1125 11:38:11.536871 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:38:11 crc kubenswrapper[4706]: I1125 11:38:11.536884 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:38:11Z","lastTransitionTime":"2025-11-25T11:38:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:38:11 crc kubenswrapper[4706]: I1125 11:38:11.640492 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:38:11 crc kubenswrapper[4706]: I1125 11:38:11.640585 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:38:11 crc kubenswrapper[4706]: I1125 11:38:11.640597 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:38:11 crc kubenswrapper[4706]: I1125 11:38:11.640643 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:38:11 crc kubenswrapper[4706]: I1125 11:38:11.640661 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:38:11Z","lastTransitionTime":"2025-11-25T11:38:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:38:11 crc kubenswrapper[4706]: I1125 11:38:11.683212 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/14d69237-a4b7-43ea-ac81-f165eb532669-metrics-certs\") pod \"network-metrics-daemon-l99rd\" (UID: \"14d69237-a4b7-43ea-ac81-f165eb532669\") " pod="openshift-multus/network-metrics-daemon-l99rd" Nov 25 11:38:11 crc kubenswrapper[4706]: E1125 11:38:11.683496 4706 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Nov 25 11:38:11 crc kubenswrapper[4706]: E1125 11:38:11.683627 4706 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/14d69237-a4b7-43ea-ac81-f165eb532669-metrics-certs podName:14d69237-a4b7-43ea-ac81-f165eb532669 nodeName:}" failed. No retries permitted until 2025-11-25 11:39:15.683595796 +0000 UTC m=+164.598153217 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/14d69237-a4b7-43ea-ac81-f165eb532669-metrics-certs") pod "network-metrics-daemon-l99rd" (UID: "14d69237-a4b7-43ea-ac81-f165eb532669") : object "openshift-multus"/"metrics-daemon-secret" not registered Nov 25 11:38:11 crc kubenswrapper[4706]: I1125 11:38:11.743667 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:38:11 crc kubenswrapper[4706]: I1125 11:38:11.743724 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:38:11 crc kubenswrapper[4706]: I1125 11:38:11.743737 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:38:11 crc kubenswrapper[4706]: I1125 11:38:11.743757 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:38:11 crc kubenswrapper[4706]: I1125 11:38:11.743768 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:38:11Z","lastTransitionTime":"2025-11-25T11:38:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:38:11 crc kubenswrapper[4706]: I1125 11:38:11.846774 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:38:11 crc kubenswrapper[4706]: I1125 11:38:11.846827 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:38:11 crc kubenswrapper[4706]: I1125 11:38:11.846838 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:38:11 crc kubenswrapper[4706]: I1125 11:38:11.846864 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:38:11 crc kubenswrapper[4706]: I1125 11:38:11.846878 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:38:11Z","lastTransitionTime":"2025-11-25T11:38:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 11:38:11 crc kubenswrapper[4706]: I1125 11:38:11.922058 4706 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 11:38:11 crc kubenswrapper[4706]: I1125 11:38:11.922122 4706 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-l99rd" Nov 25 11:38:11 crc kubenswrapper[4706]: E1125 11:38:11.922421 4706 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 25 11:38:11 crc kubenswrapper[4706]: I1125 11:38:11.922493 4706 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 11:38:11 crc kubenswrapper[4706]: I1125 11:38:11.922547 4706 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 11:38:11 crc kubenswrapper[4706]: E1125 11:38:11.922617 4706 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-l99rd" podUID="14d69237-a4b7-43ea-ac81-f165eb532669" Nov 25 11:38:11 crc kubenswrapper[4706]: E1125 11:38:11.922740 4706 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 25 11:38:11 crc kubenswrapper[4706]: E1125 11:38:11.922835 4706 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 25 11:38:11 crc kubenswrapper[4706]: I1125 11:38:11.938780 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b156f76-9878-4527-95c5-27adfffbcd87\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:37:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:37:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b50a8135a692a512f05f3a902977e8b7a505d8346fb6e96c26ffc58d075e902c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-di
r\\\"}]},{\\\"containerID\\\":\\\"cri-o://7224a1c52df964a792e6197a4f97313b139ffbd6d65820d93e36561e817ddc20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://78068d04cf52a463ca3595227c44918d360266c71afc97c1792e48b004bebe42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0299d89c1a2ea9c2a4bb46691aecd2d86618d3620e7406e1af57e1c03ce50b94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791f
d90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0299d89c1a2ea9c2a4bb46691aecd2d86618d3620e7406e1af57e1c03ce50b94\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T11:36:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T11:36:33Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T11:36:32Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:38:11Z is after 2025-08-24T17:21:41Z" Nov 25 11:38:11 crc kubenswrapper[4706]: I1125 11:38:11.949711 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:38:11 crc kubenswrapper[4706]: I1125 11:38:11.949759 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:38:11 crc kubenswrapper[4706]: I1125 11:38:11.949769 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:38:11 crc kubenswrapper[4706]: I1125 11:38:11.949787 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:38:11 crc kubenswrapper[4706]: I1125 11:38:11.949798 4706 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:38:11Z","lastTransitionTime":"2025-11-25T11:38:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 11:38:11 crc kubenswrapper[4706]: I1125 11:38:11.968051 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"21277b4b-1e5d-4345-ba2a-39957194f021\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e336808761e1c6c5eaa04fd06cbb4d0c0384a2cbd3dfd4c1b3a877e7e0f0c82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":
\\\"2025-11-25T11:36:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cfaf9f13d49eb5c52817b0d082263791cc1dca82a23282452f1393dd693ca27a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://634b7b0df29329562f6ead9641186eee129945efc5a2d784ff6474d213b2baea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/sta
tic-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b3642576d5ecf314b809b90f8a76244e5ea54178f78729eb6521b09b7daa9c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b63b9c87fed8e56acef62af3c5b75cf637a058ada9dd8ef5afc317e99e12162\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://adfea6c3d1d62c9ce2656cae203bccf32ab19165305db3731e3db92915dd7d4c\\\
",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://adfea6c3d1d62c9ce2656cae203bccf32ab19165305db3731e3db92915dd7d4c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T11:36:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T11:36:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://29e8fd847ab683471a692fda5b7c7d6105db11c5aecf09bb23080c58cd97c06a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://29e8fd847ab683471a692fda5b7c7d6105db11c5aecf09bb23080c58cd97c06a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T11:36:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T11:36:34Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://a4b1f85fd0291c239854093c0b26f0802291cbe4c4bab384fc69a5d21165343a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\
"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a4b1f85fd0291c239854093c0b26f0802291cbe4c4bab384fc69a5d21165343a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T11:36:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T11:36:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T11:36:32Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:38:11Z is after 2025-08-24T17:21:41Z" Nov 25 11:38:11 crc kubenswrapper[4706]: I1125 11:38:11.984554 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://23abd4bcc68d2a090882edb55d0e8569032affe5f4ebf05279e18ba3e9f9d8db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a068e34d29a7f39157ffd6e364ce643f5280f5184c13a281043247117d451364\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:38:11Z is after 2025-08-24T17:21:41Z" Nov 25 11:38:12 crc kubenswrapper[4706]: I1125 11:38:12.003262 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-cjmvf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"150b96fa-570a-4b32-a82a-3275127d5b51\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:37:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:37:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:37:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de18c07bf8490d7495947e9a271e3e7273b9ffdcc43afd2a0468394af0ae0b0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:37:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2ml6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f9f9981b5f064aa5b007f4b2a2ecdc7f783e1a33e73b9e8b157eccfc54e93ff6\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f9f9981b5f064aa5b007f4b2a2ecdc7f783e1a33e73b9e8b157eccfc54e93ff6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T11:36:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T11:36:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2ml6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e1e9db3e634932b935a1eb04923d02faf743f2831039edeba41d172ea6d8c52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e1e9db3e634932b935a1eb04923d02faf743f2831039edeba41d172ea6d8c52\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T11:36:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T11:36:57Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2ml6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0cee50b6983d9c650efbb5959311b6c33c2e0e2ff504fceadc8ff807f368c36e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0cee50b6983d9c650efbb5959311b6c33c2e0e2ff504fceadc8ff807f368c36e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T11:36:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T11:36:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2ml6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://29281
b46d740a7e527313a667c3896430eb51ba2c50c5e406fb94d8959dbe855\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://29281b46d740a7e527313a667c3896430eb51ba2c50c5e406fb94d8959dbe855\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T11:36:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T11:36:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2ml6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0ff2d1408b3b635ada726fc15a15472d3fd7c61e21ffe0379d137fdd543c436\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b0ff2d1408b3b635ada726fc15a15472d3fd7c61e21ffe0379d137fdd543c436\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T11:37:01Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2025-11-25T11:37:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2ml6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c3b94746fe10e0f9375491a41d10973d2576eb69f0883cef3ef0132efb0e8fc9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c3b94746fe10e0f9375491a41d10973d2576eb69f0883cef3ef0132efb0e8fc9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T11:37:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T11:37:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2ml6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T11:36:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-cjmvf\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:38:12Z is after 2025-08-24T17:21:41Z" Nov 25 11:38:12 crc kubenswrapper[4706]: I1125 11:38:12.017333 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-l99rd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"14d69237-a4b7-43ea-ac81-f165eb532669\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:37:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:37:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:37:07Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:37:07Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mmr9l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mmr9l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T11:37:07Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-l99rd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:38:12Z is after 2025-08-24T17:21:41Z" Nov 25 11:38:12 crc 
kubenswrapper[4706]: I1125 11:38:12.033619 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ce0e2e75-834b-46fb-bc84-229e60f904b1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:37:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:37:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://86001c3abc077d36ed1fa0c37bb6163896fb9cde28b58affd2f67fb8a024165b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://24c326f147def4
77e6dd794576cbdc9aed69f799cc18984f475496748b05eb32\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c65af8b438f57256d8c22cb34f68922d628338e384ca97d694b0dbf2d41a5e27\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://db08dd21321e0e49c2bcec934b9c4ca65e93ed3eff5d3d110b0137d37ebe255e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"te
rminated\\\":{\\\"containerID\\\":\\\"cri-o://333951d9a31cf3e7c1e98d27f636e2425f87cd082a8a5acae66533a76f5ad206\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-25T11:36:51Z\\\",\\\"message\\\":\\\" shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI1125 11:36:51.292762 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI1125 11:36:51.292767 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI1125 11:36:51.292853 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI1125 11:36:51.292876 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI1125 11:36:51.293041 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-1230105117/tls.crt::/tmp/serving-cert-1230105117/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1764070595\\\\\\\\\\\\\\\" (2025-11-25 11:36:34 +0000 UTC to 2025-12-25 11:36:35 +0000 UTC (now=2025-11-25 11:36:51.29301304 +0000 UTC))\\\\\\\"\\\\nI1125 11:36:51.293171 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1230105117/tls.crt::/tmp/serving-cert-1230105117/tls.key\\\\\\\"\\\\nI1125 11:36:51.293210 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1764070605\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] 
issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1764070605\\\\\\\\\\\\\\\" (2025-11-25 10:36:45 +0000 UTC to 2026-11-25 10:36:45 +0000 UTC (now=2025-11-25 11:36:51.293188774 +0000 UTC))\\\\\\\"\\\\nI1125 11:36:51.293233 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI1125 11:36:51.293259 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI1125 11:36:51.293279 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nF1125 11:36:51.293378 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T11:36:34Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe85a38abd8df52ad0fbd3dd6b048b8c42390b6064d3601996727dadb3fcbe69\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:34Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea87e7399f4267877f5e967eb62bd500b646e88ee8ee20a71c3bf7c0941f02a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ea87e7399f4267877f5e967eb62bd500b646e88ee8ee20a71c3bf7c0941f02a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T11:36:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T11:36:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T11:36:32Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:38:12Z is after 2025-08-24T17:21:41Z" Nov 25 11:38:12 crc kubenswrapper[4706]: I1125 11:38:12.048553 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-dhfpm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0930887a-320c-4506-8c9c-f94d6d64516a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://736e37ff944f81ac9808ff8a76d36837aeabc76a4c08bbeba3f707616e1f0884\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g7sgt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://86f4bfd310c27ea3b77c2f58c91e153db5f17948
71a3fbeb5711cc119aa81e38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g7sgt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T11:36:53Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-dhfpm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:38:12Z is after 2025-08-24T17:21:41Z" Nov 25 11:38:12 crc kubenswrapper[4706]: I1125 11:38:12.053095 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:38:12 crc kubenswrapper[4706]: I1125 11:38:12.053155 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:38:12 crc kubenswrapper[4706]: I1125 11:38:12.053168 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:38:12 crc 
kubenswrapper[4706]: I1125 11:38:12.053210 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:38:12 crc kubenswrapper[4706]: I1125 11:38:12.053227 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:38:12Z","lastTransitionTime":"2025-11-25T11:38:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 11:38:12 crc kubenswrapper[4706]: I1125 11:38:12.063143 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-nh9sc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7813e79d-885d-4cf1-ac27-039e998473b7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea634334242536d35bf36e9078539cad4658b161b61e6051d9bb6d8544e71f5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7
f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g9gvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T11:36:53Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-nh9sc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:38:12Z is after 2025-08-24T17:21:41Z" Nov 25 11:38:12 crc kubenswrapper[4706]: I1125 11:38:12.077323 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-qkkfz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dc09de93-57e8-4697-8ce8-70bfc1b693e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:37:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:37:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:37:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:37:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6daff2070c60f609fd06be9589e3cd8d304d131f7b9669c7be4b8e9178df8f8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:37:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hmrl8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://39eec3aac772cc9463505277d6b3f7cf2eb76
21e4add4f14e53110e3db8c4cdc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:37:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hmrl8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T11:37:06Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-qkkfz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:38:12Z is after 2025-08-24T17:21:41Z" Nov 25 11:38:12 crc kubenswrapper[4706]: I1125 11:38:12.091459 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad79bed891e80837fc120b01cb2b41a16493f2f5281c83a6bb489cc17c6da995\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-11-25T11:38:12Z is after 2025-08-24T17:21:41Z" Nov 25 11:38:12 crc kubenswrapper[4706]: I1125 11:38:12.106734 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-lpc7s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3ec2e656-a68d-4339-92d5-0c157f7f7783\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c3a1481dd8cb88b79d8addfbfd40caf18850769e4492c2af316105b7f6779f9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"hos
t\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w54mf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T11:36:56Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lpc7s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:38:12Z is after 2025-08-24T17:21:41Z" Nov 25 11:38:12 crc kubenswrapper[4706]: I1125 11:38:12.121074 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:51Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:38:12Z is after 2025-08-24T17:21:41Z" Nov 25 11:38:12 crc kubenswrapper[4706]: I1125 11:38:12.137122 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-s47nr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9912058e-28f5-4cec-9eeb-03e37e0dc5c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:37:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:37:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8831e77983548cfffd56f81ff9f25b90d70dfb71b47b545af370b0a813fa19a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d03353478b53d9441951702b66365bb3a08ad9c509347472bbb31049851435a4\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-25T11:37:43Z\\\",\\\"message\\\":\\\"2025-11-25T11:36:57+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_64de4bb2-4e36-445e-91b1-9f500f3480d1\\\\n2025-11-25T11:36:57+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_64de4bb2-4e36-445e-91b1-9f500f3480d1 to /host/opt/cni/bin/\\\\n2025-11-25T11:36:58Z [verbose] multus-daemon started\\\\n2025-11-25T11:36:58Z [verbose] 
Readiness Indicator file check\\\\n2025-11-25T11:37:43Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T11:36:55Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:37:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wfqx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T11:36:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-s47nr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:38:12Z is after 2025-08-24T17:21:41Z" Nov 25 11:38:12 crc kubenswrapper[4706]: I1125 11:38:12.156417 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:38:12 crc kubenswrapper[4706]: I1125 11:38:12.156477 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:38:12 crc kubenswrapper[4706]: I1125 11:38:12.156488 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:38:12 crc kubenswrapper[4706]: I1125 11:38:12.156509 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:38:12 crc kubenswrapper[4706]: I1125 11:38:12.156523 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:38:12Z","lastTransitionTime":"2025-11-25T11:38:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:38:12 crc kubenswrapper[4706]: I1125 11:38:12.161339 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-q9rpr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f1218bae-4153-4490-8847-ab2d07ca0ab6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:53Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:53Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da5cea02464a703174faaa2a8a7dc6ba3c26bca96be0219f7304d81aba5be54e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b55sf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e92e9ade6889e5400b3c3ddff066aa544d425cf0637b75071678b8c63f8e35f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b55sf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca28080773ed8c026159b2309297e1c8ccd7cf79c4c19e3a62d89bc5a95851fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b55sf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://86d79d5837993b0bfb40c7114fd69f45a9bfd2e956b5b0fe062706e920fecd48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:55Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b55sf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7df3bf6c507e0fd5fb0f32a8785d67c96f47255fdc5d2aafb8838260ac334d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b55sf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://96aa7fcebdc88f01d2260f95d255244e28c30d422f954da2222a5b7c17d05b96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b55sf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1dfdc34e2de4aa061b93f1227bc4e3076853848aa13d8122c69d84f2a3c9bb5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a1dfdc34e2de4aa061b93f1227bc4e3076853848aa13d8122c69d84f2a3c9bb5\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-25T11:37:46Z\\\",\\\"message\\\":\\\"licy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.217.4.176],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,TrafficDistribution:nil,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngre
ss{},},Conditions:[]Condition{},},}\\\\nI1125 11:37:46.833085 6714 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-kube-scheduler-operator/metrics]} name:Service_openshift-kube-scheduler-operator/metrics_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.233:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {1dc899db-4498-4b7a-8437-861940b962e7}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF1125 11:37:46.833121 6714 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handle\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T11:37:46Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-q9rpr_openshift-ovn-kubernetes(f1218bae-4153-4490-8847-ab2d07ca0ab6)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b55sf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://62c923d955013808a55d99cb73f4239900fc83a2f53e1e8cceff3e9bc5768188\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b55sf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56474d5374e1047078d38c60dcbd00f4495bcc0d651a9e75fa70d64e34b10baa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://56474d5374e1047078
d38c60dcbd00f4495bcc0d651a9e75fa70d64e34b10baa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T11:36:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T11:36:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b55sf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T11:36:53Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-q9rpr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:38:12Z is after 2025-08-24T17:21:41Z" Nov 25 11:38:12 crc kubenswrapper[4706]: I1125 11:38:12.177260 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"363ff191-6229-47e9-a7d0-1c72f21e7c61\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://71b496da1a81efbb50a84766e610a6b03e032a4e2cb5a71191395ffb85f6b1f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://83b1d9c60793e3e0b5943d7cccd50656df78c4655b84e12c8dd1ba7d99a7990d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab8621c83015577b9039ac2ba9ce46f8b29f66d77da31a02d179132d923741bd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4d0ce4e175dd8da8d15b26e60ced87ee11dc8079ce730cfbdce1b3f4f08b1d2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-11-25T11:36:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T11:36:32Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:38:12Z is after 2025-08-24T17:21:41Z" Nov 25 11:38:12 crc kubenswrapper[4706]: I1125 11:38:12.190271 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"27ae65a2-2109-4ce8-a927-ad8b8cff1aae\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://44f97c784f83c5f2d1cfce3f39f43a832fa8da73add257ae9c39f001bbfe3999\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a03748c4ae77a0195537510fbf39f425fb59b820b719972a26c1cbaa4e1faa0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3a03748c4ae77a0195537510fbf39f425fb59b820b719972a26c1cbaa4e1faa0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T11:36:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T11:36:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T11:36:32Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:38:12Z is after 2025-08-24T17:21:41Z" Nov 25 11:38:12 crc kubenswrapper[4706]: I1125 11:38:12.203447 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://998291d5af3be798ff4e2f00d043f615e086fef44e541071bbaf781983955ce6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T11:36:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-11-25T11:38:12Z is after 2025-08-24T17:21:41Z" Nov 25 11:38:12 crc kubenswrapper[4706]: I1125 11:38:12.218114 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:51Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:38:12Z is after 2025-08-24T17:21:41Z" Nov 25 11:38:12 crc kubenswrapper[4706]: I1125 11:38:12.232164 4706 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T11:36:51Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:38:12Z is after 2025-08-24T17:21:41Z" Nov 25 11:38:12 crc kubenswrapper[4706]: I1125 11:38:12.259229 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:38:12 crc kubenswrapper[4706]: I1125 11:38:12.259265 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:38:12 crc kubenswrapper[4706]: I1125 11:38:12.259275 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:38:12 crc 
kubenswrapper[4706]: I1125 11:38:12.259291 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:38:12 crc kubenswrapper[4706]: I1125 11:38:12.259325 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:38:12Z","lastTransitionTime":"2025-11-25T11:38:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 11:38:12 crc kubenswrapper[4706]: I1125 11:38:12.362766 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:38:12 crc kubenswrapper[4706]: I1125 11:38:12.362824 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:38:12 crc kubenswrapper[4706]: I1125 11:38:12.362835 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:38:12 crc kubenswrapper[4706]: I1125 11:38:12.362855 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:38:12 crc kubenswrapper[4706]: I1125 11:38:12.362871 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:38:12Z","lastTransitionTime":"2025-11-25T11:38:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:38:12 crc kubenswrapper[4706]: I1125 11:38:12.466603 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:38:12 crc kubenswrapper[4706]: I1125 11:38:12.466670 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:38:12 crc kubenswrapper[4706]: I1125 11:38:12.466683 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:38:12 crc kubenswrapper[4706]: I1125 11:38:12.466704 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:38:12 crc kubenswrapper[4706]: I1125 11:38:12.466718 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:38:12Z","lastTransitionTime":"2025-11-25T11:38:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:38:12 crc kubenswrapper[4706]: I1125 11:38:12.569694 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:38:12 crc kubenswrapper[4706]: I1125 11:38:12.569748 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:38:12 crc kubenswrapper[4706]: I1125 11:38:12.569761 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:38:12 crc kubenswrapper[4706]: I1125 11:38:12.569780 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:38:12 crc kubenswrapper[4706]: I1125 11:38:12.569793 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:38:12Z","lastTransitionTime":"2025-11-25T11:38:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:38:12 crc kubenswrapper[4706]: I1125 11:38:12.673175 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:38:12 crc kubenswrapper[4706]: I1125 11:38:12.673343 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:38:12 crc kubenswrapper[4706]: I1125 11:38:12.673361 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:38:12 crc kubenswrapper[4706]: I1125 11:38:12.673382 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:38:12 crc kubenswrapper[4706]: I1125 11:38:12.673395 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:38:12Z","lastTransitionTime":"2025-11-25T11:38:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:38:12 crc kubenswrapper[4706]: I1125 11:38:12.776369 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:38:12 crc kubenswrapper[4706]: I1125 11:38:12.776452 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:38:12 crc kubenswrapper[4706]: I1125 11:38:12.776465 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:38:12 crc kubenswrapper[4706]: I1125 11:38:12.776485 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:38:12 crc kubenswrapper[4706]: I1125 11:38:12.776498 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:38:12Z","lastTransitionTime":"2025-11-25T11:38:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:38:12 crc kubenswrapper[4706]: I1125 11:38:12.880048 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:38:12 crc kubenswrapper[4706]: I1125 11:38:12.880108 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:38:12 crc kubenswrapper[4706]: I1125 11:38:12.880125 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:38:12 crc kubenswrapper[4706]: I1125 11:38:12.880147 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:38:12 crc kubenswrapper[4706]: I1125 11:38:12.880161 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:38:12Z","lastTransitionTime":"2025-11-25T11:38:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:38:12 crc kubenswrapper[4706]: I1125 11:38:12.983044 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:38:12 crc kubenswrapper[4706]: I1125 11:38:12.983090 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:38:12 crc kubenswrapper[4706]: I1125 11:38:12.983103 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:38:12 crc kubenswrapper[4706]: I1125 11:38:12.983125 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:38:12 crc kubenswrapper[4706]: I1125 11:38:12.983138 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:38:12Z","lastTransitionTime":"2025-11-25T11:38:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:38:13 crc kubenswrapper[4706]: I1125 11:38:13.086075 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:38:13 crc kubenswrapper[4706]: I1125 11:38:13.086147 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:38:13 crc kubenswrapper[4706]: I1125 11:38:13.086159 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:38:13 crc kubenswrapper[4706]: I1125 11:38:13.086178 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:38:13 crc kubenswrapper[4706]: I1125 11:38:13.086191 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:38:13Z","lastTransitionTime":"2025-11-25T11:38:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:38:13 crc kubenswrapper[4706]: I1125 11:38:13.188777 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:38:13 crc kubenswrapper[4706]: I1125 11:38:13.188817 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:38:13 crc kubenswrapper[4706]: I1125 11:38:13.188826 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:38:13 crc kubenswrapper[4706]: I1125 11:38:13.188841 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:38:13 crc kubenswrapper[4706]: I1125 11:38:13.188850 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:38:13Z","lastTransitionTime":"2025-11-25T11:38:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:38:13 crc kubenswrapper[4706]: I1125 11:38:13.291798 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:38:13 crc kubenswrapper[4706]: I1125 11:38:13.291853 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:38:13 crc kubenswrapper[4706]: I1125 11:38:13.291866 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:38:13 crc kubenswrapper[4706]: I1125 11:38:13.291888 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:38:13 crc kubenswrapper[4706]: I1125 11:38:13.291899 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:38:13Z","lastTransitionTime":"2025-11-25T11:38:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:38:13 crc kubenswrapper[4706]: I1125 11:38:13.394877 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:38:13 crc kubenswrapper[4706]: I1125 11:38:13.394972 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:38:13 crc kubenswrapper[4706]: I1125 11:38:13.394986 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:38:13 crc kubenswrapper[4706]: I1125 11:38:13.395032 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:38:13 crc kubenswrapper[4706]: I1125 11:38:13.395051 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:38:13Z","lastTransitionTime":"2025-11-25T11:38:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:38:13 crc kubenswrapper[4706]: I1125 11:38:13.498202 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:38:13 crc kubenswrapper[4706]: I1125 11:38:13.498259 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:38:13 crc kubenswrapper[4706]: I1125 11:38:13.498270 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:38:13 crc kubenswrapper[4706]: I1125 11:38:13.498291 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:38:13 crc kubenswrapper[4706]: I1125 11:38:13.498326 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:38:13Z","lastTransitionTime":"2025-11-25T11:38:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:38:13 crc kubenswrapper[4706]: I1125 11:38:13.601208 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:38:13 crc kubenswrapper[4706]: I1125 11:38:13.601290 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:38:13 crc kubenswrapper[4706]: I1125 11:38:13.601358 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:38:13 crc kubenswrapper[4706]: I1125 11:38:13.601394 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:38:13 crc kubenswrapper[4706]: I1125 11:38:13.601417 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:38:13Z","lastTransitionTime":"2025-11-25T11:38:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:38:13 crc kubenswrapper[4706]: I1125 11:38:13.704729 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:38:13 crc kubenswrapper[4706]: I1125 11:38:13.704785 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:38:13 crc kubenswrapper[4706]: I1125 11:38:13.704823 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:38:13 crc kubenswrapper[4706]: I1125 11:38:13.704847 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:38:13 crc kubenswrapper[4706]: I1125 11:38:13.704861 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:38:13Z","lastTransitionTime":"2025-11-25T11:38:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:38:13 crc kubenswrapper[4706]: I1125 11:38:13.807772 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:38:13 crc kubenswrapper[4706]: I1125 11:38:13.807827 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:38:13 crc kubenswrapper[4706]: I1125 11:38:13.807847 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:38:13 crc kubenswrapper[4706]: I1125 11:38:13.807868 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:38:13 crc kubenswrapper[4706]: I1125 11:38:13.807883 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:38:13Z","lastTransitionTime":"2025-11-25T11:38:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:38:13 crc kubenswrapper[4706]: I1125 11:38:13.911907 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:38:13 crc kubenswrapper[4706]: I1125 11:38:13.911958 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:38:13 crc kubenswrapper[4706]: I1125 11:38:13.911973 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:38:13 crc kubenswrapper[4706]: I1125 11:38:13.911994 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:38:13 crc kubenswrapper[4706]: I1125 11:38:13.912010 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:38:13Z","lastTransitionTime":"2025-11-25T11:38:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 11:38:13 crc kubenswrapper[4706]: I1125 11:38:13.921250 4706 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 11:38:13 crc kubenswrapper[4706]: I1125 11:38:13.921290 4706 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 11:38:13 crc kubenswrapper[4706]: I1125 11:38:13.921270 4706 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 11:38:13 crc kubenswrapper[4706]: I1125 11:38:13.921445 4706 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-l99rd" Nov 25 11:38:13 crc kubenswrapper[4706]: E1125 11:38:13.921411 4706 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 25 11:38:13 crc kubenswrapper[4706]: E1125 11:38:13.921531 4706 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 25 11:38:13 crc kubenswrapper[4706]: E1125 11:38:13.921734 4706 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-l99rd" podUID="14d69237-a4b7-43ea-ac81-f165eb532669" Nov 25 11:38:13 crc kubenswrapper[4706]: E1125 11:38:13.921762 4706 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 25 11:38:14 crc kubenswrapper[4706]: I1125 11:38:14.014994 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:38:14 crc kubenswrapper[4706]: I1125 11:38:14.015043 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:38:14 crc kubenswrapper[4706]: I1125 11:38:14.015053 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:38:14 crc kubenswrapper[4706]: I1125 11:38:14.015070 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:38:14 crc kubenswrapper[4706]: I1125 11:38:14.015083 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:38:14Z","lastTransitionTime":"2025-11-25T11:38:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:38:14 crc kubenswrapper[4706]: I1125 11:38:14.118499 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:38:14 crc kubenswrapper[4706]: I1125 11:38:14.118665 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:38:14 crc kubenswrapper[4706]: I1125 11:38:14.119022 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:38:14 crc kubenswrapper[4706]: I1125 11:38:14.119330 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:38:14 crc kubenswrapper[4706]: I1125 11:38:14.119370 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:38:14Z","lastTransitionTime":"2025-11-25T11:38:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:38:14 crc kubenswrapper[4706]: I1125 11:38:14.222017 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:38:14 crc kubenswrapper[4706]: I1125 11:38:14.222062 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:38:14 crc kubenswrapper[4706]: I1125 11:38:14.222074 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:38:14 crc kubenswrapper[4706]: I1125 11:38:14.222097 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:38:14 crc kubenswrapper[4706]: I1125 11:38:14.222116 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:38:14Z","lastTransitionTime":"2025-11-25T11:38:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:38:14 crc kubenswrapper[4706]: I1125 11:38:14.325139 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:38:14 crc kubenswrapper[4706]: I1125 11:38:14.325236 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:38:14 crc kubenswrapper[4706]: I1125 11:38:14.325250 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:38:14 crc kubenswrapper[4706]: I1125 11:38:14.325274 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:38:14 crc kubenswrapper[4706]: I1125 11:38:14.325288 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:38:14Z","lastTransitionTime":"2025-11-25T11:38:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:38:14 crc kubenswrapper[4706]: I1125 11:38:14.428450 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:38:14 crc kubenswrapper[4706]: I1125 11:38:14.428493 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:38:14 crc kubenswrapper[4706]: I1125 11:38:14.428505 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:38:14 crc kubenswrapper[4706]: I1125 11:38:14.428525 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:38:14 crc kubenswrapper[4706]: I1125 11:38:14.428539 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:38:14Z","lastTransitionTime":"2025-11-25T11:38:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:38:14 crc kubenswrapper[4706]: I1125 11:38:14.531494 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:38:14 crc kubenswrapper[4706]: I1125 11:38:14.531583 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:38:14 crc kubenswrapper[4706]: I1125 11:38:14.531599 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:38:14 crc kubenswrapper[4706]: I1125 11:38:14.531618 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:38:14 crc kubenswrapper[4706]: I1125 11:38:14.531628 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:38:14Z","lastTransitionTime":"2025-11-25T11:38:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:38:14 crc kubenswrapper[4706]: I1125 11:38:14.634976 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:38:14 crc kubenswrapper[4706]: I1125 11:38:14.635040 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:38:14 crc kubenswrapper[4706]: I1125 11:38:14.635050 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:38:14 crc kubenswrapper[4706]: I1125 11:38:14.635071 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:38:14 crc kubenswrapper[4706]: I1125 11:38:14.635087 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:38:14Z","lastTransitionTime":"2025-11-25T11:38:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:38:14 crc kubenswrapper[4706]: I1125 11:38:14.738115 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:38:14 crc kubenswrapper[4706]: I1125 11:38:14.738189 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:38:14 crc kubenswrapper[4706]: I1125 11:38:14.738209 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:38:14 crc kubenswrapper[4706]: I1125 11:38:14.738232 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:38:14 crc kubenswrapper[4706]: I1125 11:38:14.738246 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:38:14Z","lastTransitionTime":"2025-11-25T11:38:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:38:14 crc kubenswrapper[4706]: I1125 11:38:14.841403 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:38:14 crc kubenswrapper[4706]: I1125 11:38:14.841447 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:38:14 crc kubenswrapper[4706]: I1125 11:38:14.841458 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:38:14 crc kubenswrapper[4706]: I1125 11:38:14.841476 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:38:14 crc kubenswrapper[4706]: I1125 11:38:14.841489 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:38:14Z","lastTransitionTime":"2025-11-25T11:38:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:38:14 crc kubenswrapper[4706]: I1125 11:38:14.922974 4706 scope.go:117] "RemoveContainer" containerID="a1dfdc34e2de4aa061b93f1227bc4e3076853848aa13d8122c69d84f2a3c9bb5" Nov 25 11:38:14 crc kubenswrapper[4706]: E1125 11:38:14.923203 4706 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-q9rpr_openshift-ovn-kubernetes(f1218bae-4153-4490-8847-ab2d07ca0ab6)\"" pod="openshift-ovn-kubernetes/ovnkube-node-q9rpr" podUID="f1218bae-4153-4490-8847-ab2d07ca0ab6" Nov 25 11:38:14 crc kubenswrapper[4706]: I1125 11:38:14.944447 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:38:14 crc kubenswrapper[4706]: I1125 11:38:14.944497 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:38:14 crc kubenswrapper[4706]: I1125 11:38:14.944506 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:38:14 crc kubenswrapper[4706]: I1125 11:38:14.944528 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:38:14 crc kubenswrapper[4706]: I1125 11:38:14.944539 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:38:14Z","lastTransitionTime":"2025-11-25T11:38:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:38:15 crc kubenswrapper[4706]: I1125 11:38:15.047193 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:38:15 crc kubenswrapper[4706]: I1125 11:38:15.047245 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:38:15 crc kubenswrapper[4706]: I1125 11:38:15.047254 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:38:15 crc kubenswrapper[4706]: I1125 11:38:15.047271 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:38:15 crc kubenswrapper[4706]: I1125 11:38:15.047281 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:38:15Z","lastTransitionTime":"2025-11-25T11:38:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:38:15 crc kubenswrapper[4706]: I1125 11:38:15.149477 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:38:15 crc kubenswrapper[4706]: I1125 11:38:15.149540 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:38:15 crc kubenswrapper[4706]: I1125 11:38:15.149553 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:38:15 crc kubenswrapper[4706]: I1125 11:38:15.149574 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:38:15 crc kubenswrapper[4706]: I1125 11:38:15.149587 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:38:15Z","lastTransitionTime":"2025-11-25T11:38:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:38:15 crc kubenswrapper[4706]: I1125 11:38:15.252609 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:38:15 crc kubenswrapper[4706]: I1125 11:38:15.252662 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:38:15 crc kubenswrapper[4706]: I1125 11:38:15.252672 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:38:15 crc kubenswrapper[4706]: I1125 11:38:15.252692 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:38:15 crc kubenswrapper[4706]: I1125 11:38:15.252704 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:38:15Z","lastTransitionTime":"2025-11-25T11:38:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:38:15 crc kubenswrapper[4706]: I1125 11:38:15.361677 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:38:15 crc kubenswrapper[4706]: I1125 11:38:15.361764 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:38:15 crc kubenswrapper[4706]: I1125 11:38:15.361783 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:38:15 crc kubenswrapper[4706]: I1125 11:38:15.361805 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:38:15 crc kubenswrapper[4706]: I1125 11:38:15.361820 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:38:15Z","lastTransitionTime":"2025-11-25T11:38:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:38:15 crc kubenswrapper[4706]: I1125 11:38:15.465292 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:38:15 crc kubenswrapper[4706]: I1125 11:38:15.465363 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:38:15 crc kubenswrapper[4706]: I1125 11:38:15.465374 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:38:15 crc kubenswrapper[4706]: I1125 11:38:15.465393 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:38:15 crc kubenswrapper[4706]: I1125 11:38:15.465408 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:38:15Z","lastTransitionTime":"2025-11-25T11:38:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:38:15 crc kubenswrapper[4706]: I1125 11:38:15.569587 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:38:15 crc kubenswrapper[4706]: I1125 11:38:15.569651 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:38:15 crc kubenswrapper[4706]: I1125 11:38:15.569664 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:38:15 crc kubenswrapper[4706]: I1125 11:38:15.569686 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:38:15 crc kubenswrapper[4706]: I1125 11:38:15.569700 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:38:15Z","lastTransitionTime":"2025-11-25T11:38:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:38:15 crc kubenswrapper[4706]: I1125 11:38:15.673211 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:38:15 crc kubenswrapper[4706]: I1125 11:38:15.673347 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:38:15 crc kubenswrapper[4706]: I1125 11:38:15.673358 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:38:15 crc kubenswrapper[4706]: I1125 11:38:15.673377 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:38:15 crc kubenswrapper[4706]: I1125 11:38:15.673388 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:38:15Z","lastTransitionTime":"2025-11-25T11:38:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:38:15 crc kubenswrapper[4706]: I1125 11:38:15.776689 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:38:15 crc kubenswrapper[4706]: I1125 11:38:15.776750 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:38:15 crc kubenswrapper[4706]: I1125 11:38:15.776763 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:38:15 crc kubenswrapper[4706]: I1125 11:38:15.776787 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:38:15 crc kubenswrapper[4706]: I1125 11:38:15.776803 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:38:15Z","lastTransitionTime":"2025-11-25T11:38:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:38:15 crc kubenswrapper[4706]: I1125 11:38:15.878835 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:38:15 crc kubenswrapper[4706]: I1125 11:38:15.878894 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:38:15 crc kubenswrapper[4706]: I1125 11:38:15.878906 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:38:15 crc kubenswrapper[4706]: I1125 11:38:15.878935 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:38:15 crc kubenswrapper[4706]: I1125 11:38:15.878977 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:38:15Z","lastTransitionTime":"2025-11-25T11:38:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 11:38:15 crc kubenswrapper[4706]: I1125 11:38:15.922077 4706 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 11:38:15 crc kubenswrapper[4706]: I1125 11:38:15.922179 4706 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 11:38:15 crc kubenswrapper[4706]: I1125 11:38:15.922207 4706 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-l99rd" Nov 25 11:38:15 crc kubenswrapper[4706]: E1125 11:38:15.922369 4706 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 25 11:38:15 crc kubenswrapper[4706]: I1125 11:38:15.922387 4706 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 11:38:15 crc kubenswrapper[4706]: E1125 11:38:15.922484 4706 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 25 11:38:15 crc kubenswrapper[4706]: E1125 11:38:15.922498 4706 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 25 11:38:15 crc kubenswrapper[4706]: E1125 11:38:15.922571 4706 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-l99rd" podUID="14d69237-a4b7-43ea-ac81-f165eb532669" Nov 25 11:38:15 crc kubenswrapper[4706]: I1125 11:38:15.982760 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:38:15 crc kubenswrapper[4706]: I1125 11:38:15.982827 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:38:15 crc kubenswrapper[4706]: I1125 11:38:15.982841 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:38:15 crc kubenswrapper[4706]: I1125 11:38:15.982862 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:38:15 crc kubenswrapper[4706]: I1125 11:38:15.982872 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:38:15Z","lastTransitionTime":"2025-11-25T11:38:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:38:16 crc kubenswrapper[4706]: I1125 11:38:16.085334 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:38:16 crc kubenswrapper[4706]: I1125 11:38:16.085387 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:38:16 crc kubenswrapper[4706]: I1125 11:38:16.085399 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:38:16 crc kubenswrapper[4706]: I1125 11:38:16.085418 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:38:16 crc kubenswrapper[4706]: I1125 11:38:16.085433 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:38:16Z","lastTransitionTime":"2025-11-25T11:38:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:38:16 crc kubenswrapper[4706]: I1125 11:38:16.188694 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:38:16 crc kubenswrapper[4706]: I1125 11:38:16.188760 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:38:16 crc kubenswrapper[4706]: I1125 11:38:16.188773 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:38:16 crc kubenswrapper[4706]: I1125 11:38:16.188791 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:38:16 crc kubenswrapper[4706]: I1125 11:38:16.188801 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:38:16Z","lastTransitionTime":"2025-11-25T11:38:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:38:16 crc kubenswrapper[4706]: I1125 11:38:16.292210 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:38:16 crc kubenswrapper[4706]: I1125 11:38:16.292354 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:38:16 crc kubenswrapper[4706]: I1125 11:38:16.292388 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:38:16 crc kubenswrapper[4706]: I1125 11:38:16.292424 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:38:16 crc kubenswrapper[4706]: I1125 11:38:16.292460 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:38:16Z","lastTransitionTime":"2025-11-25T11:38:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:38:16 crc kubenswrapper[4706]: I1125 11:38:16.395342 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:38:16 crc kubenswrapper[4706]: I1125 11:38:16.395399 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:38:16 crc kubenswrapper[4706]: I1125 11:38:16.395411 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:38:16 crc kubenswrapper[4706]: I1125 11:38:16.395431 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:38:16 crc kubenswrapper[4706]: I1125 11:38:16.395444 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:38:16Z","lastTransitionTime":"2025-11-25T11:38:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:38:16 crc kubenswrapper[4706]: I1125 11:38:16.499240 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:38:16 crc kubenswrapper[4706]: I1125 11:38:16.499327 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:38:16 crc kubenswrapper[4706]: I1125 11:38:16.499340 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:38:16 crc kubenswrapper[4706]: I1125 11:38:16.499361 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:38:16 crc kubenswrapper[4706]: I1125 11:38:16.499373 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:38:16Z","lastTransitionTime":"2025-11-25T11:38:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:38:16 crc kubenswrapper[4706]: I1125 11:38:16.602541 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:38:16 crc kubenswrapper[4706]: I1125 11:38:16.602599 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:38:16 crc kubenswrapper[4706]: I1125 11:38:16.602609 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:38:16 crc kubenswrapper[4706]: I1125 11:38:16.602627 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:38:16 crc kubenswrapper[4706]: I1125 11:38:16.602642 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:38:16Z","lastTransitionTime":"2025-11-25T11:38:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:38:16 crc kubenswrapper[4706]: I1125 11:38:16.705505 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:38:16 crc kubenswrapper[4706]: I1125 11:38:16.705557 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:38:16 crc kubenswrapper[4706]: I1125 11:38:16.705568 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:38:16 crc kubenswrapper[4706]: I1125 11:38:16.705590 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:38:16 crc kubenswrapper[4706]: I1125 11:38:16.705601 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:38:16Z","lastTransitionTime":"2025-11-25T11:38:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:38:16 crc kubenswrapper[4706]: I1125 11:38:16.808993 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:38:16 crc kubenswrapper[4706]: I1125 11:38:16.809069 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:38:16 crc kubenswrapper[4706]: I1125 11:38:16.809088 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:38:16 crc kubenswrapper[4706]: I1125 11:38:16.809113 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:38:16 crc kubenswrapper[4706]: I1125 11:38:16.809129 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:38:16Z","lastTransitionTime":"2025-11-25T11:38:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:38:16 crc kubenswrapper[4706]: I1125 11:38:16.911946 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:38:16 crc kubenswrapper[4706]: I1125 11:38:16.912002 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:38:16 crc kubenswrapper[4706]: I1125 11:38:16.912018 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:38:16 crc kubenswrapper[4706]: I1125 11:38:16.912038 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:38:16 crc kubenswrapper[4706]: I1125 11:38:16.912052 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:38:16Z","lastTransitionTime":"2025-11-25T11:38:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:38:17 crc kubenswrapper[4706]: I1125 11:38:17.015642 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:38:17 crc kubenswrapper[4706]: I1125 11:38:17.015706 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:38:17 crc kubenswrapper[4706]: I1125 11:38:17.015720 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:38:17 crc kubenswrapper[4706]: I1125 11:38:17.015740 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:38:17 crc kubenswrapper[4706]: I1125 11:38:17.015755 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:38:17Z","lastTransitionTime":"2025-11-25T11:38:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:38:17 crc kubenswrapper[4706]: I1125 11:38:17.118415 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:38:17 crc kubenswrapper[4706]: I1125 11:38:17.118475 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:38:17 crc kubenswrapper[4706]: I1125 11:38:17.118492 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:38:17 crc kubenswrapper[4706]: I1125 11:38:17.118516 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:38:17 crc kubenswrapper[4706]: I1125 11:38:17.118528 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:38:17Z","lastTransitionTime":"2025-11-25T11:38:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:38:17 crc kubenswrapper[4706]: I1125 11:38:17.221170 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:38:17 crc kubenswrapper[4706]: I1125 11:38:17.221225 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:38:17 crc kubenswrapper[4706]: I1125 11:38:17.221240 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:38:17 crc kubenswrapper[4706]: I1125 11:38:17.221259 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:38:17 crc kubenswrapper[4706]: I1125 11:38:17.221270 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:38:17Z","lastTransitionTime":"2025-11-25T11:38:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:38:17 crc kubenswrapper[4706]: I1125 11:38:17.324841 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:38:17 crc kubenswrapper[4706]: I1125 11:38:17.325265 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:38:17 crc kubenswrapper[4706]: I1125 11:38:17.325280 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:38:17 crc kubenswrapper[4706]: I1125 11:38:17.325333 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:38:17 crc kubenswrapper[4706]: I1125 11:38:17.325347 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:38:17Z","lastTransitionTime":"2025-11-25T11:38:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:38:17 crc kubenswrapper[4706]: I1125 11:38:17.427805 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:38:17 crc kubenswrapper[4706]: I1125 11:38:17.427847 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:38:17 crc kubenswrapper[4706]: I1125 11:38:17.427857 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:38:17 crc kubenswrapper[4706]: I1125 11:38:17.427875 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:38:17 crc kubenswrapper[4706]: I1125 11:38:17.427886 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:38:17Z","lastTransitionTime":"2025-11-25T11:38:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:38:17 crc kubenswrapper[4706]: I1125 11:38:17.531177 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:38:17 crc kubenswrapper[4706]: I1125 11:38:17.531240 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:38:17 crc kubenswrapper[4706]: I1125 11:38:17.531252 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:38:17 crc kubenswrapper[4706]: I1125 11:38:17.531274 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:38:17 crc kubenswrapper[4706]: I1125 11:38:17.531290 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:38:17Z","lastTransitionTime":"2025-11-25T11:38:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:38:17 crc kubenswrapper[4706]: I1125 11:38:17.633818 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:38:17 crc kubenswrapper[4706]: I1125 11:38:17.633869 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:38:17 crc kubenswrapper[4706]: I1125 11:38:17.633879 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:38:17 crc kubenswrapper[4706]: I1125 11:38:17.633897 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:38:17 crc kubenswrapper[4706]: I1125 11:38:17.633913 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:38:17Z","lastTransitionTime":"2025-11-25T11:38:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:38:17 crc kubenswrapper[4706]: I1125 11:38:17.736579 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:38:17 crc kubenswrapper[4706]: I1125 11:38:17.736681 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:38:17 crc kubenswrapper[4706]: I1125 11:38:17.736694 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:38:17 crc kubenswrapper[4706]: I1125 11:38:17.736714 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:38:17 crc kubenswrapper[4706]: I1125 11:38:17.736730 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:38:17Z","lastTransitionTime":"2025-11-25T11:38:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:38:17 crc kubenswrapper[4706]: I1125 11:38:17.839716 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:38:17 crc kubenswrapper[4706]: I1125 11:38:17.839759 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:38:17 crc kubenswrapper[4706]: I1125 11:38:17.839767 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:38:17 crc kubenswrapper[4706]: I1125 11:38:17.839786 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:38:17 crc kubenswrapper[4706]: I1125 11:38:17.839799 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:38:17Z","lastTransitionTime":"2025-11-25T11:38:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 11:38:17 crc kubenswrapper[4706]: I1125 11:38:17.921804 4706 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 11:38:17 crc kubenswrapper[4706]: I1125 11:38:17.921892 4706 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 11:38:17 crc kubenswrapper[4706]: I1125 11:38:17.921960 4706 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 11:38:17 crc kubenswrapper[4706]: E1125 11:38:17.922000 4706 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 25 11:38:17 crc kubenswrapper[4706]: E1125 11:38:17.922143 4706 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 25 11:38:17 crc kubenswrapper[4706]: I1125 11:38:17.922210 4706 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-l99rd" Nov 25 11:38:17 crc kubenswrapper[4706]: E1125 11:38:17.922268 4706 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-l99rd" podUID="14d69237-a4b7-43ea-ac81-f165eb532669" Nov 25 11:38:17 crc kubenswrapper[4706]: E1125 11:38:17.922359 4706 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 25 11:38:17 crc kubenswrapper[4706]: I1125 11:38:17.942770 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:38:17 crc kubenswrapper[4706]: I1125 11:38:17.942830 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:38:17 crc kubenswrapper[4706]: I1125 11:38:17.942842 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:38:17 crc kubenswrapper[4706]: I1125 11:38:17.942863 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:38:17 crc kubenswrapper[4706]: I1125 11:38:17.942876 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:38:17Z","lastTransitionTime":"2025-11-25T11:38:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:38:18 crc kubenswrapper[4706]: I1125 11:38:18.046279 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:38:18 crc kubenswrapper[4706]: I1125 11:38:18.046382 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:38:18 crc kubenswrapper[4706]: I1125 11:38:18.046396 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:38:18 crc kubenswrapper[4706]: I1125 11:38:18.046414 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:38:18 crc kubenswrapper[4706]: I1125 11:38:18.046425 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:38:18Z","lastTransitionTime":"2025-11-25T11:38:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:38:18 crc kubenswrapper[4706]: I1125 11:38:18.149851 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:38:18 crc kubenswrapper[4706]: I1125 11:38:18.149922 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:38:18 crc kubenswrapper[4706]: I1125 11:38:18.149935 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:38:18 crc kubenswrapper[4706]: I1125 11:38:18.149961 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:38:18 crc kubenswrapper[4706]: I1125 11:38:18.149976 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:38:18Z","lastTransitionTime":"2025-11-25T11:38:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:38:18 crc kubenswrapper[4706]: I1125 11:38:18.252568 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:38:18 crc kubenswrapper[4706]: I1125 11:38:18.252604 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:38:18 crc kubenswrapper[4706]: I1125 11:38:18.252615 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:38:18 crc kubenswrapper[4706]: I1125 11:38:18.252635 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:38:18 crc kubenswrapper[4706]: I1125 11:38:18.252648 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:38:18Z","lastTransitionTime":"2025-11-25T11:38:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:38:18 crc kubenswrapper[4706]: I1125 11:38:18.355182 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:38:18 crc kubenswrapper[4706]: I1125 11:38:18.355239 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:38:18 crc kubenswrapper[4706]: I1125 11:38:18.355257 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:38:18 crc kubenswrapper[4706]: I1125 11:38:18.355277 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:38:18 crc kubenswrapper[4706]: I1125 11:38:18.355289 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:38:18Z","lastTransitionTime":"2025-11-25T11:38:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:38:18 crc kubenswrapper[4706]: I1125 11:38:18.457849 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:38:18 crc kubenswrapper[4706]: I1125 11:38:18.457893 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:38:18 crc kubenswrapper[4706]: I1125 11:38:18.457903 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:38:18 crc kubenswrapper[4706]: I1125 11:38:18.457921 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:38:18 crc kubenswrapper[4706]: I1125 11:38:18.457932 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:38:18Z","lastTransitionTime":"2025-11-25T11:38:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:38:18 crc kubenswrapper[4706]: I1125 11:38:18.560670 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:38:18 crc kubenswrapper[4706]: I1125 11:38:18.560743 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:38:18 crc kubenswrapper[4706]: I1125 11:38:18.560758 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:38:18 crc kubenswrapper[4706]: I1125 11:38:18.560782 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:38:18 crc kubenswrapper[4706]: I1125 11:38:18.560796 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:38:18Z","lastTransitionTime":"2025-11-25T11:38:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:38:18 crc kubenswrapper[4706]: I1125 11:38:18.663516 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:38:18 crc kubenswrapper[4706]: I1125 11:38:18.663562 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:38:18 crc kubenswrapper[4706]: I1125 11:38:18.663575 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:38:18 crc kubenswrapper[4706]: I1125 11:38:18.663593 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:38:18 crc kubenswrapper[4706]: I1125 11:38:18.663606 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:38:18Z","lastTransitionTime":"2025-11-25T11:38:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:38:18 crc kubenswrapper[4706]: I1125 11:38:18.766124 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:38:18 crc kubenswrapper[4706]: I1125 11:38:18.766176 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:38:18 crc kubenswrapper[4706]: I1125 11:38:18.766189 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:38:18 crc kubenswrapper[4706]: I1125 11:38:18.766213 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:38:18 crc kubenswrapper[4706]: I1125 11:38:18.766226 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:38:18Z","lastTransitionTime":"2025-11-25T11:38:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:38:18 crc kubenswrapper[4706]: I1125 11:38:18.868474 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:38:18 crc kubenswrapper[4706]: I1125 11:38:18.868800 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:38:18 crc kubenswrapper[4706]: I1125 11:38:18.868872 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:38:18 crc kubenswrapper[4706]: I1125 11:38:18.868940 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:38:18 crc kubenswrapper[4706]: I1125 11:38:18.869001 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:38:18Z","lastTransitionTime":"2025-11-25T11:38:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:38:18 crc kubenswrapper[4706]: I1125 11:38:18.972012 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:38:18 crc kubenswrapper[4706]: I1125 11:38:18.972052 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:38:18 crc kubenswrapper[4706]: I1125 11:38:18.972063 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:38:18 crc kubenswrapper[4706]: I1125 11:38:18.972082 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:38:18 crc kubenswrapper[4706]: I1125 11:38:18.972095 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:38:18Z","lastTransitionTime":"2025-11-25T11:38:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:38:19 crc kubenswrapper[4706]: I1125 11:38:19.074392 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:38:19 crc kubenswrapper[4706]: I1125 11:38:19.074758 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:38:19 crc kubenswrapper[4706]: I1125 11:38:19.074869 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:38:19 crc kubenswrapper[4706]: I1125 11:38:19.074970 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:38:19 crc kubenswrapper[4706]: I1125 11:38:19.075056 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:38:19Z","lastTransitionTime":"2025-11-25T11:38:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:38:19 crc kubenswrapper[4706]: I1125 11:38:19.178038 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:38:19 crc kubenswrapper[4706]: I1125 11:38:19.178360 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:38:19 crc kubenswrapper[4706]: I1125 11:38:19.178470 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:38:19 crc kubenswrapper[4706]: I1125 11:38:19.178594 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:38:19 crc kubenswrapper[4706]: I1125 11:38:19.178764 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:38:19Z","lastTransitionTime":"2025-11-25T11:38:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:38:19 crc kubenswrapper[4706]: I1125 11:38:19.281168 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:38:19 crc kubenswrapper[4706]: I1125 11:38:19.281527 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:38:19 crc kubenswrapper[4706]: I1125 11:38:19.281657 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:38:19 crc kubenswrapper[4706]: I1125 11:38:19.281781 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:38:19 crc kubenswrapper[4706]: I1125 11:38:19.281867 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:38:19Z","lastTransitionTime":"2025-11-25T11:38:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:38:19 crc kubenswrapper[4706]: I1125 11:38:19.384341 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:38:19 crc kubenswrapper[4706]: I1125 11:38:19.384417 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:38:19 crc kubenswrapper[4706]: I1125 11:38:19.384429 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:38:19 crc kubenswrapper[4706]: I1125 11:38:19.384449 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:38:19 crc kubenswrapper[4706]: I1125 11:38:19.384466 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:38:19Z","lastTransitionTime":"2025-11-25T11:38:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:38:19 crc kubenswrapper[4706]: I1125 11:38:19.487183 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:38:19 crc kubenswrapper[4706]: I1125 11:38:19.487674 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:38:19 crc kubenswrapper[4706]: I1125 11:38:19.487764 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:38:19 crc kubenswrapper[4706]: I1125 11:38:19.487847 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:38:19 crc kubenswrapper[4706]: I1125 11:38:19.487988 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:38:19Z","lastTransitionTime":"2025-11-25T11:38:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:38:19 crc kubenswrapper[4706]: I1125 11:38:19.540527 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:38:19 crc kubenswrapper[4706]: I1125 11:38:19.540583 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:38:19 crc kubenswrapper[4706]: I1125 11:38:19.540596 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:38:19 crc kubenswrapper[4706]: I1125 11:38:19.540616 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:38:19 crc kubenswrapper[4706]: I1125 11:38:19.540630 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:38:19Z","lastTransitionTime":"2025-11-25T11:38:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:38:19 crc kubenswrapper[4706]: E1125 11:38:19.555748 4706 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T11:38:19Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T11:38:19Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T11:38:19Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T11:38:19Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T11:38:19Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T11:38:19Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T11:38:19Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T11:38:19Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"30198dc8-e58c-4847-a541-041da1924c5c\\\",\\\"systemUUID\\\":\\\"7dac62ec-3979-4862-b1af-b63212907795\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:38:19Z is after 2025-08-24T17:21:41Z" Nov 25 11:38:19 crc kubenswrapper[4706]: I1125 11:38:19.560526 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:38:19 crc kubenswrapper[4706]: I1125 11:38:19.560595 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:38:19 crc kubenswrapper[4706]: I1125 11:38:19.560610 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:38:19 crc kubenswrapper[4706]: I1125 11:38:19.560628 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:38:19 crc kubenswrapper[4706]: I1125 11:38:19.560641 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:38:19Z","lastTransitionTime":"2025-11-25T11:38:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:38:19 crc kubenswrapper[4706]: E1125 11:38:19.574009 4706 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T11:38:19Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T11:38:19Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T11:38:19Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T11:38:19Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T11:38:19Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T11:38:19Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T11:38:19Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T11:38:19Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"30198dc8-e58c-4847-a541-041da1924c5c\\\",\\\"systemUUID\\\":\\\"7dac62ec-3979-4862-b1af-b63212907795\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:38:19Z is after 2025-08-24T17:21:41Z" Nov 25 11:38:19 crc kubenswrapper[4706]: I1125 11:38:19.579607 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:38:19 crc kubenswrapper[4706]: I1125 11:38:19.579665 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:38:19 crc kubenswrapper[4706]: I1125 11:38:19.579675 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:38:19 crc kubenswrapper[4706]: I1125 11:38:19.579696 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:38:19 crc kubenswrapper[4706]: I1125 11:38:19.579708 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:38:19Z","lastTransitionTime":"2025-11-25T11:38:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:38:19 crc kubenswrapper[4706]: E1125 11:38:19.595868 4706 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T11:38:19Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T11:38:19Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T11:38:19Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T11:38:19Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T11:38:19Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T11:38:19Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T11:38:19Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T11:38:19Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"30198dc8-e58c-4847-a541-041da1924c5c\\\",\\\"systemUUID\\\":\\\"7dac62ec-3979-4862-b1af-b63212907795\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:38:19Z is after 2025-08-24T17:21:41Z" Nov 25 11:38:19 crc kubenswrapper[4706]: I1125 11:38:19.601113 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:38:19 crc kubenswrapper[4706]: I1125 11:38:19.601167 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:38:19 crc kubenswrapper[4706]: I1125 11:38:19.601182 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:38:19 crc kubenswrapper[4706]: I1125 11:38:19.601207 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:38:19 crc kubenswrapper[4706]: I1125 11:38:19.601218 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:38:19Z","lastTransitionTime":"2025-11-25T11:38:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:38:19 crc kubenswrapper[4706]: E1125 11:38:19.617477 4706 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T11:38:19Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T11:38:19Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T11:38:19Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T11:38:19Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T11:38:19Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T11:38:19Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T11:38:19Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T11:38:19Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"30198dc8-e58c-4847-a541-041da1924c5c\\\",\\\"systemUUID\\\":\\\"7dac62ec-3979-4862-b1af-b63212907795\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:38:19Z is after 2025-08-24T17:21:41Z" Nov 25 11:38:19 crc kubenswrapper[4706]: I1125 11:38:19.622604 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:38:19 crc kubenswrapper[4706]: I1125 11:38:19.622930 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:38:19 crc kubenswrapper[4706]: I1125 11:38:19.623029 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:38:19 crc kubenswrapper[4706]: I1125 11:38:19.623129 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:38:19 crc kubenswrapper[4706]: I1125 11:38:19.623281 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:38:19Z","lastTransitionTime":"2025-11-25T11:38:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:38:19 crc kubenswrapper[4706]: E1125 11:38:19.638945 4706 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T11:38:19Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T11:38:19Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T11:38:19Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T11:38:19Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T11:38:19Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T11:38:19Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T11:38:19Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T11:38:19Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"30198dc8-e58c-4847-a541-041da1924c5c\\\",\\\"systemUUID\\\":\\\"7dac62ec-3979-4862-b1af-b63212907795\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T11:38:19Z is after 2025-08-24T17:21:41Z" Nov 25 11:38:19 crc kubenswrapper[4706]: E1125 11:38:19.639657 4706 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Nov 25 11:38:19 crc kubenswrapper[4706]: I1125 11:38:19.643059 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:38:19 crc kubenswrapper[4706]: I1125 11:38:19.643603 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:38:19 crc kubenswrapper[4706]: I1125 11:38:19.643705 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:38:19 crc kubenswrapper[4706]: I1125 11:38:19.643798 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:38:19 crc kubenswrapper[4706]: I1125 11:38:19.643892 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:38:19Z","lastTransitionTime":"2025-11-25T11:38:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:38:19 crc kubenswrapper[4706]: I1125 11:38:19.750424 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:38:19 crc kubenswrapper[4706]: I1125 11:38:19.750478 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:38:19 crc kubenswrapper[4706]: I1125 11:38:19.750486 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:38:19 crc kubenswrapper[4706]: I1125 11:38:19.750508 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:38:19 crc kubenswrapper[4706]: I1125 11:38:19.750522 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:38:19Z","lastTransitionTime":"2025-11-25T11:38:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:38:19 crc kubenswrapper[4706]: I1125 11:38:19.853575 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:38:19 crc kubenswrapper[4706]: I1125 11:38:19.853631 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:38:19 crc kubenswrapper[4706]: I1125 11:38:19.853642 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:38:19 crc kubenswrapper[4706]: I1125 11:38:19.853661 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:38:19 crc kubenswrapper[4706]: I1125 11:38:19.853673 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:38:19Z","lastTransitionTime":"2025-11-25T11:38:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 11:38:19 crc kubenswrapper[4706]: I1125 11:38:19.921612 4706 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 11:38:19 crc kubenswrapper[4706]: E1125 11:38:19.922241 4706 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 25 11:38:19 crc kubenswrapper[4706]: I1125 11:38:19.922394 4706 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 11:38:19 crc kubenswrapper[4706]: I1125 11:38:19.922445 4706 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 11:38:19 crc kubenswrapper[4706]: I1125 11:38:19.922707 4706 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-l99rd" Nov 25 11:38:19 crc kubenswrapper[4706]: E1125 11:38:19.922773 4706 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 25 11:38:19 crc kubenswrapper[4706]: E1125 11:38:19.922949 4706 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-l99rd" podUID="14d69237-a4b7-43ea-ac81-f165eb532669" Nov 25 11:38:19 crc kubenswrapper[4706]: E1125 11:38:19.924189 4706 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 25 11:38:19 crc kubenswrapper[4706]: I1125 11:38:19.956085 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:38:19 crc kubenswrapper[4706]: I1125 11:38:19.956151 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:38:19 crc kubenswrapper[4706]: I1125 11:38:19.956205 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:38:19 crc kubenswrapper[4706]: I1125 11:38:19.956226 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:38:19 crc kubenswrapper[4706]: I1125 11:38:19.956238 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:38:19Z","lastTransitionTime":"2025-11-25T11:38:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:38:20 crc kubenswrapper[4706]: I1125 11:38:20.060030 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:38:20 crc kubenswrapper[4706]: I1125 11:38:20.060095 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:38:20 crc kubenswrapper[4706]: I1125 11:38:20.060107 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:38:20 crc kubenswrapper[4706]: I1125 11:38:20.060128 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:38:20 crc kubenswrapper[4706]: I1125 11:38:20.060142 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:38:20Z","lastTransitionTime":"2025-11-25T11:38:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:38:20 crc kubenswrapper[4706]: I1125 11:38:20.162714 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:38:20 crc kubenswrapper[4706]: I1125 11:38:20.162775 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:38:20 crc kubenswrapper[4706]: I1125 11:38:20.162788 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:38:20 crc kubenswrapper[4706]: I1125 11:38:20.162810 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:38:20 crc kubenswrapper[4706]: I1125 11:38:20.162823 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:38:20Z","lastTransitionTime":"2025-11-25T11:38:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:38:20 crc kubenswrapper[4706]: I1125 11:38:20.265331 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:38:20 crc kubenswrapper[4706]: I1125 11:38:20.265368 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:38:20 crc kubenswrapper[4706]: I1125 11:38:20.265396 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:38:20 crc kubenswrapper[4706]: I1125 11:38:20.265446 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:38:20 crc kubenswrapper[4706]: I1125 11:38:20.265456 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:38:20Z","lastTransitionTime":"2025-11-25T11:38:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:38:20 crc kubenswrapper[4706]: I1125 11:38:20.368485 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:38:20 crc kubenswrapper[4706]: I1125 11:38:20.368558 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:38:20 crc kubenswrapper[4706]: I1125 11:38:20.368571 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:38:20 crc kubenswrapper[4706]: I1125 11:38:20.368589 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:38:20 crc kubenswrapper[4706]: I1125 11:38:20.368602 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:38:20Z","lastTransitionTime":"2025-11-25T11:38:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:38:20 crc kubenswrapper[4706]: I1125 11:38:20.471953 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:38:20 crc kubenswrapper[4706]: I1125 11:38:20.471997 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:38:20 crc kubenswrapper[4706]: I1125 11:38:20.472006 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:38:20 crc kubenswrapper[4706]: I1125 11:38:20.472022 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:38:20 crc kubenswrapper[4706]: I1125 11:38:20.472033 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:38:20Z","lastTransitionTime":"2025-11-25T11:38:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:38:20 crc kubenswrapper[4706]: I1125 11:38:20.574584 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:38:20 crc kubenswrapper[4706]: I1125 11:38:20.574632 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:38:20 crc kubenswrapper[4706]: I1125 11:38:20.574643 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:38:20 crc kubenswrapper[4706]: I1125 11:38:20.574662 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:38:20 crc kubenswrapper[4706]: I1125 11:38:20.574677 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:38:20Z","lastTransitionTime":"2025-11-25T11:38:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:38:20 crc kubenswrapper[4706]: I1125 11:38:20.677440 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:38:20 crc kubenswrapper[4706]: I1125 11:38:20.677497 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:38:20 crc kubenswrapper[4706]: I1125 11:38:20.677509 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:38:20 crc kubenswrapper[4706]: I1125 11:38:20.677531 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:38:20 crc kubenswrapper[4706]: I1125 11:38:20.677541 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:38:20Z","lastTransitionTime":"2025-11-25T11:38:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:38:20 crc kubenswrapper[4706]: I1125 11:38:20.780868 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:38:20 crc kubenswrapper[4706]: I1125 11:38:20.780931 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:38:20 crc kubenswrapper[4706]: I1125 11:38:20.780947 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:38:20 crc kubenswrapper[4706]: I1125 11:38:20.780965 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:38:20 crc kubenswrapper[4706]: I1125 11:38:20.780978 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:38:20Z","lastTransitionTime":"2025-11-25T11:38:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:38:20 crc kubenswrapper[4706]: I1125 11:38:20.883910 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:38:20 crc kubenswrapper[4706]: I1125 11:38:20.883978 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:38:20 crc kubenswrapper[4706]: I1125 11:38:20.883994 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:38:20 crc kubenswrapper[4706]: I1125 11:38:20.884018 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:38:20 crc kubenswrapper[4706]: I1125 11:38:20.884033 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:38:20Z","lastTransitionTime":"2025-11-25T11:38:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:38:20 crc kubenswrapper[4706]: I1125 11:38:20.986959 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:38:20 crc kubenswrapper[4706]: I1125 11:38:20.987016 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:38:20 crc kubenswrapper[4706]: I1125 11:38:20.987028 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:38:20 crc kubenswrapper[4706]: I1125 11:38:20.987047 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:38:20 crc kubenswrapper[4706]: I1125 11:38:20.987061 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:38:20Z","lastTransitionTime":"2025-11-25T11:38:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:38:21 crc kubenswrapper[4706]: I1125 11:38:21.089641 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:38:21 crc kubenswrapper[4706]: I1125 11:38:21.089678 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:38:21 crc kubenswrapper[4706]: I1125 11:38:21.089687 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:38:21 crc kubenswrapper[4706]: I1125 11:38:21.089704 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:38:21 crc kubenswrapper[4706]: I1125 11:38:21.089714 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:38:21Z","lastTransitionTime":"2025-11-25T11:38:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:38:21 crc kubenswrapper[4706]: I1125 11:38:21.192526 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:38:21 crc kubenswrapper[4706]: I1125 11:38:21.192585 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:38:21 crc kubenswrapper[4706]: I1125 11:38:21.192601 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:38:21 crc kubenswrapper[4706]: I1125 11:38:21.192621 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:38:21 crc kubenswrapper[4706]: I1125 11:38:21.192633 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:38:21Z","lastTransitionTime":"2025-11-25T11:38:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:38:21 crc kubenswrapper[4706]: I1125 11:38:21.295415 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:38:21 crc kubenswrapper[4706]: I1125 11:38:21.295480 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:38:21 crc kubenswrapper[4706]: I1125 11:38:21.295502 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:38:21 crc kubenswrapper[4706]: I1125 11:38:21.295528 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:38:21 crc kubenswrapper[4706]: I1125 11:38:21.295543 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:38:21Z","lastTransitionTime":"2025-11-25T11:38:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:38:21 crc kubenswrapper[4706]: I1125 11:38:21.398165 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:38:21 crc kubenswrapper[4706]: I1125 11:38:21.398219 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:38:21 crc kubenswrapper[4706]: I1125 11:38:21.398229 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:38:21 crc kubenswrapper[4706]: I1125 11:38:21.398249 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:38:21 crc kubenswrapper[4706]: I1125 11:38:21.398262 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:38:21Z","lastTransitionTime":"2025-11-25T11:38:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Nov 25 11:38:21 crc kubenswrapper[4706]: I1125 11:38:21.501282 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 25 11:38:21 crc kubenswrapper[4706]: I1125 11:38:21.501360 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 25 11:38:21 crc kubenswrapper[4706]: I1125 11:38:21.501373 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 25 11:38:21 crc kubenswrapper[4706]: I1125 11:38:21.501395 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 25 11:38:21 crc kubenswrapper[4706]: I1125 11:38:21.501411 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:38:21Z","lastTransitionTime":"2025-11-25T11:38:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 25 11:38:21 crc kubenswrapper[4706]: I1125 11:38:21.603854 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 25 11:38:21 crc kubenswrapper[4706]: I1125 11:38:21.604252 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 25 11:38:21 crc kubenswrapper[4706]: I1125 11:38:21.604339 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 25 11:38:21 crc kubenswrapper[4706]: I1125 11:38:21.604425 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 25 11:38:21 crc kubenswrapper[4706]: I1125 11:38:21.604496 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:38:21Z","lastTransitionTime":"2025-11-25T11:38:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 25 11:38:21 crc kubenswrapper[4706]: I1125 11:38:21.708026 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 25 11:38:21 crc kubenswrapper[4706]: I1125 11:38:21.708388 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 25 11:38:21 crc kubenswrapper[4706]: I1125 11:38:21.708473 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 25 11:38:21 crc kubenswrapper[4706]: I1125 11:38:21.708545 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 25 11:38:21 crc kubenswrapper[4706]: I1125 11:38:21.708613 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:38:21Z","lastTransitionTime":"2025-11-25T11:38:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 25 11:38:21 crc kubenswrapper[4706]: I1125 11:38:21.811768 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 25 11:38:21 crc kubenswrapper[4706]: I1125 11:38:21.811809 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 25 11:38:21 crc kubenswrapper[4706]: I1125 11:38:21.811820 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 25 11:38:21 crc kubenswrapper[4706]: I1125 11:38:21.811835 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 25 11:38:21 crc kubenswrapper[4706]: I1125 11:38:21.811844 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:38:21Z","lastTransitionTime":"2025-11-25T11:38:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 25 11:38:21 crc kubenswrapper[4706]: I1125 11:38:21.914823 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 25 11:38:21 crc kubenswrapper[4706]: I1125 11:38:21.914862 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 25 11:38:21 crc kubenswrapper[4706]: I1125 11:38:21.914874 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 25 11:38:21 crc kubenswrapper[4706]: I1125 11:38:21.914890 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 25 11:38:21 crc kubenswrapper[4706]: I1125 11:38:21.914939 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:38:21Z","lastTransitionTime":"2025-11-25T11:38:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 25 11:38:21 crc kubenswrapper[4706]: I1125 11:38:21.921387 4706 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Nov 25 11:38:21 crc kubenswrapper[4706]: I1125 11:38:21.921381 4706 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Nov 25 11:38:21 crc kubenswrapper[4706]: I1125 11:38:21.921455 4706 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Nov 25 11:38:21 crc kubenswrapper[4706]: I1125 11:38:21.921643 4706 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-l99rd"
Nov 25 11:38:21 crc kubenswrapper[4706]: E1125 11:38:21.921641 4706 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Nov 25 11:38:21 crc kubenswrapper[4706]: E1125 11:38:21.921809 4706 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Nov 25 11:38:21 crc kubenswrapper[4706]: E1125 11:38:21.921928 4706 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-l99rd" podUID="14d69237-a4b7-43ea-ac81-f165eb532669"
Nov 25 11:38:21 crc kubenswrapper[4706]: E1125 11:38:21.922004 4706 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Nov 25 11:38:21 crc kubenswrapper[4706]: I1125 11:38:21.963968 4706 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podStartSLOduration=90.963915576 podStartE2EDuration="1m30.963915576s" podCreationTimestamp="2025-11-25 11:36:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 11:38:21.958891566 +0000 UTC m=+110.873448957" watchObservedRunningTime="2025-11-25 11:38:21.963915576 +0000 UTC m=+110.878472957"
Nov 25 11:38:21 crc kubenswrapper[4706]: I1125 11:38:21.975026 4706 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" podStartSLOduration=32.974998873 podStartE2EDuration="32.974998873s" podCreationTimestamp="2025-11-25 11:37:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 11:38:21.974871289 +0000 UTC m=+110.889428670" watchObservedRunningTime="2025-11-25 11:38:21.974998873 +0000 UTC m=+110.889556254"
Nov 25 11:38:22 crc kubenswrapper[4706]: I1125 11:38:22.019607 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 25 11:38:22 crc kubenswrapper[4706]: I1125 11:38:22.019780 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 25 11:38:22 crc kubenswrapper[4706]: I1125 11:38:22.019808 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 25 11:38:22 crc kubenswrapper[4706]: I1125 11:38:22.019830 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 25 11:38:22 crc kubenswrapper[4706]: I1125 11:38:22.019851 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:38:22Z","lastTransitionTime":"2025-11-25T11:38:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 25 11:38:22 crc kubenswrapper[4706]: I1125 11:38:22.088056 4706 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-s47nr" podStartSLOduration=89.088029177 podStartE2EDuration="1m29.088029177s" podCreationTimestamp="2025-11-25 11:36:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 11:38:22.06192087 +0000 UTC m=+110.976478251" watchObservedRunningTime="2025-11-25 11:38:22.088029177 +0000 UTC m=+111.002586558"
Nov 25 11:38:22 crc kubenswrapper[4706]: I1125 11:38:22.104819 4706 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podStartSLOduration=60.104793318 podStartE2EDuration="1m0.104793318s" podCreationTimestamp="2025-11-25 11:37:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 11:38:22.104683824 +0000 UTC m=+111.019241215" watchObservedRunningTime="2025-11-25 11:38:22.104793318 +0000 UTC m=+111.019350699"
Nov 25 11:38:22 crc kubenswrapper[4706]: I1125 11:38:22.123201 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 25 11:38:22 crc kubenswrapper[4706]: I1125 11:38:22.123250 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 25 11:38:22 crc kubenswrapper[4706]: I1125 11:38:22.123266 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 25 11:38:22 crc kubenswrapper[4706]: I1125 11:38:22.123286 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 25 11:38:22 crc kubenswrapper[4706]: I1125 11:38:22.123321 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:38:22Z","lastTransitionTime":"2025-11-25T11:38:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 25 11:38:22 crc kubenswrapper[4706]: I1125 11:38:22.133450 4706 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-crc" podStartSLOduration=90.133431285 podStartE2EDuration="1m30.133431285s" podCreationTimestamp="2025-11-25 11:36:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 11:38:22.133247548 +0000 UTC m=+111.047804939" watchObservedRunningTime="2025-11-25 11:38:22.133431285 +0000 UTC m=+111.047988666"
Nov 25 11:38:22 crc kubenswrapper[4706]: I1125 11:38:22.173277 4706 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-additional-cni-plugins-cjmvf" podStartSLOduration=89.173254203 podStartE2EDuration="1m29.173254203s" podCreationTimestamp="2025-11-25 11:36:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 11:38:22.17290356 +0000 UTC m=+111.087460981" watchObservedRunningTime="2025-11-25 11:38:22.173254203 +0000 UTC m=+111.087811584"
Nov 25 11:38:22 crc kubenswrapper[4706]: I1125 11:38:22.222627 4706 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=90.222600432 podStartE2EDuration="1m30.222600432s" podCreationTimestamp="2025-11-25 11:36:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 11:38:22.205934885 +0000 UTC m=+111.120492276" watchObservedRunningTime="2025-11-25 11:38:22.222600432 +0000 UTC m=+111.137157813"
Nov 25 11:38:22 crc kubenswrapper[4706]: I1125 11:38:22.222923 4706 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-daemon-dhfpm" podStartSLOduration=89.222918124 podStartE2EDuration="1m29.222918124s" podCreationTimestamp="2025-11-25 11:36:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 11:38:22.221230263 +0000 UTC m=+111.135787644" watchObservedRunningTime="2025-11-25 11:38:22.222918124 +0000 UTC m=+111.137475505"
Nov 25 11:38:22 crc kubenswrapper[4706]: I1125 11:38:22.226369 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 25 11:38:22 crc kubenswrapper[4706]: I1125 11:38:22.226426 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 25 11:38:22 crc kubenswrapper[4706]: I1125 11:38:22.226442 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 25 11:38:22 crc kubenswrapper[4706]: I1125 11:38:22.226464 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 25 11:38:22 crc kubenswrapper[4706]: I1125 11:38:22.226476 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:38:22Z","lastTransitionTime":"2025-11-25T11:38:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 25 11:38:22 crc kubenswrapper[4706]: I1125 11:38:22.239127 4706 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/node-resolver-nh9sc" podStartSLOduration=89.239102334 podStartE2EDuration="1m29.239102334s" podCreationTimestamp="2025-11-25 11:36:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 11:38:22.238677499 +0000 UTC m=+111.153234880" watchObservedRunningTime="2025-11-25 11:38:22.239102334 +0000 UTC m=+111.153659715"
Nov 25 11:38:22 crc kubenswrapper[4706]: I1125 11:38:22.257347 4706 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-qkkfz" podStartSLOduration=89.257326528 podStartE2EDuration="1m29.257326528s" podCreationTimestamp="2025-11-25 11:36:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 11:38:22.256753757 +0000 UTC m=+111.171311138" watchObservedRunningTime="2025-11-25 11:38:22.257326528 +0000 UTC m=+111.171883909"
Nov 25 11:38:22 crc kubenswrapper[4706]: I1125 11:38:22.329252 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 25 11:38:22 crc kubenswrapper[4706]: I1125 11:38:22.329312 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 25 11:38:22 crc kubenswrapper[4706]: I1125 11:38:22.329325 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 25 11:38:22 crc kubenswrapper[4706]: I1125 11:38:22.329342 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 25 11:38:22 crc kubenswrapper[4706]: I1125 11:38:22.329353 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:38:22Z","lastTransitionTime":"2025-11-25T11:38:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 25 11:38:22 crc kubenswrapper[4706]: I1125 11:38:22.432011 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 25 11:38:22 crc kubenswrapper[4706]: I1125 11:38:22.432063 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 25 11:38:22 crc kubenswrapper[4706]: I1125 11:38:22.432076 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 25 11:38:22 crc kubenswrapper[4706]: I1125 11:38:22.432100 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 25 11:38:22 crc kubenswrapper[4706]: I1125 11:38:22.432116 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:38:22Z","lastTransitionTime":"2025-11-25T11:38:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 25 11:38:22 crc kubenswrapper[4706]: I1125 11:38:22.535976 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 25 11:38:22 crc kubenswrapper[4706]: I1125 11:38:22.536033 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 25 11:38:22 crc kubenswrapper[4706]: I1125 11:38:22.536049 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 25 11:38:22 crc kubenswrapper[4706]: I1125 11:38:22.536075 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 25 11:38:22 crc kubenswrapper[4706]: I1125 11:38:22.536096 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:38:22Z","lastTransitionTime":"2025-11-25T11:38:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 25 11:38:22 crc kubenswrapper[4706]: I1125 11:38:22.639559 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 25 11:38:22 crc kubenswrapper[4706]: I1125 11:38:22.639625 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 25 11:38:22 crc kubenswrapper[4706]: I1125 11:38:22.639638 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 25 11:38:22 crc kubenswrapper[4706]: I1125 11:38:22.639659 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 25 11:38:22 crc kubenswrapper[4706]: I1125 11:38:22.639672 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:38:22Z","lastTransitionTime":"2025-11-25T11:38:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 25 11:38:22 crc kubenswrapper[4706]: I1125 11:38:22.743080 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 25 11:38:22 crc kubenswrapper[4706]: I1125 11:38:22.743162 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 25 11:38:22 crc kubenswrapper[4706]: I1125 11:38:22.743178 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 25 11:38:22 crc kubenswrapper[4706]: I1125 11:38:22.743201 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 25 11:38:22 crc kubenswrapper[4706]: I1125 11:38:22.743217 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:38:22Z","lastTransitionTime":"2025-11-25T11:38:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 25 11:38:22 crc kubenswrapper[4706]: I1125 11:38:22.847239 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 25 11:38:22 crc kubenswrapper[4706]: I1125 11:38:22.847336 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 25 11:38:22 crc kubenswrapper[4706]: I1125 11:38:22.847351 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 25 11:38:22 crc kubenswrapper[4706]: I1125 11:38:22.847373 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 25 11:38:22 crc kubenswrapper[4706]: I1125 11:38:22.847391 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:38:22Z","lastTransitionTime":"2025-11-25T11:38:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 25 11:38:22 crc kubenswrapper[4706]: I1125 11:38:22.951049 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 25 11:38:22 crc kubenswrapper[4706]: I1125 11:38:22.951109 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 25 11:38:22 crc kubenswrapper[4706]: I1125 11:38:22.951119 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 25 11:38:22 crc kubenswrapper[4706]: I1125 11:38:22.951138 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 25 11:38:22 crc kubenswrapper[4706]: I1125 11:38:22.951150 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:38:22Z","lastTransitionTime":"2025-11-25T11:38:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 25 11:38:23 crc kubenswrapper[4706]: I1125 11:38:23.054199 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 25 11:38:23 crc kubenswrapper[4706]: I1125 11:38:23.054264 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 25 11:38:23 crc kubenswrapper[4706]: I1125 11:38:23.054279 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 25 11:38:23 crc kubenswrapper[4706]: I1125 11:38:23.054318 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 25 11:38:23 crc kubenswrapper[4706]: I1125 11:38:23.054335 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:38:23Z","lastTransitionTime":"2025-11-25T11:38:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 25 11:38:23 crc kubenswrapper[4706]: I1125 11:38:23.157948 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 25 11:38:23 crc kubenswrapper[4706]: I1125 11:38:23.158019 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 25 11:38:23 crc kubenswrapper[4706]: I1125 11:38:23.158031 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 25 11:38:23 crc kubenswrapper[4706]: I1125 11:38:23.158048 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 25 11:38:23 crc kubenswrapper[4706]: I1125 11:38:23.158060 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:38:23Z","lastTransitionTime":"2025-11-25T11:38:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 25 11:38:23 crc kubenswrapper[4706]: I1125 11:38:23.260992 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 25 11:38:23 crc kubenswrapper[4706]: I1125 11:38:23.261063 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 25 11:38:23 crc kubenswrapper[4706]: I1125 11:38:23.261081 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 25 11:38:23 crc kubenswrapper[4706]: I1125 11:38:23.261100 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 25 11:38:23 crc kubenswrapper[4706]: I1125 11:38:23.261112 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:38:23Z","lastTransitionTime":"2025-11-25T11:38:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 25 11:38:23 crc kubenswrapper[4706]: I1125 11:38:23.363583 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 25 11:38:23 crc kubenswrapper[4706]: I1125 11:38:23.363635 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 25 11:38:23 crc kubenswrapper[4706]: I1125 11:38:23.363652 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 25 11:38:23 crc kubenswrapper[4706]: I1125 11:38:23.363673 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 25 11:38:23 crc kubenswrapper[4706]: I1125 11:38:23.363686 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:38:23Z","lastTransitionTime":"2025-11-25T11:38:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 25 11:38:23 crc kubenswrapper[4706]: I1125 11:38:23.467112 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 25 11:38:23 crc kubenswrapper[4706]: I1125 11:38:23.467151 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 25 11:38:23 crc kubenswrapper[4706]: I1125 11:38:23.467160 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 25 11:38:23 crc kubenswrapper[4706]: I1125 11:38:23.467176 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 25 11:38:23 crc kubenswrapper[4706]: I1125 11:38:23.467187 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:38:23Z","lastTransitionTime":"2025-11-25T11:38:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 25 11:38:23 crc kubenswrapper[4706]: I1125 11:38:23.570545 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 25 11:38:23 crc kubenswrapper[4706]: I1125 11:38:23.570611 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 25 11:38:23 crc kubenswrapper[4706]: I1125 11:38:23.570622 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 25 11:38:23 crc kubenswrapper[4706]: I1125 11:38:23.570640 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 25 11:38:23 crc kubenswrapper[4706]: I1125 11:38:23.570654 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:38:23Z","lastTransitionTime":"2025-11-25T11:38:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 25 11:38:23 crc kubenswrapper[4706]: I1125 11:38:23.673976 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 25 11:38:23 crc kubenswrapper[4706]: I1125 11:38:23.674028 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 25 11:38:23 crc kubenswrapper[4706]: I1125 11:38:23.674044 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 25 11:38:23 crc kubenswrapper[4706]: I1125 11:38:23.674063 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 25 11:38:23 crc kubenswrapper[4706]: I1125 11:38:23.674076 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:38:23Z","lastTransitionTime":"2025-11-25T11:38:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 25 11:38:23 crc kubenswrapper[4706]: I1125 11:38:23.776377 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 25 11:38:23 crc kubenswrapper[4706]: I1125 11:38:23.776433 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 25 11:38:23 crc kubenswrapper[4706]: I1125 11:38:23.776445 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 25 11:38:23 crc kubenswrapper[4706]: I1125 11:38:23.776466 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 25 11:38:23 crc kubenswrapper[4706]: I1125 11:38:23.776480 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:38:23Z","lastTransitionTime":"2025-11-25T11:38:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 25 11:38:23 crc kubenswrapper[4706]: I1125 11:38:23.879414 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 25 11:38:23 crc kubenswrapper[4706]: I1125 11:38:23.879473 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 25 11:38:23 crc kubenswrapper[4706]: I1125 11:38:23.879485 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 25 11:38:23 crc kubenswrapper[4706]: I1125 11:38:23.879510 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 25 11:38:23 crc kubenswrapper[4706]: I1125 11:38:23.879525 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:38:23Z","lastTransitionTime":"2025-11-25T11:38:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 25 11:38:23 crc kubenswrapper[4706]: I1125 11:38:23.921351 4706 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Nov 25 11:38:23 crc kubenswrapper[4706]: I1125 11:38:23.921351 4706 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Nov 25 11:38:23 crc kubenswrapper[4706]: I1125 11:38:23.921379 4706 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-l99rd"
Nov 25 11:38:23 crc kubenswrapper[4706]: I1125 11:38:23.921463 4706 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 11:38:23 crc kubenswrapper[4706]: E1125 11:38:23.921591 4706 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-l99rd" podUID="14d69237-a4b7-43ea-ac81-f165eb532669" Nov 25 11:38:23 crc kubenswrapper[4706]: E1125 11:38:23.921611 4706 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 25 11:38:23 crc kubenswrapper[4706]: E1125 11:38:23.921684 4706 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 25 11:38:23 crc kubenswrapper[4706]: E1125 11:38:23.921747 4706 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 25 11:38:23 crc kubenswrapper[4706]: I1125 11:38:23.981865 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:38:23 crc kubenswrapper[4706]: I1125 11:38:23.981922 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:38:23 crc kubenswrapper[4706]: I1125 11:38:23.981934 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:38:23 crc kubenswrapper[4706]: I1125 11:38:23.981955 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:38:23 crc kubenswrapper[4706]: I1125 11:38:23.981972 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:38:23Z","lastTransitionTime":"2025-11-25T11:38:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:38:24 crc kubenswrapper[4706]: I1125 11:38:24.085002 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:38:24 crc kubenswrapper[4706]: I1125 11:38:24.085060 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:38:24 crc kubenswrapper[4706]: I1125 11:38:24.085072 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:38:24 crc kubenswrapper[4706]: I1125 11:38:24.085092 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:38:24 crc kubenswrapper[4706]: I1125 11:38:24.085106 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:38:24Z","lastTransitionTime":"2025-11-25T11:38:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:38:24 crc kubenswrapper[4706]: I1125 11:38:24.187938 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:38:24 crc kubenswrapper[4706]: I1125 11:38:24.188019 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:38:24 crc kubenswrapper[4706]: I1125 11:38:24.188032 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:38:24 crc kubenswrapper[4706]: I1125 11:38:24.188055 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:38:24 crc kubenswrapper[4706]: I1125 11:38:24.188068 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:38:24Z","lastTransitionTime":"2025-11-25T11:38:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:38:24 crc kubenswrapper[4706]: I1125 11:38:24.291456 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:38:24 crc kubenswrapper[4706]: I1125 11:38:24.291505 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:38:24 crc kubenswrapper[4706]: I1125 11:38:24.291516 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:38:24 crc kubenswrapper[4706]: I1125 11:38:24.291534 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:38:24 crc kubenswrapper[4706]: I1125 11:38:24.291546 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:38:24Z","lastTransitionTime":"2025-11-25T11:38:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:38:24 crc kubenswrapper[4706]: I1125 11:38:24.394084 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:38:24 crc kubenswrapper[4706]: I1125 11:38:24.394195 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:38:24 crc kubenswrapper[4706]: I1125 11:38:24.394204 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:38:24 crc kubenswrapper[4706]: I1125 11:38:24.394221 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:38:24 crc kubenswrapper[4706]: I1125 11:38:24.394231 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:38:24Z","lastTransitionTime":"2025-11-25T11:38:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:38:24 crc kubenswrapper[4706]: I1125 11:38:24.497658 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:38:24 crc kubenswrapper[4706]: I1125 11:38:24.497743 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:38:24 crc kubenswrapper[4706]: I1125 11:38:24.497758 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:38:24 crc kubenswrapper[4706]: I1125 11:38:24.497793 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:38:24 crc kubenswrapper[4706]: I1125 11:38:24.497810 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:38:24Z","lastTransitionTime":"2025-11-25T11:38:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:38:24 crc kubenswrapper[4706]: I1125 11:38:24.600663 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:38:24 crc kubenswrapper[4706]: I1125 11:38:24.600723 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:38:24 crc kubenswrapper[4706]: I1125 11:38:24.600734 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:38:24 crc kubenswrapper[4706]: I1125 11:38:24.600756 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:38:24 crc kubenswrapper[4706]: I1125 11:38:24.600770 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:38:24Z","lastTransitionTime":"2025-11-25T11:38:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:38:24 crc kubenswrapper[4706]: I1125 11:38:24.703119 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:38:24 crc kubenswrapper[4706]: I1125 11:38:24.703155 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:38:24 crc kubenswrapper[4706]: I1125 11:38:24.703163 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:38:24 crc kubenswrapper[4706]: I1125 11:38:24.703180 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:38:24 crc kubenswrapper[4706]: I1125 11:38:24.703193 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:38:24Z","lastTransitionTime":"2025-11-25T11:38:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:38:24 crc kubenswrapper[4706]: I1125 11:38:24.806957 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:38:24 crc kubenswrapper[4706]: I1125 11:38:24.807037 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:38:24 crc kubenswrapper[4706]: I1125 11:38:24.807054 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:38:24 crc kubenswrapper[4706]: I1125 11:38:24.807077 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:38:24 crc kubenswrapper[4706]: I1125 11:38:24.807091 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:38:24Z","lastTransitionTime":"2025-11-25T11:38:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:38:24 crc kubenswrapper[4706]: I1125 11:38:24.910228 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:38:24 crc kubenswrapper[4706]: I1125 11:38:24.910284 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:38:24 crc kubenswrapper[4706]: I1125 11:38:24.910326 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:38:24 crc kubenswrapper[4706]: I1125 11:38:24.910348 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:38:24 crc kubenswrapper[4706]: I1125 11:38:24.910361 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:38:24Z","lastTransitionTime":"2025-11-25T11:38:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:38:25 crc kubenswrapper[4706]: I1125 11:38:25.013238 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:38:25 crc kubenswrapper[4706]: I1125 11:38:25.013291 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:38:25 crc kubenswrapper[4706]: I1125 11:38:25.013329 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:38:25 crc kubenswrapper[4706]: I1125 11:38:25.013350 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:38:25 crc kubenswrapper[4706]: I1125 11:38:25.013362 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:38:25Z","lastTransitionTime":"2025-11-25T11:38:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:38:25 crc kubenswrapper[4706]: I1125 11:38:25.116733 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:38:25 crc kubenswrapper[4706]: I1125 11:38:25.116789 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:38:25 crc kubenswrapper[4706]: I1125 11:38:25.116798 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:38:25 crc kubenswrapper[4706]: I1125 11:38:25.116815 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:38:25 crc kubenswrapper[4706]: I1125 11:38:25.116827 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:38:25Z","lastTransitionTime":"2025-11-25T11:38:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:38:25 crc kubenswrapper[4706]: I1125 11:38:25.219464 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:38:25 crc kubenswrapper[4706]: I1125 11:38:25.219528 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:38:25 crc kubenswrapper[4706]: I1125 11:38:25.219540 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:38:25 crc kubenswrapper[4706]: I1125 11:38:25.219558 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:38:25 crc kubenswrapper[4706]: I1125 11:38:25.219572 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:38:25Z","lastTransitionTime":"2025-11-25T11:38:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:38:25 crc kubenswrapper[4706]: I1125 11:38:25.321693 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:38:25 crc kubenswrapper[4706]: I1125 11:38:25.321733 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:38:25 crc kubenswrapper[4706]: I1125 11:38:25.321743 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:38:25 crc kubenswrapper[4706]: I1125 11:38:25.321759 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:38:25 crc kubenswrapper[4706]: I1125 11:38:25.321773 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:38:25Z","lastTransitionTime":"2025-11-25T11:38:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:38:25 crc kubenswrapper[4706]: I1125 11:38:25.425167 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:38:25 crc kubenswrapper[4706]: I1125 11:38:25.425238 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:38:25 crc kubenswrapper[4706]: I1125 11:38:25.425253 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:38:25 crc kubenswrapper[4706]: I1125 11:38:25.425273 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:38:25 crc kubenswrapper[4706]: I1125 11:38:25.425290 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:38:25Z","lastTransitionTime":"2025-11-25T11:38:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:38:25 crc kubenswrapper[4706]: I1125 11:38:25.529081 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:38:25 crc kubenswrapper[4706]: I1125 11:38:25.529134 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:38:25 crc kubenswrapper[4706]: I1125 11:38:25.529145 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:38:25 crc kubenswrapper[4706]: I1125 11:38:25.529166 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:38:25 crc kubenswrapper[4706]: I1125 11:38:25.529177 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:38:25Z","lastTransitionTime":"2025-11-25T11:38:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:38:25 crc kubenswrapper[4706]: I1125 11:38:25.632511 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:38:25 crc kubenswrapper[4706]: I1125 11:38:25.632567 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:38:25 crc kubenswrapper[4706]: I1125 11:38:25.632578 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:38:25 crc kubenswrapper[4706]: I1125 11:38:25.632599 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:38:25 crc kubenswrapper[4706]: I1125 11:38:25.632612 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:38:25Z","lastTransitionTime":"2025-11-25T11:38:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:38:25 crc kubenswrapper[4706]: I1125 11:38:25.735896 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:38:25 crc kubenswrapper[4706]: I1125 11:38:25.735948 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:38:25 crc kubenswrapper[4706]: I1125 11:38:25.735964 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:38:25 crc kubenswrapper[4706]: I1125 11:38:25.735983 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:38:25 crc kubenswrapper[4706]: I1125 11:38:25.735994 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:38:25Z","lastTransitionTime":"2025-11-25T11:38:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:38:25 crc kubenswrapper[4706]: I1125 11:38:25.838949 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:38:25 crc kubenswrapper[4706]: I1125 11:38:25.839006 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:38:25 crc kubenswrapper[4706]: I1125 11:38:25.839015 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:38:25 crc kubenswrapper[4706]: I1125 11:38:25.839029 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:38:25 crc kubenswrapper[4706]: I1125 11:38:25.839039 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:38:25Z","lastTransitionTime":"2025-11-25T11:38:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 11:38:25 crc kubenswrapper[4706]: I1125 11:38:25.921738 4706 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 11:38:25 crc kubenswrapper[4706]: I1125 11:38:25.921841 4706 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 11:38:25 crc kubenswrapper[4706]: I1125 11:38:25.921921 4706 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-l99rd" Nov 25 11:38:25 crc kubenswrapper[4706]: E1125 11:38:25.921918 4706 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 25 11:38:25 crc kubenswrapper[4706]: I1125 11:38:25.922002 4706 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 11:38:25 crc kubenswrapper[4706]: E1125 11:38:25.922149 4706 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 25 11:38:25 crc kubenswrapper[4706]: E1125 11:38:25.922227 4706 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-l99rd" podUID="14d69237-a4b7-43ea-ac81-f165eb532669" Nov 25 11:38:25 crc kubenswrapper[4706]: E1125 11:38:25.922360 4706 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 25 11:38:25 crc kubenswrapper[4706]: I1125 11:38:25.941554 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:38:25 crc kubenswrapper[4706]: I1125 11:38:25.941610 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:38:25 crc kubenswrapper[4706]: I1125 11:38:25.941623 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:38:25 crc kubenswrapper[4706]: I1125 11:38:25.941643 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:38:25 crc kubenswrapper[4706]: I1125 11:38:25.941657 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:38:25Z","lastTransitionTime":"2025-11-25T11:38:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:38:26 crc kubenswrapper[4706]: I1125 11:38:26.044767 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:38:26 crc kubenswrapper[4706]: I1125 11:38:26.044841 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:38:26 crc kubenswrapper[4706]: I1125 11:38:26.044872 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:38:26 crc kubenswrapper[4706]: I1125 11:38:26.044894 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:38:26 crc kubenswrapper[4706]: I1125 11:38:26.044906 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:38:26Z","lastTransitionTime":"2025-11-25T11:38:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:38:26 crc kubenswrapper[4706]: I1125 11:38:26.147212 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:38:26 crc kubenswrapper[4706]: I1125 11:38:26.147257 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:38:26 crc kubenswrapper[4706]: I1125 11:38:26.147272 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:38:26 crc kubenswrapper[4706]: I1125 11:38:26.147292 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:38:26 crc kubenswrapper[4706]: I1125 11:38:26.147335 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:38:26Z","lastTransitionTime":"2025-11-25T11:38:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:38:26 crc kubenswrapper[4706]: I1125 11:38:26.249566 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:38:26 crc kubenswrapper[4706]: I1125 11:38:26.249613 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:38:26 crc kubenswrapper[4706]: I1125 11:38:26.249626 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:38:26 crc kubenswrapper[4706]: I1125 11:38:26.249645 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:38:26 crc kubenswrapper[4706]: I1125 11:38:26.249658 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:38:26Z","lastTransitionTime":"2025-11-25T11:38:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:38:26 crc kubenswrapper[4706]: I1125 11:38:26.351768 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:38:26 crc kubenswrapper[4706]: I1125 11:38:26.351819 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:38:26 crc kubenswrapper[4706]: I1125 11:38:26.351830 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:38:26 crc kubenswrapper[4706]: I1125 11:38:26.351847 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:38:26 crc kubenswrapper[4706]: I1125 11:38:26.351858 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:38:26Z","lastTransitionTime":"2025-11-25T11:38:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:38:26 crc kubenswrapper[4706]: I1125 11:38:26.455158 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:38:26 crc kubenswrapper[4706]: I1125 11:38:26.455217 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:38:26 crc kubenswrapper[4706]: I1125 11:38:26.455229 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:38:26 crc kubenswrapper[4706]: I1125 11:38:26.455254 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:38:26 crc kubenswrapper[4706]: I1125 11:38:26.455267 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:38:26Z","lastTransitionTime":"2025-11-25T11:38:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:38:26 crc kubenswrapper[4706]: I1125 11:38:26.558261 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:38:26 crc kubenswrapper[4706]: I1125 11:38:26.558325 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:38:26 crc kubenswrapper[4706]: I1125 11:38:26.558335 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:38:26 crc kubenswrapper[4706]: I1125 11:38:26.558353 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:38:26 crc kubenswrapper[4706]: I1125 11:38:26.558364 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:38:26Z","lastTransitionTime":"2025-11-25T11:38:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:38:26 crc kubenswrapper[4706]: I1125 11:38:26.661375 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:38:26 crc kubenswrapper[4706]: I1125 11:38:26.661424 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:38:26 crc kubenswrapper[4706]: I1125 11:38:26.661436 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:38:26 crc kubenswrapper[4706]: I1125 11:38:26.661453 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:38:26 crc kubenswrapper[4706]: I1125 11:38:26.661466 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:38:26Z","lastTransitionTime":"2025-11-25T11:38:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:38:26 crc kubenswrapper[4706]: I1125 11:38:26.764580 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:38:26 crc kubenswrapper[4706]: I1125 11:38:26.764635 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:38:26 crc kubenswrapper[4706]: I1125 11:38:26.764648 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:38:26 crc kubenswrapper[4706]: I1125 11:38:26.764669 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:38:26 crc kubenswrapper[4706]: I1125 11:38:26.764682 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:38:26Z","lastTransitionTime":"2025-11-25T11:38:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:38:26 crc kubenswrapper[4706]: I1125 11:38:26.867433 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:38:26 crc kubenswrapper[4706]: I1125 11:38:26.867471 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:38:26 crc kubenswrapper[4706]: I1125 11:38:26.867482 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:38:26 crc kubenswrapper[4706]: I1125 11:38:26.867499 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:38:26 crc kubenswrapper[4706]: I1125 11:38:26.867511 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:38:26Z","lastTransitionTime":"2025-11-25T11:38:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:38:26 crc kubenswrapper[4706]: I1125 11:38:26.970456 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:38:26 crc kubenswrapper[4706]: I1125 11:38:26.970531 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:38:26 crc kubenswrapper[4706]: I1125 11:38:26.970548 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:38:26 crc kubenswrapper[4706]: I1125 11:38:26.970571 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:38:26 crc kubenswrapper[4706]: I1125 11:38:26.970584 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:38:26Z","lastTransitionTime":"2025-11-25T11:38:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:38:27 crc kubenswrapper[4706]: I1125 11:38:27.073346 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:38:27 crc kubenswrapper[4706]: I1125 11:38:27.073391 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:38:27 crc kubenswrapper[4706]: I1125 11:38:27.073401 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:38:27 crc kubenswrapper[4706]: I1125 11:38:27.073417 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:38:27 crc kubenswrapper[4706]: I1125 11:38:27.073430 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:38:27Z","lastTransitionTime":"2025-11-25T11:38:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:38:27 crc kubenswrapper[4706]: I1125 11:38:27.175954 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:38:27 crc kubenswrapper[4706]: I1125 11:38:27.176006 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:38:27 crc kubenswrapper[4706]: I1125 11:38:27.176017 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:38:27 crc kubenswrapper[4706]: I1125 11:38:27.176036 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:38:27 crc kubenswrapper[4706]: I1125 11:38:27.176047 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:38:27Z","lastTransitionTime":"2025-11-25T11:38:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:38:27 crc kubenswrapper[4706]: I1125 11:38:27.279114 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:38:27 crc kubenswrapper[4706]: I1125 11:38:27.279206 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:38:27 crc kubenswrapper[4706]: I1125 11:38:27.279220 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:38:27 crc kubenswrapper[4706]: I1125 11:38:27.279267 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:38:27 crc kubenswrapper[4706]: I1125 11:38:27.279340 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:38:27Z","lastTransitionTime":"2025-11-25T11:38:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:38:27 crc kubenswrapper[4706]: I1125 11:38:27.383023 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:38:27 crc kubenswrapper[4706]: I1125 11:38:27.383083 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:38:27 crc kubenswrapper[4706]: I1125 11:38:27.383093 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:38:27 crc kubenswrapper[4706]: I1125 11:38:27.383112 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:38:27 crc kubenswrapper[4706]: I1125 11:38:27.383125 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:38:27Z","lastTransitionTime":"2025-11-25T11:38:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:38:27 crc kubenswrapper[4706]: I1125 11:38:27.486057 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:38:27 crc kubenswrapper[4706]: I1125 11:38:27.486104 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:38:27 crc kubenswrapper[4706]: I1125 11:38:27.486113 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:38:27 crc kubenswrapper[4706]: I1125 11:38:27.486131 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:38:27 crc kubenswrapper[4706]: I1125 11:38:27.486143 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:38:27Z","lastTransitionTime":"2025-11-25T11:38:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:38:27 crc kubenswrapper[4706]: I1125 11:38:27.589742 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:38:27 crc kubenswrapper[4706]: I1125 11:38:27.589840 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:38:27 crc kubenswrapper[4706]: I1125 11:38:27.589853 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:38:27 crc kubenswrapper[4706]: I1125 11:38:27.589869 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:38:27 crc kubenswrapper[4706]: I1125 11:38:27.589881 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:38:27Z","lastTransitionTime":"2025-11-25T11:38:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:38:27 crc kubenswrapper[4706]: I1125 11:38:27.692990 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:38:27 crc kubenswrapper[4706]: I1125 11:38:27.693052 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:38:27 crc kubenswrapper[4706]: I1125 11:38:27.693070 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:38:27 crc kubenswrapper[4706]: I1125 11:38:27.693090 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:38:27 crc kubenswrapper[4706]: I1125 11:38:27.693103 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:38:27Z","lastTransitionTime":"2025-11-25T11:38:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:38:27 crc kubenswrapper[4706]: I1125 11:38:27.796593 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:38:27 crc kubenswrapper[4706]: I1125 11:38:27.796683 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:38:27 crc kubenswrapper[4706]: I1125 11:38:27.796697 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:38:27 crc kubenswrapper[4706]: I1125 11:38:27.796721 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:38:27 crc kubenswrapper[4706]: I1125 11:38:27.796738 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:38:27Z","lastTransitionTime":"2025-11-25T11:38:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:38:27 crc kubenswrapper[4706]: I1125 11:38:27.899353 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:38:27 crc kubenswrapper[4706]: I1125 11:38:27.899401 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:38:27 crc kubenswrapper[4706]: I1125 11:38:27.899417 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:38:27 crc kubenswrapper[4706]: I1125 11:38:27.899439 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:38:27 crc kubenswrapper[4706]: I1125 11:38:27.899453 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:38:27Z","lastTransitionTime":"2025-11-25T11:38:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 11:38:27 crc kubenswrapper[4706]: I1125 11:38:27.922088 4706 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 11:38:27 crc kubenswrapper[4706]: I1125 11:38:27.922282 4706 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 11:38:27 crc kubenswrapper[4706]: I1125 11:38:27.922282 4706 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 11:38:27 crc kubenswrapper[4706]: E1125 11:38:27.922468 4706 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 25 11:38:27 crc kubenswrapper[4706]: E1125 11:38:27.922605 4706 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 25 11:38:27 crc kubenswrapper[4706]: E1125 11:38:27.922667 4706 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 25 11:38:27 crc kubenswrapper[4706]: I1125 11:38:27.922768 4706 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-l99rd" Nov 25 11:38:27 crc kubenswrapper[4706]: E1125 11:38:27.922914 4706 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-l99rd" podUID="14d69237-a4b7-43ea-ac81-f165eb532669" Nov 25 11:38:28 crc kubenswrapper[4706]: I1125 11:38:28.002039 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:38:28 crc kubenswrapper[4706]: I1125 11:38:28.002087 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:38:28 crc kubenswrapper[4706]: I1125 11:38:28.002098 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:38:28 crc kubenswrapper[4706]: I1125 11:38:28.002115 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:38:28 crc kubenswrapper[4706]: I1125 11:38:28.002127 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:38:28Z","lastTransitionTime":"2025-11-25T11:38:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:38:28 crc kubenswrapper[4706]: I1125 11:38:28.105064 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:38:28 crc kubenswrapper[4706]: I1125 11:38:28.105111 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:38:28 crc kubenswrapper[4706]: I1125 11:38:28.105124 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:38:28 crc kubenswrapper[4706]: I1125 11:38:28.105141 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:38:28 crc kubenswrapper[4706]: I1125 11:38:28.105153 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:38:28Z","lastTransitionTime":"2025-11-25T11:38:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:38:28 crc kubenswrapper[4706]: I1125 11:38:28.208428 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:38:28 crc kubenswrapper[4706]: I1125 11:38:28.208476 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:38:28 crc kubenswrapper[4706]: I1125 11:38:28.208486 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:38:28 crc kubenswrapper[4706]: I1125 11:38:28.208504 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:38:28 crc kubenswrapper[4706]: I1125 11:38:28.208517 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:38:28Z","lastTransitionTime":"2025-11-25T11:38:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:38:28 crc kubenswrapper[4706]: I1125 11:38:28.311460 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:38:28 crc kubenswrapper[4706]: I1125 11:38:28.311519 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:38:28 crc kubenswrapper[4706]: I1125 11:38:28.311532 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:38:28 crc kubenswrapper[4706]: I1125 11:38:28.311550 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:38:28 crc kubenswrapper[4706]: I1125 11:38:28.311562 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:38:28Z","lastTransitionTime":"2025-11-25T11:38:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:38:28 crc kubenswrapper[4706]: I1125 11:38:28.414468 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:38:28 crc kubenswrapper[4706]: I1125 11:38:28.414527 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:38:28 crc kubenswrapper[4706]: I1125 11:38:28.414542 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:38:28 crc kubenswrapper[4706]: I1125 11:38:28.414563 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:38:28 crc kubenswrapper[4706]: I1125 11:38:28.414578 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:38:28Z","lastTransitionTime":"2025-11-25T11:38:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:38:28 crc kubenswrapper[4706]: I1125 11:38:28.517907 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:38:28 crc kubenswrapper[4706]: I1125 11:38:28.517950 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:38:28 crc kubenswrapper[4706]: I1125 11:38:28.517959 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:38:28 crc kubenswrapper[4706]: I1125 11:38:28.517976 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:38:28 crc kubenswrapper[4706]: I1125 11:38:28.517986 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:38:28Z","lastTransitionTime":"2025-11-25T11:38:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:38:28 crc kubenswrapper[4706]: I1125 11:38:28.620590 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:38:28 crc kubenswrapper[4706]: I1125 11:38:28.620686 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:38:28 crc kubenswrapper[4706]: I1125 11:38:28.620700 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:38:28 crc kubenswrapper[4706]: I1125 11:38:28.620718 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:38:28 crc kubenswrapper[4706]: I1125 11:38:28.620729 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:38:28Z","lastTransitionTime":"2025-11-25T11:38:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:38:28 crc kubenswrapper[4706]: I1125 11:38:28.723468 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:38:28 crc kubenswrapper[4706]: I1125 11:38:28.723509 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:38:28 crc kubenswrapper[4706]: I1125 11:38:28.723518 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:38:28 crc kubenswrapper[4706]: I1125 11:38:28.723532 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:38:28 crc kubenswrapper[4706]: I1125 11:38:28.723541 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:38:28Z","lastTransitionTime":"2025-11-25T11:38:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:38:28 crc kubenswrapper[4706]: I1125 11:38:28.825884 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:38:28 crc kubenswrapper[4706]: I1125 11:38:28.825949 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:38:28 crc kubenswrapper[4706]: I1125 11:38:28.825987 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:38:28 crc kubenswrapper[4706]: I1125 11:38:28.826007 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:38:28 crc kubenswrapper[4706]: I1125 11:38:28.826018 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:38:28Z","lastTransitionTime":"2025-11-25T11:38:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:38:28 crc kubenswrapper[4706]: I1125 11:38:28.929216 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:38:28 crc kubenswrapper[4706]: I1125 11:38:28.929281 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:38:28 crc kubenswrapper[4706]: I1125 11:38:28.929292 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:38:28 crc kubenswrapper[4706]: I1125 11:38:28.929325 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:38:28 crc kubenswrapper[4706]: I1125 11:38:28.929341 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:38:28Z","lastTransitionTime":"2025-11-25T11:38:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:38:29 crc kubenswrapper[4706]: I1125 11:38:29.032099 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:38:29 crc kubenswrapper[4706]: I1125 11:38:29.032155 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:38:29 crc kubenswrapper[4706]: I1125 11:38:29.032165 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:38:29 crc kubenswrapper[4706]: I1125 11:38:29.032183 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:38:29 crc kubenswrapper[4706]: I1125 11:38:29.032193 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:38:29Z","lastTransitionTime":"2025-11-25T11:38:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:38:29 crc kubenswrapper[4706]: I1125 11:38:29.135175 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:38:29 crc kubenswrapper[4706]: I1125 11:38:29.135244 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:38:29 crc kubenswrapper[4706]: I1125 11:38:29.135272 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:38:29 crc kubenswrapper[4706]: I1125 11:38:29.135369 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:38:29 crc kubenswrapper[4706]: I1125 11:38:29.135395 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:38:29Z","lastTransitionTime":"2025-11-25T11:38:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:38:29 crc kubenswrapper[4706]: I1125 11:38:29.238401 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:38:29 crc kubenswrapper[4706]: I1125 11:38:29.238444 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:38:29 crc kubenswrapper[4706]: I1125 11:38:29.238456 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:38:29 crc kubenswrapper[4706]: I1125 11:38:29.238475 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:38:29 crc kubenswrapper[4706]: I1125 11:38:29.238490 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:38:29Z","lastTransitionTime":"2025-11-25T11:38:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:38:29 crc kubenswrapper[4706]: I1125 11:38:29.341270 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:38:29 crc kubenswrapper[4706]: I1125 11:38:29.341336 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:38:29 crc kubenswrapper[4706]: I1125 11:38:29.341353 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:38:29 crc kubenswrapper[4706]: I1125 11:38:29.341373 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:38:29 crc kubenswrapper[4706]: I1125 11:38:29.341385 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:38:29Z","lastTransitionTime":"2025-11-25T11:38:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:38:29 crc kubenswrapper[4706]: I1125 11:38:29.444466 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:38:29 crc kubenswrapper[4706]: I1125 11:38:29.444526 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:38:29 crc kubenswrapper[4706]: I1125 11:38:29.444542 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:38:29 crc kubenswrapper[4706]: I1125 11:38:29.444567 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:38:29 crc kubenswrapper[4706]: I1125 11:38:29.444582 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:38:29Z","lastTransitionTime":"2025-11-25T11:38:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:38:29 crc kubenswrapper[4706]: I1125 11:38:29.458996 4706 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-s47nr_9912058e-28f5-4cec-9eeb-03e37e0dc5c1/kube-multus/1.log" Nov 25 11:38:29 crc kubenswrapper[4706]: I1125 11:38:29.459604 4706 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-s47nr_9912058e-28f5-4cec-9eeb-03e37e0dc5c1/kube-multus/0.log" Nov 25 11:38:29 crc kubenswrapper[4706]: I1125 11:38:29.459657 4706 generic.go:334] "Generic (PLEG): container finished" podID="9912058e-28f5-4cec-9eeb-03e37e0dc5c1" containerID="8831e77983548cfffd56f81ff9f25b90d70dfb71b47b545af370b0a813fa19a9" exitCode=1 Nov 25 11:38:29 crc kubenswrapper[4706]: I1125 11:38:29.459702 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-s47nr" event={"ID":"9912058e-28f5-4cec-9eeb-03e37e0dc5c1","Type":"ContainerDied","Data":"8831e77983548cfffd56f81ff9f25b90d70dfb71b47b545af370b0a813fa19a9"} Nov 25 11:38:29 crc kubenswrapper[4706]: I1125 11:38:29.459753 4706 scope.go:117] "RemoveContainer" containerID="d03353478b53d9441951702b66365bb3a08ad9c509347472bbb31049851435a4" Nov 25 11:38:29 crc kubenswrapper[4706]: I1125 11:38:29.461662 4706 scope.go:117] "RemoveContainer" containerID="8831e77983548cfffd56f81ff9f25b90d70dfb71b47b545af370b0a813fa19a9" Nov 25 11:38:29 crc kubenswrapper[4706]: E1125 11:38:29.461974 4706 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-multus pod=multus-s47nr_openshift-multus(9912058e-28f5-4cec-9eeb-03e37e0dc5c1)\"" pod="openshift-multus/multus-s47nr" podUID="9912058e-28f5-4cec-9eeb-03e37e0dc5c1" Nov 25 11:38:29 crc kubenswrapper[4706]: I1125 11:38:29.479335 4706 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/node-ca-lpc7s" 
podStartSLOduration=96.47926454 podStartE2EDuration="1m36.47926454s" podCreationTimestamp="2025-11-25 11:36:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 11:38:22.285250329 +0000 UTC m=+111.199807720" watchObservedRunningTime="2025-11-25 11:38:29.47926454 +0000 UTC m=+118.393821921" Nov 25 11:38:29 crc kubenswrapper[4706]: I1125 11:38:29.548497 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:38:29 crc kubenswrapper[4706]: I1125 11:38:29.548988 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:38:29 crc kubenswrapper[4706]: I1125 11:38:29.549004 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:38:29 crc kubenswrapper[4706]: I1125 11:38:29.549028 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:38:29 crc kubenswrapper[4706]: I1125 11:38:29.549043 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:38:29Z","lastTransitionTime":"2025-11-25T11:38:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:38:29 crc kubenswrapper[4706]: I1125 11:38:29.651511 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:38:29 crc kubenswrapper[4706]: I1125 11:38:29.651578 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:38:29 crc kubenswrapper[4706]: I1125 11:38:29.651589 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:38:29 crc kubenswrapper[4706]: I1125 11:38:29.651611 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:38:29 crc kubenswrapper[4706]: I1125 11:38:29.651630 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:38:29Z","lastTransitionTime":"2025-11-25T11:38:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:38:29 crc kubenswrapper[4706]: I1125 11:38:29.754503 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:38:29 crc kubenswrapper[4706]: I1125 11:38:29.754547 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:38:29 crc kubenswrapper[4706]: I1125 11:38:29.754567 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:38:29 crc kubenswrapper[4706]: I1125 11:38:29.754591 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:38:29 crc kubenswrapper[4706]: I1125 11:38:29.754604 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:38:29Z","lastTransitionTime":"2025-11-25T11:38:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 11:38:29 crc kubenswrapper[4706]: I1125 11:38:29.793656 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 11:38:29 crc kubenswrapper[4706]: I1125 11:38:29.793731 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 11:38:29 crc kubenswrapper[4706]: I1125 11:38:29.793748 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 11:38:29 crc kubenswrapper[4706]: I1125 11:38:29.793772 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 11:38:29 crc kubenswrapper[4706]: I1125 11:38:29.793785 4706 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T11:38:29Z","lastTransitionTime":"2025-11-25T11:38:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 11:38:29 crc kubenswrapper[4706]: I1125 11:38:29.847654 4706 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-version/cluster-version-operator-5c965bbfc6-s95vl"] Nov 25 11:38:29 crc kubenswrapper[4706]: I1125 11:38:29.848083 4706 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-s95vl" Nov 25 11:38:29 crc kubenswrapper[4706]: I1125 11:38:29.850684 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Nov 25 11:38:29 crc kubenswrapper[4706]: I1125 11:38:29.850776 4706 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Nov 25 11:38:29 crc kubenswrapper[4706]: I1125 11:38:29.851131 4706 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Nov 25 11:38:29 crc kubenswrapper[4706]: I1125 11:38:29.852855 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Nov 25 11:38:29 crc kubenswrapper[4706]: I1125 11:38:29.922052 4706 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 11:38:29 crc kubenswrapper[4706]: I1125 11:38:29.922069 4706 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 11:38:29 crc kubenswrapper[4706]: I1125 11:38:29.922148 4706 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-l99rd" Nov 25 11:38:29 crc kubenswrapper[4706]: I1125 11:38:29.922198 4706 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 11:38:29 crc kubenswrapper[4706]: E1125 11:38:29.922336 4706 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 25 11:38:29 crc kubenswrapper[4706]: E1125 11:38:29.922578 4706 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 25 11:38:29 crc kubenswrapper[4706]: E1125 11:38:29.922969 4706 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 25 11:38:29 crc kubenswrapper[4706]: E1125 11:38:29.923052 4706 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-l99rd" podUID="14d69237-a4b7-43ea-ac81-f165eb532669" Nov 25 11:38:29 crc kubenswrapper[4706]: I1125 11:38:29.923477 4706 scope.go:117] "RemoveContainer" containerID="a1dfdc34e2de4aa061b93f1227bc4e3076853848aa13d8122c69d84f2a3c9bb5" Nov 25 11:38:29 crc kubenswrapper[4706]: I1125 11:38:29.992982 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/5d240203-a986-4b37-9849-10a0b29d7534-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-s95vl\" (UID: \"5d240203-a986-4b37-9849-10a0b29d7534\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-s95vl" Nov 25 11:38:29 crc kubenswrapper[4706]: I1125 11:38:29.993040 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/5d240203-a986-4b37-9849-10a0b29d7534-service-ca\") pod \"cluster-version-operator-5c965bbfc6-s95vl\" (UID: \"5d240203-a986-4b37-9849-10a0b29d7534\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-s95vl" Nov 25 11:38:29 crc kubenswrapper[4706]: I1125 11:38:29.993069 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5d240203-a986-4b37-9849-10a0b29d7534-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-s95vl\" (UID: \"5d240203-a986-4b37-9849-10a0b29d7534\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-s95vl" Nov 25 11:38:29 crc kubenswrapper[4706]: I1125 11:38:29.993086 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5d240203-a986-4b37-9849-10a0b29d7534-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-s95vl\" (UID: \"5d240203-a986-4b37-9849-10a0b29d7534\") " 
pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-s95vl" Nov 25 11:38:29 crc kubenswrapper[4706]: I1125 11:38:29.993119 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/5d240203-a986-4b37-9849-10a0b29d7534-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-s95vl\" (UID: \"5d240203-a986-4b37-9849-10a0b29d7534\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-s95vl" Nov 25 11:38:30 crc kubenswrapper[4706]: I1125 11:38:30.094535 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/5d240203-a986-4b37-9849-10a0b29d7534-service-ca\") pod \"cluster-version-operator-5c965bbfc6-s95vl\" (UID: \"5d240203-a986-4b37-9849-10a0b29d7534\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-s95vl" Nov 25 11:38:30 crc kubenswrapper[4706]: I1125 11:38:30.094741 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5d240203-a986-4b37-9849-10a0b29d7534-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-s95vl\" (UID: \"5d240203-a986-4b37-9849-10a0b29d7534\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-s95vl" Nov 25 11:38:30 crc kubenswrapper[4706]: I1125 11:38:30.094767 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5d240203-a986-4b37-9849-10a0b29d7534-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-s95vl\" (UID: \"5d240203-a986-4b37-9849-10a0b29d7534\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-s95vl" Nov 25 11:38:30 crc kubenswrapper[4706]: I1125 11:38:30.094823 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" 
(UniqueName: \"kubernetes.io/host-path/5d240203-a986-4b37-9849-10a0b29d7534-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-s95vl\" (UID: \"5d240203-a986-4b37-9849-10a0b29d7534\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-s95vl" Nov 25 11:38:30 crc kubenswrapper[4706]: I1125 11:38:30.094964 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/5d240203-a986-4b37-9849-10a0b29d7534-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-s95vl\" (UID: \"5d240203-a986-4b37-9849-10a0b29d7534\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-s95vl" Nov 25 11:38:30 crc kubenswrapper[4706]: I1125 11:38:30.096152 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/5d240203-a986-4b37-9849-10a0b29d7534-service-ca\") pod \"cluster-version-operator-5c965bbfc6-s95vl\" (UID: \"5d240203-a986-4b37-9849-10a0b29d7534\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-s95vl" Nov 25 11:38:30 crc kubenswrapper[4706]: I1125 11:38:30.097321 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/5d240203-a986-4b37-9849-10a0b29d7534-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-s95vl\" (UID: \"5d240203-a986-4b37-9849-10a0b29d7534\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-s95vl" Nov 25 11:38:30 crc kubenswrapper[4706]: I1125 11:38:30.097888 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/5d240203-a986-4b37-9849-10a0b29d7534-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-s95vl\" (UID: \"5d240203-a986-4b37-9849-10a0b29d7534\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-s95vl" Nov 25 11:38:30 crc 
kubenswrapper[4706]: I1125 11:38:30.103667 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5d240203-a986-4b37-9849-10a0b29d7534-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-s95vl\" (UID: \"5d240203-a986-4b37-9849-10a0b29d7534\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-s95vl" Nov 25 11:38:30 crc kubenswrapper[4706]: I1125 11:38:30.115793 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5d240203-a986-4b37-9849-10a0b29d7534-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-s95vl\" (UID: \"5d240203-a986-4b37-9849-10a0b29d7534\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-s95vl" Nov 25 11:38:30 crc kubenswrapper[4706]: I1125 11:38:30.162781 4706 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-s95vl" Nov 25 11:38:30 crc kubenswrapper[4706]: W1125 11:38:30.181158 4706 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5d240203_a986_4b37_9849_10a0b29d7534.slice/crio-1e7b98256d235826771f34f6521c44a79762c1116e88ec2a19190f61f4ad04fa WatchSource:0}: Error finding container 1e7b98256d235826771f34f6521c44a79762c1116e88ec2a19190f61f4ad04fa: Status 404 returned error can't find the container with id 1e7b98256d235826771f34f6521c44a79762c1116e88ec2a19190f61f4ad04fa Nov 25 11:38:30 crc kubenswrapper[4706]: I1125 11:38:30.464062 4706 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-s47nr_9912058e-28f5-4cec-9eeb-03e37e0dc5c1/kube-multus/1.log" Nov 25 11:38:30 crc kubenswrapper[4706]: I1125 11:38:30.466210 4706 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-q9rpr_f1218bae-4153-4490-8847-ab2d07ca0ab6/ovnkube-controller/3.log" Nov 25 11:38:30 crc kubenswrapper[4706]: I1125 11:38:30.469073 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-q9rpr" event={"ID":"f1218bae-4153-4490-8847-ab2d07ca0ab6","Type":"ContainerStarted","Data":"1d86458011d93f6fe7285fb2f2cf484e62c79cf7a6171f9223b43b6413689879"} Nov 25 11:38:30 crc kubenswrapper[4706]: I1125 11:38:30.470521 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-s95vl" event={"ID":"5d240203-a986-4b37-9849-10a0b29d7534","Type":"ContainerStarted","Data":"b9d432896e7710521655b785fb189d87e2887f853f34910d33d3acd8f4330433"} Nov 25 11:38:30 crc kubenswrapper[4706]: I1125 11:38:30.470573 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-s95vl" event={"ID":"5d240203-a986-4b37-9849-10a0b29d7534","Type":"ContainerStarted","Data":"1e7b98256d235826771f34f6521c44a79762c1116e88ec2a19190f61f4ad04fa"} Nov 25 11:38:30 crc kubenswrapper[4706]: I1125 11:38:30.506635 4706 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-q9rpr" podStartSLOduration=97.506614235 podStartE2EDuration="1m37.506614235s" podCreationTimestamp="2025-11-25 11:36:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 11:38:30.506082291 +0000 UTC m=+119.420639692" watchObservedRunningTime="2025-11-25 11:38:30.506614235 +0000 UTC m=+119.421171616" Nov 25 11:38:30 crc kubenswrapper[4706]: I1125 11:38:30.522700 4706 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-s95vl" podStartSLOduration=97.522673393 podStartE2EDuration="1m37.522673393s" 
podCreationTimestamp="2025-11-25 11:36:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 11:38:30.522063508 +0000 UTC m=+119.436620899" watchObservedRunningTime="2025-11-25 11:38:30.522673393 +0000 UTC m=+119.437230774" Nov 25 11:38:30 crc kubenswrapper[4706]: I1125 11:38:30.758939 4706 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-l99rd"] Nov 25 11:38:30 crc kubenswrapper[4706]: I1125 11:38:30.759073 4706 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-l99rd" Nov 25 11:38:30 crc kubenswrapper[4706]: E1125 11:38:30.759186 4706 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-l99rd" podUID="14d69237-a4b7-43ea-ac81-f165eb532669" Nov 25 11:38:31 crc kubenswrapper[4706]: E1125 11:38:31.915787 4706 kubelet_node_status.go:497] "Node not becoming ready in time after startup" Nov 25 11:38:31 crc kubenswrapper[4706]: I1125 11:38:31.921586 4706 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 11:38:31 crc kubenswrapper[4706]: E1125 11:38:31.922943 4706 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 25 11:38:31 crc kubenswrapper[4706]: I1125 11:38:31.923250 4706 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 11:38:31 crc kubenswrapper[4706]: I1125 11:38:31.923325 4706 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 11:38:31 crc kubenswrapper[4706]: E1125 11:38:31.923393 4706 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 25 11:38:31 crc kubenswrapper[4706]: E1125 11:38:31.923554 4706 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 25 11:38:32 crc kubenswrapper[4706]: E1125 11:38:32.021224 4706 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Nov 25 11:38:32 crc kubenswrapper[4706]: I1125 11:38:32.922059 4706 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-l99rd" Nov 25 11:38:32 crc kubenswrapper[4706]: E1125 11:38:32.922324 4706 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-l99rd" podUID="14d69237-a4b7-43ea-ac81-f165eb532669" Nov 25 11:38:33 crc kubenswrapper[4706]: I1125 11:38:33.921294 4706 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 11:38:33 crc kubenswrapper[4706]: I1125 11:38:33.921405 4706 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 11:38:33 crc kubenswrapper[4706]: I1125 11:38:33.921446 4706 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 11:38:33 crc kubenswrapper[4706]: E1125 11:38:33.921541 4706 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 25 11:38:33 crc kubenswrapper[4706]: E1125 11:38:33.921664 4706 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 25 11:38:33 crc kubenswrapper[4706]: E1125 11:38:33.921803 4706 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 25 11:38:34 crc kubenswrapper[4706]: I1125 11:38:34.921762 4706 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-l99rd" Nov 25 11:38:34 crc kubenswrapper[4706]: E1125 11:38:34.922288 4706 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-l99rd" podUID="14d69237-a4b7-43ea-ac81-f165eb532669" Nov 25 11:38:35 crc kubenswrapper[4706]: I1125 11:38:35.922062 4706 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 11:38:35 crc kubenswrapper[4706]: I1125 11:38:35.922181 4706 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 11:38:35 crc kubenswrapper[4706]: I1125 11:38:35.922080 4706 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 11:38:35 crc kubenswrapper[4706]: E1125 11:38:35.922372 4706 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 25 11:38:35 crc kubenswrapper[4706]: E1125 11:38:35.922552 4706 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 25 11:38:35 crc kubenswrapper[4706]: E1125 11:38:35.922693 4706 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 25 11:38:36 crc kubenswrapper[4706]: I1125 11:38:36.921796 4706 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-l99rd" Nov 25 11:38:36 crc kubenswrapper[4706]: E1125 11:38:36.921989 4706 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-l99rd" podUID="14d69237-a4b7-43ea-ac81-f165eb532669" Nov 25 11:38:37 crc kubenswrapper[4706]: E1125 11:38:37.023232 4706 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Nov 25 11:38:37 crc kubenswrapper[4706]: I1125 11:38:37.922081 4706 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 11:38:37 crc kubenswrapper[4706]: I1125 11:38:37.922150 4706 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 11:38:37 crc kubenswrapper[4706]: E1125 11:38:37.922359 4706 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 25 11:38:37 crc kubenswrapper[4706]: I1125 11:38:37.922210 4706 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 11:38:37 crc kubenswrapper[4706]: E1125 11:38:37.922413 4706 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 25 11:38:37 crc kubenswrapper[4706]: E1125 11:38:37.922597 4706 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 25 11:38:38 crc kubenswrapper[4706]: I1125 11:38:38.922211 4706 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-l99rd" Nov 25 11:38:38 crc kubenswrapper[4706]: E1125 11:38:38.922471 4706 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-l99rd" podUID="14d69237-a4b7-43ea-ac81-f165eb532669" Nov 25 11:38:39 crc kubenswrapper[4706]: I1125 11:38:39.922173 4706 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 11:38:39 crc kubenswrapper[4706]: I1125 11:38:39.922270 4706 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 11:38:39 crc kubenswrapper[4706]: I1125 11:38:39.922329 4706 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 11:38:39 crc kubenswrapper[4706]: E1125 11:38:39.922401 4706 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 25 11:38:39 crc kubenswrapper[4706]: E1125 11:38:39.922491 4706 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 25 11:38:39 crc kubenswrapper[4706]: E1125 11:38:39.922600 4706 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 25 11:38:40 crc kubenswrapper[4706]: I1125 11:38:40.921858 4706 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-l99rd" Nov 25 11:38:40 crc kubenswrapper[4706]: E1125 11:38:40.922138 4706 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-l99rd" podUID="14d69237-a4b7-43ea-ac81-f165eb532669" Nov 25 11:38:41 crc kubenswrapper[4706]: I1125 11:38:41.921877 4706 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 11:38:41 crc kubenswrapper[4706]: I1125 11:38:41.921944 4706 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 11:38:41 crc kubenswrapper[4706]: E1125 11:38:41.924750 4706 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 25 11:38:41 crc kubenswrapper[4706]: E1125 11:38:41.924944 4706 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 25 11:38:41 crc kubenswrapper[4706]: I1125 11:38:41.925008 4706 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 11:38:41 crc kubenswrapper[4706]: I1125 11:38:41.925469 4706 scope.go:117] "RemoveContainer" containerID="8831e77983548cfffd56f81ff9f25b90d70dfb71b47b545af370b0a813fa19a9" Nov 25 11:38:41 crc kubenswrapper[4706]: E1125 11:38:41.925459 4706 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 25 11:38:42 crc kubenswrapper[4706]: E1125 11:38:42.023986 4706 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Nov 25 11:38:42 crc kubenswrapper[4706]: I1125 11:38:42.523000 4706 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-s47nr_9912058e-28f5-4cec-9eeb-03e37e0dc5c1/kube-multus/1.log" Nov 25 11:38:42 crc kubenswrapper[4706]: I1125 11:38:42.523066 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-s47nr" event={"ID":"9912058e-28f5-4cec-9eeb-03e37e0dc5c1","Type":"ContainerStarted","Data":"198cfd82640633cc783bf590d5743bed75f93473c1ccd934ea506aef32ea6201"} Nov 25 11:38:42 crc kubenswrapper[4706]: I1125 11:38:42.921691 4706 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-l99rd" Nov 25 11:38:42 crc kubenswrapper[4706]: E1125 11:38:42.922101 4706 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-l99rd" podUID="14d69237-a4b7-43ea-ac81-f165eb532669" Nov 25 11:38:43 crc kubenswrapper[4706]: I1125 11:38:43.922101 4706 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 11:38:43 crc kubenswrapper[4706]: I1125 11:38:43.922222 4706 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 11:38:43 crc kubenswrapper[4706]: I1125 11:38:43.922222 4706 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 11:38:43 crc kubenswrapper[4706]: E1125 11:38:43.927378 4706 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 25 11:38:43 crc kubenswrapper[4706]: E1125 11:38:43.927599 4706 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 25 11:38:43 crc kubenswrapper[4706]: E1125 11:38:43.927832 4706 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 25 11:38:44 crc kubenswrapper[4706]: I1125 11:38:44.922046 4706 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-l99rd" Nov 25 11:38:44 crc kubenswrapper[4706]: E1125 11:38:44.922235 4706 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-l99rd" podUID="14d69237-a4b7-43ea-ac81-f165eb532669" Nov 25 11:38:45 crc kubenswrapper[4706]: I1125 11:38:45.472089 4706 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-q9rpr" Nov 25 11:38:45 crc kubenswrapper[4706]: I1125 11:38:45.486739 4706 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-q9rpr" Nov 25 11:38:45 crc kubenswrapper[4706]: I1125 11:38:45.922150 4706 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 11:38:45 crc kubenswrapper[4706]: I1125 11:38:45.922179 4706 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 11:38:45 crc kubenswrapper[4706]: I1125 11:38:45.922188 4706 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 11:38:45 crc kubenswrapper[4706]: E1125 11:38:45.922338 4706 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 25 11:38:45 crc kubenswrapper[4706]: E1125 11:38:45.922410 4706 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 25 11:38:45 crc kubenswrapper[4706]: E1125 11:38:45.922464 4706 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 25 11:38:46 crc kubenswrapper[4706]: I1125 11:38:46.921455 4706 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-l99rd" Nov 25 11:38:46 crc kubenswrapper[4706]: E1125 11:38:46.921925 4706 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-l99rd" podUID="14d69237-a4b7-43ea-ac81-f165eb532669" Nov 25 11:38:47 crc kubenswrapper[4706]: I1125 11:38:47.921417 4706 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 11:38:47 crc kubenswrapper[4706]: I1125 11:38:47.921491 4706 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 11:38:47 crc kubenswrapper[4706]: I1125 11:38:47.922098 4706 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 11:38:47 crc kubenswrapper[4706]: I1125 11:38:47.923815 4706 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Nov 25 11:38:47 crc kubenswrapper[4706]: I1125 11:38:47.923928 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Nov 25 11:38:47 crc kubenswrapper[4706]: I1125 11:38:47.924024 4706 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Nov 25 11:38:47 crc kubenswrapper[4706]: I1125 11:38:47.925637 4706 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Nov 25 11:38:48 crc kubenswrapper[4706]: I1125 11:38:48.921587 4706 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-l99rd" Nov 25 11:38:48 crc kubenswrapper[4706]: I1125 11:38:48.924148 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Nov 25 11:38:48 crc kubenswrapper[4706]: I1125 11:38:48.924412 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Nov 25 11:38:49 crc kubenswrapper[4706]: I1125 11:38:49.993006 4706 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeReady" Nov 25 11:38:50 crc kubenswrapper[4706]: I1125 11:38:50.033322 4706 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-jsj27"] Nov 25 11:38:50 crc kubenswrapper[4706]: I1125 11:38:50.034057 4706 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-jsj27" Nov 25 11:38:50 crc kubenswrapper[4706]: W1125 11:38:50.038783 4706 reflector.go:561] object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff": failed to list *v1.Secret: secrets "openshift-apiserver-sa-dockercfg-djjff" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-apiserver": no relationship found between node 'crc' and this object Nov 25 11:38:50 crc kubenswrapper[4706]: E1125 11:38:50.038834 4706 reflector.go:158] "Unhandled Error" err="object-\"openshift-apiserver\"/\"openshift-apiserver-sa-dockercfg-djjff\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"openshift-apiserver-sa-dockercfg-djjff\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-apiserver\": no relationship found between node 'crc' and this object" logger="UnhandledError" Nov 25 11:38:50 crc kubenswrapper[4706]: W1125 11:38:50.038907 4706 reflector.go:561] 
object-"openshift-apiserver"/"etcd-client": failed to list *v1.Secret: secrets "etcd-client" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-apiserver": no relationship found between node 'crc' and this object Nov 25 11:38:50 crc kubenswrapper[4706]: E1125 11:38:50.038925 4706 reflector.go:158] "Unhandled Error" err="object-\"openshift-apiserver\"/\"etcd-client\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"etcd-client\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-apiserver\": no relationship found between node 'crc' and this object" logger="UnhandledError" Nov 25 11:38:50 crc kubenswrapper[4706]: W1125 11:38:50.039007 4706 reflector.go:561] object-"openshift-apiserver"/"serving-cert": failed to list *v1.Secret: secrets "serving-cert" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-apiserver": no relationship found between node 'crc' and this object Nov 25 11:38:50 crc kubenswrapper[4706]: E1125 11:38:50.039022 4706 reflector.go:158] "Unhandled Error" err="object-\"openshift-apiserver\"/\"serving-cert\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"serving-cert\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-apiserver\": no relationship found between node 'crc' and this object" logger="UnhandledError" Nov 25 11:38:50 crc kubenswrapper[4706]: W1125 11:38:50.039058 4706 reflector.go:561] object-"openshift-apiserver"/"encryption-config-1": failed to list *v1.Secret: secrets "encryption-config-1" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-apiserver": no relationship found between node 'crc' and this object Nov 25 11:38:50 crc kubenswrapper[4706]: E1125 11:38:50.039070 4706 reflector.go:158] 
"Unhandled Error" err="object-\"openshift-apiserver\"/\"encryption-config-1\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"encryption-config-1\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-apiserver\": no relationship found between node 'crc' and this object" logger="UnhandledError" Nov 25 11:38:50 crc kubenswrapper[4706]: W1125 11:38:50.039111 4706 reflector.go:561] object-"openshift-apiserver"/"config": failed to list *v1.ConfigMap: configmaps "config" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-apiserver": no relationship found between node 'crc' and this object Nov 25 11:38:50 crc kubenswrapper[4706]: E1125 11:38:50.039128 4706 reflector.go:158] "Unhandled Error" err="object-\"openshift-apiserver\"/\"config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"config\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-apiserver\": no relationship found between node 'crc' and this object" logger="UnhandledError" Nov 25 11:38:50 crc kubenswrapper[4706]: W1125 11:38:50.039185 4706 reflector.go:561] object-"openshift-apiserver"/"etcd-serving-ca": failed to list *v1.ConfigMap: configmaps "etcd-serving-ca" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-apiserver": no relationship found between node 'crc' and this object Nov 25 11:38:50 crc kubenswrapper[4706]: E1125 11:38:50.039201 4706 reflector.go:158] "Unhandled Error" err="object-\"openshift-apiserver\"/\"etcd-serving-ca\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"etcd-serving-ca\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-apiserver\": no relationship found between node 'crc' 
and this object" logger="UnhandledError" Nov 25 11:38:50 crc kubenswrapper[4706]: W1125 11:38:50.039237 4706 reflector.go:561] object-"openshift-apiserver"/"image-import-ca": failed to list *v1.ConfigMap: configmaps "image-import-ca" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-apiserver": no relationship found between node 'crc' and this object Nov 25 11:38:50 crc kubenswrapper[4706]: E1125 11:38:50.039251 4706 reflector.go:158] "Unhandled Error" err="object-\"openshift-apiserver\"/\"image-import-ca\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"image-import-ca\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-apiserver\": no relationship found between node 'crc' and this object" logger="UnhandledError" Nov 25 11:38:50 crc kubenswrapper[4706]: W1125 11:38:50.039288 4706 reflector.go:561] object-"openshift-apiserver"/"trusted-ca-bundle": failed to list *v1.ConfigMap: configmaps "trusted-ca-bundle" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-apiserver": no relationship found between node 'crc' and this object Nov 25 11:38:50 crc kubenswrapper[4706]: E1125 11:38:50.039320 4706 reflector.go:158] "Unhandled Error" err="object-\"openshift-apiserver\"/\"trusted-ca-bundle\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"trusted-ca-bundle\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-apiserver\": no relationship found between node 'crc' and this object" logger="UnhandledError" Nov 25 11:38:50 crc kubenswrapper[4706]: W1125 11:38:50.039360 4706 reflector.go:561] object-"openshift-apiserver"/"audit-1": failed to list *v1.ConfigMap: configmaps "audit-1" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API 
group "" in the namespace "openshift-apiserver": no relationship found between node 'crc' and this object Nov 25 11:38:50 crc kubenswrapper[4706]: E1125 11:38:50.039376 4706 reflector.go:158] "Unhandled Error" err="object-\"openshift-apiserver\"/\"audit-1\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"audit-1\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-apiserver\": no relationship found between node 'crc' and this object" logger="UnhandledError" Nov 25 11:38:50 crc kubenswrapper[4706]: W1125 11:38:50.039460 4706 reflector.go:561] object-"openshift-apiserver"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-apiserver": no relationship found between node 'crc' and this object Nov 25 11:38:50 crc kubenswrapper[4706]: E1125 11:38:50.039478 4706 reflector.go:158] "Unhandled Error" err="object-\"openshift-apiserver\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-apiserver\": no relationship found between node 'crc' and this object" logger="UnhandledError" Nov 25 11:38:50 crc kubenswrapper[4706]: W1125 11:38:50.039520 4706 reflector.go:561] object-"openshift-apiserver"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: configmaps "openshift-service-ca.crt" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-apiserver": no relationship found between node 'crc' and this object Nov 25 11:38:50 crc kubenswrapper[4706]: E1125 11:38:50.039537 4706 reflector.go:158] "Unhandled Error" err="object-\"openshift-apiserver\"/\"openshift-service-ca.crt\": Failed to watch 
*v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"openshift-service-ca.crt\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-apiserver\": no relationship found between node 'crc' and this object" logger="UnhandledError" Nov 25 11:38:50 crc kubenswrapper[4706]: I1125 11:38:50.039948 4706 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-9z28x"] Nov 25 11:38:50 crc kubenswrapper[4706]: I1125 11:38:50.040474 4706 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-w6nqn"] Nov 25 11:38:50 crc kubenswrapper[4706]: I1125 11:38:50.040628 4706 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-9z28x" Nov 25 11:38:50 crc kubenswrapper[4706]: I1125 11:38:50.040810 4706 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-qm76l"] Nov 25 11:38:50 crc kubenswrapper[4706]: I1125 11:38:50.041167 4706 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-w6nqn" Nov 25 11:38:50 crc kubenswrapper[4706]: I1125 11:38:50.041454 4706 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-zf4pd"] Nov 25 11:38:50 crc kubenswrapper[4706]: I1125 11:38:50.041549 4706 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-qm76l" Nov 25 11:38:50 crc kubenswrapper[4706]: I1125 11:38:50.041705 4706 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-zf4pd" Nov 25 11:38:50 crc kubenswrapper[4706]: I1125 11:38:50.042959 4706 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-8rnp5"] Nov 25 11:38:50 crc kubenswrapper[4706]: I1125 11:38:50.043226 4706 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-8rnp5" Nov 25 11:38:50 crc kubenswrapper[4706]: I1125 11:38:50.045368 4706 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Nov 25 11:38:50 crc kubenswrapper[4706]: I1125 11:38:50.046685 4706 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-j7x2j"] Nov 25 11:38:50 crc kubenswrapper[4706]: I1125 11:38:50.047202 4706 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-f9d7485db-8f48m"] Nov 25 11:38:50 crc kubenswrapper[4706]: I1125 11:38:50.047465 4706 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-jq6ck"] Nov 25 11:38:50 crc kubenswrapper[4706]: I1125 11:38:50.047760 4706 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-jq6ck" Nov 25 11:38:50 crc kubenswrapper[4706]: I1125 11:38:50.048565 4706 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-j7x2j" Nov 25 11:38:50 crc kubenswrapper[4706]: I1125 11:38:50.048802 4706 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-8f48m" Nov 25 11:38:50 crc kubenswrapper[4706]: I1125 11:38:50.050120 4706 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Nov 25 11:38:50 crc kubenswrapper[4706]: I1125 11:38:50.050654 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Nov 25 11:38:50 crc kubenswrapper[4706]: I1125 11:38:50.050927 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Nov 25 11:38:50 crc kubenswrapper[4706]: I1125 11:38:50.051129 4706 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Nov 25 11:38:50 crc kubenswrapper[4706]: I1125 11:38:50.051266 4706 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Nov 25 11:38:50 crc kubenswrapper[4706]: I1125 11:38:50.051470 4706 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Nov 25 11:38:50 crc kubenswrapper[4706]: I1125 11:38:50.051521 4706 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Nov 25 11:38:50 crc kubenswrapper[4706]: I1125 11:38:50.051685 4706 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Nov 25 11:38:50 crc kubenswrapper[4706]: I1125 11:38:50.051859 4706 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Nov 25 11:38:50 crc kubenswrapper[4706]: I1125 11:38:50.051942 4706 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Nov 25 11:38:50 crc 
kubenswrapper[4706]: I1125 11:38:50.052007 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Nov 25 11:38:50 crc kubenswrapper[4706]: I1125 11:38:50.052049 4706 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Nov 25 11:38:50 crc kubenswrapper[4706]: I1125 11:38:50.051474 4706 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Nov 25 11:38:50 crc kubenswrapper[4706]: I1125 11:38:50.053044 4706 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-q7gsh"] Nov 25 11:38:50 crc kubenswrapper[4706]: I1125 11:38:50.053616 4706 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-q7gsh" Nov 25 11:38:50 crc kubenswrapper[4706]: I1125 11:38:50.055392 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Nov 25 11:38:50 crc kubenswrapper[4706]: I1125 11:38:50.055514 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Nov 25 11:38:50 crc kubenswrapper[4706]: I1125 11:38:50.055564 4706 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-kg9rr"] Nov 25 11:38:50 crc kubenswrapper[4706]: I1125 11:38:50.079521 4706 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-kg9rr" Nov 25 11:38:50 crc kubenswrapper[4706]: I1125 11:38:50.086493 4706 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Nov 25 11:38:50 crc kubenswrapper[4706]: I1125 11:38:50.086553 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Nov 25 11:38:50 crc kubenswrapper[4706]: I1125 11:38:50.086613 4706 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Nov 25 11:38:50 crc kubenswrapper[4706]: I1125 11:38:50.086568 4706 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Nov 25 11:38:50 crc kubenswrapper[4706]: I1125 11:38:50.087118 4706 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Nov 25 11:38:50 crc kubenswrapper[4706]: I1125 11:38:50.087293 4706 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Nov 25 11:38:50 crc kubenswrapper[4706]: I1125 11:38:50.087399 4706 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Nov 25 11:38:50 crc kubenswrapper[4706]: I1125 11:38:50.087506 4706 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Nov 25 11:38:50 crc kubenswrapper[4706]: I1125 11:38:50.088040 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Nov 25 11:38:50 crc kubenswrapper[4706]: I1125 11:38:50.088250 4706 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Nov 25 11:38:50 crc kubenswrapper[4706]: I1125 
11:38:50.088558 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Nov 25 11:38:50 crc kubenswrapper[4706]: I1125 11:38:50.088719 4706 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Nov 25 11:38:50 crc kubenswrapper[4706]: I1125 11:38:50.088923 4706 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/downloads-7954f5f757-jd66x"] Nov 25 11:38:50 crc kubenswrapper[4706]: I1125 11:38:50.090283 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Nov 25 11:38:50 crc kubenswrapper[4706]: I1125 11:38:50.103231 4706 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-jd66x" Nov 25 11:38:50 crc kubenswrapper[4706]: I1125 11:38:50.103962 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Nov 25 11:38:50 crc kubenswrapper[4706]: I1125 11:38:50.104240 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Nov 25 11:38:50 crc kubenswrapper[4706]: I1125 11:38:50.104393 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Nov 25 11:38:50 crc kubenswrapper[4706]: I1125 11:38:50.104463 4706 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Nov 25 11:38:50 crc kubenswrapper[4706]: I1125 11:38:50.104534 4706 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Nov 25 11:38:50 crc kubenswrapper[4706]: I1125 11:38:50.104706 4706 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-cluster-machine-approver/machine-approver-56656f9798-d9vjp"] Nov 25 11:38:50 crc kubenswrapper[4706]: I1125 11:38:50.105390 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Nov 25 11:38:50 crc kubenswrapper[4706]: I1125 11:38:50.105594 4706 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Nov 25 11:38:50 crc kubenswrapper[4706]: I1125 11:38:50.105676 4706 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Nov 25 11:38:50 crc kubenswrapper[4706]: I1125 11:38:50.105738 4706 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-d9vjp" Nov 25 11:38:50 crc kubenswrapper[4706]: I1125 11:38:50.105813 4706 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Nov 25 11:38:50 crc kubenswrapper[4706]: I1125 11:38:50.105877 4706 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Nov 25 11:38:50 crc kubenswrapper[4706]: I1125 11:38:50.106184 4706 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console-operator/console-operator-58897d9998-qlr24"] Nov 25 11:38:50 crc kubenswrapper[4706]: I1125 11:38:50.106026 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Nov 25 11:38:50 crc kubenswrapper[4706]: I1125 11:38:50.106054 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Nov 25 11:38:50 crc kubenswrapper[4706]: I1125 11:38:50.106234 4706 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Nov 25 11:38:50 crc kubenswrapper[4706]: I1125 11:38:50.106712 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/d4ec8c5d-e4c6-42d4-bf1c-1d3952ce6d7a-encryption-config\") pod \"apiserver-76f77b778f-jsj27\" (UID: \"d4ec8c5d-e4c6-42d4-bf1c-1d3952ce6d7a\") " pod="openshift-apiserver/apiserver-76f77b778f-jsj27" Nov 25 11:38:50 crc kubenswrapper[4706]: I1125 11:38:50.106233 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Nov 25 11:38:50 crc kubenswrapper[4706]: I1125 11:38:50.106770 4706 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-qlr24" Nov 25 11:38:50 crc kubenswrapper[4706]: I1125 11:38:50.106260 4706 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Nov 25 11:38:50 crc kubenswrapper[4706]: I1125 11:38:50.106767 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cd4c256-91b7-4b76-a9d3-6927ea77e61e-config\") pod \"route-controller-manager-6576b87f9c-j7x2j\" (UID: \"8cd4c256-91b7-4b76-a9d3-6927ea77e61e\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-j7x2j" Nov 25 11:38:50 crc kubenswrapper[4706]: I1125 11:38:50.106850 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/d4ec8c5d-e4c6-42d4-bf1c-1d3952ce6d7a-image-import-ca\") pod \"apiserver-76f77b778f-jsj27\" (UID: \"d4ec8c5d-e4c6-42d4-bf1c-1d3952ce6d7a\") " pod="openshift-apiserver/apiserver-76f77b778f-jsj27" Nov 25 11:38:50 crc kubenswrapper[4706]: I1125 11:38:50.106874 
4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/028d4ff3-870d-4002-843f-5381587e28fc-trusted-ca-bundle\") pod \"console-f9d7485db-8f48m\" (UID: \"028d4ff3-870d-4002-843f-5381587e28fc\") " pod="openshift-console/console-f9d7485db-8f48m" Nov 25 11:38:50 crc kubenswrapper[4706]: I1125 11:38:50.106900 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c31bc178-49e3-4bb8-a6d0-ca9e27662b9a-client-ca\") pod \"controller-manager-879f6c89f-zf4pd\" (UID: \"c31bc178-49e3-4bb8-a6d0-ca9e27662b9a\") " pod="openshift-controller-manager/controller-manager-879f6c89f-zf4pd" Nov 25 11:38:50 crc kubenswrapper[4706]: I1125 11:38:50.106925 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9cf8aff4-1c08-49a5-82c9-92ac18f0b46f-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-8rnp5\" (UID: \"9cf8aff4-1c08-49a5-82c9-92ac18f0b46f\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-8rnp5" Nov 25 11:38:50 crc kubenswrapper[4706]: I1125 11:38:50.106947 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9cf8aff4-1c08-49a5-82c9-92ac18f0b46f-config\") pod \"openshift-apiserver-operator-796bbdcf4f-8rnp5\" (UID: \"9cf8aff4-1c08-49a5-82c9-92ac18f0b46f\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-8rnp5" Nov 25 11:38:50 crc kubenswrapper[4706]: I1125 11:38:50.106969 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xfzxx\" (UniqueName: \"kubernetes.io/projected/9cf8aff4-1c08-49a5-82c9-92ac18f0b46f-kube-api-access-xfzxx\") pod 
\"openshift-apiserver-operator-796bbdcf4f-8rnp5\" (UID: \"9cf8aff4-1c08-49a5-82c9-92ac18f0b46f\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-8rnp5" Nov 25 11:38:50 crc kubenswrapper[4706]: I1125 11:38:50.106995 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/028d4ff3-870d-4002-843f-5381587e28fc-service-ca\") pod \"console-f9d7485db-8f48m\" (UID: \"028d4ff3-870d-4002-843f-5381587e28fc\") " pod="openshift-console/console-f9d7485db-8f48m" Nov 25 11:38:50 crc kubenswrapper[4706]: I1125 11:38:50.107018 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ad44dafa-6c78-4773-881b-6f3adeb1a29b-config\") pod \"authentication-operator-69f744f599-qm76l\" (UID: \"ad44dafa-6c78-4773-881b-6f3adeb1a29b\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-qm76l" Nov 25 11:38:50 crc kubenswrapper[4706]: I1125 11:38:50.107040 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x5qrs\" (UniqueName: \"kubernetes.io/projected/8cd4c256-91b7-4b76-a9d3-6927ea77e61e-kube-api-access-x5qrs\") pod \"route-controller-manager-6576b87f9c-j7x2j\" (UID: \"8cd4c256-91b7-4b76-a9d3-6927ea77e61e\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-j7x2j" Nov 25 11:38:50 crc kubenswrapper[4706]: I1125 11:38:50.107062 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/028d4ff3-870d-4002-843f-5381587e28fc-oauth-serving-cert\") pod \"console-f9d7485db-8f48m\" (UID: \"028d4ff3-870d-4002-843f-5381587e28fc\") " pod="openshift-console/console-f9d7485db-8f48m" Nov 25 11:38:50 crc kubenswrapper[4706]: I1125 11:38:50.107091 4706 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/d4ec8c5d-e4c6-42d4-bf1c-1d3952ce6d7a-audit\") pod \"apiserver-76f77b778f-jsj27\" (UID: \"d4ec8c5d-e4c6-42d4-bf1c-1d3952ce6d7a\") " pod="openshift-apiserver/apiserver-76f77b778f-jsj27" Nov 25 11:38:50 crc kubenswrapper[4706]: I1125 11:38:50.107117 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bvz6z\" (UniqueName: \"kubernetes.io/projected/d4ec8c5d-e4c6-42d4-bf1c-1d3952ce6d7a-kube-api-access-bvz6z\") pod \"apiserver-76f77b778f-jsj27\" (UID: \"d4ec8c5d-e4c6-42d4-bf1c-1d3952ce6d7a\") " pod="openshift-apiserver/apiserver-76f77b778f-jsj27" Nov 25 11:38:50 crc kubenswrapper[4706]: I1125 11:38:50.107151 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/c31bc178-49e3-4bb8-a6d0-ca9e27662b9a-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-zf4pd\" (UID: \"c31bc178-49e3-4bb8-a6d0-ca9e27662b9a\") " pod="openshift-controller-manager/controller-manager-879f6c89f-zf4pd" Nov 25 11:38:50 crc kubenswrapper[4706]: I1125 11:38:50.107173 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8cd4c256-91b7-4b76-a9d3-6927ea77e61e-client-ca\") pod \"route-controller-manager-6576b87f9c-j7x2j\" (UID: \"8cd4c256-91b7-4b76-a9d3-6927ea77e61e\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-j7x2j" Nov 25 11:38:50 crc kubenswrapper[4706]: I1125 11:38:50.107195 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d4ec8c5d-e4c6-42d4-bf1c-1d3952ce6d7a-trusted-ca-bundle\") pod \"apiserver-76f77b778f-jsj27\" (UID: \"d4ec8c5d-e4c6-42d4-bf1c-1d3952ce6d7a\") " 
pod="openshift-apiserver/apiserver-76f77b778f-jsj27" Nov 25 11:38:50 crc kubenswrapper[4706]: I1125 11:38:50.107205 4706 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Nov 25 11:38:50 crc kubenswrapper[4706]: I1125 11:38:50.107231 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ad44dafa-6c78-4773-881b-6f3adeb1a29b-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-qm76l\" (UID: \"ad44dafa-6c78-4773-881b-6f3adeb1a29b\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-qm76l" Nov 25 11:38:50 crc kubenswrapper[4706]: I1125 11:38:50.107254 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/028d4ff3-870d-4002-843f-5381587e28fc-console-oauth-config\") pod \"console-f9d7485db-8f48m\" (UID: \"028d4ff3-870d-4002-843f-5381587e28fc\") " pod="openshift-console/console-f9d7485db-8f48m" Nov 25 11:38:50 crc kubenswrapper[4706]: I1125 11:38:50.107277 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d4ec8c5d-e4c6-42d4-bf1c-1d3952ce6d7a-serving-cert\") pod \"apiserver-76f77b778f-jsj27\" (UID: \"d4ec8c5d-e4c6-42d4-bf1c-1d3952ce6d7a\") " pod="openshift-apiserver/apiserver-76f77b778f-jsj27" Nov 25 11:38:50 crc kubenswrapper[4706]: I1125 11:38:50.107316 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cd4c256-91b7-4b76-a9d3-6927ea77e61e-serving-cert\") pod \"route-controller-manager-6576b87f9c-j7x2j\" (UID: \"8cd4c256-91b7-4b76-a9d3-6927ea77e61e\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-j7x2j" Nov 25 11:38:50 crc 
kubenswrapper[4706]: I1125 11:38:50.107347 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Nov 25 11:38:50 crc kubenswrapper[4706]: I1125 11:38:50.107386 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c31bc178-49e3-4bb8-a6d0-ca9e27662b9a-config\") pod \"controller-manager-879f6c89f-zf4pd\" (UID: \"c31bc178-49e3-4bb8-a6d0-ca9e27662b9a\") " pod="openshift-controller-manager/controller-manager-879f6c89f-zf4pd" Nov 25 11:38:50 crc kubenswrapper[4706]: I1125 11:38:50.107478 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/028d4ff3-870d-4002-843f-5381587e28fc-console-serving-cert\") pod \"console-f9d7485db-8f48m\" (UID: \"028d4ff3-870d-4002-843f-5381587e28fc\") " pod="openshift-console/console-f9d7485db-8f48m" Nov 25 11:38:50 crc kubenswrapper[4706]: I1125 11:38:50.107509 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/f31f7e75-5a0b-4519-bbe7-521544fa61c1-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-q7gsh\" (UID: \"f31f7e75-5a0b-4519-bbe7-521544fa61c1\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-q7gsh" Nov 25 11:38:50 crc kubenswrapper[4706]: I1125 11:38:50.107540 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zhbdk\" (UniqueName: \"kubernetes.io/projected/f31f7e75-5a0b-4519-bbe7-521544fa61c1-kube-api-access-zhbdk\") pod \"cluster-samples-operator-665b6dd947-q7gsh\" (UID: \"f31f7e75-5a0b-4519-bbe7-521544fa61c1\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-q7gsh" Nov 25 11:38:50 crc kubenswrapper[4706]: I1125 
11:38:50.107570 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Nov 25 11:38:50 crc kubenswrapper[4706]: I1125 11:38:50.107568 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d4ec8c5d-e4c6-42d4-bf1c-1d3952ce6d7a-config\") pod \"apiserver-76f77b778f-jsj27\" (UID: \"d4ec8c5d-e4c6-42d4-bf1c-1d3952ce6d7a\") " pod="openshift-apiserver/apiserver-76f77b778f-jsj27" Nov 25 11:38:50 crc kubenswrapper[4706]: I1125 11:38:50.107677 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/d4ec8c5d-e4c6-42d4-bf1c-1d3952ce6d7a-etcd-client\") pod \"apiserver-76f77b778f-jsj27\" (UID: \"d4ec8c5d-e4c6-42d4-bf1c-1d3952ce6d7a\") " pod="openshift-apiserver/apiserver-76f77b778f-jsj27" Nov 25 11:38:50 crc kubenswrapper[4706]: I1125 11:38:50.107713 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/d4ec8c5d-e4c6-42d4-bf1c-1d3952ce6d7a-node-pullsecrets\") pod \"apiserver-76f77b778f-jsj27\" (UID: \"d4ec8c5d-e4c6-42d4-bf1c-1d3952ce6d7a\") " pod="openshift-apiserver/apiserver-76f77b778f-jsj27" Nov 25 11:38:50 crc kubenswrapper[4706]: I1125 11:38:50.107745 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jkj4s\" (UniqueName: \"kubernetes.io/projected/ad44dafa-6c78-4773-881b-6f3adeb1a29b-kube-api-access-jkj4s\") pod \"authentication-operator-69f744f599-qm76l\" (UID: \"ad44dafa-6c78-4773-881b-6f3adeb1a29b\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-qm76l" Nov 25 11:38:50 crc kubenswrapper[4706]: I1125 11:38:50.107765 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" 
(UniqueName: \"kubernetes.io/host-path/d4ec8c5d-e4c6-42d4-bf1c-1d3952ce6d7a-audit-dir\") pod \"apiserver-76f77b778f-jsj27\" (UID: \"d4ec8c5d-e4c6-42d4-bf1c-1d3952ce6d7a\") " pod="openshift-apiserver/apiserver-76f77b778f-jsj27" Nov 25 11:38:50 crc kubenswrapper[4706]: I1125 11:38:50.107787 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sg74s\" (UniqueName: \"kubernetes.io/projected/c31bc178-49e3-4bb8-a6d0-ca9e27662b9a-kube-api-access-sg74s\") pod \"controller-manager-879f6c89f-zf4pd\" (UID: \"c31bc178-49e3-4bb8-a6d0-ca9e27662b9a\") " pod="openshift-controller-manager/controller-manager-879f6c89f-zf4pd" Nov 25 11:38:50 crc kubenswrapper[4706]: I1125 11:38:50.107776 4706 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Nov 25 11:38:50 crc kubenswrapper[4706]: I1125 11:38:50.107890 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Nov 25 11:38:50 crc kubenswrapper[4706]: I1125 11:38:50.107944 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Nov 25 11:38:50 crc kubenswrapper[4706]: I1125 11:38:50.107812 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ad44dafa-6c78-4773-881b-6f3adeb1a29b-service-ca-bundle\") pod \"authentication-operator-69f744f599-qm76l\" (UID: \"ad44dafa-6c78-4773-881b-6f3adeb1a29b\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-qm76l" Nov 25 11:38:50 crc kubenswrapper[4706]: I1125 11:38:50.107803 4706 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Nov 25 11:38:50 crc kubenswrapper[4706]: I1125 11:38:50.108029 4706 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/028d4ff3-870d-4002-843f-5381587e28fc-console-config\") pod \"console-f9d7485db-8f48m\" (UID: \"028d4ff3-870d-4002-843f-5381587e28fc\") " pod="openshift-console/console-f9d7485db-8f48m" Nov 25 11:38:50 crc kubenswrapper[4706]: I1125 11:38:50.108051 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ad44dafa-6c78-4773-881b-6f3adeb1a29b-serving-cert\") pod \"authentication-operator-69f744f599-qm76l\" (UID: \"ad44dafa-6c78-4773-881b-6f3adeb1a29b\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-qm76l" Nov 25 11:38:50 crc kubenswrapper[4706]: I1125 11:38:50.108070 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/d4ec8c5d-e4c6-42d4-bf1c-1d3952ce6d7a-etcd-serving-ca\") pod \"apiserver-76f77b778f-jsj27\" (UID: \"d4ec8c5d-e4c6-42d4-bf1c-1d3952ce6d7a\") " pod="openshift-apiserver/apiserver-76f77b778f-jsj27" Nov 25 11:38:50 crc kubenswrapper[4706]: I1125 11:38:50.108087 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c31bc178-49e3-4bb8-a6d0-ca9e27662b9a-serving-cert\") pod \"controller-manager-879f6c89f-zf4pd\" (UID: \"c31bc178-49e3-4bb8-a6d0-ca9e27662b9a\") " pod="openshift-controller-manager/controller-manager-879f6c89f-zf4pd" Nov 25 11:38:50 crc kubenswrapper[4706]: I1125 11:38:50.108095 4706 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Nov 25 11:38:50 crc kubenswrapper[4706]: I1125 11:38:50.112019 4706 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-svsw6"] Nov 25 
11:38:50 crc kubenswrapper[4706]: I1125 11:38:50.112624 4706 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-svsw6" Nov 25 11:38:50 crc kubenswrapper[4706]: I1125 11:38:50.113040 4706 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-ss2xd"] Nov 25 11:38:50 crc kubenswrapper[4706]: I1125 11:38:50.113486 4706 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-ss2xd" Nov 25 11:38:50 crc kubenswrapper[4706]: I1125 11:38:50.113850 4706 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-7qf2c"] Nov 25 11:38:50 crc kubenswrapper[4706]: I1125 11:38:50.117986 4706 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Nov 25 11:38:50 crc kubenswrapper[4706]: I1125 11:38:50.118156 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Nov 25 11:38:50 crc kubenswrapper[4706]: I1125 11:38:50.118469 4706 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Nov 25 11:38:50 crc kubenswrapper[4706]: I1125 11:38:50.119261 4706 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Nov 25 11:38:50 crc kubenswrapper[4706]: I1125 11:38:50.119457 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Nov 25 11:38:50 crc kubenswrapper[4706]: I1125 11:38:50.119528 4706 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Nov 25 11:38:50 crc kubenswrapper[4706]: I1125 11:38:50.119661 4706 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-oauth-apiserver"/"serving-cert" Nov 25 11:38:50 crc kubenswrapper[4706]: I1125 11:38:50.119706 4706 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Nov 25 11:38:50 crc kubenswrapper[4706]: I1125 11:38:50.126892 4706 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Nov 25 11:38:50 crc kubenswrapper[4706]: I1125 11:38:50.127160 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Nov 25 11:38:50 crc kubenswrapper[4706]: I1125 11:38:50.127367 4706 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Nov 25 11:38:50 crc kubenswrapper[4706]: I1125 11:38:50.128781 4706 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-jhptj"] Nov 25 11:38:50 crc kubenswrapper[4706]: I1125 11:38:50.129392 4706 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-7qf2c" Nov 25 11:38:50 crc kubenswrapper[4706]: I1125 11:38:50.129517 4706 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-mnv7h"] Nov 25 11:38:50 crc kubenswrapper[4706]: I1125 11:38:50.130170 4706 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-fc942"] Nov 25 11:38:50 crc kubenswrapper[4706]: I1125 11:38:50.130402 4706 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-jhptj" Nov 25 11:38:50 crc kubenswrapper[4706]: I1125 11:38:50.130538 4706 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-2hpv7"] Nov 25 11:38:50 crc kubenswrapper[4706]: I1125 11:38:50.130812 4706 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-mnv7h" Nov 25 11:38:50 crc kubenswrapper[4706]: I1125 11:38:50.131578 4706 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-fc942" Nov 25 11:38:50 crc kubenswrapper[4706]: I1125 11:38:50.134523 4706 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Nov 25 11:38:50 crc kubenswrapper[4706]: I1125 11:38:50.135269 4706 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Nov 25 11:38:50 crc kubenswrapper[4706]: I1125 11:38:50.136451 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Nov 25 11:38:50 crc kubenswrapper[4706]: I1125 11:38:50.136728 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Nov 25 11:38:50 crc kubenswrapper[4706]: I1125 11:38:50.136980 4706 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Nov 25 11:38:50 crc kubenswrapper[4706]: I1125 11:38:50.137038 4706 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Nov 25 11:38:50 crc kubenswrapper[4706]: I1125 11:38:50.146588 4706 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-cluster-machine-approver"/"machine-approver-tls" Nov 25 11:38:50 crc kubenswrapper[4706]: I1125 11:38:50.149154 4706 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Nov 25 11:38:50 crc kubenswrapper[4706]: I1125 11:38:50.149369 4706 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Nov 25 11:38:50 crc kubenswrapper[4706]: I1125 11:38:50.149531 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Nov 25 11:38:50 crc kubenswrapper[4706]: I1125 11:38:50.149685 4706 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Nov 25 11:38:50 crc kubenswrapper[4706]: I1125 11:38:50.150210 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Nov 25 11:38:50 crc kubenswrapper[4706]: I1125 11:38:50.150234 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Nov 25 11:38:50 crc kubenswrapper[4706]: I1125 11:38:50.151990 4706 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-67c5m"] Nov 25 11:38:50 crc kubenswrapper[4706]: I1125 11:38:50.152204 4706 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-2hpv7" Nov 25 11:38:50 crc kubenswrapper[4706]: I1125 11:38:50.152903 4706 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-99vrx"] Nov 25 11:38:50 crc kubenswrapper[4706]: I1125 11:38:50.153646 4706 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-6hgvx"] Nov 25 11:38:50 crc kubenswrapper[4706]: I1125 11:38:50.163611 4706 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Nov 25 11:38:50 crc kubenswrapper[4706]: I1125 11:38:50.164242 4706 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Nov 25 11:38:50 crc kubenswrapper[4706]: I1125 11:38:50.164476 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Nov 25 11:38:50 crc kubenswrapper[4706]: I1125 11:38:50.164691 4706 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Nov 25 11:38:50 crc kubenswrapper[4706]: I1125 11:38:50.164877 4706 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Nov 25 11:38:50 crc kubenswrapper[4706]: I1125 11:38:50.165046 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Nov 25 11:38:50 crc kubenswrapper[4706]: I1125 11:38:50.165209 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Nov 25 11:38:50 crc kubenswrapper[4706]: I1125 11:38:50.168105 4706 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-99vrx" Nov 25 11:38:50 crc kubenswrapper[4706]: I1125 11:38:50.168172 4706 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-67c5m" Nov 25 11:38:50 crc kubenswrapper[4706]: I1125 11:38:50.168944 4706 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress/router-default-5444994796-22mnp"] Nov 25 11:38:50 crc kubenswrapper[4706]: I1125 11:38:50.170915 4706 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-6hgvx" Nov 25 11:38:50 crc kubenswrapper[4706]: I1125 11:38:50.172125 4706 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-22mnp" Nov 25 11:38:50 crc kubenswrapper[4706]: I1125 11:38:50.175740 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Nov 25 11:38:50 crc kubenswrapper[4706]: I1125 11:38:50.175993 4706 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Nov 25 11:38:50 crc kubenswrapper[4706]: I1125 11:38:50.176610 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Nov 25 11:38:50 crc kubenswrapper[4706]: I1125 11:38:50.178781 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Nov 25 11:38:50 crc kubenswrapper[4706]: I1125 11:38:50.178841 4706 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-cs4td"] Nov 25 11:38:50 crc kubenswrapper[4706]: I1125 11:38:50.179577 4706 
reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Nov 25 11:38:50 crc kubenswrapper[4706]: I1125 11:38:50.180073 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Nov 25 11:38:50 crc kubenswrapper[4706]: I1125 11:38:50.204747 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Nov 25 11:38:50 crc kubenswrapper[4706]: I1125 11:38:50.205840 4706 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-cs4td" Nov 25 11:38:50 crc kubenswrapper[4706]: I1125 11:38:50.207638 4706 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-hhh7q"] Nov 25 11:38:50 crc kubenswrapper[4706]: I1125 11:38:50.208485 4706 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-hhh7q" Nov 25 11:38:50 crc kubenswrapper[4706]: I1125 11:38:50.208798 4706 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-jg4ng"] Nov 25 11:38:50 crc kubenswrapper[4706]: I1125 11:38:50.208801 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sg74s\" (UniqueName: \"kubernetes.io/projected/c31bc178-49e3-4bb8-a6d0-ca9e27662b9a-kube-api-access-sg74s\") pod \"controller-manager-879f6c89f-zf4pd\" (UID: \"c31bc178-49e3-4bb8-a6d0-ca9e27662b9a\") " pod="openshift-controller-manager/controller-manager-879f6c89f-zf4pd" Nov 25 11:38:50 crc kubenswrapper[4706]: I1125 11:38:50.209016 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/f6ce79ff-bc51-4375-bd97-7e6ba29f263d-encryption-config\") pod \"apiserver-7bbb656c7d-kg9rr\" (UID: \"f6ce79ff-bc51-4375-bd97-7e6ba29f263d\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-kg9rr" Nov 25 11:38:50 crc kubenswrapper[4706]: I1125 11:38:50.209125 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ad44dafa-6c78-4773-881b-6f3adeb1a29b-service-ca-bundle\") pod \"authentication-operator-69f744f599-qm76l\" (UID: \"ad44dafa-6c78-4773-881b-6f3adeb1a29b\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-qm76l" Nov 25 11:38:50 crc kubenswrapper[4706]: I1125 11:38:50.209251 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/028d4ff3-870d-4002-843f-5381587e28fc-console-config\") pod \"console-f9d7485db-8f48m\" (UID: \"028d4ff3-870d-4002-843f-5381587e28fc\") " pod="openshift-console/console-f9d7485db-8f48m" Nov 25 11:38:50 crc 
kubenswrapper[4706]: I1125 11:38:50.209377 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ad44dafa-6c78-4773-881b-6f3adeb1a29b-serving-cert\") pod \"authentication-operator-69f744f599-qm76l\" (UID: \"ad44dafa-6c78-4773-881b-6f3adeb1a29b\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-qm76l" Nov 25 11:38:50 crc kubenswrapper[4706]: I1125 11:38:50.209483 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09d713da-8021-4bfa-b39d-bc3399593865-serving-cert\") pod \"openshift-config-operator-7777fb866f-w6nqn\" (UID: \"09d713da-8021-4bfa-b39d-bc3399593865\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-w6nqn" Nov 25 11:38:50 crc kubenswrapper[4706]: I1125 11:38:50.209610 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/ab2dd029-844e-4783-8fda-bfab6a6d9243-images\") pod \"machine-api-operator-5694c8668f-9z28x\" (UID: \"ab2dd029-844e-4783-8fda-bfab6a6d9243\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-9z28x" Nov 25 11:38:50 crc kubenswrapper[4706]: I1125 11:38:50.209724 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bb17dbfb-8a35-405a-9f44-044252ee8eb4-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-jq6ck\" (UID: \"bb17dbfb-8a35-405a-9f44-044252ee8eb4\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-jq6ck" Nov 25 11:38:50 crc kubenswrapper[4706]: I1125 11:38:50.209825 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/ab2dd029-844e-4783-8fda-bfab6a6d9243-config\") pod \"machine-api-operator-5694c8668f-9z28x\" (UID: \"ab2dd029-844e-4783-8fda-bfab6a6d9243\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-9z28x" Nov 25 11:38:50 crc kubenswrapper[4706]: I1125 11:38:50.209928 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/d4ec8c5d-e4c6-42d4-bf1c-1d3952ce6d7a-etcd-serving-ca\") pod \"apiserver-76f77b778f-jsj27\" (UID: \"d4ec8c5d-e4c6-42d4-bf1c-1d3952ce6d7a\") " pod="openshift-apiserver/apiserver-76f77b778f-jsj27" Nov 25 11:38:50 crc kubenswrapper[4706]: I1125 11:38:50.210031 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c31bc178-49e3-4bb8-a6d0-ca9e27662b9a-serving-cert\") pod \"controller-manager-879f6c89f-zf4pd\" (UID: \"c31bc178-49e3-4bb8-a6d0-ca9e27662b9a\") " pod="openshift-controller-manager/controller-manager-879f6c89f-zf4pd" Nov 25 11:38:50 crc kubenswrapper[4706]: I1125 11:38:50.210139 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f6ce79ff-bc51-4375-bd97-7e6ba29f263d-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-kg9rr\" (UID: \"f6ce79ff-bc51-4375-bd97-7e6ba29f263d\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-kg9rr" Nov 25 11:38:50 crc kubenswrapper[4706]: I1125 11:38:50.210242 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4kgnx\" (UniqueName: \"kubernetes.io/projected/ab2dd029-844e-4783-8fda-bfab6a6d9243-kube-api-access-4kgnx\") pod \"machine-api-operator-5694c8668f-9z28x\" (UID: \"ab2dd029-844e-4783-8fda-bfab6a6d9243\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-9z28x" Nov 25 11:38:50 crc kubenswrapper[4706]: I1125 11:38:50.210367 4706 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/f6ce79ff-bc51-4375-bd97-7e6ba29f263d-audit-policies\") pod \"apiserver-7bbb656c7d-kg9rr\" (UID: \"f6ce79ff-bc51-4375-bd97-7e6ba29f263d\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-kg9rr" Nov 25 11:38:50 crc kubenswrapper[4706]: I1125 11:38:50.210493 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/d4ec8c5d-e4c6-42d4-bf1c-1d3952ce6d7a-encryption-config\") pod \"apiserver-76f77b778f-jsj27\" (UID: \"d4ec8c5d-e4c6-42d4-bf1c-1d3952ce6d7a\") " pod="openshift-apiserver/apiserver-76f77b778f-jsj27" Nov 25 11:38:50 crc kubenswrapper[4706]: I1125 11:38:50.212380 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-htmv8\" (UniqueName: \"kubernetes.io/projected/bb17dbfb-8a35-405a-9f44-044252ee8eb4-kube-api-access-htmv8\") pod \"openshift-controller-manager-operator-756b6f6bc6-jq6ck\" (UID: \"bb17dbfb-8a35-405a-9f44-044252ee8eb4\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-jq6ck" Nov 25 11:38:50 crc kubenswrapper[4706]: I1125 11:38:50.212547 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cd4c256-91b7-4b76-a9d3-6927ea77e61e-config\") pod \"route-controller-manager-6576b87f9c-j7x2j\" (UID: \"8cd4c256-91b7-4b76-a9d3-6927ea77e61e\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-j7x2j" Nov 25 11:38:50 crc kubenswrapper[4706]: I1125 11:38:50.212680 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/d4ec8c5d-e4c6-42d4-bf1c-1d3952ce6d7a-image-import-ca\") pod \"apiserver-76f77b778f-jsj27\" (UID: 
\"d4ec8c5d-e4c6-42d4-bf1c-1d3952ce6d7a\") " pod="openshift-apiserver/apiserver-76f77b778f-jsj27" Nov 25 11:38:50 crc kubenswrapper[4706]: I1125 11:38:50.212796 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/028d4ff3-870d-4002-843f-5381587e28fc-trusted-ca-bundle\") pod \"console-f9d7485db-8f48m\" (UID: \"028d4ff3-870d-4002-843f-5381587e28fc\") " pod="openshift-console/console-f9d7485db-8f48m" Nov 25 11:38:50 crc kubenswrapper[4706]: I1125 11:38:50.212914 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c31bc178-49e3-4bb8-a6d0-ca9e27662b9a-client-ca\") pod \"controller-manager-879f6c89f-zf4pd\" (UID: \"c31bc178-49e3-4bb8-a6d0-ca9e27662b9a\") " pod="openshift-controller-manager/controller-manager-879f6c89f-zf4pd" Nov 25 11:38:50 crc kubenswrapper[4706]: I1125 11:38:50.213051 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9cf8aff4-1c08-49a5-82c9-92ac18f0b46f-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-8rnp5\" (UID: \"9cf8aff4-1c08-49a5-82c9-92ac18f0b46f\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-8rnp5" Nov 25 11:38:50 crc kubenswrapper[4706]: I1125 11:38:50.213165 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9cf8aff4-1c08-49a5-82c9-92ac18f0b46f-config\") pod \"openshift-apiserver-operator-796bbdcf4f-8rnp5\" (UID: \"9cf8aff4-1c08-49a5-82c9-92ac18f0b46f\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-8rnp5" Nov 25 11:38:50 crc kubenswrapper[4706]: I1125 11:38:50.213280 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xfzxx\" (UniqueName: 
\"kubernetes.io/projected/9cf8aff4-1c08-49a5-82c9-92ac18f0b46f-kube-api-access-xfzxx\") pod \"openshift-apiserver-operator-796bbdcf4f-8rnp5\" (UID: \"9cf8aff4-1c08-49a5-82c9-92ac18f0b46f\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-8rnp5" Nov 25 11:38:50 crc kubenswrapper[4706]: I1125 11:38:50.213420 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/028d4ff3-870d-4002-843f-5381587e28fc-service-ca\") pod \"console-f9d7485db-8f48m\" (UID: \"028d4ff3-870d-4002-843f-5381587e28fc\") " pod="openshift-console/console-f9d7485db-8f48m" Nov 25 11:38:50 crc kubenswrapper[4706]: I1125 11:38:50.213542 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ad44dafa-6c78-4773-881b-6f3adeb1a29b-config\") pod \"authentication-operator-69f744f599-qm76l\" (UID: \"ad44dafa-6c78-4773-881b-6f3adeb1a29b\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-qm76l" Nov 25 11:38:50 crc kubenswrapper[4706]: I1125 11:38:50.213651 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x5qrs\" (UniqueName: \"kubernetes.io/projected/8cd4c256-91b7-4b76-a9d3-6927ea77e61e-kube-api-access-x5qrs\") pod \"route-controller-manager-6576b87f9c-j7x2j\" (UID: \"8cd4c256-91b7-4b76-a9d3-6927ea77e61e\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-j7x2j" Nov 25 11:38:50 crc kubenswrapper[4706]: I1125 11:38:50.213764 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/028d4ff3-870d-4002-843f-5381587e28fc-oauth-serving-cert\") pod \"console-f9d7485db-8f48m\" (UID: \"028d4ff3-870d-4002-843f-5381587e28fc\") " pod="openshift-console/console-f9d7485db-8f48m" Nov 25 11:38:50 crc kubenswrapper[4706]: I1125 11:38:50.213877 4706 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/f6ce79ff-bc51-4375-bd97-7e6ba29f263d-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-kg9rr\" (UID: \"f6ce79ff-bc51-4375-bd97-7e6ba29f263d\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-kg9rr" Nov 25 11:38:50 crc kubenswrapper[4706]: I1125 11:38:50.214128 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/d4ec8c5d-e4c6-42d4-bf1c-1d3952ce6d7a-audit\") pod \"apiserver-76f77b778f-jsj27\" (UID: \"d4ec8c5d-e4c6-42d4-bf1c-1d3952ce6d7a\") " pod="openshift-apiserver/apiserver-76f77b778f-jsj27" Nov 25 11:38:50 crc kubenswrapper[4706]: I1125 11:38:50.214257 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f6ce79ff-bc51-4375-bd97-7e6ba29f263d-serving-cert\") pod \"apiserver-7bbb656c7d-kg9rr\" (UID: \"f6ce79ff-bc51-4375-bd97-7e6ba29f263d\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-kg9rr" Nov 25 11:38:50 crc kubenswrapper[4706]: I1125 11:38:50.214424 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bvz6z\" (UniqueName: \"kubernetes.io/projected/d4ec8c5d-e4c6-42d4-bf1c-1d3952ce6d7a-kube-api-access-bvz6z\") pod \"apiserver-76f77b778f-jsj27\" (UID: \"d4ec8c5d-e4c6-42d4-bf1c-1d3952ce6d7a\") " pod="openshift-apiserver/apiserver-76f77b778f-jsj27" Nov 25 11:38:50 crc kubenswrapper[4706]: I1125 11:38:50.214533 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f4bzt\" (UniqueName: \"kubernetes.io/projected/f6ce79ff-bc51-4375-bd97-7e6ba29f263d-kube-api-access-f4bzt\") pod \"apiserver-7bbb656c7d-kg9rr\" (UID: \"f6ce79ff-bc51-4375-bd97-7e6ba29f263d\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-kg9rr" Nov 25 
11:38:50 crc kubenswrapper[4706]: I1125 11:38:50.214664 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/c31bc178-49e3-4bb8-a6d0-ca9e27662b9a-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-zf4pd\" (UID: \"c31bc178-49e3-4bb8-a6d0-ca9e27662b9a\") " pod="openshift-controller-manager/controller-manager-879f6c89f-zf4pd" Nov 25 11:38:50 crc kubenswrapper[4706]: I1125 11:38:50.214785 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8cd4c256-91b7-4b76-a9d3-6927ea77e61e-client-ca\") pod \"route-controller-manager-6576b87f9c-j7x2j\" (UID: \"8cd4c256-91b7-4b76-a9d3-6927ea77e61e\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-j7x2j" Nov 25 11:38:50 crc kubenswrapper[4706]: I1125 11:38:50.214906 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d4ec8c5d-e4c6-42d4-bf1c-1d3952ce6d7a-trusted-ca-bundle\") pod \"apiserver-76f77b778f-jsj27\" (UID: \"d4ec8c5d-e4c6-42d4-bf1c-1d3952ce6d7a\") " pod="openshift-apiserver/apiserver-76f77b778f-jsj27" Nov 25 11:38:50 crc kubenswrapper[4706]: I1125 11:38:50.215113 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ad44dafa-6c78-4773-881b-6f3adeb1a29b-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-qm76l\" (UID: \"ad44dafa-6c78-4773-881b-6f3adeb1a29b\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-qm76l" Nov 25 11:38:50 crc kubenswrapper[4706]: I1125 11:38:50.215231 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/028d4ff3-870d-4002-843f-5381587e28fc-console-oauth-config\") pod \"console-f9d7485db-8f48m\" (UID: 
\"028d4ff3-870d-4002-843f-5381587e28fc\") " pod="openshift-console/console-f9d7485db-8f48m" Nov 25 11:38:50 crc kubenswrapper[4706]: I1125 11:38:50.215362 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f6ce79ff-bc51-4375-bd97-7e6ba29f263d-audit-dir\") pod \"apiserver-7bbb656c7d-kg9rr\" (UID: \"f6ce79ff-bc51-4375-bd97-7e6ba29f263d\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-kg9rr" Nov 25 11:38:50 crc kubenswrapper[4706]: I1125 11:38:50.215539 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d4ec8c5d-e4c6-42d4-bf1c-1d3952ce6d7a-serving-cert\") pod \"apiserver-76f77b778f-jsj27\" (UID: \"d4ec8c5d-e4c6-42d4-bf1c-1d3952ce6d7a\") " pod="openshift-apiserver/apiserver-76f77b778f-jsj27" Nov 25 11:38:50 crc kubenswrapper[4706]: I1125 11:38:50.215749 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cd4c256-91b7-4b76-a9d3-6927ea77e61e-serving-cert\") pod \"route-controller-manager-6576b87f9c-j7x2j\" (UID: \"8cd4c256-91b7-4b76-a9d3-6927ea77e61e\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-j7x2j" Nov 25 11:38:50 crc kubenswrapper[4706]: I1125 11:38:50.209440 4706 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-jg4ng" Nov 25 11:38:50 crc kubenswrapper[4706]: I1125 11:38:50.211558 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ad44dafa-6c78-4773-881b-6f3adeb1a29b-service-ca-bundle\") pod \"authentication-operator-69f744f599-qm76l\" (UID: \"ad44dafa-6c78-4773-881b-6f3adeb1a29b\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-qm76l" Nov 25 11:38:50 crc kubenswrapper[4706]: I1125 11:38:50.215764 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ad44dafa-6c78-4773-881b-6f3adeb1a29b-config\") pod \"authentication-operator-69f744f599-qm76l\" (UID: \"ad44dafa-6c78-4773-881b-6f3adeb1a29b\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-qm76l" Nov 25 11:38:50 crc kubenswrapper[4706]: I1125 11:38:50.215815 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cd4c256-91b7-4b76-a9d3-6927ea77e61e-config\") pod \"route-controller-manager-6576b87f9c-j7x2j\" (UID: \"8cd4c256-91b7-4b76-a9d3-6927ea77e61e\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-j7x2j" Nov 25 11:38:50 crc kubenswrapper[4706]: I1125 11:38:50.209677 4706 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-tf2kg"] Nov 25 11:38:50 crc kubenswrapper[4706]: I1125 11:38:50.215596 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9cf8aff4-1c08-49a5-82c9-92ac18f0b46f-config\") pod \"openshift-apiserver-operator-796bbdcf4f-8rnp5\" (UID: \"9cf8aff4-1c08-49a5-82c9-92ac18f0b46f\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-8rnp5" Nov 25 11:38:50 crc 
kubenswrapper[4706]: I1125 11:38:50.215264 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/028d4ff3-870d-4002-843f-5381587e28fc-oauth-serving-cert\") pod \"console-f9d7485db-8f48m\" (UID: \"028d4ff3-870d-4002-843f-5381587e28fc\") " pod="openshift-console/console-f9d7485db-8f48m" Nov 25 11:38:50 crc kubenswrapper[4706]: I1125 11:38:50.217316 4706 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-fh2jc"] Nov 25 11:38:50 crc kubenswrapper[4706]: I1125 11:38:50.217956 4706 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-fh2jc" Nov 25 11:38:50 crc kubenswrapper[4706]: I1125 11:38:50.215876 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h8j2l\" (UniqueName: \"kubernetes.io/projected/028d4ff3-870d-4002-843f-5381587e28fc-kube-api-access-h8j2l\") pod \"console-f9d7485db-8f48m\" (UID: \"028d4ff3-870d-4002-843f-5381587e28fc\") " pod="openshift-console/console-f9d7485db-8f48m" Nov 25 11:38:50 crc kubenswrapper[4706]: I1125 11:38:50.218173 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/09d713da-8021-4bfa-b39d-bc3399593865-available-featuregates\") pod \"openshift-config-operator-7777fb866f-w6nqn\" (UID: \"09d713da-8021-4bfa-b39d-bc3399593865\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-w6nqn" Nov 25 11:38:50 crc kubenswrapper[4706]: I1125 11:38:50.218206 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/c31bc178-49e3-4bb8-a6d0-ca9e27662b9a-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-zf4pd\" (UID: \"c31bc178-49e3-4bb8-a6d0-ca9e27662b9a\") " 
pod="openshift-controller-manager/controller-manager-879f6c89f-zf4pd" Nov 25 11:38:50 crc kubenswrapper[4706]: I1125 11:38:50.218282 4706 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-tf2kg" Nov 25 11:38:50 crc kubenswrapper[4706]: I1125 11:38:50.218482 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/028d4ff3-870d-4002-843f-5381587e28fc-service-ca\") pod \"console-f9d7485db-8f48m\" (UID: \"028d4ff3-870d-4002-843f-5381587e28fc\") " pod="openshift-console/console-f9d7485db-8f48m" Nov 25 11:38:50 crc kubenswrapper[4706]: I1125 11:38:50.218545 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8cd4c256-91b7-4b76-a9d3-6927ea77e61e-client-ca\") pod \"route-controller-manager-6576b87f9c-j7x2j\" (UID: \"8cd4c256-91b7-4b76-a9d3-6927ea77e61e\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-j7x2j" Nov 25 11:38:50 crc kubenswrapper[4706]: I1125 11:38:50.210784 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Nov 25 11:38:50 crc kubenswrapper[4706]: I1125 11:38:50.212073 4706 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Nov 25 11:38:50 crc kubenswrapper[4706]: I1125 11:38:50.218806 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ad44dafa-6c78-4773-881b-6f3adeb1a29b-serving-cert\") pod \"authentication-operator-69f744f599-qm76l\" (UID: \"ad44dafa-6c78-4773-881b-6f3adeb1a29b\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-qm76l" Nov 25 11:38:50 crc kubenswrapper[4706]: I1125 11:38:50.211857 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"console-config\" (UniqueName: \"kubernetes.io/configmap/028d4ff3-870d-4002-843f-5381587e28fc-console-config\") pod \"console-f9d7485db-8f48m\" (UID: \"028d4ff3-870d-4002-843f-5381587e28fc\") " pod="openshift-console/console-f9d7485db-8f48m" Nov 25 11:38:50 crc kubenswrapper[4706]: I1125 11:38:50.214511 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Nov 25 11:38:50 crc kubenswrapper[4706]: I1125 11:38:50.218213 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c31bc178-49e3-4bb8-a6d0-ca9e27662b9a-config\") pod \"controller-manager-879f6c89f-zf4pd\" (UID: \"c31bc178-49e3-4bb8-a6d0-ca9e27662b9a\") " pod="openshift-controller-manager/controller-manager-879f6c89f-zf4pd" Nov 25 11:38:50 crc kubenswrapper[4706]: I1125 11:38:50.215765 4706 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Nov 25 11:38:50 crc kubenswrapper[4706]: I1125 11:38:50.217251 4706 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Nov 25 11:38:50 crc kubenswrapper[4706]: I1125 11:38:50.219410 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c31bc178-49e3-4bb8-a6d0-ca9e27662b9a-client-ca\") pod \"controller-manager-879f6c89f-zf4pd\" (UID: \"c31bc178-49e3-4bb8-a6d0-ca9e27662b9a\") " pod="openshift-controller-manager/controller-manager-879f6c89f-zf4pd" Nov 25 11:38:50 crc kubenswrapper[4706]: I1125 11:38:50.219467 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/028d4ff3-870d-4002-843f-5381587e28fc-console-serving-cert\") pod \"console-f9d7485db-8f48m\" (UID: \"028d4ff3-870d-4002-843f-5381587e28fc\") " 
pod="openshift-console/console-f9d7485db-8f48m" Nov 25 11:38:50 crc kubenswrapper[4706]: I1125 11:38:50.219704 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/ab2dd029-844e-4783-8fda-bfab6a6d9243-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-9z28x\" (UID: \"ab2dd029-844e-4783-8fda-bfab6a6d9243\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-9z28x" Nov 25 11:38:50 crc kubenswrapper[4706]: I1125 11:38:50.219763 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/f31f7e75-5a0b-4519-bbe7-521544fa61c1-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-q7gsh\" (UID: \"f31f7e75-5a0b-4519-bbe7-521544fa61c1\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-q7gsh" Nov 25 11:38:50 crc kubenswrapper[4706]: I1125 11:38:50.219801 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/f6ce79ff-bc51-4375-bd97-7e6ba29f263d-etcd-client\") pod \"apiserver-7bbb656c7d-kg9rr\" (UID: \"f6ce79ff-bc51-4375-bd97-7e6ba29f263d\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-kg9rr" Nov 25 11:38:50 crc kubenswrapper[4706]: I1125 11:38:50.219836 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zhbdk\" (UniqueName: \"kubernetes.io/projected/f31f7e75-5a0b-4519-bbe7-521544fa61c1-kube-api-access-zhbdk\") pod \"cluster-samples-operator-665b6dd947-q7gsh\" (UID: \"f31f7e75-5a0b-4519-bbe7-521544fa61c1\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-q7gsh" Nov 25 11:38:50 crc kubenswrapper[4706]: I1125 11:38:50.219858 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/ad44dafa-6c78-4773-881b-6f3adeb1a29b-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-qm76l\" (UID: \"ad44dafa-6c78-4773-881b-6f3adeb1a29b\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-qm76l" Nov 25 11:38:50 crc kubenswrapper[4706]: I1125 11:38:50.219870 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d4ec8c5d-e4c6-42d4-bf1c-1d3952ce6d7a-config\") pod \"apiserver-76f77b778f-jsj27\" (UID: \"d4ec8c5d-e4c6-42d4-bf1c-1d3952ce6d7a\") " pod="openshift-apiserver/apiserver-76f77b778f-jsj27" Nov 25 11:38:50 crc kubenswrapper[4706]: I1125 11:38:50.219899 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kmmps\" (UniqueName: \"kubernetes.io/projected/09d713da-8021-4bfa-b39d-bc3399593865-kube-api-access-kmmps\") pod \"openshift-config-operator-7777fb866f-w6nqn\" (UID: \"09d713da-8021-4bfa-b39d-bc3399593865\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-w6nqn" Nov 25 11:38:50 crc kubenswrapper[4706]: I1125 11:38:50.220234 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/d4ec8c5d-e4c6-42d4-bf1c-1d3952ce6d7a-etcd-client\") pod \"apiserver-76f77b778f-jsj27\" (UID: \"d4ec8c5d-e4c6-42d4-bf1c-1d3952ce6d7a\") " pod="openshift-apiserver/apiserver-76f77b778f-jsj27" Nov 25 11:38:50 crc kubenswrapper[4706]: I1125 11:38:50.220317 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bb17dbfb-8a35-405a-9f44-044252ee8eb4-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-jq6ck\" (UID: \"bb17dbfb-8a35-405a-9f44-044252ee8eb4\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-jq6ck" Nov 25 11:38:50 crc 
kubenswrapper[4706]: I1125 11:38:50.220353 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7c27s\" (UniqueName: \"kubernetes.io/projected/bf1352d3-1ee8-4c51-8f45-b9fd8354fd07-kube-api-access-7c27s\") pod \"downloads-7954f5f757-jd66x\" (UID: \"bf1352d3-1ee8-4c51-8f45-b9fd8354fd07\") " pod="openshift-console/downloads-7954f5f757-jd66x" Nov 25 11:38:50 crc kubenswrapper[4706]: I1125 11:38:50.220415 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/d4ec8c5d-e4c6-42d4-bf1c-1d3952ce6d7a-node-pullsecrets\") pod \"apiserver-76f77b778f-jsj27\" (UID: \"d4ec8c5d-e4c6-42d4-bf1c-1d3952ce6d7a\") " pod="openshift-apiserver/apiserver-76f77b778f-jsj27" Nov 25 11:38:50 crc kubenswrapper[4706]: I1125 11:38:50.220474 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jkj4s\" (UniqueName: \"kubernetes.io/projected/ad44dafa-6c78-4773-881b-6f3adeb1a29b-kube-api-access-jkj4s\") pod \"authentication-operator-69f744f599-qm76l\" (UID: \"ad44dafa-6c78-4773-881b-6f3adeb1a29b\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-qm76l" Nov 25 11:38:50 crc kubenswrapper[4706]: I1125 11:38:50.220533 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/d4ec8c5d-e4c6-42d4-bf1c-1d3952ce6d7a-audit-dir\") pod \"apiserver-76f77b778f-jsj27\" (UID: \"d4ec8c5d-e4c6-42d4-bf1c-1d3952ce6d7a\") " pod="openshift-apiserver/apiserver-76f77b778f-jsj27" Nov 25 11:38:50 crc kubenswrapper[4706]: I1125 11:38:50.220630 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/d4ec8c5d-e4c6-42d4-bf1c-1d3952ce6d7a-audit-dir\") pod \"apiserver-76f77b778f-jsj27\" (UID: \"d4ec8c5d-e4c6-42d4-bf1c-1d3952ce6d7a\") " 
pod="openshift-apiserver/apiserver-76f77b778f-jsj27" Nov 25 11:38:50 crc kubenswrapper[4706]: I1125 11:38:50.220646 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/d4ec8c5d-e4c6-42d4-bf1c-1d3952ce6d7a-node-pullsecrets\") pod \"apiserver-76f77b778f-jsj27\" (UID: \"d4ec8c5d-e4c6-42d4-bf1c-1d3952ce6d7a\") " pod="openshift-apiserver/apiserver-76f77b778f-jsj27" Nov 25 11:38:50 crc kubenswrapper[4706]: I1125 11:38:50.221875 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c31bc178-49e3-4bb8-a6d0-ca9e27662b9a-config\") pod \"controller-manager-879f6c89f-zf4pd\" (UID: \"c31bc178-49e3-4bb8-a6d0-ca9e27662b9a\") " pod="openshift-controller-manager/controller-manager-879f6c89f-zf4pd" Nov 25 11:38:50 crc kubenswrapper[4706]: I1125 11:38:50.222807 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/028d4ff3-870d-4002-843f-5381587e28fc-console-oauth-config\") pod \"console-f9d7485db-8f48m\" (UID: \"028d4ff3-870d-4002-843f-5381587e28fc\") " pod="openshift-console/console-f9d7485db-8f48m" Nov 25 11:38:50 crc kubenswrapper[4706]: I1125 11:38:50.223221 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/028d4ff3-870d-4002-843f-5381587e28fc-console-serving-cert\") pod \"console-f9d7485db-8f48m\" (UID: \"028d4ff3-870d-4002-843f-5381587e28fc\") " pod="openshift-console/console-f9d7485db-8f48m" Nov 25 11:38:50 crc kubenswrapper[4706]: I1125 11:38:50.225220 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/028d4ff3-870d-4002-843f-5381587e28fc-trusted-ca-bundle\") pod \"console-f9d7485db-8f48m\" (UID: \"028d4ff3-870d-4002-843f-5381587e28fc\") " 
pod="openshift-console/console-f9d7485db-8f48m" Nov 25 11:38:50 crc kubenswrapper[4706]: I1125 11:38:50.227013 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9cf8aff4-1c08-49a5-82c9-92ac18f0b46f-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-8rnp5\" (UID: \"9cf8aff4-1c08-49a5-82c9-92ac18f0b46f\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-8rnp5" Nov 25 11:38:50 crc kubenswrapper[4706]: I1125 11:38:50.227562 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cd4c256-91b7-4b76-a9d3-6927ea77e61e-serving-cert\") pod \"route-controller-manager-6576b87f9c-j7x2j\" (UID: \"8cd4c256-91b7-4b76-a9d3-6927ea77e61e\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-j7x2j" Nov 25 11:38:50 crc kubenswrapper[4706]: I1125 11:38:50.228280 4706 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-zn9dk"] Nov 25 11:38:50 crc kubenswrapper[4706]: I1125 11:38:50.229378 4706 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-zn9dk" Nov 25 11:38:50 crc kubenswrapper[4706]: I1125 11:38:50.229862 4706 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-rs94g"] Nov 25 11:38:50 crc kubenswrapper[4706]: I1125 11:38:50.230094 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c31bc178-49e3-4bb8-a6d0-ca9e27662b9a-serving-cert\") pod \"controller-manager-879f6c89f-zf4pd\" (UID: \"c31bc178-49e3-4bb8-a6d0-ca9e27662b9a\") " pod="openshift-controller-manager/controller-manager-879f6c89f-zf4pd" Nov 25 11:38:50 crc kubenswrapper[4706]: I1125 11:38:50.230405 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/f31f7e75-5a0b-4519-bbe7-521544fa61c1-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-q7gsh\" (UID: \"f31f7e75-5a0b-4519-bbe7-521544fa61c1\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-q7gsh" Nov 25 11:38:50 crc kubenswrapper[4706]: I1125 11:38:50.231208 4706 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-nqt58"] Nov 25 11:38:50 crc kubenswrapper[4706]: I1125 11:38:50.233931 4706 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-s9mkm"] Nov 25 11:38:50 crc kubenswrapper[4706]: I1125 11:38:50.233988 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Nov 25 11:38:50 crc kubenswrapper[4706]: I1125 11:38:50.232784 4706 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-rs94g" Nov 25 11:38:50 crc kubenswrapper[4706]: I1125 11:38:50.234267 4706 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-nqt58" Nov 25 11:38:50 crc kubenswrapper[4706]: I1125 11:38:50.236260 4706 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-bthtj"] Nov 25 11:38:50 crc kubenswrapper[4706]: I1125 11:38:50.236448 4706 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Nov 25 11:38:50 crc kubenswrapper[4706]: I1125 11:38:50.237029 4706 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-bthtj" Nov 25 11:38:50 crc kubenswrapper[4706]: I1125 11:38:50.237137 4706 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-s9mkm" Nov 25 11:38:50 crc kubenswrapper[4706]: I1125 11:38:50.237218 4706 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-x7b2m"] Nov 25 11:38:50 crc kubenswrapper[4706]: I1125 11:38:50.238085 4706 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-x7b2m" Nov 25 11:38:50 crc kubenswrapper[4706]: I1125 11:38:50.238607 4706 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-vpgtz"] Nov 25 11:38:50 crc kubenswrapper[4706]: I1125 11:38:50.239595 4706 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-vpgtz" Nov 25 11:38:50 crc kubenswrapper[4706]: I1125 11:38:50.240567 4706 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-cs4td"] Nov 25 11:38:50 crc kubenswrapper[4706]: I1125 11:38:50.241894 4706 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-zf4pd"] Nov 25 11:38:50 crc kubenswrapper[4706]: I1125 11:38:50.243073 4706 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-jsj27"] Nov 25 11:38:50 crc kubenswrapper[4706]: I1125 11:38:50.244749 4706 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-9z28x"] Nov 25 11:38:50 crc kubenswrapper[4706]: I1125 11:38:50.245796 4706 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29401170-s4f7r"] Nov 25 11:38:50 crc kubenswrapper[4706]: I1125 11:38:50.246946 4706 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-w6nqn"] Nov 25 11:38:50 crc kubenswrapper[4706]: I1125 11:38:50.247052 4706 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29401170-s4f7r" Nov 25 11:38:50 crc kubenswrapper[4706]: I1125 11:38:50.248735 4706 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-99vrx"] Nov 25 11:38:50 crc kubenswrapper[4706]: I1125 11:38:50.254587 4706 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-qm76l"] Nov 25 11:38:50 crc kubenswrapper[4706]: I1125 11:38:50.256371 4706 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-jhptj"] Nov 25 11:38:50 crc kubenswrapper[4706]: I1125 11:38:50.256833 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Nov 25 11:38:50 crc kubenswrapper[4706]: I1125 11:38:50.260422 4706 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-tgngn"] Nov 25 11:38:50 crc kubenswrapper[4706]: I1125 11:38:50.265052 4706 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-tgngn" Nov 25 11:38:50 crc kubenswrapper[4706]: I1125 11:38:50.266223 4706 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-jg4ng"] Nov 25 11:38:50 crc kubenswrapper[4706]: I1125 11:38:50.268672 4706 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-qlr24"] Nov 25 11:38:50 crc kubenswrapper[4706]: I1125 11:38:50.270094 4706 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-8f48m"] Nov 25 11:38:50 crc kubenswrapper[4706]: I1125 11:38:50.272016 4706 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-jq6ck"] Nov 25 11:38:50 crc kubenswrapper[4706]: I1125 11:38:50.277601 4706 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-7qf2c"] Nov 25 11:38:50 crc kubenswrapper[4706]: I1125 11:38:50.278433 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Nov 25 11:38:50 crc kubenswrapper[4706]: I1125 11:38:50.280193 4706 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-ss2xd"] Nov 25 11:38:50 crc kubenswrapper[4706]: I1125 11:38:50.281743 4706 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-j7x2j"] Nov 25 11:38:50 crc kubenswrapper[4706]: I1125 11:38:50.282998 4706 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-8rnp5"] Nov 25 11:38:50 crc kubenswrapper[4706]: I1125 11:38:50.287518 4706 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-kg9rr"] Nov 25 11:38:50 crc kubenswrapper[4706]: 
I1125 11:38:50.288644 4706 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-q7gsh"] Nov 25 11:38:50 crc kubenswrapper[4706]: I1125 11:38:50.291502 4706 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-jd66x"] Nov 25 11:38:50 crc kubenswrapper[4706]: I1125 11:38:50.292992 4706 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-hhh7q"] Nov 25 11:38:50 crc kubenswrapper[4706]: I1125 11:38:50.294386 4706 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-svsw6"] Nov 25 11:38:50 crc kubenswrapper[4706]: I1125 11:38:50.295648 4706 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-fc942"] Nov 25 11:38:50 crc kubenswrapper[4706]: I1125 11:38:50.297132 4706 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-fh2jc"] Nov 25 11:38:50 crc kubenswrapper[4706]: I1125 11:38:50.298981 4706 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-bthtj"] Nov 25 11:38:50 crc kubenswrapper[4706]: I1125 11:38:50.303735 4706 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-mnv7h"] Nov 25 11:38:50 crc kubenswrapper[4706]: I1125 11:38:50.303948 4706 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-67c5m"] Nov 25 11:38:50 crc kubenswrapper[4706]: I1125 11:38:50.306384 4706 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Nov 25 11:38:50 crc kubenswrapper[4706]: I1125 11:38:50.306475 4706 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-x7b2m"] Nov 25 11:38:50 crc kubenswrapper[4706]: I1125 11:38:50.307962 4706 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-2hpv7"] Nov 25 11:38:50 crc kubenswrapper[4706]: I1125 11:38:50.309333 4706 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-tf2kg"] Nov 25 11:38:50 crc kubenswrapper[4706]: I1125 11:38:50.311388 4706 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-6hgvx"] Nov 25 11:38:50 crc kubenswrapper[4706]: I1125 11:38:50.311742 4706 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-vpgtz"] Nov 25 11:38:50 crc kubenswrapper[4706]: I1125 11:38:50.313401 4706 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-canary/ingress-canary-q466t"] Nov 25 11:38:50 crc kubenswrapper[4706]: I1125 11:38:50.314334 4706 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-canary/ingress-canary-q466t" Nov 25 11:38:50 crc kubenswrapper[4706]: I1125 11:38:50.316242 4706 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-nqt58"] Nov 25 11:38:50 crc kubenswrapper[4706]: I1125 11:38:50.317061 4706 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Nov 25 11:38:50 crc kubenswrapper[4706]: I1125 11:38:50.318564 4706 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-server-446sw"] Nov 25 11:38:50 crc kubenswrapper[4706]: I1125 11:38:50.321071 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f4bzt\" (UniqueName: \"kubernetes.io/projected/f6ce79ff-bc51-4375-bd97-7e6ba29f263d-kube-api-access-f4bzt\") pod \"apiserver-7bbb656c7d-kg9rr\" (UID: \"f6ce79ff-bc51-4375-bd97-7e6ba29f263d\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-kg9rr" Nov 25 11:38:50 crc kubenswrapper[4706]: I1125 11:38:50.321133 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f6ce79ff-bc51-4375-bd97-7e6ba29f263d-audit-dir\") pod \"apiserver-7bbb656c7d-kg9rr\" (UID: \"f6ce79ff-bc51-4375-bd97-7e6ba29f263d\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-kg9rr" Nov 25 11:38:50 crc kubenswrapper[4706]: I1125 11:38:50.321171 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h8j2l\" (UniqueName: \"kubernetes.io/projected/028d4ff3-870d-4002-843f-5381587e28fc-kube-api-access-h8j2l\") pod \"console-f9d7485db-8f48m\" (UID: \"028d4ff3-870d-4002-843f-5381587e28fc\") " pod="openshift-console/console-f9d7485db-8f48m" Nov 25 11:38:50 crc kubenswrapper[4706]: I1125 11:38:50.321196 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/09d713da-8021-4bfa-b39d-bc3399593865-available-featuregates\") pod \"openshift-config-operator-7777fb866f-w6nqn\" (UID: \"09d713da-8021-4bfa-b39d-bc3399593865\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-w6nqn" Nov 25 11:38:50 crc kubenswrapper[4706]: I1125 11:38:50.321221 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/ab2dd029-844e-4783-8fda-bfab6a6d9243-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-9z28x\" (UID: \"ab2dd029-844e-4783-8fda-bfab6a6d9243\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-9z28x" Nov 25 11:38:50 crc kubenswrapper[4706]: I1125 11:38:50.321253 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/49757df3-88b5-4706-8010-139ffb01f41a-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-svsw6\" (UID: \"49757df3-88b5-4706-8010-139ffb01f41a\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-svsw6" Nov 25 11:38:50 crc kubenswrapper[4706]: I1125 11:38:50.321282 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/f6ce79ff-bc51-4375-bd97-7e6ba29f263d-etcd-client\") pod \"apiserver-7bbb656c7d-kg9rr\" (UID: \"f6ce79ff-bc51-4375-bd97-7e6ba29f263d\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-kg9rr" Nov 25 11:38:50 crc kubenswrapper[4706]: I1125 11:38:50.321339 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/49757df3-88b5-4706-8010-139ffb01f41a-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-svsw6\" (UID: \"49757df3-88b5-4706-8010-139ffb01f41a\") " 
pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-svsw6" Nov 25 11:38:50 crc kubenswrapper[4706]: I1125 11:38:50.321369 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kmmps\" (UniqueName: \"kubernetes.io/projected/09d713da-8021-4bfa-b39d-bc3399593865-kube-api-access-kmmps\") pod \"openshift-config-operator-7777fb866f-w6nqn\" (UID: \"09d713da-8021-4bfa-b39d-bc3399593865\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-w6nqn" Nov 25 11:38:50 crc kubenswrapper[4706]: I1125 11:38:50.321401 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bb17dbfb-8a35-405a-9f44-044252ee8eb4-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-jq6ck\" (UID: \"bb17dbfb-8a35-405a-9f44-044252ee8eb4\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-jq6ck" Nov 25 11:38:50 crc kubenswrapper[4706]: I1125 11:38:50.321425 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7c27s\" (UniqueName: \"kubernetes.io/projected/bf1352d3-1ee8-4c51-8f45-b9fd8354fd07-kube-api-access-7c27s\") pod \"downloads-7954f5f757-jd66x\" (UID: \"bf1352d3-1ee8-4c51-8f45-b9fd8354fd07\") " pod="openshift-console/downloads-7954f5f757-jd66x" Nov 25 11:38:50 crc kubenswrapper[4706]: I1125 11:38:50.321489 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/f6ce79ff-bc51-4375-bd97-7e6ba29f263d-encryption-config\") pod \"apiserver-7bbb656c7d-kg9rr\" (UID: \"f6ce79ff-bc51-4375-bd97-7e6ba29f263d\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-kg9rr" Nov 25 11:38:50 crc kubenswrapper[4706]: I1125 11:38:50.321523 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/09d713da-8021-4bfa-b39d-bc3399593865-serving-cert\") pod \"openshift-config-operator-7777fb866f-w6nqn\" (UID: \"09d713da-8021-4bfa-b39d-bc3399593865\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-w6nqn" Nov 25 11:38:50 crc kubenswrapper[4706]: I1125 11:38:50.321547 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/ab2dd029-844e-4783-8fda-bfab6a6d9243-images\") pod \"machine-api-operator-5694c8668f-9z28x\" (UID: \"ab2dd029-844e-4783-8fda-bfab6a6d9243\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-9z28x" Nov 25 11:38:50 crc kubenswrapper[4706]: I1125 11:38:50.321571 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/49757df3-88b5-4706-8010-139ffb01f41a-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-svsw6\" (UID: \"49757df3-88b5-4706-8010-139ffb01f41a\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-svsw6" Nov 25 11:38:50 crc kubenswrapper[4706]: I1125 11:38:50.321601 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bb17dbfb-8a35-405a-9f44-044252ee8eb4-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-jq6ck\" (UID: \"bb17dbfb-8a35-405a-9f44-044252ee8eb4\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-jq6ck" Nov 25 11:38:50 crc kubenswrapper[4706]: I1125 11:38:50.321626 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ab2dd029-844e-4783-8fda-bfab6a6d9243-config\") pod \"machine-api-operator-5694c8668f-9z28x\" (UID: \"ab2dd029-844e-4783-8fda-bfab6a6d9243\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-9z28x" Nov 
25 11:38:50 crc kubenswrapper[4706]: I1125 11:38:50.321662 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f6ce79ff-bc51-4375-bd97-7e6ba29f263d-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-kg9rr\" (UID: \"f6ce79ff-bc51-4375-bd97-7e6ba29f263d\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-kg9rr" Nov 25 11:38:50 crc kubenswrapper[4706]: I1125 11:38:50.321685 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4kgnx\" (UniqueName: \"kubernetes.io/projected/ab2dd029-844e-4783-8fda-bfab6a6d9243-kube-api-access-4kgnx\") pod \"machine-api-operator-5694c8668f-9z28x\" (UID: \"ab2dd029-844e-4783-8fda-bfab6a6d9243\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-9z28x" Nov 25 11:38:50 crc kubenswrapper[4706]: I1125 11:38:50.321711 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/f6ce79ff-bc51-4375-bd97-7e6ba29f263d-audit-policies\") pod \"apiserver-7bbb656c7d-kg9rr\" (UID: \"f6ce79ff-bc51-4375-bd97-7e6ba29f263d\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-kg9rr" Nov 25 11:38:50 crc kubenswrapper[4706]: I1125 11:38:50.321745 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-htmv8\" (UniqueName: \"kubernetes.io/projected/bb17dbfb-8a35-405a-9f44-044252ee8eb4-kube-api-access-htmv8\") pod \"openshift-controller-manager-operator-756b6f6bc6-jq6ck\" (UID: \"bb17dbfb-8a35-405a-9f44-044252ee8eb4\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-jq6ck" Nov 25 11:38:50 crc kubenswrapper[4706]: I1125 11:38:50.321795 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cp7fg\" (UniqueName: 
\"kubernetes.io/projected/49757df3-88b5-4706-8010-139ffb01f41a-kube-api-access-cp7fg\") pod \"cluster-image-registry-operator-dc59b4c8b-svsw6\" (UID: \"49757df3-88b5-4706-8010-139ffb01f41a\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-svsw6" Nov 25 11:38:50 crc kubenswrapper[4706]: I1125 11:38:50.321842 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/f6ce79ff-bc51-4375-bd97-7e6ba29f263d-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-kg9rr\" (UID: \"f6ce79ff-bc51-4375-bd97-7e6ba29f263d\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-kg9rr" Nov 25 11:38:50 crc kubenswrapper[4706]: I1125 11:38:50.321875 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f6ce79ff-bc51-4375-bd97-7e6ba29f263d-serving-cert\") pod \"apiserver-7bbb656c7d-kg9rr\" (UID: \"f6ce79ff-bc51-4375-bd97-7e6ba29f263d\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-kg9rr" Nov 25 11:38:50 crc kubenswrapper[4706]: I1125 11:38:50.323134 4706 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-446sw" Nov 25 11:38:50 crc kubenswrapper[4706]: I1125 11:38:50.323867 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/09d713da-8021-4bfa-b39d-bc3399593865-available-featuregates\") pod \"openshift-config-operator-7777fb866f-w6nqn\" (UID: \"09d713da-8021-4bfa-b39d-bc3399593865\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-w6nqn" Nov 25 11:38:50 crc kubenswrapper[4706]: I1125 11:38:50.324447 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f6ce79ff-bc51-4375-bd97-7e6ba29f263d-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-kg9rr\" (UID: \"f6ce79ff-bc51-4375-bd97-7e6ba29f263d\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-kg9rr" Nov 25 11:38:50 crc kubenswrapper[4706]: I1125 11:38:50.324482 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f6ce79ff-bc51-4375-bd97-7e6ba29f263d-audit-dir\") pod \"apiserver-7bbb656c7d-kg9rr\" (UID: \"f6ce79ff-bc51-4375-bd97-7e6ba29f263d\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-kg9rr" Nov 25 11:38:50 crc kubenswrapper[4706]: I1125 11:38:50.324777 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/f6ce79ff-bc51-4375-bd97-7e6ba29f263d-audit-policies\") pod \"apiserver-7bbb656c7d-kg9rr\" (UID: \"f6ce79ff-bc51-4375-bd97-7e6ba29f263d\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-kg9rr" Nov 25 11:38:50 crc kubenswrapper[4706]: I1125 11:38:50.325464 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/ab2dd029-844e-4783-8fda-bfab6a6d9243-images\") pod \"machine-api-operator-5694c8668f-9z28x\" (UID: 
\"ab2dd029-844e-4783-8fda-bfab6a6d9243\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-9z28x" Nov 25 11:38:50 crc kubenswrapper[4706]: I1125 11:38:50.325477 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bb17dbfb-8a35-405a-9f44-044252ee8eb4-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-jq6ck\" (UID: \"bb17dbfb-8a35-405a-9f44-044252ee8eb4\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-jq6ck" Nov 25 11:38:50 crc kubenswrapper[4706]: I1125 11:38:50.325611 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/f6ce79ff-bc51-4375-bd97-7e6ba29f263d-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-kg9rr\" (UID: \"f6ce79ff-bc51-4375-bd97-7e6ba29f263d\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-kg9rr" Nov 25 11:38:50 crc kubenswrapper[4706]: I1125 11:38:50.325815 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/f6ce79ff-bc51-4375-bd97-7e6ba29f263d-encryption-config\") pod \"apiserver-7bbb656c7d-kg9rr\" (UID: \"f6ce79ff-bc51-4375-bd97-7e6ba29f263d\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-kg9rr" Nov 25 11:38:50 crc kubenswrapper[4706]: I1125 11:38:50.325816 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ab2dd029-844e-4783-8fda-bfab6a6d9243-config\") pod \"machine-api-operator-5694c8668f-9z28x\" (UID: \"ab2dd029-844e-4783-8fda-bfab6a6d9243\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-9z28x" Nov 25 11:38:50 crc kubenswrapper[4706]: I1125 11:38:50.326490 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-api-operator-tls\" (UniqueName: 
\"kubernetes.io/secret/ab2dd029-844e-4783-8fda-bfab6a6d9243-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-9z28x\" (UID: \"ab2dd029-844e-4783-8fda-bfab6a6d9243\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-9z28x" Nov 25 11:38:50 crc kubenswrapper[4706]: I1125 11:38:50.326892 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/f6ce79ff-bc51-4375-bd97-7e6ba29f263d-etcd-client\") pod \"apiserver-7bbb656c7d-kg9rr\" (UID: \"f6ce79ff-bc51-4375-bd97-7e6ba29f263d\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-kg9rr" Nov 25 11:38:50 crc kubenswrapper[4706]: I1125 11:38:50.327405 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f6ce79ff-bc51-4375-bd97-7e6ba29f263d-serving-cert\") pod \"apiserver-7bbb656c7d-kg9rr\" (UID: \"f6ce79ff-bc51-4375-bd97-7e6ba29f263d\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-kg9rr" Nov 25 11:38:50 crc kubenswrapper[4706]: I1125 11:38:50.327459 4706 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-rs94g"] Nov 25 11:38:50 crc kubenswrapper[4706]: I1125 11:38:50.329110 4706 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29401170-s4f7r"] Nov 25 11:38:50 crc kubenswrapper[4706]: I1125 11:38:50.330868 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bb17dbfb-8a35-405a-9f44-044252ee8eb4-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-jq6ck\" (UID: \"bb17dbfb-8a35-405a-9f44-044252ee8eb4\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-jq6ck" Nov 25 11:38:50 crc kubenswrapper[4706]: I1125 11:38:50.331470 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09d713da-8021-4bfa-b39d-bc3399593865-serving-cert\") pod \"openshift-config-operator-7777fb866f-w6nqn\" (UID: \"09d713da-8021-4bfa-b39d-bc3399593865\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-w6nqn" Nov 25 11:38:50 crc kubenswrapper[4706]: I1125 11:38:50.333199 4706 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-s9mkm"] Nov 25 11:38:50 crc kubenswrapper[4706]: I1125 11:38:50.334719 4706 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-zn9dk"] Nov 25 11:38:50 crc kubenswrapper[4706]: I1125 11:38:50.336490 4706 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-tgngn"] Nov 25 11:38:50 crc kubenswrapper[4706]: I1125 11:38:50.337260 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Nov 25 11:38:50 crc kubenswrapper[4706]: I1125 11:38:50.337837 4706 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-q466t"] Nov 25 11:38:50 crc kubenswrapper[4706]: I1125 11:38:50.339014 4706 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/dns-default-wswtg"] Nov 25 11:38:50 crc kubenswrapper[4706]: I1125 11:38:50.340483 4706 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-wswtg"] Nov 25 11:38:50 crc kubenswrapper[4706]: I1125 11:38:50.340660 4706 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/dns-default-wswtg" Nov 25 11:38:50 crc kubenswrapper[4706]: I1125 11:38:50.356592 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Nov 25 11:38:50 crc kubenswrapper[4706]: I1125 11:38:50.377064 4706 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Nov 25 11:38:50 crc kubenswrapper[4706]: I1125 11:38:50.397947 4706 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Nov 25 11:38:50 crc kubenswrapper[4706]: I1125 11:38:50.417105 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Nov 25 11:38:50 crc kubenswrapper[4706]: I1125 11:38:50.422728 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/49757df3-88b5-4706-8010-139ffb01f41a-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-svsw6\" (UID: \"49757df3-88b5-4706-8010-139ffb01f41a\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-svsw6" Nov 25 11:38:50 crc kubenswrapper[4706]: I1125 11:38:50.422811 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cp7fg\" (UniqueName: \"kubernetes.io/projected/49757df3-88b5-4706-8010-139ffb01f41a-kube-api-access-cp7fg\") pod \"cluster-image-registry-operator-dc59b4c8b-svsw6\" (UID: \"49757df3-88b5-4706-8010-139ffb01f41a\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-svsw6" Nov 25 11:38:50 crc kubenswrapper[4706]: I1125 11:38:50.422918 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: 
\"kubernetes.io/projected/49757df3-88b5-4706-8010-139ffb01f41a-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-svsw6\" (UID: \"49757df3-88b5-4706-8010-139ffb01f41a\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-svsw6" Nov 25 11:38:50 crc kubenswrapper[4706]: I1125 11:38:50.422953 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/49757df3-88b5-4706-8010-139ffb01f41a-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-svsw6\" (UID: \"49757df3-88b5-4706-8010-139ffb01f41a\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-svsw6" Nov 25 11:38:50 crc kubenswrapper[4706]: I1125 11:38:50.426435 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/49757df3-88b5-4706-8010-139ffb01f41a-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-svsw6\" (UID: \"49757df3-88b5-4706-8010-139ffb01f41a\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-svsw6" Nov 25 11:38:50 crc kubenswrapper[4706]: I1125 11:38:50.437739 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/49757df3-88b5-4706-8010-139ffb01f41a-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-svsw6\" (UID: \"49757df3-88b5-4706-8010-139ffb01f41a\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-svsw6" Nov 25 11:38:50 crc kubenswrapper[4706]: I1125 11:38:50.438070 4706 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Nov 25 11:38:50 crc kubenswrapper[4706]: I1125 11:38:50.457357 4706 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Nov 25 11:38:50 crc kubenswrapper[4706]: I1125 11:38:50.476872 4706 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Nov 25 11:38:50 crc kubenswrapper[4706]: I1125 11:38:50.496325 4706 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Nov 25 11:38:50 crc kubenswrapper[4706]: I1125 11:38:50.516959 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Nov 25 11:38:50 crc kubenswrapper[4706]: I1125 11:38:50.536940 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Nov 25 11:38:50 crc kubenswrapper[4706]: I1125 11:38:50.558950 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Nov 25 11:38:50 crc kubenswrapper[4706]: I1125 11:38:50.577332 4706 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Nov 25 11:38:50 crc kubenswrapper[4706]: I1125 11:38:50.596227 4706 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Nov 25 11:38:50 crc kubenswrapper[4706]: I1125 11:38:50.616884 4706 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Nov 25 11:38:50 crc kubenswrapper[4706]: I1125 11:38:50.636777 4706 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Nov 25 11:38:50 crc kubenswrapper[4706]: I1125 11:38:50.676687 4706 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Nov 25 11:38:50 crc 
kubenswrapper[4706]: I1125 11:38:50.697038 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Nov 25 11:38:50 crc kubenswrapper[4706]: I1125 11:38:50.717284 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Nov 25 11:38:50 crc kubenswrapper[4706]: I1125 11:38:50.737905 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Nov 25 11:38:50 crc kubenswrapper[4706]: I1125 11:38:50.757390 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Nov 25 11:38:50 crc kubenswrapper[4706]: I1125 11:38:50.777969 4706 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Nov 25 11:38:50 crc kubenswrapper[4706]: I1125 11:38:50.796928 4706 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Nov 25 11:38:50 crc kubenswrapper[4706]: I1125 11:38:50.817492 4706 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Nov 25 11:38:50 crc kubenswrapper[4706]: I1125 11:38:50.837844 4706 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Nov 25 11:38:50 crc kubenswrapper[4706]: I1125 11:38:50.857090 4706 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Nov 25 11:38:50 crc kubenswrapper[4706]: I1125 11:38:50.877514 4706 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Nov 25 11:38:50 crc kubenswrapper[4706]: I1125 11:38:50.897053 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Nov 25 11:38:50 crc kubenswrapper[4706]: I1125 11:38:50.916751 4706 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Nov 25 11:38:50 crc kubenswrapper[4706]: I1125 11:38:50.937403 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Nov 25 11:38:50 crc kubenswrapper[4706]: I1125 11:38:50.957364 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Nov 25 11:38:50 crc kubenswrapper[4706]: I1125 11:38:50.976593 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Nov 25 11:38:50 crc kubenswrapper[4706]: I1125 11:38:50.997438 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Nov 25 11:38:51 crc kubenswrapper[4706]: I1125 11:38:51.017619 4706 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Nov 25 11:38:51 crc kubenswrapper[4706]: I1125 11:38:51.037464 4706 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Nov 25 11:38:51 crc kubenswrapper[4706]: I1125 11:38:51.057359 4706 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Nov 25 11:38:51 crc kubenswrapper[4706]: I1125 11:38:51.077785 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Nov 25 11:38:51 crc kubenswrapper[4706]: I1125 
11:38:51.097071 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Nov 25 11:38:51 crc kubenswrapper[4706]: I1125 11:38:51.117575 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Nov 25 11:38:51 crc kubenswrapper[4706]: I1125 11:38:51.137461 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Nov 25 11:38:51 crc kubenswrapper[4706]: I1125 11:38:51.171376 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sg74s\" (UniqueName: \"kubernetes.io/projected/c31bc178-49e3-4bb8-a6d0-ca9e27662b9a-kube-api-access-sg74s\") pod \"controller-manager-879f6c89f-zf4pd\" (UID: \"c31bc178-49e3-4bb8-a6d0-ca9e27662b9a\") " pod="openshift-controller-manager/controller-manager-879f6c89f-zf4pd" Nov 25 11:38:51 crc kubenswrapper[4706]: E1125 11:38:51.211012 4706 configmap.go:193] Couldn't get configMap openshift-apiserver/etcd-serving-ca: failed to sync configmap cache: timed out waiting for the condition Nov 25 11:38:51 crc kubenswrapper[4706]: E1125 11:38:51.211066 4706 secret.go:188] Couldn't get secret openshift-apiserver/encryption-config-1: failed to sync secret cache: timed out waiting for the condition Nov 25 11:38:51 crc kubenswrapper[4706]: E1125 11:38:51.211153 4706 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d4ec8c5d-e4c6-42d4-bf1c-1d3952ce6d7a-encryption-config podName:d4ec8c5d-e4c6-42d4-bf1c-1d3952ce6d7a nodeName:}" failed. No retries permitted until 2025-11-25 11:38:51.711118884 +0000 UTC m=+140.625676265 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "encryption-config" (UniqueName: "kubernetes.io/secret/d4ec8c5d-e4c6-42d4-bf1c-1d3952ce6d7a-encryption-config") pod "apiserver-76f77b778f-jsj27" (UID: "d4ec8c5d-e4c6-42d4-bf1c-1d3952ce6d7a") : failed to sync secret cache: timed out waiting for the condition Nov 25 11:38:51 crc kubenswrapper[4706]: E1125 11:38:51.211183 4706 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/d4ec8c5d-e4c6-42d4-bf1c-1d3952ce6d7a-etcd-serving-ca podName:d4ec8c5d-e4c6-42d4-bf1c-1d3952ce6d7a nodeName:}" failed. No retries permitted until 2025-11-25 11:38:51.711170636 +0000 UTC m=+140.625728017 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "etcd-serving-ca" (UniqueName: "kubernetes.io/configmap/d4ec8c5d-e4c6-42d4-bf1c-1d3952ce6d7a-etcd-serving-ca") pod "apiserver-76f77b778f-jsj27" (UID: "d4ec8c5d-e4c6-42d4-bf1c-1d3952ce6d7a") : failed to sync configmap cache: timed out waiting for the condition Nov 25 11:38:51 crc kubenswrapper[4706]: I1125 11:38:51.214902 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xfzxx\" (UniqueName: \"kubernetes.io/projected/9cf8aff4-1c08-49a5-82c9-92ac18f0b46f-kube-api-access-xfzxx\") pod \"openshift-apiserver-operator-796bbdcf4f-8rnp5\" (UID: \"9cf8aff4-1c08-49a5-82c9-92ac18f0b46f\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-8rnp5" Nov 25 11:38:51 crc kubenswrapper[4706]: E1125 11:38:51.215747 4706 configmap.go:193] Couldn't get configMap openshift-apiserver/image-import-ca: failed to sync configmap cache: timed out waiting for the condition Nov 25 11:38:51 crc kubenswrapper[4706]: E1125 11:38:51.215851 4706 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/d4ec8c5d-e4c6-42d4-bf1c-1d3952ce6d7a-image-import-ca podName:d4ec8c5d-e4c6-42d4-bf1c-1d3952ce6d7a nodeName:}" failed. 
No retries permitted until 2025-11-25 11:38:51.715830742 +0000 UTC m=+140.630388333 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "image-import-ca" (UniqueName: "kubernetes.io/configmap/d4ec8c5d-e4c6-42d4-bf1c-1d3952ce6d7a-image-import-ca") pod "apiserver-76f77b778f-jsj27" (UID: "d4ec8c5d-e4c6-42d4-bf1c-1d3952ce6d7a") : failed to sync configmap cache: timed out waiting for the condition Nov 25 11:38:51 crc kubenswrapper[4706]: E1125 11:38:51.217226 4706 secret.go:188] Couldn't get secret openshift-apiserver/serving-cert: failed to sync secret cache: timed out waiting for the condition Nov 25 11:38:51 crc kubenswrapper[4706]: E1125 11:38:51.217319 4706 configmap.go:193] Couldn't get configMap openshift-apiserver/audit-1: failed to sync configmap cache: timed out waiting for the condition Nov 25 11:38:51 crc kubenswrapper[4706]: E1125 11:38:51.217367 4706 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d4ec8c5d-e4c6-42d4-bf1c-1d3952ce6d7a-serving-cert podName:d4ec8c5d-e4c6-42d4-bf1c-1d3952ce6d7a nodeName:}" failed. No retries permitted until 2025-11-25 11:38:51.717342763 +0000 UTC m=+140.631900144 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/d4ec8c5d-e4c6-42d4-bf1c-1d3952ce6d7a-serving-cert") pod "apiserver-76f77b778f-jsj27" (UID: "d4ec8c5d-e4c6-42d4-bf1c-1d3952ce6d7a") : failed to sync secret cache: timed out waiting for the condition Nov 25 11:38:51 crc kubenswrapper[4706]: E1125 11:38:51.217401 4706 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/d4ec8c5d-e4c6-42d4-bf1c-1d3952ce6d7a-audit podName:d4ec8c5d-e4c6-42d4-bf1c-1d3952ce6d7a nodeName:}" failed. No retries permitted until 2025-11-25 11:38:51.717376644 +0000 UTC m=+140.631934205 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "audit" (UniqueName: "kubernetes.io/configmap/d4ec8c5d-e4c6-42d4-bf1c-1d3952ce6d7a-audit") pod "apiserver-76f77b778f-jsj27" (UID: "d4ec8c5d-e4c6-42d4-bf1c-1d3952ce6d7a") : failed to sync configmap cache: timed out waiting for the condition Nov 25 11:38:51 crc kubenswrapper[4706]: E1125 11:38:51.218381 4706 configmap.go:193] Couldn't get configMap openshift-apiserver/trusted-ca-bundle: failed to sync configmap cache: timed out waiting for the condition Nov 25 11:38:51 crc kubenswrapper[4706]: E1125 11:38:51.218444 4706 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/d4ec8c5d-e4c6-42d4-bf1c-1d3952ce6d7a-trusted-ca-bundle podName:d4ec8c5d-e4c6-42d4-bf1c-1d3952ce6d7a nodeName:}" failed. No retries permitted until 2025-11-25 11:38:51.718428263 +0000 UTC m=+140.632985834 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/d4ec8c5d-e4c6-42d4-bf1c-1d3952ce6d7a-trusted-ca-bundle") pod "apiserver-76f77b778f-jsj27" (UID: "d4ec8c5d-e4c6-42d4-bf1c-1d3952ce6d7a") : failed to sync configmap cache: timed out waiting for the condition Nov 25 11:38:51 crc kubenswrapper[4706]: E1125 11:38:51.220955 4706 secret.go:188] Couldn't get secret openshift-apiserver/etcd-client: failed to sync secret cache: timed out waiting for the condition Nov 25 11:38:51 crc kubenswrapper[4706]: E1125 11:38:51.221005 4706 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d4ec8c5d-e4c6-42d4-bf1c-1d3952ce6d7a-etcd-client podName:d4ec8c5d-e4c6-42d4-bf1c-1d3952ce6d7a nodeName:}" failed. No retries permitted until 2025-11-25 11:38:51.720994012 +0000 UTC m=+140.635551383 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "etcd-client" (UniqueName: "kubernetes.io/secret/d4ec8c5d-e4c6-42d4-bf1c-1d3952ce6d7a-etcd-client") pod "apiserver-76f77b778f-jsj27" (UID: "d4ec8c5d-e4c6-42d4-bf1c-1d3952ce6d7a") : failed to sync secret cache: timed out waiting for the condition Nov 25 11:38:51 crc kubenswrapper[4706]: E1125 11:38:51.221040 4706 configmap.go:193] Couldn't get configMap openshift-apiserver/config: failed to sync configmap cache: timed out waiting for the condition Nov 25 11:38:51 crc kubenswrapper[4706]: E1125 11:38:51.221060 4706 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/d4ec8c5d-e4c6-42d4-bf1c-1d3952ce6d7a-config podName:d4ec8c5d-e4c6-42d4-bf1c-1d3952ce6d7a nodeName:}" failed. No retries permitted until 2025-11-25 11:38:51.721054194 +0000 UTC m=+140.635611575 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/d4ec8c5d-e4c6-42d4-bf1c-1d3952ce6d7a-config") pod "apiserver-76f77b778f-jsj27" (UID: "d4ec8c5d-e4c6-42d4-bf1c-1d3952ce6d7a") : failed to sync configmap cache: timed out waiting for the condition Nov 25 11:38:51 crc kubenswrapper[4706]: I1125 11:38:51.231257 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x5qrs\" (UniqueName: \"kubernetes.io/projected/8cd4c256-91b7-4b76-a9d3-6927ea77e61e-kube-api-access-x5qrs\") pod \"route-controller-manager-6576b87f9c-j7x2j\" (UID: \"8cd4c256-91b7-4b76-a9d3-6927ea77e61e\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-j7x2j" Nov 25 11:38:51 crc kubenswrapper[4706]: I1125 11:38:51.234673 4706 request.go:700] Waited for 1.01831142s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-storage-version-migrator/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&limit=500&resourceVersion=0 Nov 25 11:38:51 crc 
kubenswrapper[4706]: I1125 11:38:51.236524 4706 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Nov 25 11:38:51 crc kubenswrapper[4706]: I1125 11:38:51.257619 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Nov 25 11:38:51 crc kubenswrapper[4706]: I1125 11:38:51.298095 4706 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Nov 25 11:38:51 crc kubenswrapper[4706]: I1125 11:38:51.316671 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Nov 25 11:38:51 crc kubenswrapper[4706]: I1125 11:38:51.338000 4706 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Nov 25 11:38:51 crc kubenswrapper[4706]: I1125 11:38:51.357565 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Nov 25 11:38:51 crc kubenswrapper[4706]: I1125 11:38:51.377127 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Nov 25 11:38:51 crc kubenswrapper[4706]: I1125 11:38:51.387107 4706 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-zf4pd" Nov 25 11:38:51 crc kubenswrapper[4706]: I1125 11:38:51.397473 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Nov 25 11:38:51 crc kubenswrapper[4706]: I1125 11:38:51.432439 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zhbdk\" (UniqueName: \"kubernetes.io/projected/f31f7e75-5a0b-4519-bbe7-521544fa61c1-kube-api-access-zhbdk\") pod \"cluster-samples-operator-665b6dd947-q7gsh\" (UID: \"f31f7e75-5a0b-4519-bbe7-521544fa61c1\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-q7gsh" Nov 25 11:38:51 crc kubenswrapper[4706]: I1125 11:38:51.446851 4706 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-8rnp5" Nov 25 11:38:51 crc kubenswrapper[4706]: I1125 11:38:51.453088 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jkj4s\" (UniqueName: \"kubernetes.io/projected/ad44dafa-6c78-4773-881b-6f3adeb1a29b-kube-api-access-jkj4s\") pod \"authentication-operator-69f744f599-qm76l\" (UID: \"ad44dafa-6c78-4773-881b-6f3adeb1a29b\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-qm76l" Nov 25 11:38:51 crc kubenswrapper[4706]: I1125 11:38:51.457094 4706 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Nov 25 11:38:51 crc kubenswrapper[4706]: I1125 11:38:51.473970 4706 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-j7x2j" Nov 25 11:38:51 crc kubenswrapper[4706]: I1125 11:38:51.477872 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Nov 25 11:38:51 crc kubenswrapper[4706]: I1125 11:38:51.501379 4706 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-q7gsh" Nov 25 11:38:51 crc kubenswrapper[4706]: I1125 11:38:51.501541 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Nov 25 11:38:51 crc kubenswrapper[4706]: I1125 11:38:51.523203 4706 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Nov 25 11:38:51 crc kubenswrapper[4706]: I1125 11:38:51.537081 4706 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Nov 25 11:38:51 crc kubenswrapper[4706]: I1125 11:38:51.556858 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Nov 25 11:38:51 crc kubenswrapper[4706]: I1125 11:38:51.581415 4706 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Nov 25 11:38:51 crc kubenswrapper[4706]: I1125 11:38:51.597408 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Nov 25 11:38:51 crc kubenswrapper[4706]: I1125 11:38:51.618658 4706 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-zf4pd"] Nov 25 11:38:51 crc kubenswrapper[4706]: I1125 11:38:51.619506 4706 reflector.go:368] Caches populated for *v1.ConfigMap 
from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Nov 25 11:38:51 crc kubenswrapper[4706]: I1125 11:38:51.637000 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Nov 25 11:38:51 crc kubenswrapper[4706]: I1125 11:38:51.650474 4706 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-qm76l" Nov 25 11:38:51 crc kubenswrapper[4706]: I1125 11:38:51.657054 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Nov 25 11:38:51 crc kubenswrapper[4706]: I1125 11:38:51.677203 4706 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Nov 25 11:38:51 crc kubenswrapper[4706]: I1125 11:38:51.680008 4706 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-8rnp5"] Nov 25 11:38:51 crc kubenswrapper[4706]: W1125 11:38:51.693501 4706 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9cf8aff4_1c08_49a5_82c9_92ac18f0b46f.slice/crio-3a35c4fdb3465a16fe1ac9ebfe84e922423455e5b6321a3083a653c8c07f194d WatchSource:0}: Error finding container 3a35c4fdb3465a16fe1ac9ebfe84e922423455e5b6321a3083a653c8c07f194d: Status 404 returned error can't find the container with id 3a35c4fdb3465a16fe1ac9ebfe84e922423455e5b6321a3083a653c8c07f194d Nov 25 11:38:51 crc kubenswrapper[4706]: I1125 11:38:51.697184 4706 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Nov 25 11:38:51 crc kubenswrapper[4706]: I1125 11:38:51.697618 4706 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-j7x2j"] Nov 
25 11:38:51 crc kubenswrapper[4706]: W1125 11:38:51.705730 4706 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8cd4c256_91b7_4b76_a9d3_6927ea77e61e.slice/crio-ce3c60198e11b985d403328021a23d9ba4f0f30ea762a0582de78380240dc2eb WatchSource:0}: Error finding container ce3c60198e11b985d403328021a23d9ba4f0f30ea762a0582de78380240dc2eb: Status 404 returned error can't find the container with id ce3c60198e11b985d403328021a23d9ba4f0f30ea762a0582de78380240dc2eb Nov 25 11:38:51 crc kubenswrapper[4706]: I1125 11:38:51.716292 4706 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Nov 25 11:38:51 crc kubenswrapper[4706]: I1125 11:38:51.737371 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Nov 25 11:38:51 crc kubenswrapper[4706]: I1125 11:38:51.740118 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d4ec8c5d-e4c6-42d4-bf1c-1d3952ce6d7a-config\") pod \"apiserver-76f77b778f-jsj27\" (UID: \"d4ec8c5d-e4c6-42d4-bf1c-1d3952ce6d7a\") " pod="openshift-apiserver/apiserver-76f77b778f-jsj27" Nov 25 11:38:51 crc kubenswrapper[4706]: I1125 11:38:51.740176 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/d4ec8c5d-e4c6-42d4-bf1c-1d3952ce6d7a-etcd-client\") pod \"apiserver-76f77b778f-jsj27\" (UID: \"d4ec8c5d-e4c6-42d4-bf1c-1d3952ce6d7a\") " pod="openshift-apiserver/apiserver-76f77b778f-jsj27" Nov 25 11:38:51 crc kubenswrapper[4706]: I1125 11:38:51.740263 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/d4ec8c5d-e4c6-42d4-bf1c-1d3952ce6d7a-etcd-serving-ca\") pod \"apiserver-76f77b778f-jsj27\" (UID: 
\"d4ec8c5d-e4c6-42d4-bf1c-1d3952ce6d7a\") " pod="openshift-apiserver/apiserver-76f77b778f-jsj27" Nov 25 11:38:51 crc kubenswrapper[4706]: I1125 11:38:51.740345 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/d4ec8c5d-e4c6-42d4-bf1c-1d3952ce6d7a-encryption-config\") pod \"apiserver-76f77b778f-jsj27\" (UID: \"d4ec8c5d-e4c6-42d4-bf1c-1d3952ce6d7a\") " pod="openshift-apiserver/apiserver-76f77b778f-jsj27" Nov 25 11:38:51 crc kubenswrapper[4706]: I1125 11:38:51.740390 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/d4ec8c5d-e4c6-42d4-bf1c-1d3952ce6d7a-image-import-ca\") pod \"apiserver-76f77b778f-jsj27\" (UID: \"d4ec8c5d-e4c6-42d4-bf1c-1d3952ce6d7a\") " pod="openshift-apiserver/apiserver-76f77b778f-jsj27" Nov 25 11:38:51 crc kubenswrapper[4706]: I1125 11:38:51.740451 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/d4ec8c5d-e4c6-42d4-bf1c-1d3952ce6d7a-audit\") pod \"apiserver-76f77b778f-jsj27\" (UID: \"d4ec8c5d-e4c6-42d4-bf1c-1d3952ce6d7a\") " pod="openshift-apiserver/apiserver-76f77b778f-jsj27" Nov 25 11:38:51 crc kubenswrapper[4706]: I1125 11:38:51.740503 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d4ec8c5d-e4c6-42d4-bf1c-1d3952ce6d7a-trusted-ca-bundle\") pod \"apiserver-76f77b778f-jsj27\" (UID: \"d4ec8c5d-e4c6-42d4-bf1c-1d3952ce6d7a\") " pod="openshift-apiserver/apiserver-76f77b778f-jsj27" Nov 25 11:38:51 crc kubenswrapper[4706]: I1125 11:38:51.740556 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d4ec8c5d-e4c6-42d4-bf1c-1d3952ce6d7a-serving-cert\") pod \"apiserver-76f77b778f-jsj27\" (UID: 
\"d4ec8c5d-e4c6-42d4-bf1c-1d3952ce6d7a\") " pod="openshift-apiserver/apiserver-76f77b778f-jsj27" Nov 25 11:38:51 crc kubenswrapper[4706]: I1125 11:38:51.757702 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Nov 25 11:38:51 crc kubenswrapper[4706]: I1125 11:38:51.762807 4706 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-q7gsh"] Nov 25 11:38:51 crc kubenswrapper[4706]: I1125 11:38:51.778034 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Nov 25 11:38:51 crc kubenswrapper[4706]: I1125 11:38:51.799289 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Nov 25 11:38:51 crc kubenswrapper[4706]: I1125 11:38:51.817249 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Nov 25 11:38:51 crc kubenswrapper[4706]: I1125 11:38:51.837023 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Nov 25 11:38:51 crc kubenswrapper[4706]: I1125 11:38:51.844272 4706 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-qm76l"] Nov 25 11:38:51 crc kubenswrapper[4706]: I1125 11:38:51.857190 4706 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Nov 25 11:38:51 crc kubenswrapper[4706]: W1125 11:38:51.857589 4706 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podad44dafa_6c78_4773_881b_6f3adeb1a29b.slice/crio-abf5cff9e995e294fbf164f4705d4bda3f9eaf0c60108b05411a390404f8e13e WatchSource:0}: Error finding container 
abf5cff9e995e294fbf164f4705d4bda3f9eaf0c60108b05411a390404f8e13e: Status 404 returned error can't find the container with id abf5cff9e995e294fbf164f4705d4bda3f9eaf0c60108b05411a390404f8e13e Nov 25 11:38:51 crc kubenswrapper[4706]: I1125 11:38:51.877350 4706 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Nov 25 11:38:51 crc kubenswrapper[4706]: I1125 11:38:51.897104 4706 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Nov 25 11:38:51 crc kubenswrapper[4706]: I1125 11:38:51.917924 4706 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Nov 25 11:38:51 crc kubenswrapper[4706]: I1125 11:38:51.938364 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Nov 25 11:38:51 crc kubenswrapper[4706]: I1125 11:38:51.956988 4706 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Nov 25 11:38:51 crc kubenswrapper[4706]: I1125 11:38:51.977235 4706 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Nov 25 11:38:51 crc kubenswrapper[4706]: I1125 11:38:51.998013 4706 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Nov 25 11:38:52 crc kubenswrapper[4706]: I1125 11:38:52.017091 4706 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Nov 25 11:38:52 crc kubenswrapper[4706]: I1125 11:38:52.036805 4706 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Nov 25 11:38:52 crc kubenswrapper[4706]: I1125 11:38:52.057134 4706 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-ingress-canary"/"canary-serving-cert" Nov 25 11:38:52 crc kubenswrapper[4706]: I1125 11:38:52.077136 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Nov 25 11:38:52 crc kubenswrapper[4706]: I1125 11:38:52.096756 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Nov 25 11:38:52 crc kubenswrapper[4706]: I1125 11:38:52.117785 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Nov 25 11:38:52 crc kubenswrapper[4706]: I1125 11:38:52.138150 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Nov 25 11:38:52 crc kubenswrapper[4706]: I1125 11:38:52.176835 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kmmps\" (UniqueName: \"kubernetes.io/projected/09d713da-8021-4bfa-b39d-bc3399593865-kube-api-access-kmmps\") pod \"openshift-config-operator-7777fb866f-w6nqn\" (UID: \"09d713da-8021-4bfa-b39d-bc3399593865\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-w6nqn" Nov 25 11:38:52 crc kubenswrapper[4706]: I1125 11:38:52.193463 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-htmv8\" (UniqueName: \"kubernetes.io/projected/bb17dbfb-8a35-405a-9f44-044252ee8eb4-kube-api-access-htmv8\") pod \"openshift-controller-manager-operator-756b6f6bc6-jq6ck\" (UID: \"bb17dbfb-8a35-405a-9f44-044252ee8eb4\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-jq6ck" Nov 25 11:38:52 crc kubenswrapper[4706]: I1125 11:38:52.212263 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4kgnx\" (UniqueName: 
\"kubernetes.io/projected/ab2dd029-844e-4783-8fda-bfab6a6d9243-kube-api-access-4kgnx\") pod \"machine-api-operator-5694c8668f-9z28x\" (UID: \"ab2dd029-844e-4783-8fda-bfab6a6d9243\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-9z28x" Nov 25 11:38:52 crc kubenswrapper[4706]: I1125 11:38:52.226921 4706 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-w6nqn" Nov 25 11:38:52 crc kubenswrapper[4706]: I1125 11:38:52.233865 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h8j2l\" (UniqueName: \"kubernetes.io/projected/028d4ff3-870d-4002-843f-5381587e28fc-kube-api-access-h8j2l\") pod \"console-f9d7485db-8f48m\" (UID: \"028d4ff3-870d-4002-843f-5381587e28fc\") " pod="openshift-console/console-f9d7485db-8f48m" Nov 25 11:38:52 crc kubenswrapper[4706]: I1125 11:38:52.234918 4706 request.go:700] Waited for 1.910400912s due to client-side throttling, not priority and fairness, request: POST:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-oauth-apiserver/serviceaccounts/oauth-apiserver-sa/token Nov 25 11:38:52 crc kubenswrapper[4706]: I1125 11:38:52.260880 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f4bzt\" (UniqueName: \"kubernetes.io/projected/f6ce79ff-bc51-4375-bd97-7e6ba29f263d-kube-api-access-f4bzt\") pod \"apiserver-7bbb656c7d-kg9rr\" (UID: \"f6ce79ff-bc51-4375-bd97-7e6ba29f263d\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-kg9rr" Nov 25 11:38:52 crc kubenswrapper[4706]: I1125 11:38:52.278256 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Nov 25 11:38:52 crc kubenswrapper[4706]: I1125 11:38:52.279717 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7c27s\" (UniqueName: \"kubernetes.io/projected/bf1352d3-1ee8-4c51-8f45-b9fd8354fd07-kube-api-access-7c27s\") pod 
\"downloads-7954f5f757-jd66x\" (UID: \"bf1352d3-1ee8-4c51-8f45-b9fd8354fd07\") " pod="openshift-console/downloads-7954f5f757-jd66x" Nov 25 11:38:52 crc kubenswrapper[4706]: E1125 11:38:52.290678 4706 projected.go:288] Couldn't get configMap openshift-apiserver/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Nov 25 11:38:52 crc kubenswrapper[4706]: I1125 11:38:52.297332 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Nov 25 11:38:52 crc kubenswrapper[4706]: I1125 11:38:52.317121 4706 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Nov 25 11:38:52 crc kubenswrapper[4706]: I1125 11:38:52.355599 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cp7fg\" (UniqueName: \"kubernetes.io/projected/49757df3-88b5-4706-8010-139ffb01f41a-kube-api-access-cp7fg\") pod \"cluster-image-registry-operator-dc59b4c8b-svsw6\" (UID: \"49757df3-88b5-4706-8010-139ffb01f41a\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-svsw6" Nov 25 11:38:52 crc kubenswrapper[4706]: I1125 11:38:52.363693 4706 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-jq6ck" Nov 25 11:38:52 crc kubenswrapper[4706]: I1125 11:38:52.373282 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/49757df3-88b5-4706-8010-139ffb01f41a-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-svsw6\" (UID: \"49757df3-88b5-4706-8010-139ffb01f41a\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-svsw6" Nov 25 11:38:52 crc kubenswrapper[4706]: I1125 11:38:52.394120 4706 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-8f48m" Nov 25 11:38:52 crc kubenswrapper[4706]: I1125 11:38:52.398150 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Nov 25 11:38:52 crc kubenswrapper[4706]: I1125 11:38:52.404767 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/d4ec8c5d-e4c6-42d4-bf1c-1d3952ce6d7a-encryption-config\") pod \"apiserver-76f77b778f-jsj27\" (UID: \"d4ec8c5d-e4c6-42d4-bf1c-1d3952ce6d7a\") " pod="openshift-apiserver/apiserver-76f77b778f-jsj27" Nov 25 11:38:52 crc kubenswrapper[4706]: I1125 11:38:52.410733 4706 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-kg9rr" Nov 25 11:38:52 crc kubenswrapper[4706]: I1125 11:38:52.419975 4706 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-jd66x" Nov 25 11:38:52 crc kubenswrapper[4706]: I1125 11:38:52.423906 4706 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Nov 25 11:38:52 crc kubenswrapper[4706]: I1125 11:38:52.432455 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d4ec8c5d-e4c6-42d4-bf1c-1d3952ce6d7a-trusted-ca-bundle\") pod \"apiserver-76f77b778f-jsj27\" (UID: \"d4ec8c5d-e4c6-42d4-bf1c-1d3952ce6d7a\") " pod="openshift-apiserver/apiserver-76f77b778f-jsj27" Nov 25 11:38:52 crc kubenswrapper[4706]: I1125 11:38:52.438493 4706 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Nov 25 11:38:52 crc kubenswrapper[4706]: I1125 11:38:52.443066 4706 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-w6nqn"] Nov 25 11:38:52 crc kubenswrapper[4706]: I1125 
11:38:52.443519 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/d4ec8c5d-e4c6-42d4-bf1c-1d3952ce6d7a-audit\") pod \"apiserver-76f77b778f-jsj27\" (UID: \"d4ec8c5d-e4c6-42d4-bf1c-1d3952ce6d7a\") " pod="openshift-apiserver/apiserver-76f77b778f-jsj27" Nov 25 11:38:52 crc kubenswrapper[4706]: W1125 11:38:52.450844 4706 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod09d713da_8021_4bfa_b39d_bc3399593865.slice/crio-8c315cdb6fe61ce862adbc47e91216f61f33dff92e20a76f65e3ab56dd12d64b WatchSource:0}: Error finding container 8c315cdb6fe61ce862adbc47e91216f61f33dff92e20a76f65e3ab56dd12d64b: Status 404 returned error can't find the container with id 8c315cdb6fe61ce862adbc47e91216f61f33dff92e20a76f65e3ab56dd12d64b Nov 25 11:38:52 crc kubenswrapper[4706]: I1125 11:38:52.466738 4706 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-svsw6" Nov 25 11:38:52 crc kubenswrapper[4706]: I1125 11:38:52.477430 4706 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Nov 25 11:38:52 crc kubenswrapper[4706]: I1125 11:38:52.480973 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d4ec8c5d-e4c6-42d4-bf1c-1d3952ce6d7a-config\") pod \"apiserver-76f77b778f-jsj27\" (UID: \"d4ec8c5d-e4c6-42d4-bf1c-1d3952ce6d7a\") " pod="openshift-apiserver/apiserver-76f77b778f-jsj27" Nov 25 11:38:52 crc kubenswrapper[4706]: I1125 11:38:52.498080 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Nov 25 11:38:52 crc kubenswrapper[4706]: I1125 11:38:52.509630 4706 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-9z28x" Nov 25 11:38:52 crc kubenswrapper[4706]: I1125 11:38:52.509919 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/d4ec8c5d-e4c6-42d4-bf1c-1d3952ce6d7a-etcd-client\") pod \"apiserver-76f77b778f-jsj27\" (UID: \"d4ec8c5d-e4c6-42d4-bf1c-1d3952ce6d7a\") " pod="openshift-apiserver/apiserver-76f77b778f-jsj27" Nov 25 11:38:52 crc kubenswrapper[4706]: I1125 11:38:52.517146 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Nov 25 11:38:52 crc kubenswrapper[4706]: I1125 11:38:52.528943 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d4ec8c5d-e4c6-42d4-bf1c-1d3952ce6d7a-serving-cert\") pod \"apiserver-76f77b778f-jsj27\" (UID: \"d4ec8c5d-e4c6-42d4-bf1c-1d3952ce6d7a\") " pod="openshift-apiserver/apiserver-76f77b778f-jsj27" Nov 25 11:38:52 crc kubenswrapper[4706]: I1125 11:38:52.537359 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Nov 25 11:38:52 crc kubenswrapper[4706]: I1125 11:38:52.555036 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h6qlz\" (UniqueName: \"kubernetes.io/projected/daffec68-fec5-4f3b-9302-4b736b09fc9c-kube-api-access-h6qlz\") pod \"console-operator-58897d9998-qlr24\" (UID: \"daffec68-fec5-4f3b-9302-4b736b09fc9c\") " pod="openshift-console-operator/console-operator-58897d9998-qlr24" Nov 25 11:38:52 crc kubenswrapper[4706]: I1125 11:38:52.555082 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/96c3697f-cf07-44a2-af83-c6aae61f04f9-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-67c5m\" (UID: 
\"96c3697f-cf07-44a2-af83-c6aae61f04f9\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-67c5m" Nov 25 11:38:52 crc kubenswrapper[4706]: I1125 11:38:52.555133 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/239de662-d89b-4e6e-a970-56811041192f-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-ss2xd\" (UID: \"239de662-d89b-4e6e-a970-56811041192f\") " pod="openshift-authentication/oauth-openshift-558db77b4-ss2xd" Nov 25 11:38:52 crc kubenswrapper[4706]: I1125 11:38:52.555172 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/239de662-d89b-4e6e-a970-56811041192f-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-ss2xd\" (UID: \"239de662-d89b-4e6e-a970-56811041192f\") " pod="openshift-authentication/oauth-openshift-558db77b4-ss2xd" Nov 25 11:38:52 crc kubenswrapper[4706]: I1125 11:38:52.555194 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/198b8b13-3d25-4fbb-81af-a2a39186b64d-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-cs4td\" (UID: \"198b8b13-3d25-4fbb-81af-a2a39186b64d\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-cs4td" Nov 25 11:38:52 crc kubenswrapper[4706]: I1125 11:38:52.555213 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/33afcb8d-d045-4897-af65-56b622cdfa58-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-6hgvx\" (UID: \"33afcb8d-d045-4897-af65-56b622cdfa58\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-6hgvx" Nov 25 11:38:52 crc 
kubenswrapper[4706]: I1125 11:38:52.555248 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/3eaaf4f5-59b0-4ab7-a865-e962b59f0584-auth-proxy-config\") pod \"machine-approver-56656f9798-d9vjp\" (UID: \"3eaaf4f5-59b0-4ab7-a865-e962b59f0584\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-d9vjp" Nov 25 11:38:52 crc kubenswrapper[4706]: I1125 11:38:52.555281 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5hfdx\" (UniqueName: \"kubernetes.io/projected/f3c75c9b-c79d-4a63-8b0f-c7b474ad4b66-kube-api-access-5hfdx\") pod \"image-registry-697d97f7c8-7qf2c\" (UID: \"f3c75c9b-c79d-4a63-8b0f-c7b474ad4b66\") " pod="openshift-image-registry/image-registry-697d97f7c8-7qf2c" Nov 25 11:38:52 crc kubenswrapper[4706]: I1125 11:38:52.555319 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/daffec68-fec5-4f3b-9302-4b736b09fc9c-serving-cert\") pod \"console-operator-58897d9998-qlr24\" (UID: \"daffec68-fec5-4f3b-9302-4b736b09fc9c\") " pod="openshift-console-operator/console-operator-58897d9998-qlr24" Nov 25 11:38:52 crc kubenswrapper[4706]: I1125 11:38:52.555341 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9z5nk\" (UniqueName: \"kubernetes.io/projected/7f36936f-00b7-4fde-9c95-8fb3433aba0a-kube-api-access-9z5nk\") pod \"migrator-59844c95c7-jg4ng\" (UID: \"7f36936f-00b7-4fde-9c95-8fb3433aba0a\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-jg4ng" Nov 25 11:38:52 crc kubenswrapper[4706]: I1125 11:38:52.555403 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: 
\"kubernetes.io/secret/f3c75c9b-c79d-4a63-8b0f-c7b474ad4b66-installation-pull-secrets\") pod \"image-registry-697d97f7c8-7qf2c\" (UID: \"f3c75c9b-c79d-4a63-8b0f-c7b474ad4b66\") " pod="openshift-image-registry/image-registry-697d97f7c8-7qf2c" Nov 25 11:38:52 crc kubenswrapper[4706]: I1125 11:38:52.555424 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tw47q\" (UniqueName: \"kubernetes.io/projected/01b7a9e5-be6c-4a8e-9279-62eaf90e745d-kube-api-access-tw47q\") pod \"ingress-operator-5b745b69d9-jhptj\" (UID: \"01b7a9e5-be6c-4a8e-9279-62eaf90e745d\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-jhptj" Nov 25 11:38:52 crc kubenswrapper[4706]: I1125 11:38:52.555445 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/704d8383-2f51-4244-8a2a-3477cb15f23f-serving-cert\") pod \"etcd-operator-b45778765-2hpv7\" (UID: \"704d8383-2f51-4244-8a2a-3477cb15f23f\") " pod="openshift-etcd-operator/etcd-operator-b45778765-2hpv7" Nov 25 11:38:52 crc kubenswrapper[4706]: I1125 11:38:52.555467 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/44180138-81cd-45b3-b14e-c21819b16645-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-fc942\" (UID: \"44180138-81cd-45b3-b14e-c21819b16645\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-fc942" Nov 25 11:38:52 crc kubenswrapper[4706]: I1125 11:38:52.555489 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/239de662-d89b-4e6e-a970-56811041192f-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-ss2xd\" (UID: 
\"239de662-d89b-4e6e-a970-56811041192f\") " pod="openshift-authentication/oauth-openshift-558db77b4-ss2xd" Nov 25 11:38:52 crc kubenswrapper[4706]: I1125 11:38:52.555509 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dcv28\" (UniqueName: \"kubernetes.io/projected/239de662-d89b-4e6e-a970-56811041192f-kube-api-access-dcv28\") pod \"oauth-openshift-558db77b4-ss2xd\" (UID: \"239de662-d89b-4e6e-a970-56811041192f\") " pod="openshift-authentication/oauth-openshift-558db77b4-ss2xd" Nov 25 11:38:52 crc kubenswrapper[4706]: I1125 11:38:52.555531 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/44180138-81cd-45b3-b14e-c21819b16645-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-fc942\" (UID: \"44180138-81cd-45b3-b14e-c21819b16645\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-fc942" Nov 25 11:38:52 crc kubenswrapper[4706]: I1125 11:38:52.555579 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/ab6319ba-e125-4775-83c3-c5624951d634-stats-auth\") pod \"router-default-5444994796-22mnp\" (UID: \"ab6319ba-e125-4775-83c3-c5624951d634\") " pod="openshift-ingress/router-default-5444994796-22mnp" Nov 25 11:38:52 crc kubenswrapper[4706]: I1125 11:38:52.555599 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/daffec68-fec5-4f3b-9302-4b736b09fc9c-trusted-ca\") pod \"console-operator-58897d9998-qlr24\" (UID: \"daffec68-fec5-4f3b-9302-4b736b09fc9c\") " pod="openshift-console-operator/console-operator-58897d9998-qlr24" Nov 25 11:38:52 crc kubenswrapper[4706]: I1125 11:38:52.555848 4706 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/198b8b13-3d25-4fbb-81af-a2a39186b64d-proxy-tls\") pod \"machine-config-controller-84d6567774-cs4td\" (UID: \"198b8b13-3d25-4fbb-81af-a2a39186b64d\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-cs4td" Nov 25 11:38:52 crc kubenswrapper[4706]: I1125 11:38:52.555917 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/01b7a9e5-be6c-4a8e-9279-62eaf90e745d-metrics-tls\") pod \"ingress-operator-5b745b69d9-jhptj\" (UID: \"01b7a9e5-be6c-4a8e-9279-62eaf90e745d\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-jhptj" Nov 25 11:38:52 crc kubenswrapper[4706]: I1125 11:38:52.555999 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/01b7a9e5-be6c-4a8e-9279-62eaf90e745d-bound-sa-token\") pod \"ingress-operator-5b745b69d9-jhptj\" (UID: \"01b7a9e5-be6c-4a8e-9279-62eaf90e745d\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-jhptj" Nov 25 11:38:52 crc kubenswrapper[4706]: I1125 11:38:52.556029 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-krn2d\" (UniqueName: \"kubernetes.io/projected/825f088d-44aa-4f48-b95d-6245da5b1775-kube-api-access-krn2d\") pod \"control-plane-machine-set-operator-78cbb6b69f-hhh7q\" (UID: \"825f088d-44aa-4f48-b95d-6245da5b1775\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-hhh7q" Nov 25 11:38:52 crc kubenswrapper[4706]: I1125 11:38:52.556057 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8wstr\" (UniqueName: \"kubernetes.io/projected/704d8383-2f51-4244-8a2a-3477cb15f23f-kube-api-access-8wstr\") pod 
\"etcd-operator-b45778765-2hpv7\" (UID: \"704d8383-2f51-4244-8a2a-3477cb15f23f\") " pod="openshift-etcd-operator/etcd-operator-b45778765-2hpv7" Nov 25 11:38:52 crc kubenswrapper[4706]: I1125 11:38:52.556117 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/daffec68-fec5-4f3b-9302-4b736b09fc9c-config\") pod \"console-operator-58897d9998-qlr24\" (UID: \"daffec68-fec5-4f3b-9302-4b736b09fc9c\") " pod="openshift-console-operator/console-operator-58897d9998-qlr24" Nov 25 11:38:52 crc kubenswrapper[4706]: I1125 11:38:52.556173 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/239de662-d89b-4e6e-a970-56811041192f-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-ss2xd\" (UID: \"239de662-d89b-4e6e-a970-56811041192f\") " pod="openshift-authentication/oauth-openshift-558db77b4-ss2xd" Nov 25 11:38:52 crc kubenswrapper[4706]: I1125 11:38:52.556210 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/96c3697f-cf07-44a2-af83-c6aae61f04f9-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-67c5m\" (UID: \"96c3697f-cf07-44a2-af83-c6aae61f04f9\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-67c5m" Nov 25 11:38:52 crc kubenswrapper[4706]: I1125 11:38:52.556246 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/f3c75c9b-c79d-4a63-8b0f-c7b474ad4b66-ca-trust-extracted\") pod \"image-registry-697d97f7c8-7qf2c\" (UID: \"f3c75c9b-c79d-4a63-8b0f-c7b474ad4b66\") " pod="openshift-image-registry/image-registry-697d97f7c8-7qf2c" Nov 25 11:38:52 crc kubenswrapper[4706]: I1125 11:38:52.556282 4706 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/ab6319ba-e125-4775-83c3-c5624951d634-default-certificate\") pod \"router-default-5444994796-22mnp\" (UID: \"ab6319ba-e125-4775-83c3-c5624951d634\") " pod="openshift-ingress/router-default-5444994796-22mnp" Nov 25 11:38:52 crc kubenswrapper[4706]: I1125 11:38:52.556365 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/f3c75c9b-c79d-4a63-8b0f-c7b474ad4b66-bound-sa-token\") pod \"image-registry-697d97f7c8-7qf2c\" (UID: \"f3c75c9b-c79d-4a63-8b0f-c7b474ad4b66\") " pod="openshift-image-registry/image-registry-697d97f7c8-7qf2c" Nov 25 11:38:52 crc kubenswrapper[4706]: I1125 11:38:52.556419 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/f3c75c9b-c79d-4a63-8b0f-c7b474ad4b66-trusted-ca\") pod \"image-registry-697d97f7c8-7qf2c\" (UID: \"f3c75c9b-c79d-4a63-8b0f-c7b474ad4b66\") " pod="openshift-image-registry/image-registry-697d97f7c8-7qf2c" Nov 25 11:38:52 crc kubenswrapper[4706]: I1125 11:38:52.556449 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/239de662-d89b-4e6e-a970-56811041192f-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-ss2xd\" (UID: \"239de662-d89b-4e6e-a970-56811041192f\") " pod="openshift-authentication/oauth-openshift-558db77b4-ss2xd" Nov 25 11:38:52 crc kubenswrapper[4706]: I1125 11:38:52.556472 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/704d8383-2f51-4244-8a2a-3477cb15f23f-etcd-client\") pod \"etcd-operator-b45778765-2hpv7\" (UID: \"704d8383-2f51-4244-8a2a-3477cb15f23f\") 
" pod="openshift-etcd-operator/etcd-operator-b45778765-2hpv7" Nov 25 11:38:52 crc kubenswrapper[4706]: I1125 11:38:52.556500 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ab6319ba-e125-4775-83c3-c5624951d634-service-ca-bundle\") pod \"router-default-5444994796-22mnp\" (UID: \"ab6319ba-e125-4775-83c3-c5624951d634\") " pod="openshift-ingress/router-default-5444994796-22mnp" Nov 25 11:38:52 crc kubenswrapper[4706]: I1125 11:38:52.556531 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3eaaf4f5-59b0-4ab7-a865-e962b59f0584-config\") pod \"machine-approver-56656f9798-d9vjp\" (UID: \"3eaaf4f5-59b0-4ab7-a865-e962b59f0584\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-d9vjp" Nov 25 11:38:52 crc kubenswrapper[4706]: I1125 11:38:52.556590 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/01b7a9e5-be6c-4a8e-9279-62eaf90e745d-trusted-ca\") pod \"ingress-operator-5b745b69d9-jhptj\" (UID: \"01b7a9e5-be6c-4a8e-9279-62eaf90e745d\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-jhptj" Nov 25 11:38:52 crc kubenswrapper[4706]: I1125 11:38:52.556632 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/f3c75c9b-c79d-4a63-8b0f-c7b474ad4b66-registry-tls\") pod \"image-registry-697d97f7c8-7qf2c\" (UID: \"f3c75c9b-c79d-4a63-8b0f-c7b474ad4b66\") " pod="openshift-image-registry/image-registry-697d97f7c8-7qf2c" Nov 25 11:38:52 crc kubenswrapper[4706]: I1125 11:38:52.556654 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: 
\"kubernetes.io/host-path/239de662-d89b-4e6e-a970-56811041192f-audit-dir\") pod \"oauth-openshift-558db77b4-ss2xd\" (UID: \"239de662-d89b-4e6e-a970-56811041192f\") " pod="openshift-authentication/oauth-openshift-558db77b4-ss2xd" Nov 25 11:38:52 crc kubenswrapper[4706]: I1125 11:38:52.556674 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/239de662-d89b-4e6e-a970-56811041192f-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-ss2xd\" (UID: \"239de662-d89b-4e6e-a970-56811041192f\") " pod="openshift-authentication/oauth-openshift-558db77b4-ss2xd" Nov 25 11:38:52 crc kubenswrapper[4706]: I1125 11:38:52.556692 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ab6319ba-e125-4775-83c3-c5624951d634-metrics-certs\") pod \"router-default-5444994796-22mnp\" (UID: \"ab6319ba-e125-4775-83c3-c5624951d634\") " pod="openshift-ingress/router-default-5444994796-22mnp" Nov 25 11:38:52 crc kubenswrapper[4706]: I1125 11:38:52.556720 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/704d8383-2f51-4244-8a2a-3477cb15f23f-config\") pod \"etcd-operator-b45778765-2hpv7\" (UID: \"704d8383-2f51-4244-8a2a-3477cb15f23f\") " pod="openshift-etcd-operator/etcd-operator-b45778765-2hpv7" Nov 25 11:38:52 crc kubenswrapper[4706]: I1125 11:38:52.556758 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7qf2c\" (UID: \"f3c75c9b-c79d-4a63-8b0f-c7b474ad4b66\") " pod="openshift-image-registry/image-registry-697d97f7c8-7qf2c" Nov 25 11:38:52 crc 
kubenswrapper[4706]: I1125 11:38:52.556779 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/33afcb8d-d045-4897-af65-56b622cdfa58-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-6hgvx\" (UID: \"33afcb8d-d045-4897-af65-56b622cdfa58\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-6hgvx" Nov 25 11:38:52 crc kubenswrapper[4706]: I1125 11:38:52.556799 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/239de662-d89b-4e6e-a970-56811041192f-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-ss2xd\" (UID: \"239de662-d89b-4e6e-a970-56811041192f\") " pod="openshift-authentication/oauth-openshift-558db77b4-ss2xd" Nov 25 11:38:52 crc kubenswrapper[4706]: I1125 11:38:52.556842 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/239de662-d89b-4e6e-a970-56811041192f-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-ss2xd\" (UID: \"239de662-d89b-4e6e-a970-56811041192f\") " pod="openshift-authentication/oauth-openshift-558db77b4-ss2xd" Nov 25 11:38:52 crc kubenswrapper[4706]: I1125 11:38:52.556864 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/96c3697f-cf07-44a2-af83-c6aae61f04f9-config\") pod \"kube-apiserver-operator-766d6c64bb-67c5m\" (UID: \"96c3697f-cf07-44a2-af83-c6aae61f04f9\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-67c5m" Nov 25 11:38:52 crc kubenswrapper[4706]: I1125 11:38:52.556899 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/f3c75c9b-c79d-4a63-8b0f-c7b474ad4b66-registry-certificates\") pod \"image-registry-697d97f7c8-7qf2c\" (UID: \"f3c75c9b-c79d-4a63-8b0f-c7b474ad4b66\") " pod="openshift-image-registry/image-registry-697d97f7c8-7qf2c" Nov 25 11:38:52 crc kubenswrapper[4706]: I1125 11:38:52.556922 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/239de662-d89b-4e6e-a970-56811041192f-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-ss2xd\" (UID: \"239de662-d89b-4e6e-a970-56811041192f\") " pod="openshift-authentication/oauth-openshift-558db77b4-ss2xd" Nov 25 11:38:52 crc kubenswrapper[4706]: I1125 11:38:52.556942 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/704d8383-2f51-4244-8a2a-3477cb15f23f-etcd-ca\") pod \"etcd-operator-b45778765-2hpv7\" (UID: \"704d8383-2f51-4244-8a2a-3477cb15f23f\") " pod="openshift-etcd-operator/etcd-operator-b45778765-2hpv7" Nov 25 11:38:52 crc kubenswrapper[4706]: I1125 11:38:52.556960 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/3eaaf4f5-59b0-4ab7-a865-e962b59f0584-machine-approver-tls\") pod \"machine-approver-56656f9798-d9vjp\" (UID: \"3eaaf4f5-59b0-4ab7-a865-e962b59f0584\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-d9vjp" Nov 25 11:38:52 crc kubenswrapper[4706]: I1125 11:38:52.556983 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-45hd5\" (UniqueName: \"kubernetes.io/projected/7cd3b65b-a0b4-4cee-87ac-23925d36acb8-kube-api-access-45hd5\") pod \"dns-operator-744455d44c-mnv7h\" (UID: \"7cd3b65b-a0b4-4cee-87ac-23925d36acb8\") " 
pod="openshift-dns-operator/dns-operator-744455d44c-mnv7h" Nov 25 11:38:52 crc kubenswrapper[4706]: I1125 11:38:52.557021 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w5shb\" (UniqueName: \"kubernetes.io/projected/0820aa13-f7b2-403e-9d85-1f940abae603-kube-api-access-w5shb\") pod \"kube-storage-version-migrator-operator-b67b599dd-99vrx\" (UID: \"0820aa13-f7b2-403e-9d85-1f940abae603\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-99vrx" Nov 25 11:38:52 crc kubenswrapper[4706]: I1125 11:38:52.557100 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/704d8383-2f51-4244-8a2a-3477cb15f23f-etcd-service-ca\") pod \"etcd-operator-b45778765-2hpv7\" (UID: \"704d8383-2f51-4244-8a2a-3477cb15f23f\") " pod="openshift-etcd-operator/etcd-operator-b45778765-2hpv7" Nov 25 11:38:52 crc kubenswrapper[4706]: I1125 11:38:52.557130 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/239de662-d89b-4e6e-a970-56811041192f-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-ss2xd\" (UID: \"239de662-d89b-4e6e-a970-56811041192f\") " pod="openshift-authentication/oauth-openshift-558db77b4-ss2xd" Nov 25 11:38:52 crc kubenswrapper[4706]: I1125 11:38:52.557151 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/7cd3b65b-a0b4-4cee-87ac-23925d36acb8-metrics-tls\") pod \"dns-operator-744455d44c-mnv7h\" (UID: \"7cd3b65b-a0b4-4cee-87ac-23925d36acb8\") " pod="openshift-dns-operator/dns-operator-744455d44c-mnv7h" Nov 25 11:38:52 crc kubenswrapper[4706]: I1125 11:38:52.557192 4706 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0820aa13-f7b2-403e-9d85-1f940abae603-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-99vrx\" (UID: \"0820aa13-f7b2-403e-9d85-1f940abae603\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-99vrx" Nov 25 11:38:52 crc kubenswrapper[4706]: I1125 11:38:52.557237 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bqfw8\" (UniqueName: \"kubernetes.io/projected/198b8b13-3d25-4fbb-81af-a2a39186b64d-kube-api-access-bqfw8\") pod \"machine-config-controller-84d6567774-cs4td\" (UID: \"198b8b13-3d25-4fbb-81af-a2a39186b64d\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-cs4td" Nov 25 11:38:52 crc kubenswrapper[4706]: I1125 11:38:52.557261 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/825f088d-44aa-4f48-b95d-6245da5b1775-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-hhh7q\" (UID: \"825f088d-44aa-4f48-b95d-6245da5b1775\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-hhh7q" Nov 25 11:38:52 crc kubenswrapper[4706]: I1125 11:38:52.557335 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/44180138-81cd-45b3-b14e-c21819b16645-config\") pod \"kube-controller-manager-operator-78b949d7b-fc942\" (UID: \"44180138-81cd-45b3-b14e-c21819b16645\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-fc942" Nov 25 11:38:52 crc kubenswrapper[4706]: I1125 11:38:52.557367 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"kube-api-access-zzlwf\" (UniqueName: \"kubernetes.io/projected/3eaaf4f5-59b0-4ab7-a865-e962b59f0584-kube-api-access-zzlwf\") pod \"machine-approver-56656f9798-d9vjp\" (UID: \"3eaaf4f5-59b0-4ab7-a865-e962b59f0584\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-d9vjp" Nov 25 11:38:52 crc kubenswrapper[4706]: I1125 11:38:52.557399 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vp9cx\" (UniqueName: \"kubernetes.io/projected/ab6319ba-e125-4775-83c3-c5624951d634-kube-api-access-vp9cx\") pod \"router-default-5444994796-22mnp\" (UID: \"ab6319ba-e125-4775-83c3-c5624951d634\") " pod="openshift-ingress/router-default-5444994796-22mnp" Nov 25 11:38:52 crc kubenswrapper[4706]: I1125 11:38:52.557422 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/33afcb8d-d045-4897-af65-56b622cdfa58-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-6hgvx\" (UID: \"33afcb8d-d045-4897-af65-56b622cdfa58\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-6hgvx" Nov 25 11:38:52 crc kubenswrapper[4706]: I1125 11:38:52.557459 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/239de662-d89b-4e6e-a970-56811041192f-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-ss2xd\" (UID: \"239de662-d89b-4e6e-a970-56811041192f\") " pod="openshift-authentication/oauth-openshift-558db77b4-ss2xd" Nov 25 11:38:52 crc kubenswrapper[4706]: I1125 11:38:52.557482 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/239de662-d89b-4e6e-a970-56811041192f-audit-policies\") pod 
\"oauth-openshift-558db77b4-ss2xd\" (UID: \"239de662-d89b-4e6e-a970-56811041192f\") " pod="openshift-authentication/oauth-openshift-558db77b4-ss2xd" Nov 25 11:38:52 crc kubenswrapper[4706]: I1125 11:38:52.557514 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0820aa13-f7b2-403e-9d85-1f940abae603-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-99vrx\" (UID: \"0820aa13-f7b2-403e-9d85-1f940abae603\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-99vrx" Nov 25 11:38:52 crc kubenswrapper[4706]: I1125 11:38:52.556841 4706 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Nov 25 11:38:52 crc kubenswrapper[4706]: E1125 11:38:52.564186 4706 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-25 11:38:53.064155077 +0000 UTC m=+141.978712458 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7qf2c" (UID: "f3c75c9b-c79d-4a63-8b0f-c7b474ad4b66") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 11:38:52 crc kubenswrapper[4706]: I1125 11:38:52.567081 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/d4ec8c5d-e4c6-42d4-bf1c-1d3952ce6d7a-image-import-ca\") pod \"apiserver-76f77b778f-jsj27\" (UID: \"d4ec8c5d-e4c6-42d4-bf1c-1d3952ce6d7a\") " pod="openshift-apiserver/apiserver-76f77b778f-jsj27" Nov 25 11:38:52 crc kubenswrapper[4706]: I1125 11:38:52.578884 4706 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Nov 25 11:38:52 crc kubenswrapper[4706]: I1125 11:38:52.583644 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/d4ec8c5d-e4c6-42d4-bf1c-1d3952ce6d7a-etcd-serving-ca\") pod \"apiserver-76f77b778f-jsj27\" (UID: \"d4ec8c5d-e4c6-42d4-bf1c-1d3952ce6d7a\") " pod="openshift-apiserver/apiserver-76f77b778f-jsj27" Nov 25 11:38:52 crc kubenswrapper[4706]: I1125 11:38:52.586197 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-j7x2j" event={"ID":"8cd4c256-91b7-4b76-a9d3-6927ea77e61e","Type":"ContainerStarted","Data":"ab384ce4e7c7b861b8b5646b14e994534e5e8213032d88f360cc56c5341f714f"} Nov 25 11:38:52 crc kubenswrapper[4706]: I1125 11:38:52.587695 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-j7x2j" 
event={"ID":"8cd4c256-91b7-4b76-a9d3-6927ea77e61e","Type":"ContainerStarted","Data":"ce3c60198e11b985d403328021a23d9ba4f0f30ea762a0582de78380240dc2eb"} Nov 25 11:38:52 crc kubenswrapper[4706]: I1125 11:38:52.587739 4706 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-j7x2j" Nov 25 11:38:52 crc kubenswrapper[4706]: I1125 11:38:52.595726 4706 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-j7x2j container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.10:8443/healthz\": dial tcp 10.217.0.10:8443: connect: connection refused" start-of-body= Nov 25 11:38:52 crc kubenswrapper[4706]: I1125 11:38:52.595841 4706 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-j7x2j" podUID="8cd4c256-91b7-4b76-a9d3-6927ea77e61e" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.10:8443/healthz\": dial tcp 10.217.0.10:8443: connect: connection refused" Nov 25 11:38:52 crc kubenswrapper[4706]: I1125 11:38:52.598255 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-zf4pd" event={"ID":"c31bc178-49e3-4bb8-a6d0-ca9e27662b9a","Type":"ContainerStarted","Data":"ca43a5ab551800e1a7600a9c40946c9b8821c5bd86df830dc16ccfede1c21037"} Nov 25 11:38:52 crc kubenswrapper[4706]: I1125 11:38:52.598324 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-zf4pd" event={"ID":"c31bc178-49e3-4bb8-a6d0-ca9e27662b9a","Type":"ContainerStarted","Data":"2889822a2c9c2c44c23ec80ec811bdc010023ca3ec00ab853e494408c01e510f"} Nov 25 11:38:52 crc kubenswrapper[4706]: I1125 11:38:52.598345 4706 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-controller-manager/controller-manager-879f6c89f-zf4pd" Nov 25 11:38:52 crc kubenswrapper[4706]: I1125 11:38:52.599410 4706 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Nov 25 11:38:52 crc kubenswrapper[4706]: I1125 11:38:52.605083 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-8rnp5" event={"ID":"9cf8aff4-1c08-49a5-82c9-92ac18f0b46f","Type":"ContainerStarted","Data":"6346eb262cf583e69a693a273d0c5dd160c663f4d5c849516e3f1f3e37407333"} Nov 25 11:38:52 crc kubenswrapper[4706]: I1125 11:38:52.605146 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-8rnp5" event={"ID":"9cf8aff4-1c08-49a5-82c9-92ac18f0b46f","Type":"ContainerStarted","Data":"3a35c4fdb3465a16fe1ac9ebfe84e922423455e5b6321a3083a653c8c07f194d"} Nov 25 11:38:52 crc kubenswrapper[4706]: I1125 11:38:52.607989 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-w6nqn" event={"ID":"09d713da-8021-4bfa-b39d-bc3399593865","Type":"ContainerStarted","Data":"8c315cdb6fe61ce862adbc47e91216f61f33dff92e20a76f65e3ab56dd12d64b"} Nov 25 11:38:52 crc kubenswrapper[4706]: I1125 11:38:52.609488 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-qm76l" event={"ID":"ad44dafa-6c78-4773-881b-6f3adeb1a29b","Type":"ContainerStarted","Data":"a233cc82f92f59856f0271218a7264127ff0938f2831afdc11ea4c33c5599cd8"} Nov 25 11:38:52 crc kubenswrapper[4706]: I1125 11:38:52.609521 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-qm76l" event={"ID":"ad44dafa-6c78-4773-881b-6f3adeb1a29b","Type":"ContainerStarted","Data":"abf5cff9e995e294fbf164f4705d4bda3f9eaf0c60108b05411a390404f8e13e"} Nov 25 
11:38:52 crc kubenswrapper[4706]: I1125 11:38:52.617208 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-q7gsh" event={"ID":"f31f7e75-5a0b-4519-bbe7-521544fa61c1","Type":"ContainerStarted","Data":"c23adfbf53b7d9c80ff128adbde425c1965ed534132d8f5254260199fdf73a73"} Nov 25 11:38:52 crc kubenswrapper[4706]: I1125 11:38:52.617272 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-q7gsh" event={"ID":"f31f7e75-5a0b-4519-bbe7-521544fa61c1","Type":"ContainerStarted","Data":"da8d8003a4d33f09f061ffcea56a630f232d6335b8364bbe4247066d85945ee0"} Nov 25 11:38:52 crc kubenswrapper[4706]: I1125 11:38:52.617293 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-q7gsh" event={"ID":"f31f7e75-5a0b-4519-bbe7-521544fa61c1","Type":"ContainerStarted","Data":"83b267a729eec24f8f95e4de42e99b936c9ce38c9c97ad96c2d1185a686bb7cc"} Nov 25 11:38:52 crc kubenswrapper[4706]: I1125 11:38:52.622725 4706 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-zf4pd container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.9:8443/healthz\": dial tcp 10.217.0.9:8443: connect: connection refused" start-of-body= Nov 25 11:38:52 crc kubenswrapper[4706]: I1125 11:38:52.622821 4706 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-zf4pd" podUID="c31bc178-49e3-4bb8-a6d0-ca9e27662b9a" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.9:8443/healthz\": dial tcp 10.217.0.9:8443: connect: connection refused" Nov 25 11:38:52 crc kubenswrapper[4706]: I1125 11:38:52.630337 4706 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Nov 25 
11:38:52 crc kubenswrapper[4706]: E1125 11:38:52.630940 4706 projected.go:194] Error preparing data for projected volume kube-api-access-bvz6z for pod openshift-apiserver/apiserver-76f77b778f-jsj27: failed to sync configmap cache: timed out waiting for the condition Nov 25 11:38:52 crc kubenswrapper[4706]: E1125 11:38:52.631084 4706 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/d4ec8c5d-e4c6-42d4-bf1c-1d3952ce6d7a-kube-api-access-bvz6z podName:d4ec8c5d-e4c6-42d4-bf1c-1d3952ce6d7a nodeName:}" failed. No retries permitted until 2025-11-25 11:38:53.131055214 +0000 UTC m=+142.045612595 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-bvz6z" (UniqueName: "kubernetes.io/projected/d4ec8c5d-e4c6-42d4-bf1c-1d3952ce6d7a-kube-api-access-bvz6z") pod "apiserver-76f77b778f-jsj27" (UID: "d4ec8c5d-e4c6-42d4-bf1c-1d3952ce6d7a") : failed to sync configmap cache: timed out waiting for the condition Nov 25 11:38:52 crc kubenswrapper[4706]: I1125 11:38:52.656871 4706 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-8f48m"] Nov 25 11:38:52 crc kubenswrapper[4706]: I1125 11:38:52.661570 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 11:38:52 crc kubenswrapper[4706]: I1125 11:38:52.661961 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/d3172a49-2bd1-4003-8ef0-560d4522e410-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-rs94g\" (UID: \"d3172a49-2bd1-4003-8ef0-560d4522e410\") " 
pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-rs94g" Nov 25 11:38:52 crc kubenswrapper[4706]: I1125 11:38:52.662039 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sv6t6\" (UniqueName: \"kubernetes.io/projected/4fbe2538-0d5f-48c2-8819-7bb0386b2710-kube-api-access-sv6t6\") pod \"catalog-operator-68c6474976-s9mkm\" (UID: \"4fbe2538-0d5f-48c2-8819-7bb0386b2710\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-s9mkm" Nov 25 11:38:52 crc kubenswrapper[4706]: I1125 11:38:52.662069 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7eaffd03-b03a-491f-9bc3-250a1f9021e7-config\") pod \"service-ca-operator-777779d784-nqt58\" (UID: \"7eaffd03-b03a-491f-9bc3-250a1f9021e7\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-nqt58" Nov 25 11:38:52 crc kubenswrapper[4706]: I1125 11:38:52.662107 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/eea9f096-83bc-4f8c-b405-390011a0dd7e-signing-key\") pod \"service-ca-9c57cc56f-vpgtz\" (UID: \"eea9f096-83bc-4f8c-b405-390011a0dd7e\") " pod="openshift-service-ca/service-ca-9c57cc56f-vpgtz" Nov 25 11:38:52 crc kubenswrapper[4706]: I1125 11:38:52.662135 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/704d8383-2f51-4244-8a2a-3477cb15f23f-etcd-service-ca\") pod \"etcd-operator-b45778765-2hpv7\" (UID: \"704d8383-2f51-4244-8a2a-3477cb15f23f\") " pod="openshift-etcd-operator/etcd-operator-b45778765-2hpv7" Nov 25 11:38:52 crc kubenswrapper[4706]: I1125 11:38:52.662182 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: 
\"kubernetes.io/secret/239de662-d89b-4e6e-a970-56811041192f-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-ss2xd\" (UID: \"239de662-d89b-4e6e-a970-56811041192f\") " pod="openshift-authentication/oauth-openshift-558db77b4-ss2xd" Nov 25 11:38:52 crc kubenswrapper[4706]: I1125 11:38:52.662200 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/7cd3b65b-a0b4-4cee-87ac-23925d36acb8-metrics-tls\") pod \"dns-operator-744455d44c-mnv7h\" (UID: \"7cd3b65b-a0b4-4cee-87ac-23925d36acb8\") " pod="openshift-dns-operator/dns-operator-744455d44c-mnv7h" Nov 25 11:38:52 crc kubenswrapper[4706]: I1125 11:38:52.662218 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/92d6e6ef-5880-4bdf-bdc5-5d2c4591a094-webhook-cert\") pod \"packageserver-d55dfcdfc-bthtj\" (UID: \"92d6e6ef-5880-4bdf-bdc5-5d2c4591a094\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-bthtj" Nov 25 11:38:52 crc kubenswrapper[4706]: I1125 11:38:52.662252 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/01c8d08c-1ad6-4048-92d4-98382da66cca-plugins-dir\") pod \"csi-hostpathplugin-tgngn\" (UID: \"01c8d08c-1ad6-4048-92d4-98382da66cca\") " pod="hostpath-provisioner/csi-hostpathplugin-tgngn" Nov 25 11:38:52 crc kubenswrapper[4706]: I1125 11:38:52.662275 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0820aa13-f7b2-403e-9d85-1f940abae603-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-99vrx\" (UID: \"0820aa13-f7b2-403e-9d85-1f940abae603\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-99vrx" Nov 25 11:38:52 crc kubenswrapper[4706]: I1125 
11:38:52.662342 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bqfw8\" (UniqueName: \"kubernetes.io/projected/198b8b13-3d25-4fbb-81af-a2a39186b64d-kube-api-access-bqfw8\") pod \"machine-config-controller-84d6567774-cs4td\" (UID: \"198b8b13-3d25-4fbb-81af-a2a39186b64d\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-cs4td" Nov 25 11:38:52 crc kubenswrapper[4706]: I1125 11:38:52.662365 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/f0084f7d-107a-484b-bc35-04f9585e0e2b-node-bootstrap-token\") pod \"machine-config-server-446sw\" (UID: \"f0084f7d-107a-484b-bc35-04f9585e0e2b\") " pod="openshift-machine-config-operator/machine-config-server-446sw" Nov 25 11:38:52 crc kubenswrapper[4706]: I1125 11:38:52.662417 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/825f088d-44aa-4f48-b95d-6245da5b1775-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-hhh7q\" (UID: \"825f088d-44aa-4f48-b95d-6245da5b1775\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-hhh7q" Nov 25 11:38:52 crc kubenswrapper[4706]: I1125 11:38:52.662437 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-954mw\" (UniqueName: \"kubernetes.io/projected/51a87a4e-3d58-48e0-b455-292aa206e149-kube-api-access-954mw\") pod \"collect-profiles-29401170-s4f7r\" (UID: \"51a87a4e-3d58-48e0-b455-292aa206e149\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29401170-s4f7r" Nov 25 11:38:52 crc kubenswrapper[4706]: I1125 11:38:52.662458 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/7eaffd03-b03a-491f-9bc3-250a1f9021e7-serving-cert\") pod \"service-ca-operator-777779d784-nqt58\" (UID: \"7eaffd03-b03a-491f-9bc3-250a1f9021e7\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-nqt58" Nov 25 11:38:52 crc kubenswrapper[4706]: I1125 11:38:52.662496 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/44180138-81cd-45b3-b14e-c21819b16645-config\") pod \"kube-controller-manager-operator-78b949d7b-fc942\" (UID: \"44180138-81cd-45b3-b14e-c21819b16645\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-fc942" Nov 25 11:38:52 crc kubenswrapper[4706]: I1125 11:38:52.662518 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bd8d3bba-bf4e-4bda-94ff-ce2902b3299a-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-zn9dk\" (UID: \"bd8d3bba-bf4e-4bda-94ff-ce2902b3299a\") " pod="openshift-marketplace/marketplace-operator-79b997595-zn9dk" Nov 25 11:38:52 crc kubenswrapper[4706]: I1125 11:38:52.662542 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zzlwf\" (UniqueName: \"kubernetes.io/projected/3eaaf4f5-59b0-4ab7-a865-e962b59f0584-kube-api-access-zzlwf\") pod \"machine-approver-56656f9798-d9vjp\" (UID: \"3eaaf4f5-59b0-4ab7-a865-e962b59f0584\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-d9vjp" Nov 25 11:38:52 crc kubenswrapper[4706]: I1125 11:38:52.662590 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vp9cx\" (UniqueName: \"kubernetes.io/projected/ab6319ba-e125-4775-83c3-c5624951d634-kube-api-access-vp9cx\") pod \"router-default-5444994796-22mnp\" (UID: \"ab6319ba-e125-4775-83c3-c5624951d634\") " pod="openshift-ingress/router-default-5444994796-22mnp" 
Nov 25 11:38:52 crc kubenswrapper[4706]: I1125 11:38:52.662610 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/51a87a4e-3d58-48e0-b455-292aa206e149-secret-volume\") pod \"collect-profiles-29401170-s4f7r\" (UID: \"51a87a4e-3d58-48e0-b455-292aa206e149\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29401170-s4f7r" Nov 25 11:38:52 crc kubenswrapper[4706]: I1125 11:38:52.662633 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/33afcb8d-d045-4897-af65-56b622cdfa58-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-6hgvx\" (UID: \"33afcb8d-d045-4897-af65-56b622cdfa58\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-6hgvx" Nov 25 11:38:52 crc kubenswrapper[4706]: I1125 11:38:52.662671 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/01c8d08c-1ad6-4048-92d4-98382da66cca-mountpoint-dir\") pod \"csi-hostpathplugin-tgngn\" (UID: \"01c8d08c-1ad6-4048-92d4-98382da66cca\") " pod="hostpath-provisioner/csi-hostpathplugin-tgngn" Nov 25 11:38:52 crc kubenswrapper[4706]: I1125 11:38:52.662703 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/239de662-d89b-4e6e-a970-56811041192f-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-ss2xd\" (UID: \"239de662-d89b-4e6e-a970-56811041192f\") " pod="openshift-authentication/oauth-openshift-558db77b4-ss2xd" Nov 25 11:38:52 crc kubenswrapper[4706]: I1125 11:38:52.662808 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: 
\"kubernetes.io/configmap/239de662-d89b-4e6e-a970-56811041192f-audit-policies\") pod \"oauth-openshift-558db77b4-ss2xd\" (UID: \"239de662-d89b-4e6e-a970-56811041192f\") " pod="openshift-authentication/oauth-openshift-558db77b4-ss2xd" Nov 25 11:38:52 crc kubenswrapper[4706]: I1125 11:38:52.662840 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9bgl7\" (UniqueName: \"kubernetes.io/projected/f1ac94b4-787a-4778-8891-84b37d9e7565-kube-api-access-9bgl7\") pod \"ingress-canary-q466t\" (UID: \"f1ac94b4-787a-4778-8891-84b37d9e7565\") " pod="openshift-ingress-canary/ingress-canary-q466t" Nov 25 11:38:52 crc kubenswrapper[4706]: I1125 11:38:52.662892 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/916f095b-bd5f-497f-8771-aff8fd799255-config-volume\") pod \"dns-default-wswtg\" (UID: \"916f095b-bd5f-497f-8771-aff8fd799255\") " pod="openshift-dns/dns-default-wswtg" Nov 25 11:38:52 crc kubenswrapper[4706]: I1125 11:38:52.662955 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0820aa13-f7b2-403e-9d85-1f940abae603-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-99vrx\" (UID: \"0820aa13-f7b2-403e-9d85-1f940abae603\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-99vrx" Nov 25 11:38:52 crc kubenswrapper[4706]: I1125 11:38:52.662976 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/bd8d3bba-bf4e-4bda-94ff-ce2902b3299a-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-zn9dk\" (UID: \"bd8d3bba-bf4e-4bda-94ff-ce2902b3299a\") " pod="openshift-marketplace/marketplace-operator-79b997595-zn9dk" Nov 25 11:38:52 crc 
kubenswrapper[4706]: I1125 11:38:52.662999 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h6qlz\" (UniqueName: \"kubernetes.io/projected/daffec68-fec5-4f3b-9302-4b736b09fc9c-kube-api-access-h6qlz\") pod \"console-operator-58897d9998-qlr24\" (UID: \"daffec68-fec5-4f3b-9302-4b736b09fc9c\") " pod="openshift-console-operator/console-operator-58897d9998-qlr24" Nov 25 11:38:52 crc kubenswrapper[4706]: I1125 11:38:52.663044 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/96c3697f-cf07-44a2-af83-c6aae61f04f9-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-67c5m\" (UID: \"96c3697f-cf07-44a2-af83-c6aae61f04f9\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-67c5m" Nov 25 11:38:52 crc kubenswrapper[4706]: I1125 11:38:52.663074 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/239de662-d89b-4e6e-a970-56811041192f-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-ss2xd\" (UID: \"239de662-d89b-4e6e-a970-56811041192f\") " pod="openshift-authentication/oauth-openshift-558db77b4-ss2xd" Nov 25 11:38:52 crc kubenswrapper[4706]: I1125 11:38:52.663092 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/01c8d08c-1ad6-4048-92d4-98382da66cca-socket-dir\") pod \"csi-hostpathplugin-tgngn\" (UID: \"01c8d08c-1ad6-4048-92d4-98382da66cca\") " pod="hostpath-provisioner/csi-hostpathplugin-tgngn" Nov 25 11:38:52 crc kubenswrapper[4706]: I1125 11:38:52.663179 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qrvp9\" (UniqueName: \"kubernetes.io/projected/eea9f096-83bc-4f8c-b405-390011a0dd7e-kube-api-access-qrvp9\") pod 
\"service-ca-9c57cc56f-vpgtz\" (UID: \"eea9f096-83bc-4f8c-b405-390011a0dd7e\") " pod="openshift-service-ca/service-ca-9c57cc56f-vpgtz" Nov 25 11:38:52 crc kubenswrapper[4706]: I1125 11:38:52.663204 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zqf6k\" (UniqueName: \"kubernetes.io/projected/f0084f7d-107a-484b-bc35-04f9585e0e2b-kube-api-access-zqf6k\") pod \"machine-config-server-446sw\" (UID: \"f0084f7d-107a-484b-bc35-04f9585e0e2b\") " pod="openshift-machine-config-operator/machine-config-server-446sw" Nov 25 11:38:52 crc kubenswrapper[4706]: I1125 11:38:52.663248 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/239de662-d89b-4e6e-a970-56811041192f-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-ss2xd\" (UID: \"239de662-d89b-4e6e-a970-56811041192f\") " pod="openshift-authentication/oauth-openshift-558db77b4-ss2xd" Nov 25 11:38:52 crc kubenswrapper[4706]: I1125 11:38:52.663269 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/198b8b13-3d25-4fbb-81af-a2a39186b64d-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-cs4td\" (UID: \"198b8b13-3d25-4fbb-81af-a2a39186b64d\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-cs4td" Nov 25 11:38:52 crc kubenswrapper[4706]: I1125 11:38:52.663294 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/33afcb8d-d045-4897-af65-56b622cdfa58-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-6hgvx\" (UID: \"33afcb8d-d045-4897-af65-56b622cdfa58\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-6hgvx" Nov 25 11:38:52 crc kubenswrapper[4706]: I1125 11:38:52.663359 4706 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/3eaaf4f5-59b0-4ab7-a865-e962b59f0584-auth-proxy-config\") pod \"machine-approver-56656f9798-d9vjp\" (UID: \"3eaaf4f5-59b0-4ab7-a865-e962b59f0584\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-d9vjp" Nov 25 11:38:52 crc kubenswrapper[4706]: I1125 11:38:52.663397 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/55479c26-471b-4a9c-9d70-ec107786bbc4-auth-proxy-config\") pod \"machine-config-operator-74547568cd-tf2kg\" (UID: \"55479c26-471b-4a9c-9d70-ec107786bbc4\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-tf2kg" Nov 25 11:38:52 crc kubenswrapper[4706]: I1125 11:38:52.663420 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5hfdx\" (UniqueName: \"kubernetes.io/projected/f3c75c9b-c79d-4a63-8b0f-c7b474ad4b66-kube-api-access-5hfdx\") pod \"image-registry-697d97f7c8-7qf2c\" (UID: \"f3c75c9b-c79d-4a63-8b0f-c7b474ad4b66\") " pod="openshift-image-registry/image-registry-697d97f7c8-7qf2c" Nov 25 11:38:52 crc kubenswrapper[4706]: I1125 11:38:52.663439 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/daffec68-fec5-4f3b-9302-4b736b09fc9c-serving-cert\") pod \"console-operator-58897d9998-qlr24\" (UID: \"daffec68-fec5-4f3b-9302-4b736b09fc9c\") " pod="openshift-console-operator/console-operator-58897d9998-qlr24" Nov 25 11:38:52 crc kubenswrapper[4706]: I1125 11:38:52.663476 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9z5nk\" (UniqueName: \"kubernetes.io/projected/7f36936f-00b7-4fde-9c95-8fb3433aba0a-kube-api-access-9z5nk\") pod \"migrator-59844c95c7-jg4ng\" (UID: \"7f36936f-00b7-4fde-9c95-8fb3433aba0a\") " 
pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-jg4ng" Nov 25 11:38:52 crc kubenswrapper[4706]: I1125 11:38:52.663499 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/f3c75c9b-c79d-4a63-8b0f-c7b474ad4b66-installation-pull-secrets\") pod \"image-registry-697d97f7c8-7qf2c\" (UID: \"f3c75c9b-c79d-4a63-8b0f-c7b474ad4b66\") " pod="openshift-image-registry/image-registry-697d97f7c8-7qf2c" Nov 25 11:38:52 crc kubenswrapper[4706]: I1125 11:38:52.663517 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tw47q\" (UniqueName: \"kubernetes.io/projected/01b7a9e5-be6c-4a8e-9279-62eaf90e745d-kube-api-access-tw47q\") pod \"ingress-operator-5b745b69d9-jhptj\" (UID: \"01b7a9e5-be6c-4a8e-9279-62eaf90e745d\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-jhptj" Nov 25 11:38:52 crc kubenswrapper[4706]: I1125 11:38:52.663554 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/704d8383-2f51-4244-8a2a-3477cb15f23f-serving-cert\") pod \"etcd-operator-b45778765-2hpv7\" (UID: \"704d8383-2f51-4244-8a2a-3477cb15f23f\") " pod="openshift-etcd-operator/etcd-operator-b45778765-2hpv7" Nov 25 11:38:52 crc kubenswrapper[4706]: I1125 11:38:52.663575 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/44180138-81cd-45b3-b14e-c21819b16645-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-fc942\" (UID: \"44180138-81cd-45b3-b14e-c21819b16645\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-fc942" Nov 25 11:38:52 crc kubenswrapper[4706]: I1125 11:38:52.663593 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: 
\"kubernetes.io/secret/cb5c8374-6eb8-4247-97e3-ff94307782ef-srv-cert\") pod \"olm-operator-6b444d44fb-x7b2m\" (UID: \"cb5c8374-6eb8-4247-97e3-ff94307782ef\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-x7b2m" Nov 25 11:38:52 crc kubenswrapper[4706]: I1125 11:38:52.663611 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/92d6e6ef-5880-4bdf-bdc5-5d2c4591a094-tmpfs\") pod \"packageserver-d55dfcdfc-bthtj\" (UID: \"92d6e6ef-5880-4bdf-bdc5-5d2c4591a094\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-bthtj" Nov 25 11:38:52 crc kubenswrapper[4706]: I1125 11:38:52.663886 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0820aa13-f7b2-403e-9d85-1f940abae603-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-99vrx\" (UID: \"0820aa13-f7b2-403e-9d85-1f940abae603\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-99vrx" Nov 25 11:38:52 crc kubenswrapper[4706]: I1125 11:38:52.663975 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/92d6e6ef-5880-4bdf-bdc5-5d2c4591a094-apiservice-cert\") pod \"packageserver-d55dfcdfc-bthtj\" (UID: \"92d6e6ef-5880-4bdf-bdc5-5d2c4591a094\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-bthtj" Nov 25 11:38:52 crc kubenswrapper[4706]: I1125 11:38:52.664006 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/239de662-d89b-4e6e-a970-56811041192f-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-ss2xd\" (UID: \"239de662-d89b-4e6e-a970-56811041192f\") " 
pod="openshift-authentication/oauth-openshift-558db77b4-ss2xd" Nov 25 11:38:52 crc kubenswrapper[4706]: I1125 11:38:52.664048 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dcv28\" (UniqueName: \"kubernetes.io/projected/239de662-d89b-4e6e-a970-56811041192f-kube-api-access-dcv28\") pod \"oauth-openshift-558db77b4-ss2xd\" (UID: \"239de662-d89b-4e6e-a970-56811041192f\") " pod="openshift-authentication/oauth-openshift-558db77b4-ss2xd" Nov 25 11:38:52 crc kubenswrapper[4706]: I1125 11:38:52.664068 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/44180138-81cd-45b3-b14e-c21819b16645-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-fc942\" (UID: \"44180138-81cd-45b3-b14e-c21819b16645\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-fc942" Nov 25 11:38:52 crc kubenswrapper[4706]: I1125 11:38:52.664091 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/ab6319ba-e125-4775-83c3-c5624951d634-stats-auth\") pod \"router-default-5444994796-22mnp\" (UID: \"ab6319ba-e125-4775-83c3-c5624951d634\") " pod="openshift-ingress/router-default-5444994796-22mnp" Nov 25 11:38:52 crc kubenswrapper[4706]: I1125 11:38:52.664125 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/daffec68-fec5-4f3b-9302-4b736b09fc9c-trusted-ca\") pod \"console-operator-58897d9998-qlr24\" (UID: \"daffec68-fec5-4f3b-9302-4b736b09fc9c\") " pod="openshift-console-operator/console-operator-58897d9998-qlr24" Nov 25 11:38:52 crc kubenswrapper[4706]: E1125 11:38:52.664607 4706 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 
podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 11:38:53.164542734 +0000 UTC m=+142.079100115 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 11:38:52 crc kubenswrapper[4706]: I1125 11:38:52.667569 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/704d8383-2f51-4244-8a2a-3477cb15f23f-etcd-service-ca\") pod \"etcd-operator-b45778765-2hpv7\" (UID: \"704d8383-2f51-4244-8a2a-3477cb15f23f\") " pod="openshift-etcd-operator/etcd-operator-b45778765-2hpv7" Nov 25 11:38:52 crc kubenswrapper[4706]: I1125 11:38:52.668349 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/916f095b-bd5f-497f-8771-aff8fd799255-metrics-tls\") pod \"dns-default-wswtg\" (UID: \"916f095b-bd5f-497f-8771-aff8fd799255\") " pod="openshift-dns/dns-default-wswtg" Nov 25 11:38:52 crc kubenswrapper[4706]: I1125 11:38:52.668433 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/51a87a4e-3d58-48e0-b455-292aa206e149-config-volume\") pod \"collect-profiles-29401170-s4f7r\" (UID: \"51a87a4e-3d58-48e0-b455-292aa206e149\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29401170-s4f7r" Nov 25 11:38:52 crc kubenswrapper[4706]: I1125 11:38:52.668490 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/01c8d08c-1ad6-4048-92d4-98382da66cca-csi-data-dir\") pod \"csi-hostpathplugin-tgngn\" (UID: \"01c8d08c-1ad6-4048-92d4-98382da66cca\") " pod="hostpath-provisioner/csi-hostpathplugin-tgngn" Nov 25 11:38:52 crc kubenswrapper[4706]: I1125 11:38:52.668526 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/f1ac94b4-787a-4778-8891-84b37d9e7565-cert\") pod \"ingress-canary-q466t\" (UID: \"f1ac94b4-787a-4778-8891-84b37d9e7565\") " pod="openshift-ingress-canary/ingress-canary-q466t" Nov 25 11:38:52 crc kubenswrapper[4706]: I1125 11:38:52.668569 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/eea9f096-83bc-4f8c-b405-390011a0dd7e-signing-cabundle\") pod \"service-ca-9c57cc56f-vpgtz\" (UID: \"eea9f096-83bc-4f8c-b405-390011a0dd7e\") " pod="openshift-service-ca/service-ca-9c57cc56f-vpgtz" Nov 25 11:38:52 crc kubenswrapper[4706]: I1125 11:38:52.668597 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/55479c26-471b-4a9c-9d70-ec107786bbc4-images\") pod \"machine-config-operator-74547568cd-tf2kg\" (UID: \"55479c26-471b-4a9c-9d70-ec107786bbc4\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-tf2kg" Nov 25 11:38:52 crc kubenswrapper[4706]: I1125 11:38:52.668614 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kv7qm\" (UniqueName: \"kubernetes.io/projected/916f095b-bd5f-497f-8771-aff8fd799255-kube-api-access-kv7qm\") pod \"dns-default-wswtg\" (UID: \"916f095b-bd5f-497f-8771-aff8fd799255\") " pod="openshift-dns/dns-default-wswtg" Nov 25 11:38:52 crc kubenswrapper[4706]: I1125 11:38:52.668657 4706 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/198b8b13-3d25-4fbb-81af-a2a39186b64d-proxy-tls\") pod \"machine-config-controller-84d6567774-cs4td\" (UID: \"198b8b13-3d25-4fbb-81af-a2a39186b64d\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-cs4td" Nov 25 11:38:52 crc kubenswrapper[4706]: I1125 11:38:52.668677 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/01b7a9e5-be6c-4a8e-9279-62eaf90e745d-metrics-tls\") pod \"ingress-operator-5b745b69d9-jhptj\" (UID: \"01b7a9e5-be6c-4a8e-9279-62eaf90e745d\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-jhptj" Nov 25 11:38:52 crc kubenswrapper[4706]: I1125 11:38:52.668721 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/01b7a9e5-be6c-4a8e-9279-62eaf90e745d-bound-sa-token\") pod \"ingress-operator-5b745b69d9-jhptj\" (UID: \"01b7a9e5-be6c-4a8e-9279-62eaf90e745d\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-jhptj" Nov 25 11:38:52 crc kubenswrapper[4706]: I1125 11:38:52.668744 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-krn2d\" (UniqueName: \"kubernetes.io/projected/825f088d-44aa-4f48-b95d-6245da5b1775-kube-api-access-krn2d\") pod \"control-plane-machine-set-operator-78cbb6b69f-hhh7q\" (UID: \"825f088d-44aa-4f48-b95d-6245da5b1775\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-hhh7q" Nov 25 11:38:52 crc kubenswrapper[4706]: I1125 11:38:52.668766 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/cb8f2779-a7df-4ead-a209-9e8024e20647-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-fh2jc\" (UID: \"cb8f2779-a7df-4ead-a209-9e8024e20647\") " 
pod="openshift-multus/multus-admission-controller-857f4d67dd-fh2jc" Nov 25 11:38:52 crc kubenswrapper[4706]: I1125 11:38:52.668851 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/daffec68-fec5-4f3b-9302-4b736b09fc9c-config\") pod \"console-operator-58897d9998-qlr24\" (UID: \"daffec68-fec5-4f3b-9302-4b736b09fc9c\") " pod="openshift-console-operator/console-operator-58897d9998-qlr24" Nov 25 11:38:52 crc kubenswrapper[4706]: I1125 11:38:52.668905 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8wstr\" (UniqueName: \"kubernetes.io/projected/704d8383-2f51-4244-8a2a-3477cb15f23f-kube-api-access-8wstr\") pod \"etcd-operator-b45778765-2hpv7\" (UID: \"704d8383-2f51-4244-8a2a-3477cb15f23f\") " pod="openshift-etcd-operator/etcd-operator-b45778765-2hpv7" Nov 25 11:38:52 crc kubenswrapper[4706]: I1125 11:38:52.668934 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/4fbe2538-0d5f-48c2-8819-7bb0386b2710-srv-cert\") pod \"catalog-operator-68c6474976-s9mkm\" (UID: \"4fbe2538-0d5f-48c2-8819-7bb0386b2710\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-s9mkm" Nov 25 11:38:52 crc kubenswrapper[4706]: I1125 11:38:52.668955 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/7cd3b65b-a0b4-4cee-87ac-23925d36acb8-metrics-tls\") pod \"dns-operator-744455d44c-mnv7h\" (UID: \"7cd3b65b-a0b4-4cee-87ac-23925d36acb8\") " pod="openshift-dns-operator/dns-operator-744455d44c-mnv7h" Nov 25 11:38:52 crc kubenswrapper[4706]: I1125 11:38:52.668974 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/01c8d08c-1ad6-4048-92d4-98382da66cca-registration-dir\") pod 
\"csi-hostpathplugin-tgngn\" (UID: \"01c8d08c-1ad6-4048-92d4-98382da66cca\") " pod="hostpath-provisioner/csi-hostpathplugin-tgngn" Nov 25 11:38:52 crc kubenswrapper[4706]: I1125 11:38:52.669111 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vqsnq\" (UniqueName: \"kubernetes.io/projected/92d6e6ef-5880-4bdf-bdc5-5d2c4591a094-kube-api-access-vqsnq\") pod \"packageserver-d55dfcdfc-bthtj\" (UID: \"92d6e6ef-5880-4bdf-bdc5-5d2c4591a094\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-bthtj" Nov 25 11:38:52 crc kubenswrapper[4706]: I1125 11:38:52.669187 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/239de662-d89b-4e6e-a970-56811041192f-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-ss2xd\" (UID: \"239de662-d89b-4e6e-a970-56811041192f\") " pod="openshift-authentication/oauth-openshift-558db77b4-ss2xd" Nov 25 11:38:52 crc kubenswrapper[4706]: I1125 11:38:52.669225 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/96c3697f-cf07-44a2-af83-c6aae61f04f9-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-67c5m\" (UID: \"96c3697f-cf07-44a2-af83-c6aae61f04f9\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-67c5m" Nov 25 11:38:52 crc kubenswrapper[4706]: I1125 11:38:52.669255 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fpgpj\" (UniqueName: \"kubernetes.io/projected/d3172a49-2bd1-4003-8ef0-560d4522e410-kube-api-access-fpgpj\") pod \"package-server-manager-789f6589d5-rs94g\" (UID: \"d3172a49-2bd1-4003-8ef0-560d4522e410\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-rs94g" Nov 25 11:38:52 crc kubenswrapper[4706]: I1125 11:38:52.669292 4706 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/f3c75c9b-c79d-4a63-8b0f-c7b474ad4b66-ca-trust-extracted\") pod \"image-registry-697d97f7c8-7qf2c\" (UID: \"f3c75c9b-c79d-4a63-8b0f-c7b474ad4b66\") " pod="openshift-image-registry/image-registry-697d97f7c8-7qf2c" Nov 25 11:38:52 crc kubenswrapper[4706]: I1125 11:38:52.669342 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/ab6319ba-e125-4775-83c3-c5624951d634-default-certificate\") pod \"router-default-5444994796-22mnp\" (UID: \"ab6319ba-e125-4775-83c3-c5624951d634\") " pod="openshift-ingress/router-default-5444994796-22mnp" Nov 25 11:38:52 crc kubenswrapper[4706]: I1125 11:38:52.669391 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pjh2n\" (UniqueName: \"kubernetes.io/projected/cb8f2779-a7df-4ead-a209-9e8024e20647-kube-api-access-pjh2n\") pod \"multus-admission-controller-857f4d67dd-fh2jc\" (UID: \"cb8f2779-a7df-4ead-a209-9e8024e20647\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-fh2jc" Nov 25 11:38:52 crc kubenswrapper[4706]: I1125 11:38:52.669447 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/f3c75c9b-c79d-4a63-8b0f-c7b474ad4b66-bound-sa-token\") pod \"image-registry-697d97f7c8-7qf2c\" (UID: \"f3c75c9b-c79d-4a63-8b0f-c7b474ad4b66\") " pod="openshift-image-registry/image-registry-697d97f7c8-7qf2c" Nov 25 11:38:52 crc kubenswrapper[4706]: I1125 11:38:52.669494 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/f3c75c9b-c79d-4a63-8b0f-c7b474ad4b66-trusted-ca\") pod \"image-registry-697d97f7c8-7qf2c\" (UID: \"f3c75c9b-c79d-4a63-8b0f-c7b474ad4b66\") " 
pod="openshift-image-registry/image-registry-697d97f7c8-7qf2c" Nov 25 11:38:52 crc kubenswrapper[4706]: I1125 11:38:52.669520 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/239de662-d89b-4e6e-a970-56811041192f-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-ss2xd\" (UID: \"239de662-d89b-4e6e-a970-56811041192f\") " pod="openshift-authentication/oauth-openshift-558db77b4-ss2xd" Nov 25 11:38:52 crc kubenswrapper[4706]: I1125 11:38:52.672898 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/239de662-d89b-4e6e-a970-56811041192f-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-ss2xd\" (UID: \"239de662-d89b-4e6e-a970-56811041192f\") " pod="openshift-authentication/oauth-openshift-558db77b4-ss2xd" Nov 25 11:38:52 crc kubenswrapper[4706]: I1125 11:38:52.669545 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/704d8383-2f51-4244-8a2a-3477cb15f23f-etcd-client\") pod \"etcd-operator-b45778765-2hpv7\" (UID: \"704d8383-2f51-4244-8a2a-3477cb15f23f\") " pod="openshift-etcd-operator/etcd-operator-b45778765-2hpv7" Nov 25 11:38:52 crc kubenswrapper[4706]: I1125 11:38:52.673911 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ab6319ba-e125-4775-83c3-c5624951d634-service-ca-bundle\") pod \"router-default-5444994796-22mnp\" (UID: \"ab6319ba-e125-4775-83c3-c5624951d634\") " pod="openshift-ingress/router-default-5444994796-22mnp" Nov 25 11:38:52 crc kubenswrapper[4706]: I1125 11:38:52.673941 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3eaaf4f5-59b0-4ab7-a865-e962b59f0584-config\") pod 
\"machine-approver-56656f9798-d9vjp\" (UID: \"3eaaf4f5-59b0-4ab7-a865-e962b59f0584\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-d9vjp" Nov 25 11:38:52 crc kubenswrapper[4706]: I1125 11:38:52.673964 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/01b7a9e5-be6c-4a8e-9279-62eaf90e745d-trusted-ca\") pod \"ingress-operator-5b745b69d9-jhptj\" (UID: \"01b7a9e5-be6c-4a8e-9279-62eaf90e745d\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-jhptj" Nov 25 11:38:52 crc kubenswrapper[4706]: I1125 11:38:52.673996 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/f3c75c9b-c79d-4a63-8b0f-c7b474ad4b66-registry-tls\") pod \"image-registry-697d97f7c8-7qf2c\" (UID: \"f3c75c9b-c79d-4a63-8b0f-c7b474ad4b66\") " pod="openshift-image-registry/image-registry-697d97f7c8-7qf2c" Nov 25 11:38:52 crc kubenswrapper[4706]: I1125 11:38:52.674038 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d8kmf\" (UniqueName: \"kubernetes.io/projected/55479c26-471b-4a9c-9d70-ec107786bbc4-kube-api-access-d8kmf\") pod \"machine-config-operator-74547568cd-tf2kg\" (UID: \"55479c26-471b-4a9c-9d70-ec107786bbc4\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-tf2kg" Nov 25 11:38:52 crc kubenswrapper[4706]: I1125 11:38:52.674084 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/239de662-d89b-4e6e-a970-56811041192f-audit-dir\") pod \"oauth-openshift-558db77b4-ss2xd\" (UID: \"239de662-d89b-4e6e-a970-56811041192f\") " pod="openshift-authentication/oauth-openshift-558db77b4-ss2xd" Nov 25 11:38:52 crc kubenswrapper[4706]: I1125 11:38:52.674115 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/239de662-d89b-4e6e-a970-56811041192f-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-ss2xd\" (UID: \"239de662-d89b-4e6e-a970-56811041192f\") " pod="openshift-authentication/oauth-openshift-558db77b4-ss2xd" Nov 25 11:38:52 crc kubenswrapper[4706]: I1125 11:38:52.674142 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/4fbe2538-0d5f-48c2-8819-7bb0386b2710-profile-collector-cert\") pod \"catalog-operator-68c6474976-s9mkm\" (UID: \"4fbe2538-0d5f-48c2-8819-7bb0386b2710\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-s9mkm" Nov 25 11:38:52 crc kubenswrapper[4706]: I1125 11:38:52.674166 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dvr7l\" (UniqueName: \"kubernetes.io/projected/cb5c8374-6eb8-4247-97e3-ff94307782ef-kube-api-access-dvr7l\") pod \"olm-operator-6b444d44fb-x7b2m\" (UID: \"cb5c8374-6eb8-4247-97e3-ff94307782ef\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-x7b2m" Nov 25 11:38:52 crc kubenswrapper[4706]: I1125 11:38:52.674194 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ab6319ba-e125-4775-83c3-c5624951d634-metrics-certs\") pod \"router-default-5444994796-22mnp\" (UID: \"ab6319ba-e125-4775-83c3-c5624951d634\") " pod="openshift-ingress/router-default-5444994796-22mnp" Nov 25 11:38:52 crc kubenswrapper[4706]: I1125 11:38:52.674226 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p82m7\" (UniqueName: \"kubernetes.io/projected/7eaffd03-b03a-491f-9bc3-250a1f9021e7-kube-api-access-p82m7\") pod \"service-ca-operator-777779d784-nqt58\" (UID: \"7eaffd03-b03a-491f-9bc3-250a1f9021e7\") " 
pod="openshift-service-ca-operator/service-ca-operator-777779d784-nqt58" Nov 25 11:38:52 crc kubenswrapper[4706]: I1125 11:38:52.674255 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/704d8383-2f51-4244-8a2a-3477cb15f23f-config\") pod \"etcd-operator-b45778765-2hpv7\" (UID: \"704d8383-2f51-4244-8a2a-3477cb15f23f\") " pod="openshift-etcd-operator/etcd-operator-b45778765-2hpv7" Nov 25 11:38:52 crc kubenswrapper[4706]: I1125 11:38:52.680047 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/44180138-81cd-45b3-b14e-c21819b16645-config\") pod \"kube-controller-manager-operator-78b949d7b-fc942\" (UID: \"44180138-81cd-45b3-b14e-c21819b16645\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-fc942" Nov 25 11:38:52 crc kubenswrapper[4706]: I1125 11:38:52.680403 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/33afcb8d-d045-4897-af65-56b622cdfa58-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-6hgvx\" (UID: \"33afcb8d-d045-4897-af65-56b622cdfa58\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-6hgvx" Nov 25 11:38:52 crc kubenswrapper[4706]: I1125 11:38:52.681126 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/825f088d-44aa-4f48-b95d-6245da5b1775-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-hhh7q\" (UID: \"825f088d-44aa-4f48-b95d-6245da5b1775\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-hhh7q" Nov 25 11:38:52 crc kubenswrapper[4706]: I1125 11:38:52.681702 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: 
\"kubernetes.io/secret/f3c75c9b-c79d-4a63-8b0f-c7b474ad4b66-installation-pull-secrets\") pod \"image-registry-697d97f7c8-7qf2c\" (UID: \"f3c75c9b-c79d-4a63-8b0f-c7b474ad4b66\") " pod="openshift-image-registry/image-registry-697d97f7c8-7qf2c" Nov 25 11:38:52 crc kubenswrapper[4706]: I1125 11:38:52.681925 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/f3c75c9b-c79d-4a63-8b0f-c7b474ad4b66-ca-trust-extracted\") pod \"image-registry-697d97f7c8-7qf2c\" (UID: \"f3c75c9b-c79d-4a63-8b0f-c7b474ad4b66\") " pod="openshift-image-registry/image-registry-697d97f7c8-7qf2c" Nov 25 11:38:52 crc kubenswrapper[4706]: I1125 11:38:52.682045 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/239de662-d89b-4e6e-a970-56811041192f-audit-dir\") pod \"oauth-openshift-558db77b4-ss2xd\" (UID: \"239de662-d89b-4e6e-a970-56811041192f\") " pod="openshift-authentication/oauth-openshift-558db77b4-ss2xd" Nov 25 11:38:52 crc kubenswrapper[4706]: I1125 11:38:52.683504 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/daffec68-fec5-4f3b-9302-4b736b09fc9c-trusted-ca\") pod \"console-operator-58897d9998-qlr24\" (UID: \"daffec68-fec5-4f3b-9302-4b736b09fc9c\") " pod="openshift-console-operator/console-operator-58897d9998-qlr24" Nov 25 11:38:52 crc kubenswrapper[4706]: I1125 11:38:52.683620 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f4sd5\" (UniqueName: \"kubernetes.io/projected/01c8d08c-1ad6-4048-92d4-98382da66cca-kube-api-access-f4sd5\") pod \"csi-hostpathplugin-tgngn\" (UID: \"01c8d08c-1ad6-4048-92d4-98382da66cca\") " pod="hostpath-provisioner/csi-hostpathplugin-tgngn" Nov 25 11:38:52 crc kubenswrapper[4706]: I1125 11:38:52.683677 4706 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7qf2c\" (UID: \"f3c75c9b-c79d-4a63-8b0f-c7b474ad4b66\") " pod="openshift-image-registry/image-registry-697d97f7c8-7qf2c" Nov 25 11:38:52 crc kubenswrapper[4706]: I1125 11:38:52.683717 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/33afcb8d-d045-4897-af65-56b622cdfa58-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-6hgvx\" (UID: \"33afcb8d-d045-4897-af65-56b622cdfa58\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-6hgvx" Nov 25 11:38:52 crc kubenswrapper[4706]: I1125 11:38:52.683752 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/55479c26-471b-4a9c-9d70-ec107786bbc4-proxy-tls\") pod \"machine-config-operator-74547568cd-tf2kg\" (UID: \"55479c26-471b-4a9c-9d70-ec107786bbc4\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-tf2kg" Nov 25 11:38:52 crc kubenswrapper[4706]: I1125 11:38:52.683787 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/239de662-d89b-4e6e-a970-56811041192f-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-ss2xd\" (UID: \"239de662-d89b-4e6e-a970-56811041192f\") " pod="openshift-authentication/oauth-openshift-558db77b4-ss2xd" Nov 25 11:38:52 crc kubenswrapper[4706]: I1125 11:38:52.683854 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/239de662-d89b-4e6e-a970-56811041192f-v4-0-config-system-trusted-ca-bundle\") pod 
\"oauth-openshift-558db77b4-ss2xd\" (UID: \"239de662-d89b-4e6e-a970-56811041192f\") " pod="openshift-authentication/oauth-openshift-558db77b4-ss2xd" Nov 25 11:38:52 crc kubenswrapper[4706]: I1125 11:38:52.683882 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/96c3697f-cf07-44a2-af83-c6aae61f04f9-config\") pod \"kube-apiserver-operator-766d6c64bb-67c5m\" (UID: \"96c3697f-cf07-44a2-af83-c6aae61f04f9\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-67c5m" Nov 25 11:38:52 crc kubenswrapper[4706]: I1125 11:38:52.686072 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kcwc6\" (UniqueName: \"kubernetes.io/projected/bd8d3bba-bf4e-4bda-94ff-ce2902b3299a-kube-api-access-kcwc6\") pod \"marketplace-operator-79b997595-zn9dk\" (UID: \"bd8d3bba-bf4e-4bda-94ff-ce2902b3299a\") " pod="openshift-marketplace/marketplace-operator-79b997595-zn9dk" Nov 25 11:38:52 crc kubenswrapper[4706]: I1125 11:38:52.686118 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/f3c75c9b-c79d-4a63-8b0f-c7b474ad4b66-registry-certificates\") pod \"image-registry-697d97f7c8-7qf2c\" (UID: \"f3c75c9b-c79d-4a63-8b0f-c7b474ad4b66\") " pod="openshift-image-registry/image-registry-697d97f7c8-7qf2c" Nov 25 11:38:52 crc kubenswrapper[4706]: I1125 11:38:52.686157 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/239de662-d89b-4e6e-a970-56811041192f-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-ss2xd\" (UID: \"239de662-d89b-4e6e-a970-56811041192f\") " pod="openshift-authentication/oauth-openshift-558db77b4-ss2xd" Nov 25 11:38:52 crc kubenswrapper[4706]: I1125 11:38:52.686693 4706 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/f3c75c9b-c79d-4a63-8b0f-c7b474ad4b66-registry-tls\") pod \"image-registry-697d97f7c8-7qf2c\" (UID: \"f3c75c9b-c79d-4a63-8b0f-c7b474ad4b66\") " pod="openshift-image-registry/image-registry-697d97f7c8-7qf2c" Nov 25 11:38:52 crc kubenswrapper[4706]: I1125 11:38:52.689254 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ab6319ba-e125-4775-83c3-c5624951d634-service-ca-bundle\") pod \"router-default-5444994796-22mnp\" (UID: \"ab6319ba-e125-4775-83c3-c5624951d634\") " pod="openshift-ingress/router-default-5444994796-22mnp" Nov 25 11:38:52 crc kubenswrapper[4706]: I1125 11:38:52.689314 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/239de662-d89b-4e6e-a970-56811041192f-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-ss2xd\" (UID: \"239de662-d89b-4e6e-a970-56811041192f\") " pod="openshift-authentication/oauth-openshift-558db77b4-ss2xd" Nov 25 11:38:52 crc kubenswrapper[4706]: I1125 11:38:52.690067 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/704d8383-2f51-4244-8a2a-3477cb15f23f-etcd-client\") pod \"etcd-operator-b45778765-2hpv7\" (UID: \"704d8383-2f51-4244-8a2a-3477cb15f23f\") " pod="openshift-etcd-operator/etcd-operator-b45778765-2hpv7" Nov 25 11:38:52 crc kubenswrapper[4706]: I1125 11:38:52.684143 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/ab6319ba-e125-4775-83c3-c5624951d634-stats-auth\") pod \"router-default-5444994796-22mnp\" (UID: \"ab6319ba-e125-4775-83c3-c5624951d634\") " pod="openshift-ingress/router-default-5444994796-22mnp" Nov 25 11:38:52 crc kubenswrapper[4706]: I1125 11:38:52.684710 4706 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/704d8383-2f51-4244-8a2a-3477cb15f23f-config\") pod \"etcd-operator-b45778765-2hpv7\" (UID: \"704d8383-2f51-4244-8a2a-3477cb15f23f\") " pod="openshift-etcd-operator/etcd-operator-b45778765-2hpv7" Nov 25 11:38:52 crc kubenswrapper[4706]: I1125 11:38:52.691650 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3eaaf4f5-59b0-4ab7-a865-e962b59f0584-config\") pod \"machine-approver-56656f9798-d9vjp\" (UID: \"3eaaf4f5-59b0-4ab7-a865-e962b59f0584\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-d9vjp" Nov 25 11:38:52 crc kubenswrapper[4706]: I1125 11:38:52.691182 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/704d8383-2f51-4244-8a2a-3477cb15f23f-etcd-ca\") pod \"etcd-operator-b45778765-2hpv7\" (UID: \"704d8383-2f51-4244-8a2a-3477cb15f23f\") " pod="openshift-etcd-operator/etcd-operator-b45778765-2hpv7" Nov 25 11:38:52 crc kubenswrapper[4706]: I1125 11:38:52.692062 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/3eaaf4f5-59b0-4ab7-a865-e962b59f0584-machine-approver-tls\") pod \"machine-approver-56656f9798-d9vjp\" (UID: \"3eaaf4f5-59b0-4ab7-a865-e962b59f0584\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-d9vjp" Nov 25 11:38:52 crc kubenswrapper[4706]: I1125 11:38:52.692439 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-45hd5\" (UniqueName: \"kubernetes.io/projected/7cd3b65b-a0b4-4cee-87ac-23925d36acb8-kube-api-access-45hd5\") pod \"dns-operator-744455d44c-mnv7h\" (UID: \"7cd3b65b-a0b4-4cee-87ac-23925d36acb8\") " pod="openshift-dns-operator/dns-operator-744455d44c-mnv7h" Nov 25 11:38:52 crc kubenswrapper[4706]: I1125 11:38:52.692510 4706 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/cb5c8374-6eb8-4247-97e3-ff94307782ef-profile-collector-cert\") pod \"olm-operator-6b444d44fb-x7b2m\" (UID: \"cb5c8374-6eb8-4247-97e3-ff94307782ef\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-x7b2m" Nov 25 11:38:52 crc kubenswrapper[4706]: I1125 11:38:52.692606 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/239de662-d89b-4e6e-a970-56811041192f-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-ss2xd\" (UID: \"239de662-d89b-4e6e-a970-56811041192f\") " pod="openshift-authentication/oauth-openshift-558db77b4-ss2xd" Nov 25 11:38:52 crc kubenswrapper[4706]: I1125 11:38:52.692613 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w5shb\" (UniqueName: \"kubernetes.io/projected/0820aa13-f7b2-403e-9d85-1f940abae603-kube-api-access-w5shb\") pod \"kube-storage-version-migrator-operator-b67b599dd-99vrx\" (UID: \"0820aa13-f7b2-403e-9d85-1f940abae603\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-99vrx" Nov 25 11:38:52 crc kubenswrapper[4706]: I1125 11:38:52.692706 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/f0084f7d-107a-484b-bc35-04f9585e0e2b-certs\") pod \"machine-config-server-446sw\" (UID: \"f0084f7d-107a-484b-bc35-04f9585e0e2b\") " pod="openshift-machine-config-operator/machine-config-server-446sw" Nov 25 11:38:52 crc kubenswrapper[4706]: I1125 11:38:52.693643 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/704d8383-2f51-4244-8a2a-3477cb15f23f-etcd-ca\") pod \"etcd-operator-b45778765-2hpv7\" (UID: 
\"704d8383-2f51-4244-8a2a-3477cb15f23f\") " pod="openshift-etcd-operator/etcd-operator-b45778765-2hpv7" Nov 25 11:38:52 crc kubenswrapper[4706]: I1125 11:38:52.693924 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/daffec68-fec5-4f3b-9302-4b736b09fc9c-config\") pod \"console-operator-58897d9998-qlr24\" (UID: \"daffec68-fec5-4f3b-9302-4b736b09fc9c\") " pod="openshift-console-operator/console-operator-58897d9998-qlr24" Nov 25 11:38:52 crc kubenswrapper[4706]: E1125 11:38:52.694483 4706 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-25 11:38:53.194457566 +0000 UTC m=+142.109015147 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7qf2c" (UID: "f3c75c9b-c79d-4a63-8b0f-c7b474ad4b66") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 11:38:52 crc kubenswrapper[4706]: I1125 11:38:52.695788 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/239de662-d89b-4e6e-a970-56811041192f-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-ss2xd\" (UID: \"239de662-d89b-4e6e-a970-56811041192f\") " pod="openshift-authentication/oauth-openshift-558db77b4-ss2xd" Nov 25 11:38:52 crc kubenswrapper[4706]: I1125 11:38:52.696554 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: 
\"kubernetes.io/secret/239de662-d89b-4e6e-a970-56811041192f-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-ss2xd\" (UID: \"239de662-d89b-4e6e-a970-56811041192f\") " pod="openshift-authentication/oauth-openshift-558db77b4-ss2xd" Nov 25 11:38:52 crc kubenswrapper[4706]: I1125 11:38:52.696922 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/239de662-d89b-4e6e-a970-56811041192f-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-ss2xd\" (UID: \"239de662-d89b-4e6e-a970-56811041192f\") " pod="openshift-authentication/oauth-openshift-558db77b4-ss2xd" Nov 25 11:38:52 crc kubenswrapper[4706]: I1125 11:38:52.697807 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ab6319ba-e125-4775-83c3-c5624951d634-metrics-certs\") pod \"router-default-5444994796-22mnp\" (UID: \"ab6319ba-e125-4775-83c3-c5624951d634\") " pod="openshift-ingress/router-default-5444994796-22mnp" Nov 25 11:38:52 crc kubenswrapper[4706]: I1125 11:38:52.698200 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/239de662-d89b-4e6e-a970-56811041192f-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-ss2xd\" (UID: \"239de662-d89b-4e6e-a970-56811041192f\") " pod="openshift-authentication/oauth-openshift-558db77b4-ss2xd" Nov 25 11:38:52 crc kubenswrapper[4706]: I1125 11:38:52.698400 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/96c3697f-cf07-44a2-af83-c6aae61f04f9-config\") pod \"kube-apiserver-operator-766d6c64bb-67c5m\" (UID: \"96c3697f-cf07-44a2-af83-c6aae61f04f9\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-67c5m" Nov 25 11:38:52 crc kubenswrapper[4706]: I1125 
11:38:52.699354 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/239de662-d89b-4e6e-a970-56811041192f-audit-policies\") pod \"oauth-openshift-558db77b4-ss2xd\" (UID: \"239de662-d89b-4e6e-a970-56811041192f\") " pod="openshift-authentication/oauth-openshift-558db77b4-ss2xd" Nov 25 11:38:52 crc kubenswrapper[4706]: I1125 11:38:52.699664 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/704d8383-2f51-4244-8a2a-3477cb15f23f-serving-cert\") pod \"etcd-operator-b45778765-2hpv7\" (UID: \"704d8383-2f51-4244-8a2a-3477cb15f23f\") " pod="openshift-etcd-operator/etcd-operator-b45778765-2hpv7" Nov 25 11:38:52 crc kubenswrapper[4706]: I1125 11:38:52.700094 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/3eaaf4f5-59b0-4ab7-a865-e962b59f0584-auth-proxy-config\") pod \"machine-approver-56656f9798-d9vjp\" (UID: \"3eaaf4f5-59b0-4ab7-a865-e962b59f0584\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-d9vjp" Nov 25 11:38:52 crc kubenswrapper[4706]: I1125 11:38:52.701908 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/96c3697f-cf07-44a2-af83-c6aae61f04f9-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-67c5m\" (UID: \"96c3697f-cf07-44a2-af83-c6aae61f04f9\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-67c5m" Nov 25 11:38:52 crc kubenswrapper[4706]: I1125 11:38:52.703573 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/96c3697f-cf07-44a2-af83-c6aae61f04f9-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-67c5m\" (UID: \"96c3697f-cf07-44a2-af83-c6aae61f04f9\") " 
pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-67c5m" Nov 25 11:38:52 crc kubenswrapper[4706]: I1125 11:38:52.703966 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/33afcb8d-d045-4897-af65-56b622cdfa58-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-6hgvx\" (UID: \"33afcb8d-d045-4897-af65-56b622cdfa58\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-6hgvx" Nov 25 11:38:52 crc kubenswrapper[4706]: I1125 11:38:52.704005 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/198b8b13-3d25-4fbb-81af-a2a39186b64d-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-cs4td\" (UID: \"198b8b13-3d25-4fbb-81af-a2a39186b64d\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-cs4td" Nov 25 11:38:52 crc kubenswrapper[4706]: I1125 11:38:52.704888 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/f3c75c9b-c79d-4a63-8b0f-c7b474ad4b66-trusted-ca\") pod \"image-registry-697d97f7c8-7qf2c\" (UID: \"f3c75c9b-c79d-4a63-8b0f-c7b474ad4b66\") " pod="openshift-image-registry/image-registry-697d97f7c8-7qf2c" Nov 25 11:38:52 crc kubenswrapper[4706]: I1125 11:38:52.705643 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/3eaaf4f5-59b0-4ab7-a865-e962b59f0584-machine-approver-tls\") pod \"machine-approver-56656f9798-d9vjp\" (UID: \"3eaaf4f5-59b0-4ab7-a865-e962b59f0584\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-d9vjp" Nov 25 11:38:52 crc kubenswrapper[4706]: I1125 11:38:52.706627 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: 
\"kubernetes.io/configmap/f3c75c9b-c79d-4a63-8b0f-c7b474ad4b66-registry-certificates\") pod \"image-registry-697d97f7c8-7qf2c\" (UID: \"f3c75c9b-c79d-4a63-8b0f-c7b474ad4b66\") " pod="openshift-image-registry/image-registry-697d97f7c8-7qf2c" Nov 25 11:38:52 crc kubenswrapper[4706]: I1125 11:38:52.707008 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/01b7a9e5-be6c-4a8e-9279-62eaf90e745d-trusted-ca\") pod \"ingress-operator-5b745b69d9-jhptj\" (UID: \"01b7a9e5-be6c-4a8e-9279-62eaf90e745d\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-jhptj" Nov 25 11:38:52 crc kubenswrapper[4706]: I1125 11:38:52.708433 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/239de662-d89b-4e6e-a970-56811041192f-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-ss2xd\" (UID: \"239de662-d89b-4e6e-a970-56811041192f\") " pod="openshift-authentication/oauth-openshift-558db77b4-ss2xd" Nov 25 11:38:52 crc kubenswrapper[4706]: I1125 11:38:52.708735 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/239de662-d89b-4e6e-a970-56811041192f-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-ss2xd\" (UID: \"239de662-d89b-4e6e-a970-56811041192f\") " pod="openshift-authentication/oauth-openshift-558db77b4-ss2xd" Nov 25 11:38:52 crc kubenswrapper[4706]: I1125 11:38:52.708809 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/198b8b13-3d25-4fbb-81af-a2a39186b64d-proxy-tls\") pod \"machine-config-controller-84d6567774-cs4td\" (UID: \"198b8b13-3d25-4fbb-81af-a2a39186b64d\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-cs4td" Nov 25 11:38:52 crc kubenswrapper[4706]: I1125 
11:38:52.709813 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/daffec68-fec5-4f3b-9302-4b736b09fc9c-serving-cert\") pod \"console-operator-58897d9998-qlr24\" (UID: \"daffec68-fec5-4f3b-9302-4b736b09fc9c\") " pod="openshift-console-operator/console-operator-58897d9998-qlr24" Nov 25 11:38:52 crc kubenswrapper[4706]: I1125 11:38:52.709953 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/239de662-d89b-4e6e-a970-56811041192f-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-ss2xd\" (UID: \"239de662-d89b-4e6e-a970-56811041192f\") " pod="openshift-authentication/oauth-openshift-558db77b4-ss2xd" Nov 25 11:38:52 crc kubenswrapper[4706]: I1125 11:38:52.710787 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0820aa13-f7b2-403e-9d85-1f940abae603-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-99vrx\" (UID: \"0820aa13-f7b2-403e-9d85-1f940abae603\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-99vrx" Nov 25 11:38:52 crc kubenswrapper[4706]: I1125 11:38:52.710685 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/44180138-81cd-45b3-b14e-c21819b16645-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-fc942\" (UID: \"44180138-81cd-45b3-b14e-c21819b16645\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-fc942" Nov 25 11:38:52 crc kubenswrapper[4706]: I1125 11:38:52.713156 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/01b7a9e5-be6c-4a8e-9279-62eaf90e745d-metrics-tls\") pod \"ingress-operator-5b745b69d9-jhptj\" (UID: 
\"01b7a9e5-be6c-4a8e-9279-62eaf90e745d\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-jhptj" Nov 25 11:38:52 crc kubenswrapper[4706]: I1125 11:38:52.714901 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/ab6319ba-e125-4775-83c3-c5624951d634-default-certificate\") pod \"router-default-5444994796-22mnp\" (UID: \"ab6319ba-e125-4775-83c3-c5624951d634\") " pod="openshift-ingress/router-default-5444994796-22mnp" Nov 25 11:38:52 crc kubenswrapper[4706]: I1125 11:38:52.718555 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/239de662-d89b-4e6e-a970-56811041192f-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-ss2xd\" (UID: \"239de662-d89b-4e6e-a970-56811041192f\") " pod="openshift-authentication/oauth-openshift-558db77b4-ss2xd" Nov 25 11:38:52 crc kubenswrapper[4706]: I1125 11:38:52.722486 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bqfw8\" (UniqueName: \"kubernetes.io/projected/198b8b13-3d25-4fbb-81af-a2a39186b64d-kube-api-access-bqfw8\") pod \"machine-config-controller-84d6567774-cs4td\" (UID: \"198b8b13-3d25-4fbb-81af-a2a39186b64d\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-cs4td" Nov 25 11:38:52 crc kubenswrapper[4706]: I1125 11:38:52.733645 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/33afcb8d-d045-4897-af65-56b622cdfa58-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-6hgvx\" (UID: \"33afcb8d-d045-4897-af65-56b622cdfa58\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-6hgvx" Nov 25 11:38:52 crc kubenswrapper[4706]: I1125 11:38:52.755001 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-zzlwf\" (UniqueName: \"kubernetes.io/projected/3eaaf4f5-59b0-4ab7-a865-e962b59f0584-kube-api-access-zzlwf\") pod \"machine-approver-56656f9798-d9vjp\" (UID: \"3eaaf4f5-59b0-4ab7-a865-e962b59f0584\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-d9vjp" Nov 25 11:38:52 crc kubenswrapper[4706]: I1125 11:38:52.780228 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vp9cx\" (UniqueName: \"kubernetes.io/projected/ab6319ba-e125-4775-83c3-c5624951d634-kube-api-access-vp9cx\") pod \"router-default-5444994796-22mnp\" (UID: \"ab6319ba-e125-4775-83c3-c5624951d634\") " pod="openshift-ingress/router-default-5444994796-22mnp" Nov 25 11:38:52 crc kubenswrapper[4706]: I1125 11:38:52.793515 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 11:38:52 crc kubenswrapper[4706]: I1125 11:38:52.793725 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d8kmf\" (UniqueName: \"kubernetes.io/projected/55479c26-471b-4a9c-9d70-ec107786bbc4-kube-api-access-d8kmf\") pod \"machine-config-operator-74547568cd-tf2kg\" (UID: \"55479c26-471b-4a9c-9d70-ec107786bbc4\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-tf2kg" Nov 25 11:38:52 crc kubenswrapper[4706]: I1125 11:38:52.793751 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/4fbe2538-0d5f-48c2-8819-7bb0386b2710-profile-collector-cert\") pod \"catalog-operator-68c6474976-s9mkm\" (UID: \"4fbe2538-0d5f-48c2-8819-7bb0386b2710\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-s9mkm" Nov 25 
11:38:52 crc kubenswrapper[4706]: I1125 11:38:52.793769 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dvr7l\" (UniqueName: \"kubernetes.io/projected/cb5c8374-6eb8-4247-97e3-ff94307782ef-kube-api-access-dvr7l\") pod \"olm-operator-6b444d44fb-x7b2m\" (UID: \"cb5c8374-6eb8-4247-97e3-ff94307782ef\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-x7b2m" Nov 25 11:38:52 crc kubenswrapper[4706]: I1125 11:38:52.793787 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p82m7\" (UniqueName: \"kubernetes.io/projected/7eaffd03-b03a-491f-9bc3-250a1f9021e7-kube-api-access-p82m7\") pod \"service-ca-operator-777779d784-nqt58\" (UID: \"7eaffd03-b03a-491f-9bc3-250a1f9021e7\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-nqt58" Nov 25 11:38:52 crc kubenswrapper[4706]: I1125 11:38:52.793809 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/55479c26-471b-4a9c-9d70-ec107786bbc4-proxy-tls\") pod \"machine-config-operator-74547568cd-tf2kg\" (UID: \"55479c26-471b-4a9c-9d70-ec107786bbc4\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-tf2kg" Nov 25 11:38:52 crc kubenswrapper[4706]: I1125 11:38:52.793828 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f4sd5\" (UniqueName: \"kubernetes.io/projected/01c8d08c-1ad6-4048-92d4-98382da66cca-kube-api-access-f4sd5\") pod \"csi-hostpathplugin-tgngn\" (UID: \"01c8d08c-1ad6-4048-92d4-98382da66cca\") " pod="hostpath-provisioner/csi-hostpathplugin-tgngn" Nov 25 11:38:52 crc kubenswrapper[4706]: I1125 11:38:52.793879 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kcwc6\" (UniqueName: \"kubernetes.io/projected/bd8d3bba-bf4e-4bda-94ff-ce2902b3299a-kube-api-access-kcwc6\") pod 
\"marketplace-operator-79b997595-zn9dk\" (UID: \"bd8d3bba-bf4e-4bda-94ff-ce2902b3299a\") " pod="openshift-marketplace/marketplace-operator-79b997595-zn9dk" Nov 25 11:38:52 crc kubenswrapper[4706]: I1125 11:38:52.793911 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/cb5c8374-6eb8-4247-97e3-ff94307782ef-profile-collector-cert\") pod \"olm-operator-6b444d44fb-x7b2m\" (UID: \"cb5c8374-6eb8-4247-97e3-ff94307782ef\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-x7b2m" Nov 25 11:38:52 crc kubenswrapper[4706]: I1125 11:38:52.793942 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/f0084f7d-107a-484b-bc35-04f9585e0e2b-certs\") pod \"machine-config-server-446sw\" (UID: \"f0084f7d-107a-484b-bc35-04f9585e0e2b\") " pod="openshift-machine-config-operator/machine-config-server-446sw" Nov 25 11:38:52 crc kubenswrapper[4706]: E1125 11:38:52.794065 4706 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 11:38:53.29402024 +0000 UTC m=+142.208577681 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 11:38:52 crc kubenswrapper[4706]: I1125 11:38:52.794144 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9z5nk\" (UniqueName: \"kubernetes.io/projected/7f36936f-00b7-4fde-9c95-8fb3433aba0a-kube-api-access-9z5nk\") pod \"migrator-59844c95c7-jg4ng\" (UID: \"7f36936f-00b7-4fde-9c95-8fb3433aba0a\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-jg4ng" Nov 25 11:38:52 crc kubenswrapper[4706]: I1125 11:38:52.794152 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7eaffd03-b03a-491f-9bc3-250a1f9021e7-config\") pod \"service-ca-operator-777779d784-nqt58\" (UID: \"7eaffd03-b03a-491f-9bc3-250a1f9021e7\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-nqt58" Nov 25 11:38:52 crc kubenswrapper[4706]: I1125 11:38:52.794193 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/eea9f096-83bc-4f8c-b405-390011a0dd7e-signing-key\") pod \"service-ca-9c57cc56f-vpgtz\" (UID: \"eea9f096-83bc-4f8c-b405-390011a0dd7e\") " pod="openshift-service-ca/service-ca-9c57cc56f-vpgtz" Nov 25 11:38:52 crc kubenswrapper[4706]: I1125 11:38:52.794220 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/d3172a49-2bd1-4003-8ef0-560d4522e410-package-server-manager-serving-cert\") pod 
\"package-server-manager-789f6589d5-rs94g\" (UID: \"d3172a49-2bd1-4003-8ef0-560d4522e410\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-rs94g" Nov 25 11:38:52 crc kubenswrapper[4706]: I1125 11:38:52.794253 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sv6t6\" (UniqueName: \"kubernetes.io/projected/4fbe2538-0d5f-48c2-8819-7bb0386b2710-kube-api-access-sv6t6\") pod \"catalog-operator-68c6474976-s9mkm\" (UID: \"4fbe2538-0d5f-48c2-8819-7bb0386b2710\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-s9mkm" Nov 25 11:38:52 crc kubenswrapper[4706]: I1125 11:38:52.794282 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/92d6e6ef-5880-4bdf-bdc5-5d2c4591a094-webhook-cert\") pod \"packageserver-d55dfcdfc-bthtj\" (UID: \"92d6e6ef-5880-4bdf-bdc5-5d2c4591a094\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-bthtj" Nov 25 11:38:52 crc kubenswrapper[4706]: I1125 11:38:52.794321 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/01c8d08c-1ad6-4048-92d4-98382da66cca-plugins-dir\") pod \"csi-hostpathplugin-tgngn\" (UID: \"01c8d08c-1ad6-4048-92d4-98382da66cca\") " pod="hostpath-provisioner/csi-hostpathplugin-tgngn" Nov 25 11:38:52 crc kubenswrapper[4706]: I1125 11:38:52.794413 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/f0084f7d-107a-484b-bc35-04f9585e0e2b-node-bootstrap-token\") pod \"machine-config-server-446sw\" (UID: \"f0084f7d-107a-484b-bc35-04f9585e0e2b\") " pod="openshift-machine-config-operator/machine-config-server-446sw" Nov 25 11:38:52 crc kubenswrapper[4706]: I1125 11:38:52.794478 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-954mw\" (UniqueName: \"kubernetes.io/projected/51a87a4e-3d58-48e0-b455-292aa206e149-kube-api-access-954mw\") pod \"collect-profiles-29401170-s4f7r\" (UID: \"51a87a4e-3d58-48e0-b455-292aa206e149\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29401170-s4f7r" Nov 25 11:38:52 crc kubenswrapper[4706]: I1125 11:38:52.794500 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7eaffd03-b03a-491f-9bc3-250a1f9021e7-serving-cert\") pod \"service-ca-operator-777779d784-nqt58\" (UID: \"7eaffd03-b03a-491f-9bc3-250a1f9021e7\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-nqt58" Nov 25 11:38:52 crc kubenswrapper[4706]: I1125 11:38:52.794524 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bd8d3bba-bf4e-4bda-94ff-ce2902b3299a-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-zn9dk\" (UID: \"bd8d3bba-bf4e-4bda-94ff-ce2902b3299a\") " pod="openshift-marketplace/marketplace-operator-79b997595-zn9dk" Nov 25 11:38:52 crc kubenswrapper[4706]: I1125 11:38:52.794554 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/51a87a4e-3d58-48e0-b455-292aa206e149-secret-volume\") pod \"collect-profiles-29401170-s4f7r\" (UID: \"51a87a4e-3d58-48e0-b455-292aa206e149\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29401170-s4f7r" Nov 25 11:38:52 crc kubenswrapper[4706]: I1125 11:38:52.794624 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/01c8d08c-1ad6-4048-92d4-98382da66cca-mountpoint-dir\") pod \"csi-hostpathplugin-tgngn\" (UID: \"01c8d08c-1ad6-4048-92d4-98382da66cca\") " pod="hostpath-provisioner/csi-hostpathplugin-tgngn" Nov 25 11:38:52 crc kubenswrapper[4706]: I1125 
11:38:52.794658 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9bgl7\" (UniqueName: \"kubernetes.io/projected/f1ac94b4-787a-4778-8891-84b37d9e7565-kube-api-access-9bgl7\") pod \"ingress-canary-q466t\" (UID: \"f1ac94b4-787a-4778-8891-84b37d9e7565\") " pod="openshift-ingress-canary/ingress-canary-q466t" Nov 25 11:38:52 crc kubenswrapper[4706]: I1125 11:38:52.794687 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/916f095b-bd5f-497f-8771-aff8fd799255-config-volume\") pod \"dns-default-wswtg\" (UID: \"916f095b-bd5f-497f-8771-aff8fd799255\") " pod="openshift-dns/dns-default-wswtg" Nov 25 11:38:52 crc kubenswrapper[4706]: I1125 11:38:52.794735 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/bd8d3bba-bf4e-4bda-94ff-ce2902b3299a-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-zn9dk\" (UID: \"bd8d3bba-bf4e-4bda-94ff-ce2902b3299a\") " pod="openshift-marketplace/marketplace-operator-79b997595-zn9dk" Nov 25 11:38:52 crc kubenswrapper[4706]: I1125 11:38:52.794797 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/01c8d08c-1ad6-4048-92d4-98382da66cca-socket-dir\") pod \"csi-hostpathplugin-tgngn\" (UID: \"01c8d08c-1ad6-4048-92d4-98382da66cca\") " pod="hostpath-provisioner/csi-hostpathplugin-tgngn" Nov 25 11:38:52 crc kubenswrapper[4706]: I1125 11:38:52.794862 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qrvp9\" (UniqueName: \"kubernetes.io/projected/eea9f096-83bc-4f8c-b405-390011a0dd7e-kube-api-access-qrvp9\") pod \"service-ca-9c57cc56f-vpgtz\" (UID: \"eea9f096-83bc-4f8c-b405-390011a0dd7e\") " pod="openshift-service-ca/service-ca-9c57cc56f-vpgtz" Nov 25 11:38:52 crc 
kubenswrapper[4706]: I1125 11:38:52.794888 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zqf6k\" (UniqueName: \"kubernetes.io/projected/f0084f7d-107a-484b-bc35-04f9585e0e2b-kube-api-access-zqf6k\") pod \"machine-config-server-446sw\" (UID: \"f0084f7d-107a-484b-bc35-04f9585e0e2b\") " pod="openshift-machine-config-operator/machine-config-server-446sw" Nov 25 11:38:52 crc kubenswrapper[4706]: I1125 11:38:52.794925 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/55479c26-471b-4a9c-9d70-ec107786bbc4-auth-proxy-config\") pod \"machine-config-operator-74547568cd-tf2kg\" (UID: \"55479c26-471b-4a9c-9d70-ec107786bbc4\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-tf2kg" Nov 25 11:38:52 crc kubenswrapper[4706]: I1125 11:38:52.794978 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/cb5c8374-6eb8-4247-97e3-ff94307782ef-srv-cert\") pod \"olm-operator-6b444d44fb-x7b2m\" (UID: \"cb5c8374-6eb8-4247-97e3-ff94307782ef\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-x7b2m" Nov 25 11:38:52 crc kubenswrapper[4706]: I1125 11:38:52.795052 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/92d6e6ef-5880-4bdf-bdc5-5d2c4591a094-tmpfs\") pod \"packageserver-d55dfcdfc-bthtj\" (UID: \"92d6e6ef-5880-4bdf-bdc5-5d2c4591a094\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-bthtj" Nov 25 11:38:52 crc kubenswrapper[4706]: I1125 11:38:52.795078 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/92d6e6ef-5880-4bdf-bdc5-5d2c4591a094-apiservice-cert\") pod \"packageserver-d55dfcdfc-bthtj\" (UID: \"92d6e6ef-5880-4bdf-bdc5-5d2c4591a094\") " 
pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-bthtj" Nov 25 11:38:52 crc kubenswrapper[4706]: I1125 11:38:52.795115 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/916f095b-bd5f-497f-8771-aff8fd799255-metrics-tls\") pod \"dns-default-wswtg\" (UID: \"916f095b-bd5f-497f-8771-aff8fd799255\") " pod="openshift-dns/dns-default-wswtg" Nov 25 11:38:52 crc kubenswrapper[4706]: I1125 11:38:52.795148 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/51a87a4e-3d58-48e0-b455-292aa206e149-config-volume\") pod \"collect-profiles-29401170-s4f7r\" (UID: \"51a87a4e-3d58-48e0-b455-292aa206e149\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29401170-s4f7r" Nov 25 11:38:52 crc kubenswrapper[4706]: I1125 11:38:52.795172 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/01c8d08c-1ad6-4048-92d4-98382da66cca-csi-data-dir\") pod \"csi-hostpathplugin-tgngn\" (UID: \"01c8d08c-1ad6-4048-92d4-98382da66cca\") " pod="hostpath-provisioner/csi-hostpathplugin-tgngn" Nov 25 11:38:52 crc kubenswrapper[4706]: I1125 11:38:52.795218 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/55479c26-471b-4a9c-9d70-ec107786bbc4-images\") pod \"machine-config-operator-74547568cd-tf2kg\" (UID: \"55479c26-471b-4a9c-9d70-ec107786bbc4\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-tf2kg" Nov 25 11:38:52 crc kubenswrapper[4706]: I1125 11:38:52.795212 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7eaffd03-b03a-491f-9bc3-250a1f9021e7-config\") pod \"service-ca-operator-777779d784-nqt58\" (UID: \"7eaffd03-b03a-491f-9bc3-250a1f9021e7\") " 
pod="openshift-service-ca-operator/service-ca-operator-777779d784-nqt58" Nov 25 11:38:52 crc kubenswrapper[4706]: I1125 11:38:52.795240 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kv7qm\" (UniqueName: \"kubernetes.io/projected/916f095b-bd5f-497f-8771-aff8fd799255-kube-api-access-kv7qm\") pod \"dns-default-wswtg\" (UID: \"916f095b-bd5f-497f-8771-aff8fd799255\") " pod="openshift-dns/dns-default-wswtg" Nov 25 11:38:52 crc kubenswrapper[4706]: I1125 11:38:52.795259 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/f1ac94b4-787a-4778-8891-84b37d9e7565-cert\") pod \"ingress-canary-q466t\" (UID: \"f1ac94b4-787a-4778-8891-84b37d9e7565\") " pod="openshift-ingress-canary/ingress-canary-q466t" Nov 25 11:38:52 crc kubenswrapper[4706]: I1125 11:38:52.795278 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/eea9f096-83bc-4f8c-b405-390011a0dd7e-signing-cabundle\") pod \"service-ca-9c57cc56f-vpgtz\" (UID: \"eea9f096-83bc-4f8c-b405-390011a0dd7e\") " pod="openshift-service-ca/service-ca-9c57cc56f-vpgtz" Nov 25 11:38:52 crc kubenswrapper[4706]: I1125 11:38:52.796214 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/55479c26-471b-4a9c-9d70-ec107786bbc4-auth-proxy-config\") pod \"machine-config-operator-74547568cd-tf2kg\" (UID: \"55479c26-471b-4a9c-9d70-ec107786bbc4\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-tf2kg" Nov 25 11:38:52 crc kubenswrapper[4706]: I1125 11:38:52.797570 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bd8d3bba-bf4e-4bda-94ff-ce2902b3299a-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-zn9dk\" (UID: 
\"bd8d3bba-bf4e-4bda-94ff-ce2902b3299a\") " pod="openshift-marketplace/marketplace-operator-79b997595-zn9dk" Nov 25 11:38:52 crc kubenswrapper[4706]: I1125 11:38:52.797835 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/92d6e6ef-5880-4bdf-bdc5-5d2c4591a094-tmpfs\") pod \"packageserver-d55dfcdfc-bthtj\" (UID: \"92d6e6ef-5880-4bdf-bdc5-5d2c4591a094\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-bthtj" Nov 25 11:38:52 crc kubenswrapper[4706]: I1125 11:38:52.798115 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/eea9f096-83bc-4f8c-b405-390011a0dd7e-signing-cabundle\") pod \"service-ca-9c57cc56f-vpgtz\" (UID: \"eea9f096-83bc-4f8c-b405-390011a0dd7e\") " pod="openshift-service-ca/service-ca-9c57cc56f-vpgtz" Nov 25 11:38:52 crc kubenswrapper[4706]: I1125 11:38:52.798775 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/51a87a4e-3d58-48e0-b455-292aa206e149-config-volume\") pod \"collect-profiles-29401170-s4f7r\" (UID: \"51a87a4e-3d58-48e0-b455-292aa206e149\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29401170-s4f7r" Nov 25 11:38:52 crc kubenswrapper[4706]: I1125 11:38:52.802251 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/eea9f096-83bc-4f8c-b405-390011a0dd7e-signing-key\") pod \"service-ca-9c57cc56f-vpgtz\" (UID: \"eea9f096-83bc-4f8c-b405-390011a0dd7e\") " pod="openshift-service-ca/service-ca-9c57cc56f-vpgtz" Nov 25 11:38:52 crc kubenswrapper[4706]: I1125 11:38:52.802422 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/cb5c8374-6eb8-4247-97e3-ff94307782ef-profile-collector-cert\") pod \"olm-operator-6b444d44fb-x7b2m\" (UID: 
\"cb5c8374-6eb8-4247-97e3-ff94307782ef\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-x7b2m" Nov 25 11:38:52 crc kubenswrapper[4706]: I1125 11:38:52.802540 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/cb8f2779-a7df-4ead-a209-9e8024e20647-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-fh2jc\" (UID: \"cb8f2779-a7df-4ead-a209-9e8024e20647\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-fh2jc" Nov 25 11:38:52 crc kubenswrapper[4706]: I1125 11:38:52.802961 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/4fbe2538-0d5f-48c2-8819-7bb0386b2710-srv-cert\") pod \"catalog-operator-68c6474976-s9mkm\" (UID: \"4fbe2538-0d5f-48c2-8819-7bb0386b2710\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-s9mkm" Nov 25 11:38:52 crc kubenswrapper[4706]: I1125 11:38:52.802998 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/01c8d08c-1ad6-4048-92d4-98382da66cca-registration-dir\") pod \"csi-hostpathplugin-tgngn\" (UID: \"01c8d08c-1ad6-4048-92d4-98382da66cca\") " pod="hostpath-provisioner/csi-hostpathplugin-tgngn" Nov 25 11:38:52 crc kubenswrapper[4706]: I1125 11:38:52.803029 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vqsnq\" (UniqueName: \"kubernetes.io/projected/92d6e6ef-5880-4bdf-bdc5-5d2c4591a094-kube-api-access-vqsnq\") pod \"packageserver-d55dfcdfc-bthtj\" (UID: \"92d6e6ef-5880-4bdf-bdc5-5d2c4591a094\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-bthtj" Nov 25 11:38:52 crc kubenswrapper[4706]: I1125 11:38:52.803057 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fpgpj\" (UniqueName: 
\"kubernetes.io/projected/d3172a49-2bd1-4003-8ef0-560d4522e410-kube-api-access-fpgpj\") pod \"package-server-manager-789f6589d5-rs94g\" (UID: \"d3172a49-2bd1-4003-8ef0-560d4522e410\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-rs94g" Nov 25 11:38:52 crc kubenswrapper[4706]: I1125 11:38:52.805097 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/55479c26-471b-4a9c-9d70-ec107786bbc4-images\") pod \"machine-config-operator-74547568cd-tf2kg\" (UID: \"55479c26-471b-4a9c-9d70-ec107786bbc4\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-tf2kg" Nov 25 11:38:52 crc kubenswrapper[4706]: I1125 11:38:52.806547 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/01c8d08c-1ad6-4048-92d4-98382da66cca-registration-dir\") pod \"csi-hostpathplugin-tgngn\" (UID: \"01c8d08c-1ad6-4048-92d4-98382da66cca\") " pod="hostpath-provisioner/csi-hostpathplugin-tgngn" Nov 25 11:38:52 crc kubenswrapper[4706]: I1125 11:38:52.806709 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/01c8d08c-1ad6-4048-92d4-98382da66cca-plugins-dir\") pod \"csi-hostpathplugin-tgngn\" (UID: \"01c8d08c-1ad6-4048-92d4-98382da66cca\") " pod="hostpath-provisioner/csi-hostpathplugin-tgngn" Nov 25 11:38:52 crc kubenswrapper[4706]: I1125 11:38:52.807190 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pjh2n\" (UniqueName: \"kubernetes.io/projected/cb8f2779-a7df-4ead-a209-9e8024e20647-kube-api-access-pjh2n\") pod \"multus-admission-controller-857f4d67dd-fh2jc\" (UID: \"cb8f2779-a7df-4ead-a209-9e8024e20647\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-fh2jc" Nov 25 11:38:52 crc kubenswrapper[4706]: I1125 11:38:52.807383 4706 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/01c8d08c-1ad6-4048-92d4-98382da66cca-mountpoint-dir\") pod \"csi-hostpathplugin-tgngn\" (UID: \"01c8d08c-1ad6-4048-92d4-98382da66cca\") " pod="hostpath-provisioner/csi-hostpathplugin-tgngn" Nov 25 11:38:52 crc kubenswrapper[4706]: I1125 11:38:52.807649 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/01c8d08c-1ad6-4048-92d4-98382da66cca-socket-dir\") pod \"csi-hostpathplugin-tgngn\" (UID: \"01c8d08c-1ad6-4048-92d4-98382da66cca\") " pod="hostpath-provisioner/csi-hostpathplugin-tgngn" Nov 25 11:38:52 crc kubenswrapper[4706]: I1125 11:38:52.808424 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/01c8d08c-1ad6-4048-92d4-98382da66cca-csi-data-dir\") pod \"csi-hostpathplugin-tgngn\" (UID: \"01c8d08c-1ad6-4048-92d4-98382da66cca\") " pod="hostpath-provisioner/csi-hostpathplugin-tgngn" Nov 25 11:38:52 crc kubenswrapper[4706]: I1125 11:38:52.809722 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/916f095b-bd5f-497f-8771-aff8fd799255-config-volume\") pod \"dns-default-wswtg\" (UID: \"916f095b-bd5f-497f-8771-aff8fd799255\") " pod="openshift-dns/dns-default-wswtg" Nov 25 11:38:52 crc kubenswrapper[4706]: I1125 11:38:52.815021 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7eaffd03-b03a-491f-9bc3-250a1f9021e7-serving-cert\") pod \"service-ca-operator-777779d784-nqt58\" (UID: \"7eaffd03-b03a-491f-9bc3-250a1f9021e7\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-nqt58" Nov 25 11:38:52 crc kubenswrapper[4706]: I1125 11:38:52.819931 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: 
\"kubernetes.io/secret/55479c26-471b-4a9c-9d70-ec107786bbc4-proxy-tls\") pod \"machine-config-operator-74547568cd-tf2kg\" (UID: \"55479c26-471b-4a9c-9d70-ec107786bbc4\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-tf2kg" Nov 25 11:38:52 crc kubenswrapper[4706]: I1125 11:38:52.820567 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/secret/f0084f7d-107a-484b-bc35-04f9585e0e2b-certs\") pod \"machine-config-server-446sw\" (UID: \"f0084f7d-107a-484b-bc35-04f9585e0e2b\") " pod="openshift-machine-config-operator/machine-config-server-446sw" Nov 25 11:38:52 crc kubenswrapper[4706]: I1125 11:38:52.821157 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/92d6e6ef-5880-4bdf-bdc5-5d2c4591a094-apiservice-cert\") pod \"packageserver-d55dfcdfc-bthtj\" (UID: \"92d6e6ef-5880-4bdf-bdc5-5d2c4591a094\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-bthtj" Nov 25 11:38:52 crc kubenswrapper[4706]: I1125 11:38:52.821867 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/f1ac94b4-787a-4778-8891-84b37d9e7565-cert\") pod \"ingress-canary-q466t\" (UID: \"f1ac94b4-787a-4778-8891-84b37d9e7565\") " pod="openshift-ingress-canary/ingress-canary-q466t" Nov 25 11:38:52 crc kubenswrapper[4706]: I1125 11:38:52.822777 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/cb5c8374-6eb8-4247-97e3-ff94307782ef-srv-cert\") pod \"olm-operator-6b444d44fb-x7b2m\" (UID: \"cb5c8374-6eb8-4247-97e3-ff94307782ef\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-x7b2m" Nov 25 11:38:52 crc kubenswrapper[4706]: I1125 11:38:52.823220 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-bootstrap-token\" (UniqueName: 
\"kubernetes.io/secret/f0084f7d-107a-484b-bc35-04f9585e0e2b-node-bootstrap-token\") pod \"machine-config-server-446sw\" (UID: \"f0084f7d-107a-484b-bc35-04f9585e0e2b\") " pod="openshift-machine-config-operator/machine-config-server-446sw" Nov 25 11:38:52 crc kubenswrapper[4706]: I1125 11:38:52.825098 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/916f095b-bd5f-497f-8771-aff8fd799255-metrics-tls\") pod \"dns-default-wswtg\" (UID: \"916f095b-bd5f-497f-8771-aff8fd799255\") " pod="openshift-dns/dns-default-wswtg" Nov 25 11:38:52 crc kubenswrapper[4706]: I1125 11:38:52.827243 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tw47q\" (UniqueName: \"kubernetes.io/projected/01b7a9e5-be6c-4a8e-9279-62eaf90e745d-kube-api-access-tw47q\") pod \"ingress-operator-5b745b69d9-jhptj\" (UID: \"01b7a9e5-be6c-4a8e-9279-62eaf90e745d\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-jhptj" Nov 25 11:38:52 crc kubenswrapper[4706]: I1125 11:38:52.836962 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/bd8d3bba-bf4e-4bda-94ff-ce2902b3299a-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-zn9dk\" (UID: \"bd8d3bba-bf4e-4bda-94ff-ce2902b3299a\") " pod="openshift-marketplace/marketplace-operator-79b997595-zn9dk" Nov 25 11:38:52 crc kubenswrapper[4706]: I1125 11:38:52.837107 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/4fbe2538-0d5f-48c2-8819-7bb0386b2710-profile-collector-cert\") pod \"catalog-operator-68c6474976-s9mkm\" (UID: \"4fbe2538-0d5f-48c2-8819-7bb0386b2710\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-s9mkm" Nov 25 11:38:52 crc kubenswrapper[4706]: I1125 11:38:52.837832 4706 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/92d6e6ef-5880-4bdf-bdc5-5d2c4591a094-webhook-cert\") pod \"packageserver-d55dfcdfc-bthtj\" (UID: \"92d6e6ef-5880-4bdf-bdc5-5d2c4591a094\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-bthtj" Nov 25 11:38:52 crc kubenswrapper[4706]: I1125 11:38:52.837927 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dcv28\" (UniqueName: \"kubernetes.io/projected/239de662-d89b-4e6e-a970-56811041192f-kube-api-access-dcv28\") pod \"oauth-openshift-558db77b4-ss2xd\" (UID: \"239de662-d89b-4e6e-a970-56811041192f\") " pod="openshift-authentication/oauth-openshift-558db77b4-ss2xd" Nov 25 11:38:52 crc kubenswrapper[4706]: I1125 11:38:52.838432 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/d3172a49-2bd1-4003-8ef0-560d4522e410-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-rs94g\" (UID: \"d3172a49-2bd1-4003-8ef0-560d4522e410\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-rs94g" Nov 25 11:38:52 crc kubenswrapper[4706]: I1125 11:38:52.840822 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/cb8f2779-a7df-4ead-a209-9e8024e20647-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-fh2jc\" (UID: \"cb8f2779-a7df-4ead-a209-9e8024e20647\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-fh2jc" Nov 25 11:38:52 crc kubenswrapper[4706]: I1125 11:38:52.841144 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/4fbe2538-0d5f-48c2-8819-7bb0386b2710-srv-cert\") pod \"catalog-operator-68c6474976-s9mkm\" (UID: \"4fbe2538-0d5f-48c2-8819-7bb0386b2710\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-s9mkm" Nov 25 
11:38:52 crc kubenswrapper[4706]: I1125 11:38:52.841515 4706 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-67c5m" Nov 25 11:38:52 crc kubenswrapper[4706]: I1125 11:38:52.842504 4706 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-kg9rr"] Nov 25 11:38:52 crc kubenswrapper[4706]: I1125 11:38:52.846124 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/51a87a4e-3d58-48e0-b455-292aa206e149-secret-volume\") pod \"collect-profiles-29401170-s4f7r\" (UID: \"51a87a4e-3d58-48e0-b455-292aa206e149\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29401170-s4f7r" Nov 25 11:38:52 crc kubenswrapper[4706]: I1125 11:38:52.850254 4706 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-6hgvx" Nov 25 11:38:52 crc kubenswrapper[4706]: I1125 11:38:52.854604 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/44180138-81cd-45b3-b14e-c21819b16645-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-fc942\" (UID: \"44180138-81cd-45b3-b14e-c21819b16645\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-fc942" Nov 25 11:38:52 crc kubenswrapper[4706]: W1125 11:38:52.857252 4706 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf6ce79ff_bc51_4375_bd97_7e6ba29f263d.slice/crio-393ed2e9e4a36ce8c1350c426048e9e1f13377ef219653233235b10c327900cc WatchSource:0}: Error finding container 393ed2e9e4a36ce8c1350c426048e9e1f13377ef219653233235b10c327900cc: Status 404 returned error can't find the container with id 
393ed2e9e4a36ce8c1350c426048e9e1f13377ef219653233235b10c327900cc Nov 25 11:38:52 crc kubenswrapper[4706]: I1125 11:38:52.861939 4706 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-22mnp" Nov 25 11:38:52 crc kubenswrapper[4706]: I1125 11:38:52.863652 4706 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-cs4td" Nov 25 11:38:52 crc kubenswrapper[4706]: I1125 11:38:52.877407 4706 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-jg4ng" Nov 25 11:38:52 crc kubenswrapper[4706]: I1125 11:38:52.879508 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-krn2d\" (UniqueName: \"kubernetes.io/projected/825f088d-44aa-4f48-b95d-6245da5b1775-kube-api-access-krn2d\") pod \"control-plane-machine-set-operator-78cbb6b69f-hhh7q\" (UID: \"825f088d-44aa-4f48-b95d-6245da5b1775\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-hhh7q" Nov 25 11:38:52 crc kubenswrapper[4706]: I1125 11:38:52.913496 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7qf2c\" (UID: \"f3c75c9b-c79d-4a63-8b0f-c7b474ad4b66\") " pod="openshift-image-registry/image-registry-697d97f7c8-7qf2c" Nov 25 11:38:52 crc kubenswrapper[4706]: E1125 11:38:52.914192 4706 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-25 11:38:53.414162774 +0000 UTC m=+142.328720155 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7qf2c" (UID: "f3c75c9b-c79d-4a63-8b0f-c7b474ad4b66") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 11:38:52 crc kubenswrapper[4706]: I1125 11:38:52.915933 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h6qlz\" (UniqueName: \"kubernetes.io/projected/daffec68-fec5-4f3b-9302-4b736b09fc9c-kube-api-access-h6qlz\") pod \"console-operator-58897d9998-qlr24\" (UID: \"daffec68-fec5-4f3b-9302-4b736b09fc9c\") " pod="openshift-console-operator/console-operator-58897d9998-qlr24" Nov 25 11:38:52 crc kubenswrapper[4706]: I1125 11:38:52.944699 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/f3c75c9b-c79d-4a63-8b0f-c7b474ad4b66-bound-sa-token\") pod \"image-registry-697d97f7c8-7qf2c\" (UID: \"f3c75c9b-c79d-4a63-8b0f-c7b474ad4b66\") " pod="openshift-image-registry/image-registry-697d97f7c8-7qf2c" Nov 25 11:38:52 crc kubenswrapper[4706]: W1125 11:38:52.962200 4706 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podab6319ba_e125_4775_83c3_c5624951d634.slice/crio-43160d9cd2fe77b7580b8f492c4498ee883eab63ff419ec9f9a64edb9259ebc6 WatchSource:0}: Error finding container 43160d9cd2fe77b7580b8f492c4498ee883eab63ff419ec9f9a64edb9259ebc6: Status 404 returned error can't find the container with id 43160d9cd2fe77b7580b8f492c4498ee883eab63ff419ec9f9a64edb9259ebc6 Nov 25 11:38:52 crc kubenswrapper[4706]: I1125 11:38:52.970534 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w5shb\" (UniqueName: 
\"kubernetes.io/projected/0820aa13-f7b2-403e-9d85-1f940abae603-kube-api-access-w5shb\") pod \"kube-storage-version-migrator-operator-b67b599dd-99vrx\" (UID: \"0820aa13-f7b2-403e-9d85-1f940abae603\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-99vrx" Nov 25 11:38:52 crc kubenswrapper[4706]: I1125 11:38:52.982220 4706 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-jd66x"] Nov 25 11:38:52 crc kubenswrapper[4706]: I1125 11:38:52.989004 4706 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-jq6ck"] Nov 25 11:38:52 crc kubenswrapper[4706]: I1125 11:38:52.997134 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8wstr\" (UniqueName: \"kubernetes.io/projected/704d8383-2f51-4244-8a2a-3477cb15f23f-kube-api-access-8wstr\") pod \"etcd-operator-b45778765-2hpv7\" (UID: \"704d8383-2f51-4244-8a2a-3477cb15f23f\") " pod="openshift-etcd-operator/etcd-operator-b45778765-2hpv7" Nov 25 11:38:53 crc kubenswrapper[4706]: I1125 11:38:53.004377 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-45hd5\" (UniqueName: \"kubernetes.io/projected/7cd3b65b-a0b4-4cee-87ac-23925d36acb8-kube-api-access-45hd5\") pod \"dns-operator-744455d44c-mnv7h\" (UID: \"7cd3b65b-a0b4-4cee-87ac-23925d36acb8\") " pod="openshift-dns-operator/dns-operator-744455d44c-mnv7h" Nov 25 11:38:53 crc kubenswrapper[4706]: I1125 11:38:53.015010 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 11:38:53 crc kubenswrapper[4706]: E1125 11:38:53.015444 4706 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 11:38:53.515392053 +0000 UTC m=+142.429949434 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 11:38:53 crc kubenswrapper[4706]: I1125 11:38:53.016084 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7qf2c\" (UID: \"f3c75c9b-c79d-4a63-8b0f-c7b474ad4b66\") " pod="openshift-image-registry/image-registry-697d97f7c8-7qf2c" Nov 25 11:38:53 crc kubenswrapper[4706]: E1125 11:38:53.017753 4706 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-25 11:38:53.517734876 +0000 UTC m=+142.432292257 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7qf2c" (UID: "f3c75c9b-c79d-4a63-8b0f-c7b474ad4b66") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 11:38:53 crc kubenswrapper[4706]: I1125 11:38:53.023482 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/01b7a9e5-be6c-4a8e-9279-62eaf90e745d-bound-sa-token\") pod \"ingress-operator-5b745b69d9-jhptj\" (UID: \"01b7a9e5-be6c-4a8e-9279-62eaf90e745d\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-jhptj" Nov 25 11:38:53 crc kubenswrapper[4706]: I1125 11:38:53.041091 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5hfdx\" (UniqueName: \"kubernetes.io/projected/f3c75c9b-c79d-4a63-8b0f-c7b474ad4b66-kube-api-access-5hfdx\") pod \"image-registry-697d97f7c8-7qf2c\" (UID: \"f3c75c9b-c79d-4a63-8b0f-c7b474ad4b66\") " pod="openshift-image-registry/image-registry-697d97f7c8-7qf2c" Nov 25 11:38:53 crc kubenswrapper[4706]: I1125 11:38:53.047553 4706 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-d9vjp" Nov 25 11:38:53 crc kubenswrapper[4706]: I1125 11:38:53.055926 4706 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-qlr24" Nov 25 11:38:53 crc kubenswrapper[4706]: I1125 11:38:53.060846 4706 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-9z28x"] Nov 25 11:38:53 crc kubenswrapper[4706]: I1125 11:38:53.064932 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p82m7\" (UniqueName: \"kubernetes.io/projected/7eaffd03-b03a-491f-9bc3-250a1f9021e7-kube-api-access-p82m7\") pod \"service-ca-operator-777779d784-nqt58\" (UID: \"7eaffd03-b03a-491f-9bc3-250a1f9021e7\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-nqt58" Nov 25 11:38:53 crc kubenswrapper[4706]: I1125 11:38:53.075770 4706 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-svsw6"] Nov 25 11:38:53 crc kubenswrapper[4706]: I1125 11:38:53.075803 4706 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-ss2xd" Nov 25 11:38:53 crc kubenswrapper[4706]: W1125 11:38:53.078585 4706 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbb17dbfb_8a35_405a_9f44_044252ee8eb4.slice/crio-bb42d195205e3763712620568463ee037606a519c0f35ef4a032a8846ea7ea3a WatchSource:0}: Error finding container bb42d195205e3763712620568463ee037606a519c0f35ef4a032a8846ea7ea3a: Status 404 returned error can't find the container with id bb42d195205e3763712620568463ee037606a519c0f35ef4a032a8846ea7ea3a Nov 25 11:38:53 crc kubenswrapper[4706]: I1125 11:38:53.080069 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dvr7l\" (UniqueName: \"kubernetes.io/projected/cb5c8374-6eb8-4247-97e3-ff94307782ef-kube-api-access-dvr7l\") pod \"olm-operator-6b444d44fb-x7b2m\" (UID: \"cb5c8374-6eb8-4247-97e3-ff94307782ef\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-x7b2m" Nov 25 11:38:53 crc kubenswrapper[4706]: I1125 11:38:53.095078 4706 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-jhptj" Nov 25 11:38:53 crc kubenswrapper[4706]: I1125 11:38:53.099134 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d8kmf\" (UniqueName: \"kubernetes.io/projected/55479c26-471b-4a9c-9d70-ec107786bbc4-kube-api-access-d8kmf\") pod \"machine-config-operator-74547568cd-tf2kg\" (UID: \"55479c26-471b-4a9c-9d70-ec107786bbc4\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-tf2kg" Nov 25 11:38:53 crc kubenswrapper[4706]: I1125 11:38:53.101694 4706 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-67c5m"] Nov 25 11:38:53 crc kubenswrapper[4706]: I1125 11:38:53.119546 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 11:38:53 crc kubenswrapper[4706]: I1125 11:38:53.120322 4706 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-mnv7h" Nov 25 11:38:53 crc kubenswrapper[4706]: I1125 11:38:53.120719 4706 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-fc942" Nov 25 11:38:53 crc kubenswrapper[4706]: E1125 11:38:53.120979 4706 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 11:38:53.62096138 +0000 UTC m=+142.535518761 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 11:38:53 crc kubenswrapper[4706]: I1125 11:38:53.122489 4706 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-2hpv7" Nov 25 11:38:53 crc kubenswrapper[4706]: I1125 11:38:53.126386 4706 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-99vrx" Nov 25 11:38:53 crc kubenswrapper[4706]: I1125 11:38:53.140760 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f4sd5\" (UniqueName: \"kubernetes.io/projected/01c8d08c-1ad6-4048-92d4-98382da66cca-kube-api-access-f4sd5\") pod \"csi-hostpathplugin-tgngn\" (UID: \"01c8d08c-1ad6-4048-92d4-98382da66cca\") " pod="hostpath-provisioner/csi-hostpathplugin-tgngn" Nov 25 11:38:53 crc kubenswrapper[4706]: I1125 11:38:53.144602 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zqf6k\" (UniqueName: \"kubernetes.io/projected/f0084f7d-107a-484b-bc35-04f9585e0e2b-kube-api-access-zqf6k\") pod \"machine-config-server-446sw\" (UID: \"f0084f7d-107a-484b-bc35-04f9585e0e2b\") " pod="openshift-machine-config-operator/machine-config-server-446sw" Nov 25 11:38:53 crc kubenswrapper[4706]: I1125 11:38:53.173737 4706 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-hhh7q" Nov 25 11:38:53 crc kubenswrapper[4706]: I1125 11:38:53.187045 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kcwc6\" (UniqueName: \"kubernetes.io/projected/bd8d3bba-bf4e-4bda-94ff-ce2902b3299a-kube-api-access-kcwc6\") pod \"marketplace-operator-79b997595-zn9dk\" (UID: \"bd8d3bba-bf4e-4bda-94ff-ce2902b3299a\") " pod="openshift-marketplace/marketplace-operator-79b997595-zn9dk" Nov 25 11:38:53 crc kubenswrapper[4706]: I1125 11:38:53.189894 4706 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-6hgvx"] Nov 25 11:38:53 crc kubenswrapper[4706]: I1125 11:38:53.192922 4706 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-tf2kg" Nov 25 11:38:53 crc kubenswrapper[4706]: I1125 11:38:53.202614 4706 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-zn9dk" Nov 25 11:38:53 crc kubenswrapper[4706]: I1125 11:38:53.210492 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qrvp9\" (UniqueName: \"kubernetes.io/projected/eea9f096-83bc-4f8c-b405-390011a0dd7e-kube-api-access-qrvp9\") pod \"service-ca-9c57cc56f-vpgtz\" (UID: \"eea9f096-83bc-4f8c-b405-390011a0dd7e\") " pod="openshift-service-ca/service-ca-9c57cc56f-vpgtz" Nov 25 11:38:53 crc kubenswrapper[4706]: I1125 11:38:53.217771 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kv7qm\" (UniqueName: \"kubernetes.io/projected/916f095b-bd5f-497f-8771-aff8fd799255-kube-api-access-kv7qm\") pod \"dns-default-wswtg\" (UID: \"916f095b-bd5f-497f-8771-aff8fd799255\") " pod="openshift-dns/dns-default-wswtg" Nov 25 11:38:53 crc kubenswrapper[4706]: I1125 11:38:53.217867 4706 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-nqt58" Nov 25 11:38:53 crc kubenswrapper[4706]: I1125 11:38:53.224264 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7qf2c\" (UID: \"f3c75c9b-c79d-4a63-8b0f-c7b474ad4b66\") " pod="openshift-image-registry/image-registry-697d97f7c8-7qf2c" Nov 25 11:38:53 crc kubenswrapper[4706]: I1125 11:38:53.224579 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bvz6z\" (UniqueName: \"kubernetes.io/projected/d4ec8c5d-e4c6-42d4-bf1c-1d3952ce6d7a-kube-api-access-bvz6z\") pod \"apiserver-76f77b778f-jsj27\" (UID: \"d4ec8c5d-e4c6-42d4-bf1c-1d3952ce6d7a\") " pod="openshift-apiserver/apiserver-76f77b778f-jsj27" Nov 25 11:38:53 crc kubenswrapper[4706]: E1125 
11:38:53.225447 4706 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-25 11:38:53.725430978 +0000 UTC m=+142.639988359 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7qf2c" (UID: "f3c75c9b-c79d-4a63-8b0f-c7b474ad4b66") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 11:38:53 crc kubenswrapper[4706]: I1125 11:38:53.233670 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bvz6z\" (UniqueName: \"kubernetes.io/projected/d4ec8c5d-e4c6-42d4-bf1c-1d3952ce6d7a-kube-api-access-bvz6z\") pod \"apiserver-76f77b778f-jsj27\" (UID: \"d4ec8c5d-e4c6-42d4-bf1c-1d3952ce6d7a\") " pod="openshift-apiserver/apiserver-76f77b778f-jsj27" Nov 25 11:38:53 crc kubenswrapper[4706]: I1125 11:38:53.240131 4706 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-x7b2m" Nov 25 11:38:53 crc kubenswrapper[4706]: I1125 11:38:53.254577 4706 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-vpgtz" Nov 25 11:38:53 crc kubenswrapper[4706]: I1125 11:38:53.256367 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vqsnq\" (UniqueName: \"kubernetes.io/projected/92d6e6ef-5880-4bdf-bdc5-5d2c4591a094-kube-api-access-vqsnq\") pod \"packageserver-d55dfcdfc-bthtj\" (UID: \"92d6e6ef-5880-4bdf-bdc5-5d2c4591a094\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-bthtj" Nov 25 11:38:53 crc kubenswrapper[4706]: I1125 11:38:53.259110 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fpgpj\" (UniqueName: \"kubernetes.io/projected/d3172a49-2bd1-4003-8ef0-560d4522e410-kube-api-access-fpgpj\") pod \"package-server-manager-789f6589d5-rs94g\" (UID: \"d3172a49-2bd1-4003-8ef0-560d4522e410\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-rs94g" Nov 25 11:38:53 crc kubenswrapper[4706]: I1125 11:38:53.281870 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sv6t6\" (UniqueName: \"kubernetes.io/projected/4fbe2538-0d5f-48c2-8819-7bb0386b2710-kube-api-access-sv6t6\") pod \"catalog-operator-68c6474976-s9mkm\" (UID: \"4fbe2538-0d5f-48c2-8819-7bb0386b2710\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-s9mkm" Nov 25 11:38:53 crc kubenswrapper[4706]: I1125 11:38:53.295763 4706 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-tgngn" Nov 25 11:38:53 crc kubenswrapper[4706]: I1125 11:38:53.303083 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9bgl7\" (UniqueName: \"kubernetes.io/projected/f1ac94b4-787a-4778-8891-84b37d9e7565-kube-api-access-9bgl7\") pod \"ingress-canary-q466t\" (UID: \"f1ac94b4-787a-4778-8891-84b37d9e7565\") " pod="openshift-ingress-canary/ingress-canary-q466t" Nov 25 11:38:53 crc kubenswrapper[4706]: I1125 11:38:53.303495 4706 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-q466t" Nov 25 11:38:53 crc kubenswrapper[4706]: I1125 11:38:53.308915 4706 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-446sw" Nov 25 11:38:53 crc kubenswrapper[4706]: W1125 11:38:53.309736 4706 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod33afcb8d_d045_4897_af65_56b622cdfa58.slice/crio-e0f7e94eb5b4bb57efb6b491e0bdf730d014592b43b9280f567e97f1259cd5b1 WatchSource:0}: Error finding container e0f7e94eb5b4bb57efb6b491e0bdf730d014592b43b9280f567e97f1259cd5b1: Status 404 returned error can't find the container with id e0f7e94eb5b4bb57efb6b491e0bdf730d014592b43b9280f567e97f1259cd5b1 Nov 25 11:38:53 crc kubenswrapper[4706]: I1125 11:38:53.304268 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-954mw\" (UniqueName: \"kubernetes.io/projected/51a87a4e-3d58-48e0-b455-292aa206e149-kube-api-access-954mw\") pod \"collect-profiles-29401170-s4f7r\" (UID: \"51a87a4e-3d58-48e0-b455-292aa206e149\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29401170-s4f7r" Nov 25 11:38:53 crc kubenswrapper[4706]: I1125 11:38:53.315917 4706 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/dns-default-wswtg" Nov 25 11:38:53 crc kubenswrapper[4706]: I1125 11:38:53.317234 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pjh2n\" (UniqueName: \"kubernetes.io/projected/cb8f2779-a7df-4ead-a209-9e8024e20647-kube-api-access-pjh2n\") pod \"multus-admission-controller-857f4d67dd-fh2jc\" (UID: \"cb8f2779-a7df-4ead-a209-9e8024e20647\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-fh2jc" Nov 25 11:38:53 crc kubenswrapper[4706]: I1125 11:38:53.328616 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 11:38:53 crc kubenswrapper[4706]: E1125 11:38:53.329605 4706 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 11:38:53.829567856 +0000 UTC m=+142.744125317 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 11:38:53 crc kubenswrapper[4706]: I1125 11:38:53.429774 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7qf2c\" (UID: \"f3c75c9b-c79d-4a63-8b0f-c7b474ad4b66\") " pod="openshift-image-registry/image-registry-697d97f7c8-7qf2c" Nov 25 11:38:53 crc kubenswrapper[4706]: E1125 11:38:53.430246 4706 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-25 11:38:53.930225541 +0000 UTC m=+142.844782922 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7qf2c" (UID: "f3c75c9b-c79d-4a63-8b0f-c7b474ad4b66") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 11:38:53 crc kubenswrapper[4706]: I1125 11:38:53.431017 4706 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-cs4td"] Nov 25 11:38:53 crc kubenswrapper[4706]: I1125 11:38:53.434377 4706 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-jsj27" Nov 25 11:38:53 crc kubenswrapper[4706]: W1125 11:38:53.446772 4706 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3eaaf4f5_59b0_4ab7_a865_e962b59f0584.slice/crio-2fb6f9e968611f9b07aa9bae9619147f0030cc2ec5fc4d288cdc2edaf8e45d63 WatchSource:0}: Error finding container 2fb6f9e968611f9b07aa9bae9619147f0030cc2ec5fc4d288cdc2edaf8e45d63: Status 404 returned error can't find the container with id 2fb6f9e968611f9b07aa9bae9619147f0030cc2ec5fc4d288cdc2edaf8e45d63 Nov 25 11:38:53 crc kubenswrapper[4706]: I1125 11:38:53.485781 4706 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-fh2jc" Nov 25 11:38:53 crc kubenswrapper[4706]: I1125 11:38:53.505052 4706 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-jg4ng"] Nov 25 11:38:53 crc kubenswrapper[4706]: I1125 11:38:53.509414 4706 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-rs94g" Nov 25 11:38:53 crc kubenswrapper[4706]: I1125 11:38:53.523160 4706 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-bthtj" Nov 25 11:38:53 crc kubenswrapper[4706]: I1125 11:38:53.530049 4706 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-s9mkm" Nov 25 11:38:53 crc kubenswrapper[4706]: I1125 11:38:53.532126 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 11:38:53 crc kubenswrapper[4706]: E1125 11:38:53.532639 4706 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 11:38:54.032616782 +0000 UTC m=+142.947174163 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 11:38:53 crc kubenswrapper[4706]: I1125 11:38:53.561650 4706 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-ss2xd"] Nov 25 11:38:53 crc kubenswrapper[4706]: I1125 11:38:53.573610 4706 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29401170-s4f7r" Nov 25 11:38:53 crc kubenswrapper[4706]: I1125 11:38:53.604612 4706 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-qlr24"] Nov 25 11:38:53 crc kubenswrapper[4706]: I1125 11:38:53.630556 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-67c5m" event={"ID":"96c3697f-cf07-44a2-af83-c6aae61f04f9","Type":"ContainerStarted","Data":"0a3cc271e093c56909173f1f5195d889d4077b5cc97d385d3ce0372c7ab5667a"} Nov 25 11:38:53 crc kubenswrapper[4706]: I1125 11:38:53.634256 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7qf2c\" (UID: \"f3c75c9b-c79d-4a63-8b0f-c7b474ad4b66\") " pod="openshift-image-registry/image-registry-697d97f7c8-7qf2c" Nov 25 11:38:53 crc kubenswrapper[4706]: E1125 11:38:53.634727 4706 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-25 11:38:54.134711945 +0000 UTC m=+143.049269326 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7qf2c" (UID: "f3c75c9b-c79d-4a63-8b0f-c7b474ad4b66") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 11:38:53 crc kubenswrapper[4706]: I1125 11:38:53.653505 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-jq6ck" event={"ID":"bb17dbfb-8a35-405a-9f44-044252ee8eb4","Type":"ContainerStarted","Data":"bb42d195205e3763712620568463ee037606a519c0f35ef4a032a8846ea7ea3a"} Nov 25 11:38:53 crc kubenswrapper[4706]: I1125 11:38:53.672601 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-22mnp" event={"ID":"ab6319ba-e125-4775-83c3-c5624951d634","Type":"ContainerStarted","Data":"43160d9cd2fe77b7580b8f492c4498ee883eab63ff419ec9f9a64edb9259ebc6"} Nov 25 11:38:53 crc kubenswrapper[4706]: I1125 11:38:53.674486 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-svsw6" event={"ID":"49757df3-88b5-4706-8010-139ffb01f41a","Type":"ContainerStarted","Data":"1981371cb3a06bec3a15e299ca32fc484e2888dbf23cfbf91134ac81ef253e4d"} Nov 25 11:38:53 crc kubenswrapper[4706]: I1125 11:38:53.690417 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-8f48m" 
event={"ID":"028d4ff3-870d-4002-843f-5381587e28fc","Type":"ContainerStarted","Data":"8775e9a8f2126da2322f21e9e41b07221c4efa4814080ba886ee52fd5307941f"} Nov 25 11:38:53 crc kubenswrapper[4706]: I1125 11:38:53.690463 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-8f48m" event={"ID":"028d4ff3-870d-4002-843f-5381587e28fc","Type":"ContainerStarted","Data":"c93f402d83d190e7bda96e6580d611d46d04715cb47032ce3fbc7cf8603b61e8"} Nov 25 11:38:53 crc kubenswrapper[4706]: I1125 11:38:53.712761 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-kg9rr" event={"ID":"f6ce79ff-bc51-4375-bd97-7e6ba29f263d","Type":"ContainerStarted","Data":"393ed2e9e4a36ce8c1350c426048e9e1f13377ef219653233235b10c327900cc"} Nov 25 11:38:53 crc kubenswrapper[4706]: I1125 11:38:53.714019 4706 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-jhptj"] Nov 25 11:38:53 crc kubenswrapper[4706]: I1125 11:38:53.718518 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-d9vjp" event={"ID":"3eaaf4f5-59b0-4ab7-a865-e962b59f0584","Type":"ContainerStarted","Data":"2fb6f9e968611f9b07aa9bae9619147f0030cc2ec5fc4d288cdc2edaf8e45d63"} Nov 25 11:38:53 crc kubenswrapper[4706]: I1125 11:38:53.722015 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-9z28x" event={"ID":"ab2dd029-844e-4783-8fda-bfab6a6d9243","Type":"ContainerStarted","Data":"0463d57389297a35cd75870388e5f50199a03f4f3e2bb7a3aa1e560e59f54365"} Nov 25 11:38:53 crc kubenswrapper[4706]: I1125 11:38:53.723827 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-6hgvx" 
event={"ID":"33afcb8d-d045-4897-af65-56b622cdfa58","Type":"ContainerStarted","Data":"e0f7e94eb5b4bb57efb6b491e0bdf730d014592b43b9280f567e97f1259cd5b1"} Nov 25 11:38:53 crc kubenswrapper[4706]: I1125 11:38:53.729938 4706 generic.go:334] "Generic (PLEG): container finished" podID="09d713da-8021-4bfa-b39d-bc3399593865" containerID="cb54f28513e7106ce41289c2f91d74a051131056cc1910f1f95be0b759d2e127" exitCode=0 Nov 25 11:38:53 crc kubenswrapper[4706]: I1125 11:38:53.730011 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-w6nqn" event={"ID":"09d713da-8021-4bfa-b39d-bc3399593865","Type":"ContainerDied","Data":"cb54f28513e7106ce41289c2f91d74a051131056cc1910f1f95be0b759d2e127"} Nov 25 11:38:53 crc kubenswrapper[4706]: I1125 11:38:53.735438 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-jd66x" event={"ID":"bf1352d3-1ee8-4c51-8f45-b9fd8354fd07","Type":"ContainerStarted","Data":"1c03b0bf5ec3e75c71f16c9d0a720b4a74d866de9091c7cfd50fdd179886d9d9"} Nov 25 11:38:53 crc kubenswrapper[4706]: I1125 11:38:53.738393 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 11:38:53 crc kubenswrapper[4706]: E1125 11:38:53.739495 4706 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 11:38:54.239479691 +0000 UTC m=+143.154037072 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 11:38:53 crc kubenswrapper[4706]: I1125 11:38:53.844410 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7qf2c\" (UID: \"f3c75c9b-c79d-4a63-8b0f-c7b474ad4b66\") " pod="openshift-image-registry/image-registry-697d97f7c8-7qf2c" Nov 25 11:38:53 crc kubenswrapper[4706]: E1125 11:38:53.845067 4706 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-25 11:38:54.344967806 +0000 UTC m=+143.259525207 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7qf2c" (UID: "f3c75c9b-c79d-4a63-8b0f-c7b474ad4b66") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 11:38:53 crc kubenswrapper[4706]: I1125 11:38:53.845068 4706 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-mnv7h"] Nov 25 11:38:53 crc kubenswrapper[4706]: I1125 11:38:53.920831 4706 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-99vrx"] Nov 25 11:38:53 crc kubenswrapper[4706]: I1125 11:38:53.937241 4706 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-j7x2j" Nov 25 11:38:53 crc kubenswrapper[4706]: W1125 11:38:53.942848 4706 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7f36936f_00b7_4fde_9c95_8fb3433aba0a.slice/crio-fba9a3cc85e45cdda15d91e35ee4ccdc7f68061fb3046415ec2bb38d67b9750f WatchSource:0}: Error finding container fba9a3cc85e45cdda15d91e35ee4ccdc7f68061fb3046415ec2bb38d67b9750f: Status 404 returned error can't find the container with id fba9a3cc85e45cdda15d91e35ee4ccdc7f68061fb3046415ec2bb38d67b9750f Nov 25 11:38:53 crc kubenswrapper[4706]: I1125 11:38:53.946622 4706 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-879f6c89f-zf4pd" Nov 25 11:38:53 crc kubenswrapper[4706]: I1125 11:38:53.947076 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 11:38:53 crc kubenswrapper[4706]: E1125 11:38:53.947265 4706 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 11:38:54.447227684 +0000 UTC m=+143.361785075 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 11:38:53 crc kubenswrapper[4706]: I1125 11:38:53.947360 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7qf2c\" (UID: \"f3c75c9b-c79d-4a63-8b0f-c7b474ad4b66\") " pod="openshift-image-registry/image-registry-697d97f7c8-7qf2c" Nov 25 11:38:53 crc kubenswrapper[4706]: E1125 11:38:53.947808 4706 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-25 11:38:54.447792539 +0000 UTC m=+143.362349920 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7qf2c" (UID: "f3c75c9b-c79d-4a63-8b0f-c7b474ad4b66") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 11:38:53 crc kubenswrapper[4706]: I1125 11:38:53.965914 4706 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-fc942"] Nov 25 11:38:54 crc kubenswrapper[4706]: I1125 11:38:54.048162 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 11:38:54 crc kubenswrapper[4706]: E1125 11:38:54.049464 4706 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 11:38:54.54942802 +0000 UTC m=+143.463985421 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 11:38:54 crc kubenswrapper[4706]: W1125 11:38:54.054293 4706 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod01b7a9e5_be6c_4a8e_9279_62eaf90e745d.slice/crio-6bef3e0ed2a80bed84c516c32f54ee3294718c9d55ddc493ea04c5f5b70b1fa6 WatchSource:0}: Error finding container 6bef3e0ed2a80bed84c516c32f54ee3294718c9d55ddc493ea04c5f5b70b1fa6: Status 404 returned error can't find the container with id 6bef3e0ed2a80bed84c516c32f54ee3294718c9d55ddc493ea04c5f5b70b1fa6 Nov 25 11:38:54 crc kubenswrapper[4706]: W1125 11:38:54.064796 4706 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7cd3b65b_a0b4_4cee_87ac_23925d36acb8.slice/crio-7ce3cd6d25691c495d9f607d76e686324a88a70bfdd79a291a80838565895662 WatchSource:0}: Error finding container 7ce3cd6d25691c495d9f607d76e686324a88a70bfdd79a291a80838565895662: Status 404 returned error can't find the container with id 7ce3cd6d25691c495d9f607d76e686324a88a70bfdd79a291a80838565895662 Nov 25 11:38:54 crc kubenswrapper[4706]: W1125 11:38:54.071919 4706 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod44180138_81cd_45b3_b14e_c21819b16645.slice/crio-bb165cbfca64e26356ecc2ffed1c87cc8c2d9eadd719d78024c33ac505b2dd94 WatchSource:0}: Error finding container bb165cbfca64e26356ecc2ffed1c87cc8c2d9eadd719d78024c33ac505b2dd94: Status 404 returned error can't find the container with 
id bb165cbfca64e26356ecc2ffed1c87cc8c2d9eadd719d78024c33ac505b2dd94 Nov 25 11:38:54 crc kubenswrapper[4706]: I1125 11:38:54.154402 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7qf2c\" (UID: \"f3c75c9b-c79d-4a63-8b0f-c7b474ad4b66\") " pod="openshift-image-registry/image-registry-697d97f7c8-7qf2c" Nov 25 11:38:54 crc kubenswrapper[4706]: E1125 11:38:54.154824 4706 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-25 11:38:54.654807332 +0000 UTC m=+143.569364713 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7qf2c" (UID: "f3c75c9b-c79d-4a63-8b0f-c7b474ad4b66") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 11:38:54 crc kubenswrapper[4706]: I1125 11:38:54.262533 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 11:38:54 crc kubenswrapper[4706]: E1125 11:38:54.263064 4706 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b 
nodeName:}" failed. No retries permitted until 2025-11-25 11:38:54.763040932 +0000 UTC m=+143.677598313 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 11:38:54 crc kubenswrapper[4706]: I1125 11:38:54.263196 4706 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-8rnp5" podStartSLOduration=122.263173436 podStartE2EDuration="2m2.263173436s" podCreationTimestamp="2025-11-25 11:36:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 11:38:54.261418258 +0000 UTC m=+143.175975649" watchObservedRunningTime="2025-11-25 11:38:54.263173436 +0000 UTC m=+143.177730817" Nov 25 11:38:54 crc kubenswrapper[4706]: I1125 11:38:54.285637 4706 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-hhh7q"] Nov 25 11:38:54 crc kubenswrapper[4706]: I1125 11:38:54.368740 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7qf2c\" (UID: \"f3c75c9b-c79d-4a63-8b0f-c7b474ad4b66\") " pod="openshift-image-registry/image-registry-697d97f7c8-7qf2c" Nov 25 11:38:54 crc kubenswrapper[4706]: E1125 11:38:54.372871 4706 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-25 11:38:54.872842565 +0000 UTC m=+143.787399946 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7qf2c" (UID: "f3c75c9b-c79d-4a63-8b0f-c7b474ad4b66") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 11:38:54 crc kubenswrapper[4706]: I1125 11:38:54.402910 4706 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication-operator/authentication-operator-69f744f599-qm76l" podStartSLOduration=122.402877801 podStartE2EDuration="2m2.402877801s" podCreationTimestamp="2025-11-25 11:36:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 11:38:54.398890582 +0000 UTC m=+143.313447963" watchObservedRunningTime="2025-11-25 11:38:54.402877801 +0000 UTC m=+143.317435192" Nov 25 11:38:54 crc kubenswrapper[4706]: I1125 11:38:54.476338 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 11:38:54 crc kubenswrapper[4706]: E1125 11:38:54.476597 4706 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2025-11-25 11:38:54.976568762 +0000 UTC m=+143.891126143 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 11:38:54 crc kubenswrapper[4706]: I1125 11:38:54.476719 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7qf2c\" (UID: \"f3c75c9b-c79d-4a63-8b0f-c7b474ad4b66\") " pod="openshift-image-registry/image-registry-697d97f7c8-7qf2c" Nov 25 11:38:54 crc kubenswrapper[4706]: E1125 11:38:54.477384 4706 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-25 11:38:54.977354184 +0000 UTC m=+143.891911735 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7qf2c" (UID: "f3c75c9b-c79d-4a63-8b0f-c7b474ad4b66") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 11:38:54 crc kubenswrapper[4706]: I1125 11:38:54.578097 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 11:38:54 crc kubenswrapper[4706]: E1125 11:38:54.578694 4706 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 11:38:55.078667235 +0000 UTC m=+143.993224616 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 11:38:54 crc kubenswrapper[4706]: I1125 11:38:54.583516 4706 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-j7x2j" podStartSLOduration=121.583487336 podStartE2EDuration="2m1.583487336s" podCreationTimestamp="2025-11-25 11:36:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 11:38:54.583002563 +0000 UTC m=+143.497559944" watchObservedRunningTime="2025-11-25 11:38:54.583487336 +0000 UTC m=+143.498044717" Nov 25 11:38:54 crc kubenswrapper[4706]: I1125 11:38:54.598617 4706 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-2hpv7"] Nov 25 11:38:54 crc kubenswrapper[4706]: I1125 11:38:54.663593 4706 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-zn9dk"] Nov 25 11:38:54 crc kubenswrapper[4706]: I1125 11:38:54.682015 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7qf2c\" (UID: \"f3c75c9b-c79d-4a63-8b0f-c7b474ad4b66\") " pod="openshift-image-registry/image-registry-697d97f7c8-7qf2c" Nov 25 11:38:54 crc kubenswrapper[4706]: E1125 11:38:54.682613 4706 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-25 11:38:55.182590308 +0000 UTC m=+144.097147699 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7qf2c" (UID: "f3c75c9b-c79d-4a63-8b0f-c7b474ad4b66") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 11:38:54 crc kubenswrapper[4706]: I1125 11:38:54.767242 4706 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-tf2kg"] Nov 25 11:38:54 crc kubenswrapper[4706]: I1125 11:38:54.777595 4706 generic.go:334] "Generic (PLEG): container finished" podID="f6ce79ff-bc51-4375-bd97-7e6ba29f263d" containerID="012fbaad72d781f0e9c2447c13d44b2cdbb02959422cc2a2924b888c600a5591" exitCode=0 Nov 25 11:38:54 crc kubenswrapper[4706]: I1125 11:38:54.778157 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-kg9rr" event={"ID":"f6ce79ff-bc51-4375-bd97-7e6ba29f263d","Type":"ContainerDied","Data":"012fbaad72d781f0e9c2447c13d44b2cdbb02959422cc2a2924b888c600a5591"} Nov 25 11:38:54 crc kubenswrapper[4706]: I1125 11:38:54.783434 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 11:38:54 crc kubenswrapper[4706]: E1125 11:38:54.783887 4706 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 11:38:55.283863709 +0000 UTC m=+144.198421090 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 11:38:54 crc kubenswrapper[4706]: I1125 11:38:54.787797 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-cs4td" event={"ID":"198b8b13-3d25-4fbb-81af-a2a39186b64d","Type":"ContainerStarted","Data":"d7a7ce5dd9540ec213677e3b716b86b01c260da2ce149d02663b0019d3ecf4d0"} Nov 25 11:38:54 crc kubenswrapper[4706]: I1125 11:38:54.796446 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-9z28x" event={"ID":"ab2dd029-844e-4783-8fda-bfab6a6d9243","Type":"ContainerStarted","Data":"58a08f3709a52aeddea6286e20757fe96e4c98392cb08a7d251bf5950e4727df"} Nov 25 11:38:54 crc kubenswrapper[4706]: I1125 11:38:54.799741 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-jq6ck" event={"ID":"bb17dbfb-8a35-405a-9f44-044252ee8eb4","Type":"ContainerStarted","Data":"82c49ce1557018d817b95e86aa13e27791e149df97561f6b78a7faf509c4c820"} Nov 25 11:38:54 crc kubenswrapper[4706]: I1125 11:38:54.801069 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-qlr24" 
event={"ID":"daffec68-fec5-4f3b-9302-4b736b09fc9c","Type":"ContainerStarted","Data":"a454ed778f2d38ff054c85f34b41ee95e3fea86b94630952885c0ba0973b889d"} Nov 25 11:38:54 crc kubenswrapper[4706]: I1125 11:38:54.802992 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-22mnp" event={"ID":"ab6319ba-e125-4775-83c3-c5624951d634","Type":"ContainerStarted","Data":"3ac3cfb15f908c0be4bc4b566d90f3edc4c7634f25294dffcca256c2564883e6"} Nov 25 11:38:54 crc kubenswrapper[4706]: I1125 11:38:54.807383 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-jd66x" event={"ID":"bf1352d3-1ee8-4c51-8f45-b9fd8354fd07","Type":"ContainerStarted","Data":"86e277f7eb183d8e63417efec5595548046810cc761ea7712e14358c6f9d1f56"} Nov 25 11:38:54 crc kubenswrapper[4706]: I1125 11:38:54.808280 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-fc942" event={"ID":"44180138-81cd-45b3-b14e-c21819b16645","Type":"ContainerStarted","Data":"bb165cbfca64e26356ecc2ffed1c87cc8c2d9eadd719d78024c33ac505b2dd94"} Nov 25 11:38:54 crc kubenswrapper[4706]: I1125 11:38:54.809269 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-mnv7h" event={"ID":"7cd3b65b-a0b4-4cee-87ac-23925d36acb8","Type":"ContainerStarted","Data":"7ce3cd6d25691c495d9f607d76e686324a88a70bfdd79a291a80838565895662"} Nov 25 11:38:54 crc kubenswrapper[4706]: I1125 11:38:54.812409 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-99vrx" event={"ID":"0820aa13-f7b2-403e-9d85-1f940abae603","Type":"ContainerStarted","Data":"5237c226cfe068153195a4b95514b2325452118e103dee213fd1016b8e2b4165"} Nov 25 11:38:54 crc kubenswrapper[4706]: I1125 11:38:54.815638 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-ingress-operator/ingress-operator-5b745b69d9-jhptj" event={"ID":"01b7a9e5-be6c-4a8e-9279-62eaf90e745d","Type":"ContainerStarted","Data":"6bef3e0ed2a80bed84c516c32f54ee3294718c9d55ddc493ea04c5f5b70b1fa6"} Nov 25 11:38:54 crc kubenswrapper[4706]: I1125 11:38:54.816503 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-jg4ng" event={"ID":"7f36936f-00b7-4fde-9c95-8fb3433aba0a","Type":"ContainerStarted","Data":"fba9a3cc85e45cdda15d91e35ee4ccdc7f68061fb3046415ec2bb38d67b9750f"} Nov 25 11:38:54 crc kubenswrapper[4706]: I1125 11:38:54.818181 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-ss2xd" event={"ID":"239de662-d89b-4e6e-a970-56811041192f","Type":"ContainerStarted","Data":"daae90bb32680c0749960f3221bae7ee27ccf0dfdb8f8980f85c5620d83c1d00"} Nov 25 11:38:54 crc kubenswrapper[4706]: I1125 11:38:54.870691 4706 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-5444994796-22mnp" Nov 25 11:38:54 crc kubenswrapper[4706]: I1125 11:38:54.884946 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7qf2c\" (UID: \"f3c75c9b-c79d-4a63-8b0f-c7b474ad4b66\") " pod="openshift-image-registry/image-registry-697d97f7c8-7qf2c" Nov 25 11:38:54 crc kubenswrapper[4706]: E1125 11:38:54.892362 4706 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-25 11:38:55.392289464 +0000 UTC m=+144.306847055 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7qf2c" (UID: "f3c75c9b-c79d-4a63-8b0f-c7b474ad4b66") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 11:38:54 crc kubenswrapper[4706]: I1125 11:38:54.901327 4706 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-s9mkm"] Nov 25 11:38:54 crc kubenswrapper[4706]: I1125 11:38:54.911966 4706 patch_prober.go:28] interesting pod/router-default-5444994796-22mnp container/router namespace/openshift-ingress: Startup probe status=failure output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" start-of-body= Nov 25 11:38:54 crc kubenswrapper[4706]: I1125 11:38:54.912053 4706 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-22mnp" podUID="ab6319ba-e125-4775-83c3-c5624951d634" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" Nov 25 11:38:54 crc kubenswrapper[4706]: I1125 11:38:54.946848 4706 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-q7gsh" podStartSLOduration=121.946817715 podStartE2EDuration="2m1.946817715s" podCreationTimestamp="2025-11-25 11:36:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 11:38:54.945169381 +0000 UTC m=+143.859726762" watchObservedRunningTime="2025-11-25 11:38:54.946817715 +0000 UTC m=+143.861375106" Nov 25 11:38:55 crc kubenswrapper[4706]: 
I1125 11:38:55.010550 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 11:38:55 crc kubenswrapper[4706]: E1125 11:38:55.046659 4706 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 11:38:55.546624407 +0000 UTC m=+144.461181788 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 11:38:55 crc kubenswrapper[4706]: I1125 11:38:55.046910 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7qf2c\" (UID: \"f3c75c9b-c79d-4a63-8b0f-c7b474ad4b66\") " pod="openshift-image-registry/image-registry-697d97f7c8-7qf2c" Nov 25 11:38:55 crc kubenswrapper[4706]: I1125 11:38:55.076851 4706 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29401170-s4f7r"] Nov 25 11:38:55 crc kubenswrapper[4706]: E1125 11:38:55.085150 4706 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-25 11:38:55.585109882 +0000 UTC m=+144.499667263 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7qf2c" (UID: "f3c75c9b-c79d-4a63-8b0f-c7b474ad4b66") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 11:38:55 crc kubenswrapper[4706]: I1125 11:38:55.089555 4706 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-879f6c89f-zf4pd" podStartSLOduration=122.089525372 podStartE2EDuration="2m2.089525372s" podCreationTimestamp="2025-11-25 11:36:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 11:38:55.028987767 +0000 UTC m=+143.943545148" watchObservedRunningTime="2025-11-25 11:38:55.089525372 +0000 UTC m=+144.004082753" Nov 25 11:38:55 crc kubenswrapper[4706]: I1125 11:38:55.148006 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 11:38:55 crc kubenswrapper[4706]: E1125 11:38:55.148656 4706 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2025-11-25 11:38:55.648633357 +0000 UTC m=+144.563190738 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 11:38:55 crc kubenswrapper[4706]: I1125 11:38:55.255094 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7qf2c\" (UID: \"f3c75c9b-c79d-4a63-8b0f-c7b474ad4b66\") " pod="openshift-image-registry/image-registry-697d97f7c8-7qf2c" Nov 25 11:38:55 crc kubenswrapper[4706]: E1125 11:38:55.256126 4706 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-25 11:38:55.756106857 +0000 UTC m=+144.670664238 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7qf2c" (UID: "f3c75c9b-c79d-4a63-8b0f-c7b474ad4b66") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 11:38:55 crc kubenswrapper[4706]: I1125 11:38:55.315949 4706 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-f9d7485db-8f48m" podStartSLOduration=122.315923461 podStartE2EDuration="2m2.315923461s" podCreationTimestamp="2025-11-25 11:36:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 11:38:55.299824304 +0000 UTC m=+144.214381685" watchObservedRunningTime="2025-11-25 11:38:55.315923461 +0000 UTC m=+144.230480842" Nov 25 11:38:55 crc kubenswrapper[4706]: I1125 11:38:55.318290 4706 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-vpgtz"] Nov 25 11:38:55 crc kubenswrapper[4706]: I1125 11:38:55.359441 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 11:38:55 crc kubenswrapper[4706]: E1125 11:38:55.366023 4706 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 11:38:55.865992961 +0000 UTC m=+144.780550342 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 11:38:55 crc kubenswrapper[4706]: I1125 11:38:55.433490 4706 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-wswtg"] Nov 25 11:38:55 crc kubenswrapper[4706]: I1125 11:38:55.467413 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7qf2c\" (UID: \"f3c75c9b-c79d-4a63-8b0f-c7b474ad4b66\") " pod="openshift-image-registry/image-registry-697d97f7c8-7qf2c" Nov 25 11:38:55 crc kubenswrapper[4706]: E1125 11:38:55.467786 4706 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-25 11:38:55.967767536 +0000 UTC m=+144.882324927 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7qf2c" (UID: "f3c75c9b-c79d-4a63-8b0f-c7b474ad4b66") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 11:38:55 crc kubenswrapper[4706]: I1125 11:38:55.474354 4706 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-x7b2m"] Nov 25 11:38:55 crc kubenswrapper[4706]: I1125 11:38:55.487029 4706 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-jq6ck" podStartSLOduration=122.486996088 podStartE2EDuration="2m2.486996088s" podCreationTimestamp="2025-11-25 11:36:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 11:38:55.465130434 +0000 UTC m=+144.379687835" watchObservedRunningTime="2025-11-25 11:38:55.486996088 +0000 UTC m=+144.401553489" Nov 25 11:38:55 crc kubenswrapper[4706]: I1125 11:38:55.514521 4706 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress/router-default-5444994796-22mnp" podStartSLOduration=122.514489945 podStartE2EDuration="2m2.514489945s" podCreationTimestamp="2025-11-25 11:36:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 11:38:55.496792164 +0000 UTC m=+144.411349545" watchObservedRunningTime="2025-11-25 11:38:55.514489945 +0000 UTC m=+144.429047326" Nov 25 11:38:55 crc kubenswrapper[4706]: I1125 11:38:55.578653 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 11:38:55 crc kubenswrapper[4706]: E1125 11:38:55.579819 4706 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 11:38:56.079787099 +0000 UTC m=+144.994344490 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 11:38:55 crc kubenswrapper[4706]: W1125 11:38:55.587270 4706 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod916f095b_bd5f_497f_8771_aff8fd799255.slice/crio-baa0d5b5250bf08fdb252c92fd413535a766bca98ce3df1b39c9106edc637622 WatchSource:0}: Error finding container baa0d5b5250bf08fdb252c92fd413535a766bca98ce3df1b39c9106edc637622: Status 404 returned error can't find the container with id baa0d5b5250bf08fdb252c92fd413535a766bca98ce3df1b39c9106edc637622 Nov 25 11:38:55 crc kubenswrapper[4706]: I1125 11:38:55.669054 4706 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-q466t"] Nov 25 11:38:55 crc kubenswrapper[4706]: I1125 11:38:55.686830 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7qf2c\" (UID: \"f3c75c9b-c79d-4a63-8b0f-c7b474ad4b66\") " pod="openshift-image-registry/image-registry-697d97f7c8-7qf2c" Nov 25 11:38:55 crc kubenswrapper[4706]: E1125 11:38:55.687467 4706 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-25 11:38:56.187447753 +0000 UTC m=+145.102005134 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7qf2c" (UID: "f3c75c9b-c79d-4a63-8b0f-c7b474ad4b66") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 11:38:55 crc kubenswrapper[4706]: I1125 11:38:55.760776 4706 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-tgngn"] Nov 25 11:38:55 crc kubenswrapper[4706]: I1125 11:38:55.788137 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 11:38:55 crc kubenswrapper[4706]: E1125 11:38:55.788650 4706 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2025-11-25 11:38:56.288622071 +0000 UTC m=+145.203179452 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 11:38:55 crc kubenswrapper[4706]: I1125 11:38:55.820716 4706 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-bthtj"] Nov 25 11:38:55 crc kubenswrapper[4706]: I1125 11:38:55.842281 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-446sw" event={"ID":"f0084f7d-107a-484b-bc35-04f9585e0e2b","Type":"ContainerStarted","Data":"e46785a7e154509a15c6b001b734b52c652b14af383c045e902326be4b978b40"} Nov 25 11:38:55 crc kubenswrapper[4706]: I1125 11:38:55.843246 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29401170-s4f7r" event={"ID":"51a87a4e-3d58-48e0-b455-292aa206e149","Type":"ContainerStarted","Data":"b727522bcf0ec2f175590fc7acead1b492f2d29aba59e5bfa3e4e1debf11d23b"} Nov 25 11:38:55 crc kubenswrapper[4706]: I1125 11:38:55.845241 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-x7b2m" event={"ID":"cb5c8374-6eb8-4247-97e3-ff94307782ef","Type":"ContainerStarted","Data":"d926defe84411122e10e942d19b58f9be41f69379f5199a3ddf17c211e44904a"} Nov 25 11:38:55 crc kubenswrapper[4706]: I1125 11:38:55.851495 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-2hpv7" 
event={"ID":"704d8383-2f51-4244-8a2a-3477cb15f23f","Type":"ContainerStarted","Data":"d047bfa744465529813225a60b0eb14dd57d520860966a5a49a25332554a2951"} Nov 25 11:38:55 crc kubenswrapper[4706]: I1125 11:38:55.897881 4706 patch_prober.go:28] interesting pod/router-default-5444994796-22mnp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 25 11:38:55 crc kubenswrapper[4706]: [-]has-synced failed: reason withheld Nov 25 11:38:55 crc kubenswrapper[4706]: [+]process-running ok Nov 25 11:38:55 crc kubenswrapper[4706]: healthz check failed Nov 25 11:38:55 crc kubenswrapper[4706]: I1125 11:38:55.897957 4706 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-22mnp" podUID="ab6319ba-e125-4775-83c3-c5624951d634" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 25 11:38:55 crc kubenswrapper[4706]: I1125 11:38:55.898751 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-cs4td" event={"ID":"198b8b13-3d25-4fbb-81af-a2a39186b64d","Type":"ContainerStarted","Data":"801822d69bc88a69a6af66e1cf3719ebb36c9a806c59bec51745527c4eafa78f"} Nov 25 11:38:55 crc kubenswrapper[4706]: I1125 11:38:55.901139 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7qf2c\" (UID: \"f3c75c9b-c79d-4a63-8b0f-c7b474ad4b66\") " pod="openshift-image-registry/image-registry-697d97f7c8-7qf2c" Nov 25 11:38:55 crc kubenswrapper[4706]: E1125 11:38:55.901766 4706 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-25 11:38:56.401747844 +0000 UTC m=+145.316305225 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7qf2c" (UID: "f3c75c9b-c79d-4a63-8b0f-c7b474ad4b66") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 11:38:55 crc kubenswrapper[4706]: I1125 11:38:55.907845 4706 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-nqt58"] Nov 25 11:38:55 crc kubenswrapper[4706]: W1125 11:38:55.910449 4706 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf1ac94b4_787a_4778_8891_84b37d9e7565.slice/crio-33910e1429c851a390ce20d70d5d429a01bceee27968c03705f5c544174551ac WatchSource:0}: Error finding container 33910e1429c851a390ce20d70d5d429a01bceee27968c03705f5c544174551ac: Status 404 returned error can't find the container with id 33910e1429c851a390ce20d70d5d429a01bceee27968c03705f5c544174551ac Nov 25 11:38:55 crc kubenswrapper[4706]: I1125 11:38:55.957372 4706 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-fh2jc"] Nov 25 11:38:55 crc kubenswrapper[4706]: I1125 11:38:55.981593 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-s9mkm" event={"ID":"4fbe2538-0d5f-48c2-8819-7bb0386b2710","Type":"ContainerStarted","Data":"5a6c0d1016c14fe5510e1dbd019ab25169ef02f2ad6f5a767d9744dc769c9141"} Nov 25 11:38:56 crc 
kubenswrapper[4706]: I1125 11:38:56.000756 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-wswtg" event={"ID":"916f095b-bd5f-497f-8771-aff8fd799255","Type":"ContainerStarted","Data":"baa0d5b5250bf08fdb252c92fd413535a766bca98ce3df1b39c9106edc637622"} Nov 25 11:38:56 crc kubenswrapper[4706]: I1125 11:38:56.001913 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 11:38:56 crc kubenswrapper[4706]: E1125 11:38:56.002468 4706 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 11:38:56.502424378 +0000 UTC m=+145.416981919 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 11:38:56 crc kubenswrapper[4706]: W1125 11:38:56.038026 4706 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod01c8d08c_1ad6_4048_92d4_98382da66cca.slice/crio-22732d59751e69f9a2ba1f68441ecbc6c4b7eb1801f24d36f837e44c084e0671 WatchSource:0}: Error finding container 22732d59751e69f9a2ba1f68441ecbc6c4b7eb1801f24d36f837e44c084e0671: Status 404 returned error can't find the container with id 22732d59751e69f9a2ba1f68441ecbc6c4b7eb1801f24d36f837e44c084e0671 Nov 25 11:38:56 crc kubenswrapper[4706]: I1125 11:38:56.038166 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-hhh7q" event={"ID":"825f088d-44aa-4f48-b95d-6245da5b1775","Type":"ContainerStarted","Data":"f25508536c781c8d5fefcee46293ee603fe09ef1a7e4a674ce7908357c00a9a3"} Nov 25 11:38:56 crc kubenswrapper[4706]: I1125 11:38:56.062551 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-zn9dk" event={"ID":"bd8d3bba-bf4e-4bda-94ff-ce2902b3299a","Type":"ContainerStarted","Data":"ef21dbd530cf63f03ebee62da4115986447472a8cc4fabe1d9dfadb6f291a233"} Nov 25 11:38:56 crc kubenswrapper[4706]: I1125 11:38:56.094655 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-vpgtz" event={"ID":"eea9f096-83bc-4f8c-b405-390011a0dd7e","Type":"ContainerStarted","Data":"c0e6169fbb992cd695c76edb5224ef462f793a69be0c15cd9a6a1750d02e80d4"} 
Nov 25 11:38:56 crc kubenswrapper[4706]: I1125 11:38:56.116925 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7qf2c\" (UID: \"f3c75c9b-c79d-4a63-8b0f-c7b474ad4b66\") " pod="openshift-image-registry/image-registry-697d97f7c8-7qf2c" Nov 25 11:38:56 crc kubenswrapper[4706]: E1125 11:38:56.117633 4706 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-25 11:38:56.617614417 +0000 UTC m=+145.532171798 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7qf2c" (UID: "f3c75c9b-c79d-4a63-8b0f-c7b474ad4b66") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 11:38:56 crc kubenswrapper[4706]: I1125 11:38:56.135283 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-tf2kg" event={"ID":"55479c26-471b-4a9c-9d70-ec107786bbc4","Type":"ContainerStarted","Data":"095c46521de40b5c585bb34213b81998206323c0606480e497d23d2fd6e67e39"} Nov 25 11:38:56 crc kubenswrapper[4706]: W1125 11:38:56.138551 4706 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7eaffd03_b03a_491f_9bc3_250a1f9021e7.slice/crio-89a256596ad8bb2df548e1e24d2ddd77c5b0325297ea57b576a330f9a2bd6e9d WatchSource:0}: Error finding container 
89a256596ad8bb2df548e1e24d2ddd77c5b0325297ea57b576a330f9a2bd6e9d: Status 404 returned error can't find the container with id 89a256596ad8bb2df548e1e24d2ddd77c5b0325297ea57b576a330f9a2bd6e9d Nov 25 11:38:56 crc kubenswrapper[4706]: I1125 11:38:56.167715 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-svsw6" event={"ID":"49757df3-88b5-4706-8010-139ffb01f41a","Type":"ContainerStarted","Data":"eeebffde1046488ccae5dee905c6c46bf9929bbd99c58779e950284211110465"} Nov 25 11:38:56 crc kubenswrapper[4706]: I1125 11:38:56.184571 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-67c5m" event={"ID":"96c3697f-cf07-44a2-af83-c6aae61f04f9","Type":"ContainerStarted","Data":"2728e5553be6355abe2f5382b9edf0c48c35801872c81c44ae2fce743d4b1899"} Nov 25 11:38:56 crc kubenswrapper[4706]: I1125 11:38:56.185794 4706 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-jd66x" Nov 25 11:38:56 crc kubenswrapper[4706]: I1125 11:38:56.189910 4706 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-svsw6" podStartSLOduration=123.18988526 podStartE2EDuration="2m3.18988526s" podCreationTimestamp="2025-11-25 11:36:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 11:38:56.188539233 +0000 UTC m=+145.103096724" watchObservedRunningTime="2025-11-25 11:38:56.18988526 +0000 UTC m=+145.104442641" Nov 25 11:38:56 crc kubenswrapper[4706]: I1125 11:38:56.191099 4706 patch_prober.go:28] interesting pod/downloads-7954f5f757-jd66x container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.24:8080/\": dial tcp 10.217.0.24:8080: connect: connection refused" 
start-of-body= Nov 25 11:38:56 crc kubenswrapper[4706]: I1125 11:38:56.191178 4706 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-jd66x" podUID="bf1352d3-1ee8-4c51-8f45-b9fd8354fd07" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.24:8080/\": dial tcp 10.217.0.24:8080: connect: connection refused" Nov 25 11:38:56 crc kubenswrapper[4706]: I1125 11:38:56.211196 4706 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/downloads-7954f5f757-jd66x" podStartSLOduration=123.211172398 podStartE2EDuration="2m3.211172398s" podCreationTimestamp="2025-11-25 11:36:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 11:38:56.208149736 +0000 UTC m=+145.122707117" watchObservedRunningTime="2025-11-25 11:38:56.211172398 +0000 UTC m=+145.125729789" Nov 25 11:38:56 crc kubenswrapper[4706]: I1125 11:38:56.220036 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 11:38:56 crc kubenswrapper[4706]: E1125 11:38:56.220470 4706 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 11:38:56.72044912 +0000 UTC m=+145.635006511 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 11:38:56 crc kubenswrapper[4706]: I1125 11:38:56.322135 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7qf2c\" (UID: \"f3c75c9b-c79d-4a63-8b0f-c7b474ad4b66\") " pod="openshift-image-registry/image-registry-697d97f7c8-7qf2c" Nov 25 11:38:56 crc kubenswrapper[4706]: E1125 11:38:56.326495 4706 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-25 11:38:56.826451569 +0000 UTC m=+145.741009140 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7qf2c" (UID: "f3c75c9b-c79d-4a63-8b0f-c7b474ad4b66") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 11:38:56 crc kubenswrapper[4706]: I1125 11:38:56.423550 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 11:38:56 crc kubenswrapper[4706]: E1125 11:38:56.423969 4706 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 11:38:56.923944508 +0000 UTC m=+145.838501889 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 11:38:56 crc kubenswrapper[4706]: I1125 11:38:56.525791 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7qf2c\" (UID: \"f3c75c9b-c79d-4a63-8b0f-c7b474ad4b66\") " pod="openshift-image-registry/image-registry-697d97f7c8-7qf2c" Nov 25 11:38:56 crc kubenswrapper[4706]: E1125 11:38:56.526416 4706 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-25 11:38:57.02639067 +0000 UTC m=+145.940948051 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7qf2c" (UID: "f3c75c9b-c79d-4a63-8b0f-c7b474ad4b66") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 11:38:56 crc kubenswrapper[4706]: I1125 11:38:56.544004 4706 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-67c5m" podStartSLOduration=123.543978388 podStartE2EDuration="2m3.543978388s" podCreationTimestamp="2025-11-25 11:36:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 11:38:56.239559579 +0000 UTC m=+145.154116960" watchObservedRunningTime="2025-11-25 11:38:56.543978388 +0000 UTC m=+145.458535779" Nov 25 11:38:56 crc kubenswrapper[4706]: I1125 11:38:56.547498 4706 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-jsj27"] Nov 25 11:38:56 crc kubenswrapper[4706]: I1125 11:38:56.562042 4706 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-rs94g"] Nov 25 11:38:56 crc kubenswrapper[4706]: W1125 11:38:56.625266 4706 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd3172a49_2bd1_4003_8ef0_560d4522e410.slice/crio-e5ffd61446d2bf999489aa0a391cbe4894507f2f4478167c4163c984d210003f WatchSource:0}: Error finding container e5ffd61446d2bf999489aa0a391cbe4894507f2f4478167c4163c984d210003f: Status 404 returned error can't find the container with id e5ffd61446d2bf999489aa0a391cbe4894507f2f4478167c4163c984d210003f Nov 25 11:38:56 crc 
kubenswrapper[4706]: I1125 11:38:56.626601 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 11:38:56 crc kubenswrapper[4706]: E1125 11:38:56.627216 4706 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 11:38:57.127196687 +0000 UTC m=+146.041754078 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 11:38:56 crc kubenswrapper[4706]: I1125 11:38:56.730469 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7qf2c\" (UID: \"f3c75c9b-c79d-4a63-8b0f-c7b474ad4b66\") " pod="openshift-image-registry/image-registry-697d97f7c8-7qf2c" Nov 25 11:38:56 crc kubenswrapper[4706]: E1125 11:38:56.731057 4706 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2025-11-25 11:38:57.231040198 +0000 UTC m=+146.145597589 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7qf2c" (UID: "f3c75c9b-c79d-4a63-8b0f-c7b474ad4b66") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 11:38:56 crc kubenswrapper[4706]: I1125 11:38:56.832193 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 11:38:56 crc kubenswrapper[4706]: E1125 11:38:56.835796 4706 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 11:38:57.335766123 +0000 UTC m=+146.250323514 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 11:38:56 crc kubenswrapper[4706]: I1125 11:38:56.880838 4706 patch_prober.go:28] interesting pod/router-default-5444994796-22mnp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 25 11:38:56 crc kubenswrapper[4706]: [-]has-synced failed: reason withheld Nov 25 11:38:56 crc kubenswrapper[4706]: [+]process-running ok Nov 25 11:38:56 crc kubenswrapper[4706]: healthz check failed Nov 25 11:38:56 crc kubenswrapper[4706]: I1125 11:38:56.881454 4706 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-22mnp" podUID="ab6319ba-e125-4775-83c3-c5624951d634" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 25 11:38:56 crc kubenswrapper[4706]: I1125 11:38:56.936705 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7qf2c\" (UID: \"f3c75c9b-c79d-4a63-8b0f-c7b474ad4b66\") " pod="openshift-image-registry/image-registry-697d97f7c8-7qf2c" Nov 25 11:38:56 crc kubenswrapper[4706]: E1125 11:38:56.937072 4706 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2025-11-25 11:38:57.437059034 +0000 UTC m=+146.351616415 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7qf2c" (UID: "f3c75c9b-c79d-4a63-8b0f-c7b474ad4b66") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 11:38:57 crc kubenswrapper[4706]: I1125 11:38:57.040056 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 11:38:57 crc kubenswrapper[4706]: E1125 11:38:57.040764 4706 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 11:38:57.54074101 +0000 UTC m=+146.455298391 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 11:38:57 crc kubenswrapper[4706]: I1125 11:38:57.142099 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7qf2c\" (UID: \"f3c75c9b-c79d-4a63-8b0f-c7b474ad4b66\") " pod="openshift-image-registry/image-registry-697d97f7c8-7qf2c" Nov 25 11:38:57 crc kubenswrapper[4706]: E1125 11:38:57.142762 4706 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-25 11:38:57.642740611 +0000 UTC m=+146.557297992 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7qf2c" (UID: "f3c75c9b-c79d-4a63-8b0f-c7b474ad4b66") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 11:38:57 crc kubenswrapper[4706]: I1125 11:38:57.247482 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 11:38:57 crc kubenswrapper[4706]: E1125 11:38:57.247999 4706 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 11:38:57.747974719 +0000 UTC m=+146.662532100 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 11:38:57 crc kubenswrapper[4706]: I1125 11:38:57.248291 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-s9mkm" event={"ID":"4fbe2538-0d5f-48c2-8819-7bb0386b2710","Type":"ContainerStarted","Data":"35b725a52e7542fd5fd5b772f6b20e588b766ee0f33132580e282339c00556f7"} Nov 25 11:38:57 crc kubenswrapper[4706]: I1125 11:38:57.249068 4706 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-s9mkm" Nov 25 11:38:57 crc kubenswrapper[4706]: I1125 11:38:57.257279 4706 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-s9mkm container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.39:8443/healthz\": dial tcp 10.217.0.39:8443: connect: connection refused" start-of-body= Nov 25 11:38:57 crc kubenswrapper[4706]: I1125 11:38:57.258160 4706 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-s9mkm" podUID="4fbe2538-0d5f-48c2-8819-7bb0386b2710" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.39:8443/healthz\": dial tcp 10.217.0.39:8443: connect: connection refused" Nov 25 11:38:57 crc kubenswrapper[4706]: I1125 11:38:57.260435 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-fh2jc" 
event={"ID":"cb8f2779-a7df-4ead-a209-9e8024e20647","Type":"ContainerStarted","Data":"97bca58b09cc42e27da0e0fa5afa9a120d12406b2e2213e7f78e648f971351ca"} Nov 25 11:38:57 crc kubenswrapper[4706]: I1125 11:38:57.273485 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-fc942" event={"ID":"44180138-81cd-45b3-b14e-c21819b16645","Type":"ContainerStarted","Data":"620130193ec98adc080f82999645ac06d9c781531eef106c29b02324b72f7d1a"} Nov 25 11:38:57 crc kubenswrapper[4706]: I1125 11:38:57.291647 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-mnv7h" event={"ID":"7cd3b65b-a0b4-4cee-87ac-23925d36acb8","Type":"ContainerStarted","Data":"db288391e54d25837f906ab55593b46cce11d4e885a214a902f1b35d46042cec"} Nov 25 11:38:57 crc kubenswrapper[4706]: I1125 11:38:57.301676 4706 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-s9mkm" podStartSLOduration=124.301657477 podStartE2EDuration="2m4.301657477s" podCreationTimestamp="2025-11-25 11:36:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 11:38:57.29661081 +0000 UTC m=+146.211168191" watchObservedRunningTime="2025-11-25 11:38:57.301657477 +0000 UTC m=+146.216214858" Nov 25 11:38:57 crc kubenswrapper[4706]: I1125 11:38:57.333702 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-446sw" event={"ID":"f0084f7d-107a-484b-bc35-04f9585e0e2b","Type":"ContainerStarted","Data":"b95dfaa208bd083c66e4063f51bee2a56b9bbaba009481fd6bda5b761fb586f9"} Nov 25 11:38:57 crc kubenswrapper[4706]: I1125 11:38:57.353570 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7qf2c\" (UID: \"f3c75c9b-c79d-4a63-8b0f-c7b474ad4b66\") " pod="openshift-image-registry/image-registry-697d97f7c8-7qf2c" Nov 25 11:38:57 crc kubenswrapper[4706]: E1125 11:38:57.354807 4706 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-25 11:38:57.854791971 +0000 UTC m=+146.769349352 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7qf2c" (UID: "f3c75c9b-c79d-4a63-8b0f-c7b474ad4b66") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 11:38:57 crc kubenswrapper[4706]: I1125 11:38:57.358152 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-2hpv7" event={"ID":"704d8383-2f51-4244-8a2a-3477cb15f23f","Type":"ContainerStarted","Data":"c0371474c4fb552482c00250651f24ddafbcac5f9e2bd36acae05895ea11ae46"} Nov 25 11:38:57 crc kubenswrapper[4706]: I1125 11:38:57.371433 4706 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-fc942" podStartSLOduration=124.371412182 podStartE2EDuration="2m4.371412182s" podCreationTimestamp="2025-11-25 11:36:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 11:38:57.369569882 +0000 UTC m=+146.284127253" watchObservedRunningTime="2025-11-25 11:38:57.371412182 +0000 
UTC m=+146.285969573" Nov 25 11:38:57 crc kubenswrapper[4706]: I1125 11:38:57.371775 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-bthtj" event={"ID":"92d6e6ef-5880-4bdf-bdc5-5d2c4591a094","Type":"ContainerStarted","Data":"1a6cc556ee88660b6d56030735580f30c15abb8335ac29167b9184011e37bfb4"} Nov 25 11:38:57 crc kubenswrapper[4706]: I1125 11:38:57.395836 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-6hgvx" event={"ID":"33afcb8d-d045-4897-af65-56b622cdfa58","Type":"ContainerStarted","Data":"72797c387dd586bb6bb661a1fef3b7163a0740d9d514cf35b67f7b0a69d50541"} Nov 25 11:38:57 crc kubenswrapper[4706]: I1125 11:38:57.440826 4706 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd-operator/etcd-operator-b45778765-2hpv7" podStartSLOduration=124.440794487 podStartE2EDuration="2m4.440794487s" podCreationTimestamp="2025-11-25 11:36:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 11:38:57.428087962 +0000 UTC m=+146.342645373" watchObservedRunningTime="2025-11-25 11:38:57.440794487 +0000 UTC m=+146.355351868" Nov 25 11:38:57 crc kubenswrapper[4706]: I1125 11:38:57.447421 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-qlr24" event={"ID":"daffec68-fec5-4f3b-9302-4b736b09fc9c","Type":"ContainerStarted","Data":"3c3bc3aec0512487ced9dfaac06c521b7551e4f90112ed28b3e6712849970550"} Nov 25 11:38:57 crc kubenswrapper[4706]: I1125 11:38:57.448527 4706 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console-operator/console-operator-58897d9998-qlr24" Nov 25 11:38:57 crc kubenswrapper[4706]: I1125 11:38:57.457524 4706 patch_prober.go:28] interesting pod/console-operator-58897d9998-qlr24 
container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.13:8443/readyz\": dial tcp 10.217.0.13:8443: connect: connection refused" start-of-body= Nov 25 11:38:57 crc kubenswrapper[4706]: I1125 11:38:57.457616 4706 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-qlr24" podUID="daffec68-fec5-4f3b-9302-4b736b09fc9c" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.13:8443/readyz\": dial tcp 10.217.0.13:8443: connect: connection refused" Nov 25 11:38:57 crc kubenswrapper[4706]: I1125 11:38:57.458716 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 11:38:57 crc kubenswrapper[4706]: E1125 11:38:57.460218 4706 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 11:38:57.960197454 +0000 UTC m=+146.874754845 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 11:38:57 crc kubenswrapper[4706]: I1125 11:38:57.486373 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-cs4td" event={"ID":"198b8b13-3d25-4fbb-81af-a2a39186b64d","Type":"ContainerStarted","Data":"232899d3e933ab63fee8cbeeecc2711e77062353a921961be4b6e086cbf5c2ce"} Nov 25 11:38:57 crc kubenswrapper[4706]: I1125 11:38:57.558031 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-jg4ng" event={"ID":"7f36936f-00b7-4fde-9c95-8fb3433aba0a","Type":"ContainerStarted","Data":"40a66113920f177ead189567fb66b63cc2ef61fa412c853492b0b7901ba5662d"} Nov 25 11:38:57 crc kubenswrapper[4706]: I1125 11:38:57.559923 4706 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console-operator/console-operator-58897d9998-qlr24" podStartSLOduration=124.559903962 podStartE2EDuration="2m4.559903962s" podCreationTimestamp="2025-11-25 11:36:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 11:38:57.558816773 +0000 UTC m=+146.473374154" watchObservedRunningTime="2025-11-25 11:38:57.559903962 +0000 UTC m=+146.474461343" Nov 25 11:38:57 crc kubenswrapper[4706]: I1125 11:38:57.560139 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7qf2c\" (UID: \"f3c75c9b-c79d-4a63-8b0f-c7b474ad4b66\") " pod="openshift-image-registry/image-registry-697d97f7c8-7qf2c" Nov 25 11:38:57 crc kubenswrapper[4706]: E1125 11:38:57.561484 4706 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-25 11:38:58.061467015 +0000 UTC m=+146.976024396 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7qf2c" (UID: "f3c75c9b-c79d-4a63-8b0f-c7b474ad4b66") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 11:38:57 crc kubenswrapper[4706]: I1125 11:38:57.566797 4706 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-server-446sw" podStartSLOduration=7.566767449 podStartE2EDuration="7.566767449s" podCreationTimestamp="2025-11-25 11:38:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 11:38:57.497126367 +0000 UTC m=+146.411683748" watchObservedRunningTime="2025-11-25 11:38:57.566767449 +0000 UTC m=+146.481324830" Nov 25 11:38:57 crc kubenswrapper[4706]: I1125 11:38:57.616720 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-hhh7q" 
event={"ID":"825f088d-44aa-4f48-b95d-6245da5b1775","Type":"ContainerStarted","Data":"3349d43cceba3f6ac02a3ac918928b28fd9436447b8117f2d210456c944f4ac4"} Nov 25 11:38:57 crc kubenswrapper[4706]: I1125 11:38:57.619427 4706 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-cs4td" podStartSLOduration=124.619397628 podStartE2EDuration="2m4.619397628s" podCreationTimestamp="2025-11-25 11:36:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 11:38:57.616983603 +0000 UTC m=+146.531540984" watchObservedRunningTime="2025-11-25 11:38:57.619397628 +0000 UTC m=+146.533955009" Nov 25 11:38:57 crc kubenswrapper[4706]: I1125 11:38:57.623495 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-tf2kg" event={"ID":"55479c26-471b-4a9c-9d70-ec107786bbc4","Type":"ContainerStarted","Data":"046f782c7c1a43004ce4390b078f2e166b60155e9049cbed622f90fa891943b1"} Nov 25 11:38:57 crc kubenswrapper[4706]: I1125 11:38:57.654666 4706 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-6hgvx" podStartSLOduration=124.654637845 podStartE2EDuration="2m4.654637845s" podCreationTimestamp="2025-11-25 11:36:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 11:38:57.649933608 +0000 UTC m=+146.564490989" watchObservedRunningTime="2025-11-25 11:38:57.654637845 +0000 UTC m=+146.569195226" Nov 25 11:38:57 crc kubenswrapper[4706]: I1125 11:38:57.664680 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 11:38:57 crc kubenswrapper[4706]: E1125 11:38:57.665369 4706 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 11:38:58.165340276 +0000 UTC m=+147.079897657 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 11:38:57 crc kubenswrapper[4706]: I1125 11:38:57.711813 4706 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-hhh7q" podStartSLOduration=124.711791998 podStartE2EDuration="2m4.711791998s" podCreationTimestamp="2025-11-25 11:36:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 11:38:57.711405967 +0000 UTC m=+146.625963368" watchObservedRunningTime="2025-11-25 11:38:57.711791998 +0000 UTC m=+146.626349389" Nov 25 11:38:57 crc kubenswrapper[4706]: I1125 11:38:57.736229 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-kg9rr" event={"ID":"f6ce79ff-bc51-4375-bd97-7e6ba29f263d","Type":"ContainerStarted","Data":"3381b08438ab2bb7879671f111f21243e8438444bc8e396213e06b054bad5b0f"} Nov 25 11:38:57 crc 
kubenswrapper[4706]: I1125 11:38:57.757838 4706 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/machine-api-operator-5694c8668f-9z28x" podStartSLOduration=124.757811388 podStartE2EDuration="2m4.757811388s" podCreationTimestamp="2025-11-25 11:36:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 11:38:57.751982929 +0000 UTC m=+146.666540310" watchObservedRunningTime="2025-11-25 11:38:57.757811388 +0000 UTC m=+146.672368769" Nov 25 11:38:57 crc kubenswrapper[4706]: I1125 11:38:57.763266 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-jsj27" event={"ID":"d4ec8c5d-e4c6-42d4-bf1c-1d3952ce6d7a","Type":"ContainerStarted","Data":"873138fa4d6c8c92c7d2ba4863e8175413eb2b8cacf9ad74e91531e8a3d7f90e"} Nov 25 11:38:57 crc kubenswrapper[4706]: I1125 11:38:57.768868 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7qf2c\" (UID: \"f3c75c9b-c79d-4a63-8b0f-c7b474ad4b66\") " pod="openshift-image-registry/image-registry-697d97f7c8-7qf2c" Nov 25 11:38:57 crc kubenswrapper[4706]: E1125 11:38:57.770956 4706 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-25 11:38:58.270935484 +0000 UTC m=+147.185492865 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7qf2c" (UID: "f3c75c9b-c79d-4a63-8b0f-c7b474ad4b66") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 11:38:57 crc kubenswrapper[4706]: I1125 11:38:57.794189 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-rs94g" event={"ID":"d3172a49-2bd1-4003-8ef0-560d4522e410","Type":"ContainerStarted","Data":"e5ffd61446d2bf999489aa0a391cbe4894507f2f4478167c4163c984d210003f"} Nov 25 11:38:57 crc kubenswrapper[4706]: I1125 11:38:57.795952 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-tgngn" event={"ID":"01c8d08c-1ad6-4048-92d4-98382da66cca","Type":"ContainerStarted","Data":"22732d59751e69f9a2ba1f68441ecbc6c4b7eb1801f24d36f837e44c084e0671"} Nov 25 11:38:57 crc kubenswrapper[4706]: I1125 11:38:57.811288 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-zn9dk" event={"ID":"bd8d3bba-bf4e-4bda-94ff-ce2902b3299a","Type":"ContainerStarted","Data":"e1d472d4907ff5bc21dee43ddf20267a8593cd34b3567fa36c0d083869575729"} Nov 25 11:38:57 crc kubenswrapper[4706]: I1125 11:38:57.815835 4706 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-zn9dk" Nov 25 11:38:57 crc kubenswrapper[4706]: I1125 11:38:57.818440 4706 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-zn9dk container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.35:8080/healthz\": dial tcp 10.217.0.35:8080: connect: connection 
refused" start-of-body= Nov 25 11:38:57 crc kubenswrapper[4706]: I1125 11:38:57.818505 4706 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-zn9dk" podUID="bd8d3bba-bf4e-4bda-94ff-ce2902b3299a" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.35:8080/healthz\": dial tcp 10.217.0.35:8080: connect: connection refused" Nov 25 11:38:57 crc kubenswrapper[4706]: I1125 11:38:57.839157 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-w6nqn" event={"ID":"09d713da-8021-4bfa-b39d-bc3399593865","Type":"ContainerStarted","Data":"f135fb3ff39082c07e21aaae96f08d26e8201e6438dd86d861ba7aefbef64c9e"} Nov 25 11:38:57 crc kubenswrapper[4706]: I1125 11:38:57.840118 4706 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-7777fb866f-w6nqn" Nov 25 11:38:57 crc kubenswrapper[4706]: I1125 11:38:57.842156 4706 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-kg9rr" podStartSLOduration=124.842141438 podStartE2EDuration="2m4.842141438s" podCreationTimestamp="2025-11-25 11:36:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 11:38:57.811019773 +0000 UTC m=+146.725577154" watchObservedRunningTime="2025-11-25 11:38:57.842141438 +0000 UTC m=+146.756698819" Nov 25 11:38:57 crc kubenswrapper[4706]: I1125 11:38:57.843064 4706 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-zn9dk" podStartSLOduration=124.843055203 podStartE2EDuration="2m4.843055203s" podCreationTimestamp="2025-11-25 11:36:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2025-11-25 11:38:57.841665005 +0000 UTC m=+146.756222406" watchObservedRunningTime="2025-11-25 11:38:57.843055203 +0000 UTC m=+146.757612594" Nov 25 11:38:57 crc kubenswrapper[4706]: I1125 11:38:57.863992 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-q466t" event={"ID":"f1ac94b4-787a-4778-8891-84b37d9e7565","Type":"ContainerStarted","Data":"65b049d14da27beef0530d7483fb7a019afc7a6478f7183b50a028e037fa48f8"} Nov 25 11:38:57 crc kubenswrapper[4706]: I1125 11:38:57.864059 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-q466t" event={"ID":"f1ac94b4-787a-4778-8891-84b37d9e7565","Type":"ContainerStarted","Data":"33910e1429c851a390ce20d70d5d429a01bceee27968c03705f5c544174551ac"} Nov 25 11:38:57 crc kubenswrapper[4706]: I1125 11:38:57.876235 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 11:38:57 crc kubenswrapper[4706]: I1125 11:38:57.876674 4706 patch_prober.go:28] interesting pod/router-default-5444994796-22mnp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 25 11:38:57 crc kubenswrapper[4706]: [-]has-synced failed: reason withheld Nov 25 11:38:57 crc kubenswrapper[4706]: [+]process-running ok Nov 25 11:38:57 crc kubenswrapper[4706]: healthz check failed Nov 25 11:38:57 crc kubenswrapper[4706]: I1125 11:38:57.876740 4706 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-22mnp" podUID="ab6319ba-e125-4775-83c3-c5624951d634" containerName="router" probeResult="failure" output="HTTP 
probe failed with statuscode: 500" Nov 25 11:38:57 crc kubenswrapper[4706]: E1125 11:38:57.877703 4706 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 11:38:58.377682714 +0000 UTC m=+147.292240245 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 11:38:57 crc kubenswrapper[4706]: I1125 11:38:57.891190 4706 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-config-operator/openshift-config-operator-7777fb866f-w6nqn" podStartSLOduration=124.89116819 podStartE2EDuration="2m4.89116819s" podCreationTimestamp="2025-11-25 11:36:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 11:38:57.889579377 +0000 UTC m=+146.804136758" watchObservedRunningTime="2025-11-25 11:38:57.89116819 +0000 UTC m=+146.805725571" Nov 25 11:38:57 crc kubenswrapper[4706]: I1125 11:38:57.895605 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-99vrx" event={"ID":"0820aa13-f7b2-403e-9d85-1f940abae603","Type":"ContainerStarted","Data":"6eee746bc863d5615ff1034273aabdea8e9ae61e305771fa0961ef85af975a22"} Nov 25 11:38:57 crc kubenswrapper[4706]: I1125 11:38:57.907240 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-service-ca/service-ca-9c57cc56f-vpgtz" event={"ID":"eea9f096-83bc-4f8c-b405-390011a0dd7e","Type":"ContainerStarted","Data":"fdff4c6e1cf7c8ab870e6469bad28bc7e28861773e3db0947e82618c8c533a61"} Nov 25 11:38:57 crc kubenswrapper[4706]: I1125 11:38:57.924561 4706 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-canary/ingress-canary-q466t" podStartSLOduration=7.924543846 podStartE2EDuration="7.924543846s" podCreationTimestamp="2025-11-25 11:38:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 11:38:57.922801809 +0000 UTC m=+146.837359190" watchObservedRunningTime="2025-11-25 11:38:57.924543846 +0000 UTC m=+146.839101227" Nov 25 11:38:57 crc kubenswrapper[4706]: I1125 11:38:57.950804 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-jhptj" event={"ID":"01b7a9e5-be6c-4a8e-9279-62eaf90e745d","Type":"ContainerStarted","Data":"a3a88bdc7addddc9950987c161eccda016eb68a9f6e95fe2f3f26759f69d5476"} Nov 25 11:38:57 crc kubenswrapper[4706]: I1125 11:38:57.970054 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29401170-s4f7r" event={"ID":"51a87a4e-3d58-48e0-b455-292aa206e149","Type":"ContainerStarted","Data":"2c5dfa9cb2ce5d6cbb777e4b005be38591922269782460a54c83a0a317b49885"} Nov 25 11:38:57 crc kubenswrapper[4706]: I1125 11:38:57.985548 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7qf2c\" (UID: \"f3c75c9b-c79d-4a63-8b0f-c7b474ad4b66\") " pod="openshift-image-registry/image-registry-697d97f7c8-7qf2c" Nov 25 11:38:57 crc kubenswrapper[4706]: E1125 11:38:57.990058 4706 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-25 11:38:58.490034095 +0000 UTC m=+147.404591666 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7qf2c" (UID: "f3c75c9b-c79d-4a63-8b0f-c7b474ad4b66") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 11:38:58 crc kubenswrapper[4706]: I1125 11:38:58.007776 4706 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca/service-ca-9c57cc56f-vpgtz" podStartSLOduration=125.007756517 podStartE2EDuration="2m5.007756517s" podCreationTimestamp="2025-11-25 11:36:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 11:38:58.006931094 +0000 UTC m=+146.921488495" watchObservedRunningTime="2025-11-25 11:38:58.007756517 +0000 UTC m=+146.922313908" Nov 25 11:38:58 crc kubenswrapper[4706]: I1125 11:38:58.008290 4706 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-99vrx" podStartSLOduration=125.008284841 podStartE2EDuration="2m5.008284841s" podCreationTimestamp="2025-11-25 11:36:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 11:38:57.968768658 +0000 UTC m=+146.883326049" watchObservedRunningTime="2025-11-25 11:38:58.008284841 +0000 UTC m=+146.922842222" Nov 25 11:38:58 crc 
kubenswrapper[4706]: I1125 11:38:58.042220 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-nqt58" event={"ID":"7eaffd03-b03a-491f-9bc3-250a1f9021e7","Type":"ContainerStarted","Data":"89a256596ad8bb2df548e1e24d2ddd77c5b0325297ea57b576a330f9a2bd6e9d"} Nov 25 11:38:58 crc kubenswrapper[4706]: I1125 11:38:58.073106 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-d9vjp" event={"ID":"3eaaf4f5-59b0-4ab7-a865-e962b59f0584","Type":"ContainerStarted","Data":"a21315ea1b045ce80abeb33b39436c7a8f3d19cffe94164da7069aa0f1b3170c"} Nov 25 11:38:58 crc kubenswrapper[4706]: I1125 11:38:58.087087 4706 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29401170-s4f7r" podStartSLOduration=125.087055911 podStartE2EDuration="2m5.087055911s" podCreationTimestamp="2025-11-25 11:36:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 11:38:58.045648046 +0000 UTC m=+146.960205437" watchObservedRunningTime="2025-11-25 11:38:58.087055911 +0000 UTC m=+147.001613292" Nov 25 11:38:58 crc kubenswrapper[4706]: I1125 11:38:58.087540 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 11:38:58 crc kubenswrapper[4706]: I1125 11:38:58.088969 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-ss2xd" event={"ID":"239de662-d89b-4e6e-a970-56811041192f","Type":"ContainerStarted","Data":"40945b717e08512d258602a1271a882fb8523358c4730c45304ef511f37b7dcb"} 
Nov 25 11:38:58 crc kubenswrapper[4706]: I1125 11:38:58.089968 4706 patch_prober.go:28] interesting pod/downloads-7954f5f757-jd66x container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.24:8080/\": dial tcp 10.217.0.24:8080: connect: connection refused" start-of-body= Nov 25 11:38:58 crc kubenswrapper[4706]: I1125 11:38:58.091992 4706 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-jd66x" podUID="bf1352d3-1ee8-4c51-8f45-b9fd8354fd07" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.24:8080/\": dial tcp 10.217.0.24:8080: connect: connection refused" Nov 25 11:38:58 crc kubenswrapper[4706]: I1125 11:38:58.090111 4706 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-558db77b4-ss2xd" Nov 25 11:38:58 crc kubenswrapper[4706]: E1125 11:38:58.095975 4706 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 11:38:58.595946092 +0000 UTC m=+147.510503473 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 11:38:58 crc kubenswrapper[4706]: I1125 11:38:58.097251 4706 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-ss2xd container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.14:6443/healthz\": dial tcp 10.217.0.14:6443: connect: connection refused" start-of-body= Nov 25 11:38:58 crc kubenswrapper[4706]: I1125 11:38:58.097340 4706 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-ss2xd" podUID="239de662-d89b-4e6e-a970-56811041192f" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.14:6443/healthz\": dial tcp 10.217.0.14:6443: connect: connection refused" Nov 25 11:38:58 crc kubenswrapper[4706]: I1125 11:38:58.098277 4706 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-jhptj" podStartSLOduration=125.098261915 podStartE2EDuration="2m5.098261915s" podCreationTimestamp="2025-11-25 11:36:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 11:38:58.076883414 +0000 UTC m=+146.991440816" watchObservedRunningTime="2025-11-25 11:38:58.098261915 +0000 UTC m=+147.012819296" Nov 25 11:38:58 crc kubenswrapper[4706]: I1125 11:38:58.099100 4706 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca-operator/service-ca-operator-777779d784-nqt58" 
podStartSLOduration=125.099093658 podStartE2EDuration="2m5.099093658s" podCreationTimestamp="2025-11-25 11:36:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 11:38:58.099032706 +0000 UTC m=+147.013590087" watchObservedRunningTime="2025-11-25 11:38:58.099093658 +0000 UTC m=+147.013651029" Nov 25 11:38:58 crc kubenswrapper[4706]: I1125 11:38:58.143480 4706 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-558db77b4-ss2xd" podStartSLOduration=126.143456673 podStartE2EDuration="2m6.143456673s" podCreationTimestamp="2025-11-25 11:36:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 11:38:58.143255977 +0000 UTC m=+147.057813358" watchObservedRunningTime="2025-11-25 11:38:58.143456673 +0000 UTC m=+147.058014054" Nov 25 11:38:58 crc kubenswrapper[4706]: I1125 11:38:58.189142 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7qf2c\" (UID: \"f3c75c9b-c79d-4a63-8b0f-c7b474ad4b66\") " pod="openshift-image-registry/image-registry-697d97f7c8-7qf2c" Nov 25 11:38:58 crc kubenswrapper[4706]: E1125 11:38:58.192445 4706 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-25 11:38:58.692419183 +0000 UTC m=+147.606976784 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7qf2c" (UID: "f3c75c9b-c79d-4a63-8b0f-c7b474ad4b66") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 11:38:58 crc kubenswrapper[4706]: I1125 11:38:58.248653 4706 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-w6nqn container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.7:8443/healthz\": dial tcp 10.217.0.7:8443: connect: connection refused" start-of-body= Nov 25 11:38:58 crc kubenswrapper[4706]: I1125 11:38:58.248757 4706 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-w6nqn" podUID="09d713da-8021-4bfa-b39d-bc3399593865" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.7:8443/healthz\": dial tcp 10.217.0.7:8443: connect: connection refused" Nov 25 11:38:58 crc kubenswrapper[4706]: I1125 11:38:58.249155 4706 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-w6nqn container/openshift-config-operator namespace/openshift-config-operator: Liveness probe status=failure output="Get \"https://10.217.0.7:8443/healthz\": dial tcp 10.217.0.7:8443: connect: connection refused" start-of-body= Nov 25 11:38:58 crc kubenswrapper[4706]: I1125 11:38:58.249177 4706 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-w6nqn" podUID="09d713da-8021-4bfa-b39d-bc3399593865" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.7:8443/healthz\": dial tcp 10.217.0.7:8443: connect: 
connection refused" Nov 25 11:38:58 crc kubenswrapper[4706]: I1125 11:38:58.290754 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 11:38:58 crc kubenswrapper[4706]: E1125 11:38:58.291345 4706 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 11:38:58.791294228 +0000 UTC m=+147.705851609 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 11:38:58 crc kubenswrapper[4706]: I1125 11:38:58.393243 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7qf2c\" (UID: \"f3c75c9b-c79d-4a63-8b0f-c7b474ad4b66\") " pod="openshift-image-registry/image-registry-697d97f7c8-7qf2c" Nov 25 11:38:58 crc kubenswrapper[4706]: E1125 11:38:58.393843 4706 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2025-11-25 11:38:58.893820303 +0000 UTC m=+147.808377684 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7qf2c" (UID: "f3c75c9b-c79d-4a63-8b0f-c7b474ad4b66") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 11:38:58 crc kubenswrapper[4706]: I1125 11:38:58.495867 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 11:38:58 crc kubenswrapper[4706]: E1125 11:38:58.496193 4706 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 11:38:58.996159133 +0000 UTC m=+147.910716514 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 11:38:58 crc kubenswrapper[4706]: I1125 11:38:58.496435 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7qf2c\" (UID: \"f3c75c9b-c79d-4a63-8b0f-c7b474ad4b66\") " pod="openshift-image-registry/image-registry-697d97f7c8-7qf2c" Nov 25 11:38:58 crc kubenswrapper[4706]: E1125 11:38:58.496859 4706 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-25 11:38:58.996848752 +0000 UTC m=+147.911406123 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7qf2c" (UID: "f3c75c9b-c79d-4a63-8b0f-c7b474ad4b66") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 11:38:58 crc kubenswrapper[4706]: I1125 11:38:58.598202 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 11:38:58 crc kubenswrapper[4706]: E1125 11:38:58.598467 4706 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 11:38:59.098430671 +0000 UTC m=+148.012988062 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 11:38:58 crc kubenswrapper[4706]: I1125 11:38:58.598979 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7qf2c\" (UID: \"f3c75c9b-c79d-4a63-8b0f-c7b474ad4b66\") " pod="openshift-image-registry/image-registry-697d97f7c8-7qf2c" Nov 25 11:38:58 crc kubenswrapper[4706]: E1125 11:38:58.599433 4706 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-25 11:38:59.099415378 +0000 UTC m=+148.013972759 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7qf2c" (UID: "f3c75c9b-c79d-4a63-8b0f-c7b474ad4b66") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 11:38:58 crc kubenswrapper[4706]: E1125 11:38:58.700876 4706 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 11:38:59.200849893 +0000 UTC m=+148.115407274 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 11:38:58 crc kubenswrapper[4706]: I1125 11:38:58.700750 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 11:38:58 crc kubenswrapper[4706]: E1125 11:38:58.701721 4706 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2025-11-25 11:38:59.201709436 +0000 UTC m=+148.116266817 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7qf2c" (UID: "f3c75c9b-c79d-4a63-8b0f-c7b474ad4b66") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 11:38:58 crc kubenswrapper[4706]: I1125 11:38:58.701285 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7qf2c\" (UID: \"f3c75c9b-c79d-4a63-8b0f-c7b474ad4b66\") " pod="openshift-image-registry/image-registry-697d97f7c8-7qf2c" Nov 25 11:38:58 crc kubenswrapper[4706]: I1125 11:38:58.802846 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 11:38:58 crc kubenswrapper[4706]: E1125 11:38:58.803049 4706 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 11:38:59.303016188 +0000 UTC m=+148.217573569 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 11:38:58 crc kubenswrapper[4706]: I1125 11:38:58.803262 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7qf2c\" (UID: \"f3c75c9b-c79d-4a63-8b0f-c7b474ad4b66\") " pod="openshift-image-registry/image-registry-697d97f7c8-7qf2c" Nov 25 11:38:58 crc kubenswrapper[4706]: E1125 11:38:58.803713 4706 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-25 11:38:59.303701687 +0000 UTC m=+148.218259198 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7qf2c" (UID: "f3c75c9b-c79d-4a63-8b0f-c7b474ad4b66") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 11:38:58 crc kubenswrapper[4706]: I1125 11:38:58.870912 4706 patch_prober.go:28] interesting pod/router-default-5444994796-22mnp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 25 11:38:58 crc kubenswrapper[4706]: [-]has-synced failed: reason withheld Nov 25 11:38:58 crc kubenswrapper[4706]: [+]process-running ok Nov 25 11:38:58 crc kubenswrapper[4706]: healthz check failed Nov 25 11:38:58 crc kubenswrapper[4706]: I1125 11:38:58.871040 4706 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-22mnp" podUID="ab6319ba-e125-4775-83c3-c5624951d634" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 25 11:38:58 crc kubenswrapper[4706]: I1125 11:38:58.905049 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 11:38:58 crc kubenswrapper[4706]: E1125 11:38:58.905514 4706 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2025-11-25 11:38:59.405489691 +0000 UTC m=+148.320047072 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 11:38:59 crc kubenswrapper[4706]: I1125 11:38:59.007728 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7qf2c\" (UID: \"f3c75c9b-c79d-4a63-8b0f-c7b474ad4b66\") " pod="openshift-image-registry/image-registry-697d97f7c8-7qf2c" Nov 25 11:38:59 crc kubenswrapper[4706]: E1125 11:38:59.008203 4706 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-25 11:38:59.508182401 +0000 UTC m=+148.422739782 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7qf2c" (UID: "f3c75c9b-c79d-4a63-8b0f-c7b474ad4b66") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 11:38:59 crc kubenswrapper[4706]: I1125 11:38:59.095857 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-fh2jc" event={"ID":"cb8f2779-a7df-4ead-a209-9e8024e20647","Type":"ContainerStarted","Data":"08beeecdcc7ffc5c0c9d34ab0cb0af707260916df8956df7ecdc5089e1dfb449"} Nov 25 11:38:59 crc kubenswrapper[4706]: I1125 11:38:59.095921 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-fh2jc" event={"ID":"cb8f2779-a7df-4ead-a209-9e8024e20647","Type":"ContainerStarted","Data":"188d5957e88b4ded2260be162e3fcb45f2af77ac5a4c46a34bf838470c9e0c40"} Nov 25 11:38:59 crc kubenswrapper[4706]: I1125 11:38:59.099377 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-jg4ng" event={"ID":"7f36936f-00b7-4fde-9c95-8fb3433aba0a","Type":"ContainerStarted","Data":"96bc1b4cecc496aa36a14bba80f39fd864f775159a2dd0aa49b40110ec17bc4e"} Nov 25 11:38:59 crc kubenswrapper[4706]: I1125 11:38:59.102891 4706 generic.go:334] "Generic (PLEG): container finished" podID="d4ec8c5d-e4c6-42d4-bf1c-1d3952ce6d7a" containerID="7315c181fcb8d649f6901aae6b094459b50cbe8153d8b7a9af40fbd8e139e42c" exitCode=0 Nov 25 11:38:59 crc kubenswrapper[4706]: I1125 11:38:59.103022 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-jsj27" 
event={"ID":"d4ec8c5d-e4c6-42d4-bf1c-1d3952ce6d7a","Type":"ContainerDied","Data":"7315c181fcb8d649f6901aae6b094459b50cbe8153d8b7a9af40fbd8e139e42c"} Nov 25 11:38:59 crc kubenswrapper[4706]: I1125 11:38:59.107573 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-d9vjp" event={"ID":"3eaaf4f5-59b0-4ab7-a865-e962b59f0584","Type":"ContainerStarted","Data":"b5a29f253f42d958078183827ba0d1920dffdeb7d175c69cf59b09f3915b52b6"} Nov 25 11:38:59 crc kubenswrapper[4706]: I1125 11:38:59.115095 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 11:38:59 crc kubenswrapper[4706]: E1125 11:38:59.115240 4706 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 11:38:59.615214218 +0000 UTC m=+148.529771599 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 11:38:59 crc kubenswrapper[4706]: I1125 11:38:59.115873 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7qf2c\" (UID: \"f3c75c9b-c79d-4a63-8b0f-c7b474ad4b66\") " pod="openshift-image-registry/image-registry-697d97f7c8-7qf2c" Nov 25 11:38:59 crc kubenswrapper[4706]: E1125 11:38:59.116387 4706 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-25 11:38:59.6163694 +0000 UTC m=+148.530926791 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7qf2c" (UID: "f3c75c9b-c79d-4a63-8b0f-c7b474ad4b66") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 11:38:59 crc kubenswrapper[4706]: I1125 11:38:59.130892 4706 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-admission-controller-857f4d67dd-fh2jc" podStartSLOduration=126.130860703 podStartE2EDuration="2m6.130860703s" podCreationTimestamp="2025-11-25 11:36:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 11:38:59.127275836 +0000 UTC m=+148.041833227" watchObservedRunningTime="2025-11-25 11:38:59.130860703 +0000 UTC m=+148.045418084" Nov 25 11:38:59 crc kubenswrapper[4706]: I1125 11:38:59.132707 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-rs94g" event={"ID":"d3172a49-2bd1-4003-8ef0-560d4522e410","Type":"ContainerStarted","Data":"43b96da4ad0316ae74ea15ecd6946c0b37fe2652ac28854c1b688e391c1d40fc"} Nov 25 11:38:59 crc kubenswrapper[4706]: I1125 11:38:59.132796 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-rs94g" event={"ID":"d3172a49-2bd1-4003-8ef0-560d4522e410","Type":"ContainerStarted","Data":"0fc5884ed506d3545f75b5d16220b3e9849d935103db8f8a290ddfbd1062f6ea"} Nov 25 11:38:59 crc kubenswrapper[4706]: I1125 11:38:59.136466 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-9z28x" 
event={"ID":"ab2dd029-844e-4783-8fda-bfab6a6d9243","Type":"ContainerStarted","Data":"bddec8fa86b4c5e17994ee627cc603494f236780fad9d33804389cde20acecb5"} Nov 25 11:38:59 crc kubenswrapper[4706]: I1125 11:38:59.139406 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-mnv7h" event={"ID":"7cd3b65b-a0b4-4cee-87ac-23925d36acb8","Type":"ContainerStarted","Data":"1984300aaaea5fcc001df23d9254012b784a237d07750726f62ea5e31e820962"} Nov 25 11:38:59 crc kubenswrapper[4706]: I1125 11:38:59.141723 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-tf2kg" event={"ID":"55479c26-471b-4a9c-9d70-ec107786bbc4","Type":"ContainerStarted","Data":"d6267915514876cc51689a1704a39174620a2b4105db1a93e0d073ee229b4da6"} Nov 25 11:38:59 crc kubenswrapper[4706]: I1125 11:38:59.157790 4706 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-d9vjp" podStartSLOduration=127.157767404 podStartE2EDuration="2m7.157767404s" podCreationTimestamp="2025-11-25 11:36:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 11:38:59.156180091 +0000 UTC m=+148.070737472" watchObservedRunningTime="2025-11-25 11:38:59.157767404 +0000 UTC m=+148.072324785" Nov 25 11:38:59 crc kubenswrapper[4706]: I1125 11:38:59.183625 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-wswtg" event={"ID":"916f095b-bd5f-497f-8771-aff8fd799255","Type":"ContainerStarted","Data":"d01d830d0aca4aaeda2d4d56d70369126867f35b16c7f00fcc6180205687d23b"} Nov 25 11:38:59 crc kubenswrapper[4706]: I1125 11:38:59.183690 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-wswtg" 
event={"ID":"916f095b-bd5f-497f-8771-aff8fd799255","Type":"ContainerStarted","Data":"89261e531044395d9a3108298fce1359d716925af1eaf491efb784700252f288"} Nov 25 11:38:59 crc kubenswrapper[4706]: I1125 11:38:59.184593 4706 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-dns/dns-default-wswtg" Nov 25 11:38:59 crc kubenswrapper[4706]: I1125 11:38:59.195845 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-jhptj" event={"ID":"01b7a9e5-be6c-4a8e-9279-62eaf90e745d","Type":"ContainerStarted","Data":"9b10f38bee8701b890e5318e61b3a5c685c88502f6804d3cfe4280fa805f68fb"} Nov 25 11:38:59 crc kubenswrapper[4706]: I1125 11:38:59.215860 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-x7b2m" event={"ID":"cb5c8374-6eb8-4247-97e3-ff94307782ef","Type":"ContainerStarted","Data":"fca9288b2907cad17baa50dce848a0e1e8e56e27d6f04b17d9b0c95030aaa6ee"} Nov 25 11:38:59 crc kubenswrapper[4706]: I1125 11:38:59.216080 4706 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-jg4ng" podStartSLOduration=126.216062817 podStartE2EDuration="2m6.216062817s" podCreationTimestamp="2025-11-25 11:36:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 11:38:59.213988181 +0000 UTC m=+148.128545562" watchObservedRunningTime="2025-11-25 11:38:59.216062817 +0000 UTC m=+148.130620198" Nov 25 11:38:59 crc kubenswrapper[4706]: I1125 11:38:59.216579 4706 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-x7b2m" Nov 25 11:38:59 crc kubenswrapper[4706]: I1125 11:38:59.217073 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" 
(UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 11:38:59 crc kubenswrapper[4706]: E1125 11:38:59.218779 4706 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 11:38:59.718758511 +0000 UTC m=+148.633315912 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 11:38:59 crc kubenswrapper[4706]: I1125 11:38:59.220465 4706 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-x7b2m container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.23:8443/healthz\": dial tcp 10.217.0.23:8443: connect: connection refused" start-of-body= Nov 25 11:38:59 crc kubenswrapper[4706]: I1125 11:38:59.220524 4706 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-x7b2m" podUID="cb5c8374-6eb8-4247-97e3-ff94307782ef" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.23:8443/healthz\": dial tcp 10.217.0.23:8443: connect: connection refused" Nov 25 11:38:59 crc kubenswrapper[4706]: I1125 11:38:59.236988 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-bthtj" 
event={"ID":"92d6e6ef-5880-4bdf-bdc5-5d2c4591a094","Type":"ContainerStarted","Data":"76deba9e9ad3f0ceb3c2260dbcd9817e9556efaae9c45672a0fc791c677e4539"} Nov 25 11:38:59 crc kubenswrapper[4706]: I1125 11:38:59.237277 4706 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-bthtj" Nov 25 11:38:59 crc kubenswrapper[4706]: I1125 11:38:59.243590 4706 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-bthtj container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.29:5443/healthz\": dial tcp 10.217.0.29:5443: connect: connection refused" start-of-body= Nov 25 11:38:59 crc kubenswrapper[4706]: I1125 11:38:59.243685 4706 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-bthtj" podUID="92d6e6ef-5880-4bdf-bdc5-5d2c4591a094" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.29:5443/healthz\": dial tcp 10.217.0.29:5443: connect: connection refused" Nov 25 11:38:59 crc kubenswrapper[4706]: I1125 11:38:59.245495 4706 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns-operator/dns-operator-744455d44c-mnv7h" podStartSLOduration=126.245470586 podStartE2EDuration="2m6.245470586s" podCreationTimestamp="2025-11-25 11:36:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 11:38:59.241526619 +0000 UTC m=+148.156084000" watchObservedRunningTime="2025-11-25 11:38:59.245470586 +0000 UTC m=+148.160027967" Nov 25 11:38:59 crc kubenswrapper[4706]: I1125 11:38:59.271694 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-nqt58" 
event={"ID":"7eaffd03-b03a-491f-9bc3-250a1f9021e7","Type":"ContainerStarted","Data":"350b4c4dabe2b728c7009a6063072cca1b963a2da584f88fdc6d963df790747f"} Nov 25 11:38:59 crc kubenswrapper[4706]: I1125 11:38:59.275696 4706 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-zn9dk container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.35:8080/healthz\": dial tcp 10.217.0.35:8080: connect: connection refused" start-of-body= Nov 25 11:38:59 crc kubenswrapper[4706]: I1125 11:38:59.275743 4706 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-zn9dk" podUID="bd8d3bba-bf4e-4bda-94ff-ce2902b3299a" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.35:8080/healthz\": dial tcp 10.217.0.35:8080: connect: connection refused" Nov 25 11:38:59 crc kubenswrapper[4706]: I1125 11:38:59.284704 4706 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-7777fb866f-w6nqn" Nov 25 11:38:59 crc kubenswrapper[4706]: I1125 11:38:59.298415 4706 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-s9mkm" Nov 25 11:38:59 crc kubenswrapper[4706]: I1125 11:38:59.303633 4706 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-tf2kg" podStartSLOduration=126.303603285 podStartE2EDuration="2m6.303603285s" podCreationTimestamp="2025-11-25 11:36:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 11:38:59.285041911 +0000 UTC m=+148.199599292" watchObservedRunningTime="2025-11-25 11:38:59.303603285 +0000 UTC m=+148.218160666" Nov 25 11:38:59 crc kubenswrapper[4706]: I1125 11:38:59.329209 
4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7qf2c\" (UID: \"f3c75c9b-c79d-4a63-8b0f-c7b474ad4b66\") " pod="openshift-image-registry/image-registry-697d97f7c8-7qf2c" Nov 25 11:38:59 crc kubenswrapper[4706]: E1125 11:38:59.363152 4706 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-25 11:38:59.863124632 +0000 UTC m=+148.777682003 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7qf2c" (UID: "f3c75c9b-c79d-4a63-8b0f-c7b474ad4b66") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 11:38:59 crc kubenswrapper[4706]: I1125 11:38:59.430475 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 11:38:59 crc kubenswrapper[4706]: E1125 11:38:59.432110 4706 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 11:38:59.932082055 +0000 UTC m=+148.846639436 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 11:38:59 crc kubenswrapper[4706]: I1125 11:38:59.489731 4706 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-rs94g" podStartSLOduration=126.48970618 podStartE2EDuration="2m6.48970618s" podCreationTimestamp="2025-11-25 11:36:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 11:38:59.377061171 +0000 UTC m=+148.291618562" watchObservedRunningTime="2025-11-25 11:38:59.48970618 +0000 UTC m=+148.404263561" Nov 25 11:38:59 crc kubenswrapper[4706]: I1125 11:38:59.490592 4706 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/dns-default-wswtg" podStartSLOduration=9.490581984 podStartE2EDuration="9.490581984s" podCreationTimestamp="2025-11-25 11:38:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 11:38:59.486873173 +0000 UTC m=+148.401430564" watchObservedRunningTime="2025-11-25 11:38:59.490581984 +0000 UTC m=+148.405139365" Nov 25 11:38:59 crc kubenswrapper[4706]: I1125 11:38:59.532419 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7qf2c\" (UID: 
\"f3c75c9b-c79d-4a63-8b0f-c7b474ad4b66\") " pod="openshift-image-registry/image-registry-697d97f7c8-7qf2c" Nov 25 11:38:59 crc kubenswrapper[4706]: E1125 11:38:59.532788 4706 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-25 11:39:00.03277164 +0000 UTC m=+148.947329021 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7qf2c" (UID: "f3c75c9b-c79d-4a63-8b0f-c7b474ad4b66") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 11:38:59 crc kubenswrapper[4706]: I1125 11:38:59.535395 4706 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-x7b2m" podStartSLOduration=126.535376781 podStartE2EDuration="2m6.535376781s" podCreationTimestamp="2025-11-25 11:36:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 11:38:59.526981323 +0000 UTC m=+148.441538704" watchObservedRunningTime="2025-11-25 11:38:59.535376781 +0000 UTC m=+148.449934172" Nov 25 11:38:59 crc kubenswrapper[4706]: I1125 11:38:59.627803 4706 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-bthtj" podStartSLOduration=126.627777231 podStartE2EDuration="2m6.627777231s" podCreationTimestamp="2025-11-25 11:36:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 
11:38:59.58982837 +0000 UTC m=+148.504385771" watchObservedRunningTime="2025-11-25 11:38:59.627777231 +0000 UTC m=+148.542334612" Nov 25 11:38:59 crc kubenswrapper[4706]: I1125 11:38:59.633845 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 11:38:59 crc kubenswrapper[4706]: E1125 11:38:59.634113 4706 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 11:39:00.134070042 +0000 UTC m=+149.048627433 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 11:38:59 crc kubenswrapper[4706]: I1125 11:38:59.634645 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7qf2c\" (UID: \"f3c75c9b-c79d-4a63-8b0f-c7b474ad4b66\") " pod="openshift-image-registry/image-registry-697d97f7c8-7qf2c" Nov 25 11:38:59 crc kubenswrapper[4706]: E1125 11:38:59.635105 4706 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-25 11:39:00.13509316 +0000 UTC m=+149.049650541 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7qf2c" (UID: "f3c75c9b-c79d-4a63-8b0f-c7b474ad4b66") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 11:38:59 crc kubenswrapper[4706]: I1125 11:38:59.695871 4706 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-558db77b4-ss2xd" Nov 25 11:38:59 crc kubenswrapper[4706]: I1125 11:38:59.736658 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 11:38:59 crc kubenswrapper[4706]: E1125 11:38:59.737448 4706 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 11:39:00.237424499 +0000 UTC m=+149.151981880 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 11:38:59 crc kubenswrapper[4706]: I1125 11:38:59.737501 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7qf2c\" (UID: \"f3c75c9b-c79d-4a63-8b0f-c7b474ad4b66\") " pod="openshift-image-registry/image-registry-697d97f7c8-7qf2c" Nov 25 11:38:59 crc kubenswrapper[4706]: E1125 11:38:59.738018 4706 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-25 11:39:00.237991635 +0000 UTC m=+149.152549226 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7qf2c" (UID: "f3c75c9b-c79d-4a63-8b0f-c7b474ad4b66") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 11:38:59 crc kubenswrapper[4706]: I1125 11:38:59.838665 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 11:38:59 crc kubenswrapper[4706]: I1125 11:38:59.838974 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 11:38:59 crc kubenswrapper[4706]: I1125 11:38:59.839083 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 11:38:59 crc kubenswrapper[4706]: I1125 11:38:59.839110 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: 
\"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 11:38:59 crc kubenswrapper[4706]: E1125 11:38:59.839476 4706 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 11:39:00.33944819 +0000 UTC m=+149.254005711 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 11:38:59 crc kubenswrapper[4706]: I1125 11:38:59.850538 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 11:38:59 crc kubenswrapper[4706]: I1125 11:38:59.868193 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 
11:38:59 crc kubenswrapper[4706]: I1125 11:38:59.872886 4706 patch_prober.go:28] interesting pod/router-default-5444994796-22mnp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 25 11:38:59 crc kubenswrapper[4706]: [-]has-synced failed: reason withheld Nov 25 11:38:59 crc kubenswrapper[4706]: [+]process-running ok Nov 25 11:38:59 crc kubenswrapper[4706]: healthz check failed Nov 25 11:38:59 crc kubenswrapper[4706]: I1125 11:38:59.872957 4706 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-22mnp" podUID="ab6319ba-e125-4775-83c3-c5624951d634" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 25 11:38:59 crc kubenswrapper[4706]: I1125 11:38:59.873939 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 11:38:59 crc kubenswrapper[4706]: I1125 11:38:59.942101 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 11:38:59 crc kubenswrapper[4706]: I1125 11:38:59.942234 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod 
\"image-registry-697d97f7c8-7qf2c\" (UID: \"f3c75c9b-c79d-4a63-8b0f-c7b474ad4b66\") " pod="openshift-image-registry/image-registry-697d97f7c8-7qf2c" Nov 25 11:38:59 crc kubenswrapper[4706]: I1125 11:38:59.942423 4706 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 11:38:59 crc kubenswrapper[4706]: E1125 11:38:59.942651 4706 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-25 11:39:00.442633343 +0000 UTC m=+149.357190724 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7qf2c" (UID: "f3c75c9b-c79d-4a63-8b0f-c7b474ad4b66") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 11:38:59 crc kubenswrapper[4706]: I1125 11:38:59.946403 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 11:38:59 crc kubenswrapper[4706]: I1125 11:38:59.949681 4706 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 11:39:00 crc kubenswrapper[4706]: I1125 11:39:00.042958 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 11:39:00 crc kubenswrapper[4706]: E1125 11:39:00.043512 4706 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 11:39:00.543494133 +0000 UTC m=+149.458051504 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 11:39:00 crc kubenswrapper[4706]: I1125 11:39:00.148951 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7qf2c\" (UID: \"f3c75c9b-c79d-4a63-8b0f-c7b474ad4b66\") " pod="openshift-image-registry/image-registry-697d97f7c8-7qf2c" Nov 25 11:39:00 crc kubenswrapper[4706]: E1125 11:39:00.149781 4706 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-25 11:39:00.649754658 +0000 UTC m=+149.564312039 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7qf2c" (UID: "f3c75c9b-c79d-4a63-8b0f-c7b474ad4b66") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 11:39:00 crc kubenswrapper[4706]: I1125 11:39:00.246769 4706 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 11:39:00 crc kubenswrapper[4706]: I1125 11:39:00.250955 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 11:39:00 crc kubenswrapper[4706]: E1125 11:39:00.251087 4706 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 11:39:00.75106416 +0000 UTC m=+149.665621531 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 11:39:00 crc kubenswrapper[4706]: I1125 11:39:00.251229 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7qf2c\" (UID: \"f3c75c9b-c79d-4a63-8b0f-c7b474ad4b66\") " pod="openshift-image-registry/image-registry-697d97f7c8-7qf2c" Nov 25 11:39:00 crc kubenswrapper[4706]: E1125 11:39:00.251645 4706 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-25 11:39:00.751628965 +0000 UTC m=+149.666186346 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7qf2c" (UID: "f3c75c9b-c79d-4a63-8b0f-c7b474ad4b66") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 11:39:00 crc kubenswrapper[4706]: I1125 11:39:00.266349 4706 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-mlg4m"] Nov 25 11:39:00 crc kubenswrapper[4706]: I1125 11:39:00.267601 4706 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-mlg4m" Nov 25 11:39:00 crc kubenswrapper[4706]: I1125 11:39:00.276423 4706 patch_prober.go:28] interesting pod/console-operator-58897d9998-qlr24 container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.13:8443/readyz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Nov 25 11:39:00 crc kubenswrapper[4706]: I1125 11:39:00.276506 4706 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-qlr24" podUID="daffec68-fec5-4f3b-9302-4b736b09fc9c" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.13:8443/readyz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Nov 25 11:39:00 crc kubenswrapper[4706]: W1125 11:39:00.277176 4706 reflector.go:561] object-"openshift-marketplace"/"community-operators-dockercfg-dmngl": failed to list *v1.Secret: secrets "community-operators-dockercfg-dmngl" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace 
"openshift-marketplace": no relationship found between node 'crc' and this object Nov 25 11:39:00 crc kubenswrapper[4706]: E1125 11:39:00.277213 4706 reflector.go:158] "Unhandled Error" err="object-\"openshift-marketplace\"/\"community-operators-dockercfg-dmngl\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"community-operators-dockercfg-dmngl\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-marketplace\": no relationship found between node 'crc' and this object" logger="UnhandledError" Nov 25 11:39:00 crc kubenswrapper[4706]: I1125 11:39:00.356693 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 11:39:00 crc kubenswrapper[4706]: E1125 11:39:00.356934 4706 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 11:39:00.856897345 +0000 UTC m=+149.771454716 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 11:39:00 crc kubenswrapper[4706]: I1125 11:39:00.357459 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7qf2c\" (UID: \"f3c75c9b-c79d-4a63-8b0f-c7b474ad4b66\") " pod="openshift-image-registry/image-registry-697d97f7c8-7qf2c" Nov 25 11:39:00 crc kubenswrapper[4706]: I1125 11:39:00.357533 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/efdf993e-c4c2-4eff-877d-03df2af9d43c-catalog-content\") pod \"community-operators-mlg4m\" (UID: \"efdf993e-c4c2-4eff-877d-03df2af9d43c\") " pod="openshift-marketplace/community-operators-mlg4m" Nov 25 11:39:00 crc kubenswrapper[4706]: I1125 11:39:00.357568 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/efdf993e-c4c2-4eff-877d-03df2af9d43c-utilities\") pod \"community-operators-mlg4m\" (UID: \"efdf993e-c4c2-4eff-877d-03df2af9d43c\") " pod="openshift-marketplace/community-operators-mlg4m" Nov 25 11:39:00 crc kubenswrapper[4706]: I1125 11:39:00.357606 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f8g24\" (UniqueName: 
\"kubernetes.io/projected/efdf993e-c4c2-4eff-877d-03df2af9d43c-kube-api-access-f8g24\") pod \"community-operators-mlg4m\" (UID: \"efdf993e-c4c2-4eff-877d-03df2af9d43c\") " pod="openshift-marketplace/community-operators-mlg4m" Nov 25 11:39:00 crc kubenswrapper[4706]: E1125 11:39:00.358029 4706 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-25 11:39:00.858009355 +0000 UTC m=+149.772566736 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7qf2c" (UID: "f3c75c9b-c79d-4a63-8b0f-c7b474ad4b66") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 11:39:00 crc kubenswrapper[4706]: I1125 11:39:00.388026 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-jsj27" event={"ID":"d4ec8c5d-e4c6-42d4-bf1c-1d3952ce6d7a","Type":"ContainerStarted","Data":"54f5da3fdd633c3f5793d36d76679aa54df32b9ac575b93cc2b18e1752d373f6"} Nov 25 11:39:00 crc kubenswrapper[4706]: I1125 11:39:00.404470 4706 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-mlg4m"] Nov 25 11:39:00 crc kubenswrapper[4706]: I1125 11:39:00.414129 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-tgngn" event={"ID":"01c8d08c-1ad6-4048-92d4-98382da66cca","Type":"ContainerStarted","Data":"e9429c8c3f043414e1a8d7cf3202c94eb65c627b0d99d9268fb59ab758b1ecec"} Nov 25 11:39:00 crc kubenswrapper[4706]: I1125 11:39:00.414497 4706 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-rs94g" Nov 25 11:39:00 crc kubenswrapper[4706]: I1125 11:39:00.415267 4706 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-zn9dk container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.35:8080/healthz\": dial tcp 10.217.0.35:8080: connect: connection refused" start-of-body= Nov 25 11:39:00 crc kubenswrapper[4706]: I1125 11:39:00.415639 4706 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-zn9dk" podUID="bd8d3bba-bf4e-4bda-94ff-ce2902b3299a" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.35:8080/healthz\": dial tcp 10.217.0.35:8080: connect: connection refused" Nov 25 11:39:00 crc kubenswrapper[4706]: I1125 11:39:00.435939 4706 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-x7b2m" Nov 25 11:39:00 crc kubenswrapper[4706]: I1125 11:39:00.456792 4706 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-h8tj2"] Nov 25 11:39:00 crc kubenswrapper[4706]: I1125 11:39:00.458005 4706 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-h8tj2" Nov 25 11:39:00 crc kubenswrapper[4706]: I1125 11:39:00.458135 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 11:39:00 crc kubenswrapper[4706]: I1125 11:39:00.458624 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f8g24\" (UniqueName: \"kubernetes.io/projected/efdf993e-c4c2-4eff-877d-03df2af9d43c-kube-api-access-f8g24\") pod \"community-operators-mlg4m\" (UID: \"efdf993e-c4c2-4eff-877d-03df2af9d43c\") " pod="openshift-marketplace/community-operators-mlg4m" Nov 25 11:39:00 crc kubenswrapper[4706]: I1125 11:39:00.458859 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/efdf993e-c4c2-4eff-877d-03df2af9d43c-catalog-content\") pod \"community-operators-mlg4m\" (UID: \"efdf993e-c4c2-4eff-877d-03df2af9d43c\") " pod="openshift-marketplace/community-operators-mlg4m" Nov 25 11:39:00 crc kubenswrapper[4706]: I1125 11:39:00.458886 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/efdf993e-c4c2-4eff-877d-03df2af9d43c-utilities\") pod \"community-operators-mlg4m\" (UID: \"efdf993e-c4c2-4eff-877d-03df2af9d43c\") " pod="openshift-marketplace/community-operators-mlg4m" Nov 25 11:39:00 crc kubenswrapper[4706]: E1125 11:39:00.459109 4706 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2025-11-25 11:39:00.959093751 +0000 UTC m=+149.873651132 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 11:39:00 crc kubenswrapper[4706]: I1125 11:39:00.459854 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/efdf993e-c4c2-4eff-877d-03df2af9d43c-utilities\") pod \"community-operators-mlg4m\" (UID: \"efdf993e-c4c2-4eff-877d-03df2af9d43c\") " pod="openshift-marketplace/community-operators-mlg4m" Nov 25 11:39:00 crc kubenswrapper[4706]: I1125 11:39:00.459909 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/efdf993e-c4c2-4eff-877d-03df2af9d43c-catalog-content\") pod \"community-operators-mlg4m\" (UID: \"efdf993e-c4c2-4eff-877d-03df2af9d43c\") " pod="openshift-marketplace/community-operators-mlg4m" Nov 25 11:39:00 crc kubenswrapper[4706]: I1125 11:39:00.467547 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Nov 25 11:39:00 crc kubenswrapper[4706]: I1125 11:39:00.477232 4706 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-h8tj2"] Nov 25 11:39:00 crc kubenswrapper[4706]: I1125 11:39:00.504874 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f8g24\" (UniqueName: \"kubernetes.io/projected/efdf993e-c4c2-4eff-877d-03df2af9d43c-kube-api-access-f8g24\") pod \"community-operators-mlg4m\" (UID: 
\"efdf993e-c4c2-4eff-877d-03df2af9d43c\") " pod="openshift-marketplace/community-operators-mlg4m" Nov 25 11:39:00 crc kubenswrapper[4706]: I1125 11:39:00.512244 4706 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console-operator/console-operator-58897d9998-qlr24" Nov 25 11:39:00 crc kubenswrapper[4706]: I1125 11:39:00.563918 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e636fb64-6a73-4a3d-84d3-d933046a68e0-catalog-content\") pod \"certified-operators-h8tj2\" (UID: \"e636fb64-6a73-4a3d-84d3-d933046a68e0\") " pod="openshift-marketplace/certified-operators-h8tj2" Nov 25 11:39:00 crc kubenswrapper[4706]: I1125 11:39:00.564177 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e636fb64-6a73-4a3d-84d3-d933046a68e0-utilities\") pod \"certified-operators-h8tj2\" (UID: \"e636fb64-6a73-4a3d-84d3-d933046a68e0\") " pod="openshift-marketplace/certified-operators-h8tj2" Nov 25 11:39:00 crc kubenswrapper[4706]: I1125 11:39:00.564346 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7qf2c\" (UID: \"f3c75c9b-c79d-4a63-8b0f-c7b474ad4b66\") " pod="openshift-image-registry/image-registry-697d97f7c8-7qf2c" Nov 25 11:39:00 crc kubenswrapper[4706]: I1125 11:39:00.564657 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v9bzh\" (UniqueName: \"kubernetes.io/projected/e636fb64-6a73-4a3d-84d3-d933046a68e0-kube-api-access-v9bzh\") pod \"certified-operators-h8tj2\" (UID: \"e636fb64-6a73-4a3d-84d3-d933046a68e0\") " 
pod="openshift-marketplace/certified-operators-h8tj2" Nov 25 11:39:00 crc kubenswrapper[4706]: E1125 11:39:00.568161 4706 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-25 11:39:01.068122212 +0000 UTC m=+149.982679773 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7qf2c" (UID: "f3c75c9b-c79d-4a63-8b0f-c7b474ad4b66") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 11:39:00 crc kubenswrapper[4706]: I1125 11:39:00.676547 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 11:39:00 crc kubenswrapper[4706]: I1125 11:39:00.677247 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e636fb64-6a73-4a3d-84d3-d933046a68e0-catalog-content\") pod \"certified-operators-h8tj2\" (UID: \"e636fb64-6a73-4a3d-84d3-d933046a68e0\") " pod="openshift-marketplace/certified-operators-h8tj2" Nov 25 11:39:00 crc kubenswrapper[4706]: I1125 11:39:00.677347 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e636fb64-6a73-4a3d-84d3-d933046a68e0-utilities\") pod \"certified-operators-h8tj2\" (UID: \"e636fb64-6a73-4a3d-84d3-d933046a68e0\") " 
pod="openshift-marketplace/certified-operators-h8tj2" Nov 25 11:39:00 crc kubenswrapper[4706]: I1125 11:39:00.677431 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v9bzh\" (UniqueName: \"kubernetes.io/projected/e636fb64-6a73-4a3d-84d3-d933046a68e0-kube-api-access-v9bzh\") pod \"certified-operators-h8tj2\" (UID: \"e636fb64-6a73-4a3d-84d3-d933046a68e0\") " pod="openshift-marketplace/certified-operators-h8tj2" Nov 25 11:39:00 crc kubenswrapper[4706]: E1125 11:39:00.677950 4706 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 11:39:01.177924735 +0000 UTC m=+150.092482116 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 11:39:00 crc kubenswrapper[4706]: I1125 11:39:00.678555 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e636fb64-6a73-4a3d-84d3-d933046a68e0-catalog-content\") pod \"certified-operators-h8tj2\" (UID: \"e636fb64-6a73-4a3d-84d3-d933046a68e0\") " pod="openshift-marketplace/certified-operators-h8tj2" Nov 25 11:39:00 crc kubenswrapper[4706]: I1125 11:39:00.678873 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e636fb64-6a73-4a3d-84d3-d933046a68e0-utilities\") pod \"certified-operators-h8tj2\" (UID: 
\"e636fb64-6a73-4a3d-84d3-d933046a68e0\") " pod="openshift-marketplace/certified-operators-h8tj2" Nov 25 11:39:00 crc kubenswrapper[4706]: I1125 11:39:00.711406 4706 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-xwg8t"] Nov 25 11:39:00 crc kubenswrapper[4706]: I1125 11:39:00.750045 4706 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-xwg8t"] Nov 25 11:39:00 crc kubenswrapper[4706]: I1125 11:39:00.750520 4706 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-xwg8t" Nov 25 11:39:00 crc kubenswrapper[4706]: I1125 11:39:00.765769 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v9bzh\" (UniqueName: \"kubernetes.io/projected/e636fb64-6a73-4a3d-84d3-d933046a68e0-kube-api-access-v9bzh\") pod \"certified-operators-h8tj2\" (UID: \"e636fb64-6a73-4a3d-84d3-d933046a68e0\") " pod="openshift-marketplace/certified-operators-h8tj2" Nov 25 11:39:00 crc kubenswrapper[4706]: I1125 11:39:00.781833 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/59c181cc-6505-4d92-ab04-eaaa72b4389c-catalog-content\") pod \"community-operators-xwg8t\" (UID: \"59c181cc-6505-4d92-ab04-eaaa72b4389c\") " pod="openshift-marketplace/community-operators-xwg8t" Nov 25 11:39:00 crc kubenswrapper[4706]: I1125 11:39:00.785185 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gftdb\" (UniqueName: \"kubernetes.io/projected/59c181cc-6505-4d92-ab04-eaaa72b4389c-kube-api-access-gftdb\") pod \"community-operators-xwg8t\" (UID: \"59c181cc-6505-4d92-ab04-eaaa72b4389c\") " pod="openshift-marketplace/community-operators-xwg8t" Nov 25 11:39:00 crc kubenswrapper[4706]: I1125 11:39:00.785267 4706 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7qf2c\" (UID: \"f3c75c9b-c79d-4a63-8b0f-c7b474ad4b66\") " pod="openshift-image-registry/image-registry-697d97f7c8-7qf2c" Nov 25 11:39:00 crc kubenswrapper[4706]: I1125 11:39:00.785334 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/59c181cc-6505-4d92-ab04-eaaa72b4389c-utilities\") pod \"community-operators-xwg8t\" (UID: \"59c181cc-6505-4d92-ab04-eaaa72b4389c\") " pod="openshift-marketplace/community-operators-xwg8t" Nov 25 11:39:00 crc kubenswrapper[4706]: E1125 11:39:00.785836 4706 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-25 11:39:01.285818005 +0000 UTC m=+150.200375386 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7qf2c" (UID: "f3c75c9b-c79d-4a63-8b0f-c7b474ad4b66") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 11:39:00 crc kubenswrapper[4706]: I1125 11:39:00.814450 4706 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-h8tj2" Nov 25 11:39:00 crc kubenswrapper[4706]: I1125 11:39:00.869567 4706 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-vfhr5"] Nov 25 11:39:00 crc kubenswrapper[4706]: I1125 11:39:00.869942 4706 patch_prober.go:28] interesting pod/router-default-5444994796-22mnp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 25 11:39:00 crc kubenswrapper[4706]: [-]has-synced failed: reason withheld Nov 25 11:39:00 crc kubenswrapper[4706]: [+]process-running ok Nov 25 11:39:00 crc kubenswrapper[4706]: healthz check failed Nov 25 11:39:00 crc kubenswrapper[4706]: I1125 11:39:00.875413 4706 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-22mnp" podUID="ab6319ba-e125-4775-83c3-c5624951d634" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 25 11:39:00 crc kubenswrapper[4706]: I1125 11:39:00.876873 4706 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-vfhr5"] Nov 25 11:39:00 crc kubenswrapper[4706]: I1125 11:39:00.877029 4706 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-vfhr5" Nov 25 11:39:00 crc kubenswrapper[4706]: I1125 11:39:00.895100 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 11:39:00 crc kubenswrapper[4706]: I1125 11:39:00.895357 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gftdb\" (UniqueName: \"kubernetes.io/projected/59c181cc-6505-4d92-ab04-eaaa72b4389c-kube-api-access-gftdb\") pod \"community-operators-xwg8t\" (UID: \"59c181cc-6505-4d92-ab04-eaaa72b4389c\") " pod="openshift-marketplace/community-operators-xwg8t" Nov 25 11:39:00 crc kubenswrapper[4706]: I1125 11:39:00.895402 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/59c181cc-6505-4d92-ab04-eaaa72b4389c-utilities\") pod \"community-operators-xwg8t\" (UID: \"59c181cc-6505-4d92-ab04-eaaa72b4389c\") " pod="openshift-marketplace/community-operators-xwg8t" Nov 25 11:39:00 crc kubenswrapper[4706]: I1125 11:39:00.895446 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/59c181cc-6505-4d92-ab04-eaaa72b4389c-catalog-content\") pod \"community-operators-xwg8t\" (UID: \"59c181cc-6505-4d92-ab04-eaaa72b4389c\") " pod="openshift-marketplace/community-operators-xwg8t" Nov 25 11:39:00 crc kubenswrapper[4706]: I1125 11:39:00.895874 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/59c181cc-6505-4d92-ab04-eaaa72b4389c-catalog-content\") pod \"community-operators-xwg8t\" (UID: \"59c181cc-6505-4d92-ab04-eaaa72b4389c\") 
" pod="openshift-marketplace/community-operators-xwg8t" Nov 25 11:39:00 crc kubenswrapper[4706]: E1125 11:39:00.895954 4706 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 11:39:01.395935637 +0000 UTC m=+150.310493018 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 11:39:00 crc kubenswrapper[4706]: I1125 11:39:00.896487 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/59c181cc-6505-4d92-ab04-eaaa72b4389c-utilities\") pod \"community-operators-xwg8t\" (UID: \"59c181cc-6505-4d92-ab04-eaaa72b4389c\") " pod="openshift-marketplace/community-operators-xwg8t" Nov 25 11:39:00 crc kubenswrapper[4706]: I1125 11:39:00.934924 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gftdb\" (UniqueName: \"kubernetes.io/projected/59c181cc-6505-4d92-ab04-eaaa72b4389c-kube-api-access-gftdb\") pod \"community-operators-xwg8t\" (UID: \"59c181cc-6505-4d92-ab04-eaaa72b4389c\") " pod="openshift-marketplace/community-operators-xwg8t" Nov 25 11:39:00 crc kubenswrapper[4706]: I1125 11:39:00.960842 4706 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-bthtj" Nov 25 11:39:00 crc kubenswrapper[4706]: I1125 11:39:00.998525 4706 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c15a3609-095e-4cd9-ac60-1333da5a7f45-catalog-content\") pod \"certified-operators-vfhr5\" (UID: \"c15a3609-095e-4cd9-ac60-1333da5a7f45\") " pod="openshift-marketplace/certified-operators-vfhr5" Nov 25 11:39:00 crc kubenswrapper[4706]: I1125 11:39:00.998582 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c15a3609-095e-4cd9-ac60-1333da5a7f45-utilities\") pod \"certified-operators-vfhr5\" (UID: \"c15a3609-095e-4cd9-ac60-1333da5a7f45\") " pod="openshift-marketplace/certified-operators-vfhr5" Nov 25 11:39:00 crc kubenswrapper[4706]: I1125 11:39:00.998617 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lwv2v\" (UniqueName: \"kubernetes.io/projected/c15a3609-095e-4cd9-ac60-1333da5a7f45-kube-api-access-lwv2v\") pod \"certified-operators-vfhr5\" (UID: \"c15a3609-095e-4cd9-ac60-1333da5a7f45\") " pod="openshift-marketplace/certified-operators-vfhr5" Nov 25 11:39:00 crc kubenswrapper[4706]: I1125 11:39:00.998714 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7qf2c\" (UID: \"f3c75c9b-c79d-4a63-8b0f-c7b474ad4b66\") " pod="openshift-image-registry/image-registry-697d97f7c8-7qf2c" Nov 25 11:39:00 crc kubenswrapper[4706]: E1125 11:39:00.999141 4706 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-25 11:39:01.499121769 +0000 UTC m=+150.413679150 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7qf2c" (UID: "f3c75c9b-c79d-4a63-8b0f-c7b474ad4b66") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 11:39:01 crc kubenswrapper[4706]: I1125 11:39:01.085043 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Nov 25 11:39:01 crc kubenswrapper[4706]: I1125 11:39:01.087469 4706 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-mlg4m" Nov 25 11:39:01 crc kubenswrapper[4706]: I1125 11:39:01.088359 4706 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-xwg8t" Nov 25 11:39:01 crc kubenswrapper[4706]: I1125 11:39:01.101088 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 11:39:01 crc kubenswrapper[4706]: I1125 11:39:01.101571 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c15a3609-095e-4cd9-ac60-1333da5a7f45-utilities\") pod \"certified-operators-vfhr5\" (UID: \"c15a3609-095e-4cd9-ac60-1333da5a7f45\") " pod="openshift-marketplace/certified-operators-vfhr5" Nov 25 11:39:01 crc kubenswrapper[4706]: I1125 11:39:01.101619 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lwv2v\" (UniqueName: 
\"kubernetes.io/projected/c15a3609-095e-4cd9-ac60-1333da5a7f45-kube-api-access-lwv2v\") pod \"certified-operators-vfhr5\" (UID: \"c15a3609-095e-4cd9-ac60-1333da5a7f45\") " pod="openshift-marketplace/certified-operators-vfhr5" Nov 25 11:39:01 crc kubenswrapper[4706]: I1125 11:39:01.101753 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c15a3609-095e-4cd9-ac60-1333da5a7f45-catalog-content\") pod \"certified-operators-vfhr5\" (UID: \"c15a3609-095e-4cd9-ac60-1333da5a7f45\") " pod="openshift-marketplace/certified-operators-vfhr5" Nov 25 11:39:01 crc kubenswrapper[4706]: I1125 11:39:01.102389 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c15a3609-095e-4cd9-ac60-1333da5a7f45-catalog-content\") pod \"certified-operators-vfhr5\" (UID: \"c15a3609-095e-4cd9-ac60-1333da5a7f45\") " pod="openshift-marketplace/certified-operators-vfhr5" Nov 25 11:39:01 crc kubenswrapper[4706]: E1125 11:39:01.102512 4706 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 11:39:01.602486107 +0000 UTC m=+150.517043488 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 11:39:01 crc kubenswrapper[4706]: I1125 11:39:01.102832 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c15a3609-095e-4cd9-ac60-1333da5a7f45-utilities\") pod \"certified-operators-vfhr5\" (UID: \"c15a3609-095e-4cd9-ac60-1333da5a7f45\") " pod="openshift-marketplace/certified-operators-vfhr5" Nov 25 11:39:01 crc kubenswrapper[4706]: I1125 11:39:01.126024 4706 patch_prober.go:28] interesting pod/machine-config-daemon-dhfpm container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 25 11:39:01 crc kubenswrapper[4706]: I1125 11:39:01.126094 4706 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dhfpm" podUID="0930887a-320c-4506-8c9c-f94d6d64516a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 25 11:39:01 crc kubenswrapper[4706]: I1125 11:39:01.135793 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lwv2v\" (UniqueName: \"kubernetes.io/projected/c15a3609-095e-4cd9-ac60-1333da5a7f45-kube-api-access-lwv2v\") pod \"certified-operators-vfhr5\" (UID: \"c15a3609-095e-4cd9-ac60-1333da5a7f45\") " pod="openshift-marketplace/certified-operators-vfhr5" Nov 25 11:39:01 crc 
kubenswrapper[4706]: I1125 11:39:01.205074 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7qf2c\" (UID: \"f3c75c9b-c79d-4a63-8b0f-c7b474ad4b66\") " pod="openshift-image-registry/image-registry-697d97f7c8-7qf2c" Nov 25 11:39:01 crc kubenswrapper[4706]: E1125 11:39:01.208097 4706 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-25 11:39:01.708075435 +0000 UTC m=+150.622632816 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7qf2c" (UID: "f3c75c9b-c79d-4a63-8b0f-c7b474ad4b66") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 11:39:01 crc kubenswrapper[4706]: I1125 11:39:01.225024 4706 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-vfhr5" Nov 25 11:39:01 crc kubenswrapper[4706]: I1125 11:39:01.310981 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 11:39:01 crc kubenswrapper[4706]: E1125 11:39:01.311758 4706 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 11:39:01.811729101 +0000 UTC m=+150.726286482 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 11:39:01 crc kubenswrapper[4706]: I1125 11:39:01.413707 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7qf2c\" (UID: \"f3c75c9b-c79d-4a63-8b0f-c7b474ad4b66\") " pod="openshift-image-registry/image-registry-697d97f7c8-7qf2c" Nov 25 11:39:01 crc kubenswrapper[4706]: E1125 11:39:01.414188 4706 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 
podName: nodeName:}" failed. No retries permitted until 2025-11-25 11:39:01.914169433 +0000 UTC m=+150.828726814 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7qf2c" (UID: "f3c75c9b-c79d-4a63-8b0f-c7b474ad4b66") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 11:39:01 crc kubenswrapper[4706]: I1125 11:39:01.452986 4706 generic.go:334] "Generic (PLEG): container finished" podID="51a87a4e-3d58-48e0-b455-292aa206e149" containerID="2c5dfa9cb2ce5d6cbb777e4b005be38591922269782460a54c83a0a317b49885" exitCode=0 Nov 25 11:39:01 crc kubenswrapper[4706]: I1125 11:39:01.453059 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29401170-s4f7r" event={"ID":"51a87a4e-3d58-48e0-b455-292aa206e149","Type":"ContainerDied","Data":"2c5dfa9cb2ce5d6cbb777e4b005be38591922269782460a54c83a0a317b49885"} Nov 25 11:39:01 crc kubenswrapper[4706]: I1125 11:39:01.470797 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-jsj27" event={"ID":"d4ec8c5d-e4c6-42d4-bf1c-1d3952ce6d7a","Type":"ContainerStarted","Data":"1540979b12f81501bf804b1bc9a0f53dbae88f64f1f3ab5d45df6a47ceb33308"} Nov 25 11:39:01 crc kubenswrapper[4706]: I1125 11:39:01.494378 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"710db8f563a75aaa469f622372449ce8d22e687e66dd513adea8e33dbebc4bbc"} Nov 25 11:39:01 crc kubenswrapper[4706]: I1125 11:39:01.494478 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"d9fe3fac9d186fe7744fb035ad30cd1de552908e7bdc3119f39ff9cab645a8db"} Nov 25 11:39:01 crc kubenswrapper[4706]: I1125 11:39:01.503873 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"51dcd3ac9ed5146e2e9f14318eea13e8cc2487078a9452e18c7bce2b78548669"} Nov 25 11:39:01 crc kubenswrapper[4706]: I1125 11:39:01.503929 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"9ff9efe43a1f84eeb2aaa5fb4af533c2f8683528020061fa64e3329a50745a80"} Nov 25 11:39:01 crc kubenswrapper[4706]: I1125 11:39:01.504603 4706 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 11:39:01 crc kubenswrapper[4706]: I1125 11:39:01.509057 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"1c702dffd282551269bc12d800da28a8ca21f7053cabdaf59c47bfd54e3c12d7"} Nov 25 11:39:01 crc kubenswrapper[4706]: I1125 11:39:01.509091 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"69bce07cf61c50d03edddb6337cbae0dc11f68107ab645d02ad1b55d11714a3e"} Nov 25 11:39:01 crc kubenswrapper[4706]: I1125 11:39:01.516226 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 11:39:01 crc kubenswrapper[4706]: E1125 11:39:01.518229 4706 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 11:39:02.018202539 +0000 UTC m=+150.932759920 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 11:39:01 crc kubenswrapper[4706]: I1125 11:39:01.526218 4706 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver/apiserver-76f77b778f-jsj27" podStartSLOduration=129.526187656 podStartE2EDuration="2m9.526187656s" podCreationTimestamp="2025-11-25 11:36:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 11:39:01.522941628 +0000 UTC m=+150.437499019" watchObservedRunningTime="2025-11-25 11:39:01.526187656 +0000 UTC m=+150.440745037" Nov 25 11:39:01 crc kubenswrapper[4706]: I1125 11:39:01.562075 4706 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-h8tj2"] Nov 25 11:39:01 crc kubenswrapper[4706]: W1125 11:39:01.599693 4706 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode636fb64_6a73_4a3d_84d3_d933046a68e0.slice/crio-2550fdcb1b25857124bf5bc2b13b18a76b7679e44616244a0ed5c1d3a1aefdf1 WatchSource:0}: Error finding container 2550fdcb1b25857124bf5bc2b13b18a76b7679e44616244a0ed5c1d3a1aefdf1: Status 404 returned error can't find the container with id 2550fdcb1b25857124bf5bc2b13b18a76b7679e44616244a0ed5c1d3a1aefdf1 Nov 25 11:39:01 crc kubenswrapper[4706]: I1125 11:39:01.619721 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7qf2c\" (UID: \"f3c75c9b-c79d-4a63-8b0f-c7b474ad4b66\") " pod="openshift-image-registry/image-registry-697d97f7c8-7qf2c" Nov 25 11:39:01 crc kubenswrapper[4706]: E1125 11:39:01.624606 4706 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-25 11:39:02.124563918 +0000 UTC m=+151.039121479 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7qf2c" (UID: "f3c75c9b-c79d-4a63-8b0f-c7b474ad4b66") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 11:39:01 crc kubenswrapper[4706]: I1125 11:39:01.656140 4706 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-mlg4m"] Nov 25 11:39:01 crc kubenswrapper[4706]: I1125 11:39:01.721331 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 11:39:01 crc kubenswrapper[4706]: E1125 11:39:01.721725 4706 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 11:39:02.221699727 +0000 UTC m=+151.136257118 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 11:39:01 crc kubenswrapper[4706]: I1125 11:39:01.757252 4706 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-xwg8t"] Nov 25 11:39:01 crc kubenswrapper[4706]: I1125 11:39:01.829048 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7qf2c\" (UID: \"f3c75c9b-c79d-4a63-8b0f-c7b474ad4b66\") " pod="openshift-image-registry/image-registry-697d97f7c8-7qf2c" Nov 25 11:39:01 crc kubenswrapper[4706]: E1125 11:39:01.829708 4706 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-25 11:39:02.32969029 +0000 UTC m=+151.244247671 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7qf2c" (UID: "f3c75c9b-c79d-4a63-8b0f-c7b474ad4b66") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 11:39:01 crc kubenswrapper[4706]: I1125 11:39:01.868168 4706 patch_prober.go:28] interesting pod/router-default-5444994796-22mnp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 25 11:39:01 crc kubenswrapper[4706]: [-]has-synced failed: reason withheld Nov 25 11:39:01 crc kubenswrapper[4706]: [+]process-running ok Nov 25 11:39:01 crc kubenswrapper[4706]: healthz check failed Nov 25 11:39:01 crc kubenswrapper[4706]: I1125 11:39:01.868264 4706 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-22mnp" podUID="ab6319ba-e125-4775-83c3-c5624951d634" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 25 11:39:01 crc kubenswrapper[4706]: I1125 11:39:01.939592 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 11:39:01 crc kubenswrapper[4706]: E1125 11:39:01.940472 4706 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2025-11-25 11:39:02.440441688 +0000 UTC m=+151.354999069 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 11:39:02 crc kubenswrapper[4706]: I1125 11:39:02.048250 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7qf2c\" (UID: \"f3c75c9b-c79d-4a63-8b0f-c7b474ad4b66\") " pod="openshift-image-registry/image-registry-697d97f7c8-7qf2c" Nov 25 11:39:02 crc kubenswrapper[4706]: E1125 11:39:02.048966 4706 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-25 11:39:02.548925795 +0000 UTC m=+151.463483186 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7qf2c" (UID: "f3c75c9b-c79d-4a63-8b0f-c7b474ad4b66") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 11:39:02 crc kubenswrapper[4706]: I1125 11:39:02.059745 4706 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-vfhr5"] Nov 25 11:39:02 crc kubenswrapper[4706]: I1125 11:39:02.154338 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 11:39:02 crc kubenswrapper[4706]: E1125 11:39:02.154803 4706 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 11:39:02.65478389 +0000 UTC m=+151.569341271 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 11:39:02 crc kubenswrapper[4706]: I1125 11:39:02.257826 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7qf2c\" (UID: \"f3c75c9b-c79d-4a63-8b0f-c7b474ad4b66\") " pod="openshift-image-registry/image-registry-697d97f7c8-7qf2c" Nov 25 11:39:02 crc kubenswrapper[4706]: E1125 11:39:02.258436 4706 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-25 11:39:02.758401685 +0000 UTC m=+151.672959066 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7qf2c" (UID: "f3c75c9b-c79d-4a63-8b0f-c7b474ad4b66") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 11:39:02 crc kubenswrapper[4706]: I1125 11:39:02.359404 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 11:39:02 crc kubenswrapper[4706]: E1125 11:39:02.359670 4706 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 11:39:02.859632354 +0000 UTC m=+151.774189735 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 11:39:02 crc kubenswrapper[4706]: I1125 11:39:02.359890 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7qf2c\" (UID: \"f3c75c9b-c79d-4a63-8b0f-c7b474ad4b66\") " pod="openshift-image-registry/image-registry-697d97f7c8-7qf2c" Nov 25 11:39:02 crc kubenswrapper[4706]: E1125 11:39:02.360263 4706 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-25 11:39:02.860246071 +0000 UTC m=+151.774803452 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7qf2c" (UID: "f3c75c9b-c79d-4a63-8b0f-c7b474ad4b66") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 11:39:02 crc kubenswrapper[4706]: I1125 11:39:02.377066 4706 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Nov 25 11:39:02 crc kubenswrapper[4706]: I1125 11:39:02.378003 4706 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Nov 25 11:39:02 crc kubenswrapper[4706]: I1125 11:39:02.380574 4706 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt" Nov 25 11:39:02 crc kubenswrapper[4706]: I1125 11:39:02.381099 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-kjl2n" Nov 25 11:39:02 crc kubenswrapper[4706]: I1125 11:39:02.387290 4706 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Nov 25 11:39:02 crc kubenswrapper[4706]: I1125 11:39:02.398980 4706 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-f9d7485db-8f48m" Nov 25 11:39:02 crc kubenswrapper[4706]: I1125 11:39:02.401038 4706 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-f9d7485db-8f48m" Nov 25 11:39:02 crc kubenswrapper[4706]: I1125 11:39:02.401372 4706 patch_prober.go:28] interesting pod/console-f9d7485db-8f48m container/console namespace/openshift-console: Startup probe status=failure output="Get 
\"https://10.217.0.11:8443/health\": dial tcp 10.217.0.11:8443: connect: connection refused" start-of-body= Nov 25 11:39:02 crc kubenswrapper[4706]: I1125 11:39:02.401443 4706 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-8f48m" podUID="028d4ff3-870d-4002-843f-5381587e28fc" containerName="console" probeResult="failure" output="Get \"https://10.217.0.11:8443/health\": dial tcp 10.217.0.11:8443: connect: connection refused" Nov 25 11:39:02 crc kubenswrapper[4706]: I1125 11:39:02.413749 4706 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-kg9rr" Nov 25 11:39:02 crc kubenswrapper[4706]: I1125 11:39:02.413787 4706 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-kg9rr" Nov 25 11:39:02 crc kubenswrapper[4706]: I1125 11:39:02.420459 4706 patch_prober.go:28] interesting pod/downloads-7954f5f757-jd66x container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.24:8080/\": dial tcp 10.217.0.24:8080: connect: connection refused" start-of-body= Nov 25 11:39:02 crc kubenswrapper[4706]: I1125 11:39:02.420515 4706 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-jd66x" podUID="bf1352d3-1ee8-4c51-8f45-b9fd8354fd07" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.24:8080/\": dial tcp 10.217.0.24:8080: connect: connection refused" Nov 25 11:39:02 crc kubenswrapper[4706]: I1125 11:39:02.420740 4706 patch_prober.go:28] interesting pod/downloads-7954f5f757-jd66x container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.24:8080/\": dial tcp 10.217.0.24:8080: connect: connection refused" start-of-body= Nov 25 11:39:02 crc kubenswrapper[4706]: I1125 11:39:02.420758 4706 prober.go:107] "Probe failed" 
probeType="Readiness" pod="openshift-console/downloads-7954f5f757-jd66x" podUID="bf1352d3-1ee8-4c51-8f45-b9fd8354fd07" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.24:8080/\": dial tcp 10.217.0.24:8080: connect: connection refused" Nov 25 11:39:02 crc kubenswrapper[4706]: I1125 11:39:02.430097 4706 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-kg9rr" Nov 25 11:39:02 crc kubenswrapper[4706]: I1125 11:39:02.461266 4706 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-jx6l5"] Nov 25 11:39:02 crc kubenswrapper[4706]: I1125 11:39:02.461795 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 11:39:02 crc kubenswrapper[4706]: I1125 11:39:02.462210 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/c134187c-5e1c-4da1-be12-e5273da1b5f3-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"c134187c-5e1c-4da1-be12-e5273da1b5f3\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Nov 25 11:39:02 crc kubenswrapper[4706]: I1125 11:39:02.462296 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c134187c-5e1c-4da1-be12-e5273da1b5f3-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"c134187c-5e1c-4da1-be12-e5273da1b5f3\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Nov 25 11:39:02 crc kubenswrapper[4706]: I1125 11:39:02.463055 4706 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-jx6l5" Nov 25 11:39:02 crc kubenswrapper[4706]: E1125 11:39:02.463208 4706 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 11:39:02.963188577 +0000 UTC m=+151.877745958 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 11:39:02 crc kubenswrapper[4706]: I1125 11:39:02.472136 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Nov 25 11:39:02 crc kubenswrapper[4706]: I1125 11:39:02.473013 4706 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-jx6l5"] Nov 25 11:39:02 crc kubenswrapper[4706]: I1125 11:39:02.503215 4706 plugin_watcher.go:194] "Adding socket path or updating timestamp to desired state cache" path="/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock" Nov 25 11:39:02 crc kubenswrapper[4706]: I1125 11:39:02.518561 4706 generic.go:334] "Generic (PLEG): container finished" podID="59c181cc-6505-4d92-ab04-eaaa72b4389c" containerID="2a974ec205669803dd6ae20eebe266b7f793fcd16b71de61403b57d3e43d0a12" exitCode=0 Nov 25 11:39:02 crc kubenswrapper[4706]: I1125 11:39:02.518664 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xwg8t" 
event={"ID":"59c181cc-6505-4d92-ab04-eaaa72b4389c","Type":"ContainerDied","Data":"2a974ec205669803dd6ae20eebe266b7f793fcd16b71de61403b57d3e43d0a12"} Nov 25 11:39:02 crc kubenswrapper[4706]: I1125 11:39:02.518705 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xwg8t" event={"ID":"59c181cc-6505-4d92-ab04-eaaa72b4389c","Type":"ContainerStarted","Data":"4ae7320175c8f0cf5828bc18da3d92aa6564a9019f7fbb5aef541b1824c85002"} Nov 25 11:39:02 crc kubenswrapper[4706]: I1125 11:39:02.523551 4706 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 25 11:39:02 crc kubenswrapper[4706]: I1125 11:39:02.526227 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-tgngn" event={"ID":"01c8d08c-1ad6-4048-92d4-98382da66cca","Type":"ContainerStarted","Data":"9ca1f6b1d0e31db0b5e8a8f434b102133f64b0fa3097b57ee1590c45edd8bc03"} Nov 25 11:39:02 crc kubenswrapper[4706]: I1125 11:39:02.526283 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-tgngn" event={"ID":"01c8d08c-1ad6-4048-92d4-98382da66cca","Type":"ContainerStarted","Data":"e26b8513e4e7d822101cede9d425700919ec24e726bb085ff8085a49fc7988d8"} Nov 25 11:39:02 crc kubenswrapper[4706]: I1125 11:39:02.531200 4706 generic.go:334] "Generic (PLEG): container finished" podID="c15a3609-095e-4cd9-ac60-1333da5a7f45" containerID="2d164a47397d0b89c02c25552ccf71dac7a3cbe89710373c7966766782a0a727" exitCode=0 Nov 25 11:39:02 crc kubenswrapper[4706]: I1125 11:39:02.531293 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vfhr5" event={"ID":"c15a3609-095e-4cd9-ac60-1333da5a7f45","Type":"ContainerDied","Data":"2d164a47397d0b89c02c25552ccf71dac7a3cbe89710373c7966766782a0a727"} Nov 25 11:39:02 crc kubenswrapper[4706]: I1125 11:39:02.531349 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/certified-operators-vfhr5" event={"ID":"c15a3609-095e-4cd9-ac60-1333da5a7f45","Type":"ContainerStarted","Data":"f9717e5d106a91076b552b6bf905bfb8a33c3faf193953b8f308b9f06a7ef33c"} Nov 25 11:39:02 crc kubenswrapper[4706]: I1125 11:39:02.543426 4706 generic.go:334] "Generic (PLEG): container finished" podID="efdf993e-c4c2-4eff-877d-03df2af9d43c" containerID="9679f13319a663db6791ff433b25a3757b4c7799b8f52b1f54e03e0e8a6fcf1b" exitCode=0 Nov 25 11:39:02 crc kubenswrapper[4706]: I1125 11:39:02.543511 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mlg4m" event={"ID":"efdf993e-c4c2-4eff-877d-03df2af9d43c","Type":"ContainerDied","Data":"9679f13319a663db6791ff433b25a3757b4c7799b8f52b1f54e03e0e8a6fcf1b"} Nov 25 11:39:02 crc kubenswrapper[4706]: I1125 11:39:02.543551 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mlg4m" event={"ID":"efdf993e-c4c2-4eff-877d-03df2af9d43c","Type":"ContainerStarted","Data":"f4aead7d5ef1bc8752fc92d2b7a2326b4b4fb1ad6fb45c05a7b16fc68e243458"} Nov 25 11:39:02 crc kubenswrapper[4706]: I1125 11:39:02.557920 4706 generic.go:334] "Generic (PLEG): container finished" podID="e636fb64-6a73-4a3d-84d3-d933046a68e0" containerID="081d1be1ebca978535c05824cf0d9f66230b878a5df3d54b53d44c7756beec9d" exitCode=0 Nov 25 11:39:02 crc kubenswrapper[4706]: I1125 11:39:02.558153 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-h8tj2" event={"ID":"e636fb64-6a73-4a3d-84d3-d933046a68e0","Type":"ContainerDied","Data":"081d1be1ebca978535c05824cf0d9f66230b878a5df3d54b53d44c7756beec9d"} Nov 25 11:39:02 crc kubenswrapper[4706]: I1125 11:39:02.558280 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-h8tj2" 
event={"ID":"e636fb64-6a73-4a3d-84d3-d933046a68e0","Type":"ContainerStarted","Data":"2550fdcb1b25857124bf5bc2b13b18a76b7679e44616244a0ed5c1d3a1aefdf1"} Nov 25 11:39:02 crc kubenswrapper[4706]: I1125 11:39:02.566668 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9ba1f6b2-ea89-4d9b-aad8-b18eaba9ed05-catalog-content\") pod \"redhat-marketplace-jx6l5\" (UID: \"9ba1f6b2-ea89-4d9b-aad8-b18eaba9ed05\") " pod="openshift-marketplace/redhat-marketplace-jx6l5" Nov 25 11:39:02 crc kubenswrapper[4706]: I1125 11:39:02.566786 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9ba1f6b2-ea89-4d9b-aad8-b18eaba9ed05-utilities\") pod \"redhat-marketplace-jx6l5\" (UID: \"9ba1f6b2-ea89-4d9b-aad8-b18eaba9ed05\") " pod="openshift-marketplace/redhat-marketplace-jx6l5" Nov 25 11:39:02 crc kubenswrapper[4706]: I1125 11:39:02.566883 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/c134187c-5e1c-4da1-be12-e5273da1b5f3-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"c134187c-5e1c-4da1-be12-e5273da1b5f3\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Nov 25 11:39:02 crc kubenswrapper[4706]: I1125 11:39:02.566939 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c134187c-5e1c-4da1-be12-e5273da1b5f3-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"c134187c-5e1c-4da1-be12-e5273da1b5f3\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Nov 25 11:39:02 crc kubenswrapper[4706]: I1125 11:39:02.566979 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7qf2c\" (UID: \"f3c75c9b-c79d-4a63-8b0f-c7b474ad4b66\") " pod="openshift-image-registry/image-registry-697d97f7c8-7qf2c" Nov 25 11:39:02 crc kubenswrapper[4706]: I1125 11:39:02.567017 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vr9tf\" (UniqueName: \"kubernetes.io/projected/9ba1f6b2-ea89-4d9b-aad8-b18eaba9ed05-kube-api-access-vr9tf\") pod \"redhat-marketplace-jx6l5\" (UID: \"9ba1f6b2-ea89-4d9b-aad8-b18eaba9ed05\") " pod="openshift-marketplace/redhat-marketplace-jx6l5" Nov 25 11:39:02 crc kubenswrapper[4706]: I1125 11:39:02.568665 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/c134187c-5e1c-4da1-be12-e5273da1b5f3-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"c134187c-5e1c-4da1-be12-e5273da1b5f3\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Nov 25 11:39:02 crc kubenswrapper[4706]: E1125 11:39:02.568761 4706 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-25 11:39:03.068745295 +0000 UTC m=+151.983302686 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7qf2c" (UID: "f3c75c9b-c79d-4a63-8b0f-c7b474ad4b66") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 11:39:02 crc kubenswrapper[4706]: I1125 11:39:02.570269 4706 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-kg9rr" Nov 25 11:39:02 crc kubenswrapper[4706]: I1125 11:39:02.596185 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c134187c-5e1c-4da1-be12-e5273da1b5f3-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"c134187c-5e1c-4da1-be12-e5273da1b5f3\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Nov 25 11:39:02 crc kubenswrapper[4706]: I1125 11:39:02.670210 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 11:39:02 crc kubenswrapper[4706]: E1125 11:39:02.670702 4706 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 11:39:03.170675953 +0000 UTC m=+152.085233334 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 11:39:02 crc kubenswrapper[4706]: I1125 11:39:02.672041 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7qf2c\" (UID: \"f3c75c9b-c79d-4a63-8b0f-c7b474ad4b66\") " pod="openshift-image-registry/image-registry-697d97f7c8-7qf2c" Nov 25 11:39:02 crc kubenswrapper[4706]: I1125 11:39:02.672115 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vr9tf\" (UniqueName: \"kubernetes.io/projected/9ba1f6b2-ea89-4d9b-aad8-b18eaba9ed05-kube-api-access-vr9tf\") pod \"redhat-marketplace-jx6l5\" (UID: \"9ba1f6b2-ea89-4d9b-aad8-b18eaba9ed05\") " pod="openshift-marketplace/redhat-marketplace-jx6l5" Nov 25 11:39:02 crc kubenswrapper[4706]: I1125 11:39:02.672378 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9ba1f6b2-ea89-4d9b-aad8-b18eaba9ed05-catalog-content\") pod \"redhat-marketplace-jx6l5\" (UID: \"9ba1f6b2-ea89-4d9b-aad8-b18eaba9ed05\") " pod="openshift-marketplace/redhat-marketplace-jx6l5" Nov 25 11:39:02 crc kubenswrapper[4706]: I1125 11:39:02.672701 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9ba1f6b2-ea89-4d9b-aad8-b18eaba9ed05-utilities\") pod \"redhat-marketplace-jx6l5\" (UID: 
\"9ba1f6b2-ea89-4d9b-aad8-b18eaba9ed05\") " pod="openshift-marketplace/redhat-marketplace-jx6l5" Nov 25 11:39:02 crc kubenswrapper[4706]: I1125 11:39:02.681603 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9ba1f6b2-ea89-4d9b-aad8-b18eaba9ed05-catalog-content\") pod \"redhat-marketplace-jx6l5\" (UID: \"9ba1f6b2-ea89-4d9b-aad8-b18eaba9ed05\") " pod="openshift-marketplace/redhat-marketplace-jx6l5" Nov 25 11:39:02 crc kubenswrapper[4706]: E1125 11:39:02.682254 4706 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-25 11:39:03.182235447 +0000 UTC m=+152.096792828 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7qf2c" (UID: "f3c75c9b-c79d-4a63-8b0f-c7b474ad4b66") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 11:39:02 crc kubenswrapper[4706]: I1125 11:39:02.683341 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9ba1f6b2-ea89-4d9b-aad8-b18eaba9ed05-utilities\") pod \"redhat-marketplace-jx6l5\" (UID: \"9ba1f6b2-ea89-4d9b-aad8-b18eaba9ed05\") " pod="openshift-marketplace/redhat-marketplace-jx6l5" Nov 25 11:39:02 crc kubenswrapper[4706]: I1125 11:39:02.722132 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vr9tf\" (UniqueName: \"kubernetes.io/projected/9ba1f6b2-ea89-4d9b-aad8-b18eaba9ed05-kube-api-access-vr9tf\") pod \"redhat-marketplace-jx6l5\" (UID: 
\"9ba1f6b2-ea89-4d9b-aad8-b18eaba9ed05\") " pod="openshift-marketplace/redhat-marketplace-jx6l5" Nov 25 11:39:02 crc kubenswrapper[4706]: I1125 11:39:02.731662 4706 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Nov 25 11:39:02 crc kubenswrapper[4706]: I1125 11:39:02.782352 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 11:39:02 crc kubenswrapper[4706]: E1125 11:39:02.783201 4706 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 11:39:03.283178619 +0000 UTC m=+152.197736000 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 11:39:02 crc kubenswrapper[4706]: I1125 11:39:02.785482 4706 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-jx6l5" Nov 25 11:39:02 crc kubenswrapper[4706]: I1125 11:39:02.859513 4706 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-flshn"] Nov 25 11:39:02 crc kubenswrapper[4706]: I1125 11:39:02.866598 4706 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-5444994796-22mnp" Nov 25 11:39:02 crc kubenswrapper[4706]: I1125 11:39:02.866798 4706 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-flshn" Nov 25 11:39:02 crc kubenswrapper[4706]: I1125 11:39:02.871789 4706 patch_prober.go:28] interesting pod/router-default-5444994796-22mnp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 25 11:39:02 crc kubenswrapper[4706]: [-]has-synced failed: reason withheld Nov 25 11:39:02 crc kubenswrapper[4706]: [+]process-running ok Nov 25 11:39:02 crc kubenswrapper[4706]: healthz check failed Nov 25 11:39:02 crc kubenswrapper[4706]: I1125 11:39:02.872125 4706 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-22mnp" podUID="ab6319ba-e125-4775-83c3-c5624951d634" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 25 11:39:02 crc kubenswrapper[4706]: I1125 11:39:02.885501 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7qf2c\" (UID: \"f3c75c9b-c79d-4a63-8b0f-c7b474ad4b66\") " pod="openshift-image-registry/image-registry-697d97f7c8-7qf2c" Nov 25 11:39:02 crc kubenswrapper[4706]: E1125 11:39:02.886107 4706 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-25 11:39:03.386085694 +0000 UTC m=+152.300643075 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7qf2c" (UID: "f3c75c9b-c79d-4a63-8b0f-c7b474ad4b66") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 11:39:02 crc kubenswrapper[4706]: I1125 11:39:02.891156 4706 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-flshn"] Nov 25 11:39:02 crc kubenswrapper[4706]: I1125 11:39:02.945153 4706 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29401170-s4f7r" Nov 25 11:39:02 crc kubenswrapper[4706]: I1125 11:39:02.987961 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 11:39:02 crc kubenswrapper[4706]: I1125 11:39:02.988368 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k9c9w\" (UniqueName: \"kubernetes.io/projected/53b77c12-5969-4020-b040-f53ab95adaf3-kube-api-access-k9c9w\") pod \"redhat-marketplace-flshn\" (UID: \"53b77c12-5969-4020-b040-f53ab95adaf3\") " pod="openshift-marketplace/redhat-marketplace-flshn" Nov 25 11:39:02 crc kubenswrapper[4706]: I1125 11:39:02.988413 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/53b77c12-5969-4020-b040-f53ab95adaf3-utilities\") pod \"redhat-marketplace-flshn\" (UID: \"53b77c12-5969-4020-b040-f53ab95adaf3\") " pod="openshift-marketplace/redhat-marketplace-flshn" Nov 25 11:39:02 crc kubenswrapper[4706]: I1125 11:39:02.988557 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/53b77c12-5969-4020-b040-f53ab95adaf3-catalog-content\") pod \"redhat-marketplace-flshn\" (UID: \"53b77c12-5969-4020-b040-f53ab95adaf3\") " pod="openshift-marketplace/redhat-marketplace-flshn" Nov 25 11:39:02 crc kubenswrapper[4706]: E1125 11:39:02.990114 4706 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 
podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 11:39:03.49009132 +0000 UTC m=+152.404648701 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 11:39:03 crc kubenswrapper[4706]: I1125 11:39:03.089109 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-954mw\" (UniqueName: \"kubernetes.io/projected/51a87a4e-3d58-48e0-b455-292aa206e149-kube-api-access-954mw\") pod \"51a87a4e-3d58-48e0-b455-292aa206e149\" (UID: \"51a87a4e-3d58-48e0-b455-292aa206e149\") " Nov 25 11:39:03 crc kubenswrapper[4706]: I1125 11:39:03.089635 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/51a87a4e-3d58-48e0-b455-292aa206e149-secret-volume\") pod \"51a87a4e-3d58-48e0-b455-292aa206e149\" (UID: \"51a87a4e-3d58-48e0-b455-292aa206e149\") " Nov 25 11:39:03 crc kubenswrapper[4706]: I1125 11:39:03.089666 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/51a87a4e-3d58-48e0-b455-292aa206e149-config-volume\") pod \"51a87a4e-3d58-48e0-b455-292aa206e149\" (UID: \"51a87a4e-3d58-48e0-b455-292aa206e149\") " Nov 25 11:39:03 crc kubenswrapper[4706]: I1125 11:39:03.091464 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod 
\"image-registry-697d97f7c8-7qf2c\" (UID: \"f3c75c9b-c79d-4a63-8b0f-c7b474ad4b66\") " pod="openshift-image-registry/image-registry-697d97f7c8-7qf2c" Nov 25 11:39:03 crc kubenswrapper[4706]: I1125 11:39:03.091559 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/51a87a4e-3d58-48e0-b455-292aa206e149-config-volume" (OuterVolumeSpecName: "config-volume") pod "51a87a4e-3d58-48e0-b455-292aa206e149" (UID: "51a87a4e-3d58-48e0-b455-292aa206e149"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 11:39:03 crc kubenswrapper[4706]: I1125 11:39:03.091636 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/53b77c12-5969-4020-b040-f53ab95adaf3-catalog-content\") pod \"redhat-marketplace-flshn\" (UID: \"53b77c12-5969-4020-b040-f53ab95adaf3\") " pod="openshift-marketplace/redhat-marketplace-flshn" Nov 25 11:39:03 crc kubenswrapper[4706]: I1125 11:39:03.091745 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k9c9w\" (UniqueName: \"kubernetes.io/projected/53b77c12-5969-4020-b040-f53ab95adaf3-kube-api-access-k9c9w\") pod \"redhat-marketplace-flshn\" (UID: \"53b77c12-5969-4020-b040-f53ab95adaf3\") " pod="openshift-marketplace/redhat-marketplace-flshn" Nov 25 11:39:03 crc kubenswrapper[4706]: I1125 11:39:03.091775 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/53b77c12-5969-4020-b040-f53ab95adaf3-utilities\") pod \"redhat-marketplace-flshn\" (UID: \"53b77c12-5969-4020-b040-f53ab95adaf3\") " pod="openshift-marketplace/redhat-marketplace-flshn" Nov 25 11:39:03 crc kubenswrapper[4706]: I1125 11:39:03.091852 4706 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: 
\"kubernetes.io/configmap/51a87a4e-3d58-48e0-b455-292aa206e149-config-volume\") on node \"crc\" DevicePath \"\"" Nov 25 11:39:03 crc kubenswrapper[4706]: E1125 11:39:03.091953 4706 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-25 11:39:03.591932216 +0000 UTC m=+152.506489787 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7qf2c" (UID: "f3c75c9b-c79d-4a63-8b0f-c7b474ad4b66") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 11:39:03 crc kubenswrapper[4706]: I1125 11:39:03.092338 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/53b77c12-5969-4020-b040-f53ab95adaf3-utilities\") pod \"redhat-marketplace-flshn\" (UID: \"53b77c12-5969-4020-b040-f53ab95adaf3\") " pod="openshift-marketplace/redhat-marketplace-flshn" Nov 25 11:39:03 crc kubenswrapper[4706]: I1125 11:39:03.093948 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/53b77c12-5969-4020-b040-f53ab95adaf3-catalog-content\") pod \"redhat-marketplace-flshn\" (UID: \"53b77c12-5969-4020-b040-f53ab95adaf3\") " pod="openshift-marketplace/redhat-marketplace-flshn" Nov 25 11:39:03 crc kubenswrapper[4706]: I1125 11:39:03.102730 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/51a87a4e-3d58-48e0-b455-292aa206e149-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "51a87a4e-3d58-48e0-b455-292aa206e149" (UID: 
"51a87a4e-3d58-48e0-b455-292aa206e149"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 11:39:03 crc kubenswrapper[4706]: I1125 11:39:03.106114 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/51a87a4e-3d58-48e0-b455-292aa206e149-kube-api-access-954mw" (OuterVolumeSpecName: "kube-api-access-954mw") pod "51a87a4e-3d58-48e0-b455-292aa206e149" (UID: "51a87a4e-3d58-48e0-b455-292aa206e149"). InnerVolumeSpecName "kube-api-access-954mw". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 11:39:03 crc kubenswrapper[4706]: I1125 11:39:03.111244 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k9c9w\" (UniqueName: \"kubernetes.io/projected/53b77c12-5969-4020-b040-f53ab95adaf3-kube-api-access-k9c9w\") pod \"redhat-marketplace-flshn\" (UID: \"53b77c12-5969-4020-b040-f53ab95adaf3\") " pod="openshift-marketplace/redhat-marketplace-flshn" Nov 25 11:39:03 crc kubenswrapper[4706]: I1125 11:39:03.113772 4706 reconciler.go:161] "OperationExecutor.RegisterPlugin started" plugin={"SocketPath":"/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock","Timestamp":"2025-11-25T11:39:02.503629716Z","Handler":null,"Name":""} Nov 25 11:39:03 crc kubenswrapper[4706]: I1125 11:39:03.117591 4706 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: kubevirt.io.hostpath-provisioner endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0 Nov 25 11:39:03 crc kubenswrapper[4706]: I1125 11:39:03.117634 4706 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: kubevirt.io.hostpath-provisioner at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock Nov 25 11:39:03 crc kubenswrapper[4706]: I1125 11:39:03.159991 4706 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-jx6l5"] Nov 25 11:39:03 crc kubenswrapper[4706]: 
W1125 11:39:03.173758 4706 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9ba1f6b2_ea89_4d9b_aad8_b18eaba9ed05.slice/crio-788bb0d564fd9bb151565b994bdae9610d6004ee5bf7cf0923037ccb47a32c8d WatchSource:0}: Error finding container 788bb0d564fd9bb151565b994bdae9610d6004ee5bf7cf0923037ccb47a32c8d: Status 404 returned error can't find the container with id 788bb0d564fd9bb151565b994bdae9610d6004ee5bf7cf0923037ccb47a32c8d Nov 25 11:39:03 crc kubenswrapper[4706]: I1125 11:39:03.193564 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 11:39:03 crc kubenswrapper[4706]: I1125 11:39:03.194034 4706 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/51a87a4e-3d58-48e0-b455-292aa206e149-secret-volume\") on node \"crc\" DevicePath \"\"" Nov 25 11:39:03 crc kubenswrapper[4706]: I1125 11:39:03.194049 4706 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-954mw\" (UniqueName: \"kubernetes.io/projected/51a87a4e-3d58-48e0-b455-292aa206e149-kube-api-access-954mw\") on node \"crc\" DevicePath \"\"" Nov 25 11:39:03 crc kubenswrapper[4706]: I1125 11:39:03.200429 4706 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Nov 25 11:39:03 crc kubenswrapper[4706]: I1125 11:39:03.205065 4706 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-flshn" Nov 25 11:39:03 crc kubenswrapper[4706]: I1125 11:39:03.208580 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue "" Nov 25 11:39:03 crc kubenswrapper[4706]: I1125 11:39:03.210080 4706 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-zn9dk" Nov 25 11:39:03 crc kubenswrapper[4706]: I1125 11:39:03.297160 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7qf2c\" (UID: \"f3c75c9b-c79d-4a63-8b0f-c7b474ad4b66\") " pod="openshift-image-registry/image-registry-697d97f7c8-7qf2c" Nov 25 11:39:03 crc kubenswrapper[4706]: I1125 11:39:03.315661 4706 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Nov 25 11:39:03 crc kubenswrapper[4706]: I1125 11:39:03.315710 4706 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7qf2c\" (UID: \"f3c75c9b-c79d-4a63-8b0f-c7b474ad4b66\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount\"" pod="openshift-image-registry/image-registry-697d97f7c8-7qf2c" Nov 25 11:39:03 crc kubenswrapper[4706]: I1125 11:39:03.394763 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7qf2c\" (UID: \"f3c75c9b-c79d-4a63-8b0f-c7b474ad4b66\") " pod="openshift-image-registry/image-registry-697d97f7c8-7qf2c" Nov 25 11:39:03 crc kubenswrapper[4706]: I1125 11:39:03.443890 4706 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-76f77b778f-jsj27" Nov 25 11:39:03 crc kubenswrapper[4706]: I1125 11:39:03.443946 4706 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-apiserver/apiserver-76f77b778f-jsj27" Nov 25 11:39:03 crc kubenswrapper[4706]: I1125 11:39:03.457851 4706 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-qb6fx"] Nov 25 11:39:03 crc kubenswrapper[4706]: E1125 11:39:03.458137 4706 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="51a87a4e-3d58-48e0-b455-292aa206e149" containerName="collect-profiles" Nov 25 11:39:03 crc kubenswrapper[4706]: I1125 11:39:03.458151 4706 state_mem.go:107] "Deleted CPUSet assignment" podUID="51a87a4e-3d58-48e0-b455-292aa206e149" 
containerName="collect-profiles" Nov 25 11:39:03 crc kubenswrapper[4706]: I1125 11:39:03.459260 4706 memory_manager.go:354] "RemoveStaleState removing state" podUID="51a87a4e-3d58-48e0-b455-292aa206e149" containerName="collect-profiles" Nov 25 11:39:03 crc kubenswrapper[4706]: I1125 11:39:03.460393 4706 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-qb6fx" Nov 25 11:39:03 crc kubenswrapper[4706]: I1125 11:39:03.463488 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Nov 25 11:39:03 crc kubenswrapper[4706]: I1125 11:39:03.475820 4706 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-qb6fx"] Nov 25 11:39:03 crc kubenswrapper[4706]: I1125 11:39:03.534429 4706 patch_prober.go:28] interesting pod/apiserver-76f77b778f-jsj27 container/openshift-apiserver namespace/openshift-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Nov 25 11:39:03 crc kubenswrapper[4706]: [+]log ok Nov 25 11:39:03 crc kubenswrapper[4706]: [+]etcd ok Nov 25 11:39:03 crc kubenswrapper[4706]: [+]poststarthook/start-apiserver-admission-initializer ok Nov 25 11:39:03 crc kubenswrapper[4706]: [+]poststarthook/generic-apiserver-start-informers ok Nov 25 11:39:03 crc kubenswrapper[4706]: [+]poststarthook/max-in-flight-filter ok Nov 25 11:39:03 crc kubenswrapper[4706]: [+]poststarthook/storage-object-count-tracker-hook ok Nov 25 11:39:03 crc kubenswrapper[4706]: [+]poststarthook/image.openshift.io-apiserver-caches ok Nov 25 11:39:03 crc kubenswrapper[4706]: [-]poststarthook/authorization.openshift.io-bootstrapclusterroles failed: reason withheld Nov 25 11:39:03 crc kubenswrapper[4706]: [-]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa failed: reason withheld Nov 25 11:39:03 crc kubenswrapper[4706]: 
[+]poststarthook/project.openshift.io-projectcache ok Nov 25 11:39:03 crc kubenswrapper[4706]: [+]poststarthook/project.openshift.io-projectauthorizationcache ok Nov 25 11:39:03 crc kubenswrapper[4706]: [+]poststarthook/openshift.io-startinformers ok Nov 25 11:39:03 crc kubenswrapper[4706]: [+]poststarthook/openshift.io-restmapperupdater ok Nov 25 11:39:03 crc kubenswrapper[4706]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Nov 25 11:39:03 crc kubenswrapper[4706]: livez check failed Nov 25 11:39:03 crc kubenswrapper[4706]: I1125 11:39:03.534514 4706 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-apiserver/apiserver-76f77b778f-jsj27" podUID="d4ec8c5d-e4c6-42d4-bf1c-1d3952ce6d7a" containerName="openshift-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 25 11:39:03 crc kubenswrapper[4706]: I1125 11:39:03.580466 4706 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-flshn"] Nov 25 11:39:03 crc kubenswrapper[4706]: I1125 11:39:03.581815 4706 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-7qf2c" Nov 25 11:39:03 crc kubenswrapper[4706]: I1125 11:39:03.592992 4706 generic.go:334] "Generic (PLEG): container finished" podID="9ba1f6b2-ea89-4d9b-aad8-b18eaba9ed05" containerID="49f3f8273b9ea886cbb6338982b4b332704503478980beb0dadbd6a23517f7d5" exitCode=0 Nov 25 11:39:03 crc kubenswrapper[4706]: I1125 11:39:03.593168 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jx6l5" event={"ID":"9ba1f6b2-ea89-4d9b-aad8-b18eaba9ed05","Type":"ContainerDied","Data":"49f3f8273b9ea886cbb6338982b4b332704503478980beb0dadbd6a23517f7d5"} Nov 25 11:39:03 crc kubenswrapper[4706]: I1125 11:39:03.593279 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jx6l5" event={"ID":"9ba1f6b2-ea89-4d9b-aad8-b18eaba9ed05","Type":"ContainerStarted","Data":"788bb0d564fd9bb151565b994bdae9610d6004ee5bf7cf0923037ccb47a32c8d"} Nov 25 11:39:03 crc kubenswrapper[4706]: I1125 11:39:03.606967 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/815eca00-0648-4421-8b14-0eb14056161b-catalog-content\") pod \"redhat-operators-qb6fx\" (UID: \"815eca00-0648-4421-8b14-0eb14056161b\") " pod="openshift-marketplace/redhat-operators-qb6fx" Nov 25 11:39:03 crc kubenswrapper[4706]: I1125 11:39:03.607066 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tlkv9\" (UniqueName: \"kubernetes.io/projected/815eca00-0648-4421-8b14-0eb14056161b-kube-api-access-tlkv9\") pod \"redhat-operators-qb6fx\" (UID: \"815eca00-0648-4421-8b14-0eb14056161b\") " pod="openshift-marketplace/redhat-operators-qb6fx" Nov 25 11:39:03 crc kubenswrapper[4706]: I1125 11:39:03.607090 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" 
(UniqueName: \"kubernetes.io/empty-dir/815eca00-0648-4421-8b14-0eb14056161b-utilities\") pod \"redhat-operators-qb6fx\" (UID: \"815eca00-0648-4421-8b14-0eb14056161b\") " pod="openshift-marketplace/redhat-operators-qb6fx" Nov 25 11:39:03 crc kubenswrapper[4706]: I1125 11:39:03.618468 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-tgngn" event={"ID":"01c8d08c-1ad6-4048-92d4-98382da66cca","Type":"ContainerStarted","Data":"4e62dd826a3e22b8f1fa9a817bc35f495461af71b3bf2280a033468d70298b21"} Nov 25 11:39:03 crc kubenswrapper[4706]: I1125 11:39:03.632477 4706 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29401170-s4f7r" Nov 25 11:39:03 crc kubenswrapper[4706]: I1125 11:39:03.633630 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29401170-s4f7r" event={"ID":"51a87a4e-3d58-48e0-b455-292aa206e149","Type":"ContainerDied","Data":"b727522bcf0ec2f175590fc7acead1b492f2d29aba59e5bfa3e4e1debf11d23b"} Nov 25 11:39:03 crc kubenswrapper[4706]: I1125 11:39:03.633719 4706 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b727522bcf0ec2f175590fc7acead1b492f2d29aba59e5bfa3e4e1debf11d23b" Nov 25 11:39:03 crc kubenswrapper[4706]: I1125 11:39:03.641329 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"c134187c-5e1c-4da1-be12-e5273da1b5f3","Type":"ContainerStarted","Data":"118f0d8199a3172964455ab912a2e2cecae3508451c0d341d382b5dc3975d3eb"} Nov 25 11:39:03 crc kubenswrapper[4706]: I1125 11:39:03.661109 4706 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="hostpath-provisioner/csi-hostpathplugin-tgngn" podStartSLOduration=13.661085665 podStartE2EDuration="13.661085665s" podCreationTimestamp="2025-11-25 11:38:50 +0000 UTC" firstStartedPulling="0001-01-01 
00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 11:39:03.658182427 +0000 UTC m=+152.572739818" watchObservedRunningTime="2025-11-25 11:39:03.661085665 +0000 UTC m=+152.575643046" Nov 25 11:39:03 crc kubenswrapper[4706]: I1125 11:39:03.714723 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tlkv9\" (UniqueName: \"kubernetes.io/projected/815eca00-0648-4421-8b14-0eb14056161b-kube-api-access-tlkv9\") pod \"redhat-operators-qb6fx\" (UID: \"815eca00-0648-4421-8b14-0eb14056161b\") " pod="openshift-marketplace/redhat-operators-qb6fx" Nov 25 11:39:03 crc kubenswrapper[4706]: I1125 11:39:03.715343 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/815eca00-0648-4421-8b14-0eb14056161b-utilities\") pod \"redhat-operators-qb6fx\" (UID: \"815eca00-0648-4421-8b14-0eb14056161b\") " pod="openshift-marketplace/redhat-operators-qb6fx" Nov 25 11:39:03 crc kubenswrapper[4706]: I1125 11:39:03.715630 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/815eca00-0648-4421-8b14-0eb14056161b-catalog-content\") pod \"redhat-operators-qb6fx\" (UID: \"815eca00-0648-4421-8b14-0eb14056161b\") " pod="openshift-marketplace/redhat-operators-qb6fx" Nov 25 11:39:03 crc kubenswrapper[4706]: I1125 11:39:03.724987 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/815eca00-0648-4421-8b14-0eb14056161b-utilities\") pod \"redhat-operators-qb6fx\" (UID: \"815eca00-0648-4421-8b14-0eb14056161b\") " pod="openshift-marketplace/redhat-operators-qb6fx" Nov 25 11:39:03 crc kubenswrapper[4706]: I1125 11:39:03.724984 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/815eca00-0648-4421-8b14-0eb14056161b-catalog-content\") pod \"redhat-operators-qb6fx\" (UID: \"815eca00-0648-4421-8b14-0eb14056161b\") " pod="openshift-marketplace/redhat-operators-qb6fx" Nov 25 11:39:03 crc kubenswrapper[4706]: I1125 11:39:03.752647 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tlkv9\" (UniqueName: \"kubernetes.io/projected/815eca00-0648-4421-8b14-0eb14056161b-kube-api-access-tlkv9\") pod \"redhat-operators-qb6fx\" (UID: \"815eca00-0648-4421-8b14-0eb14056161b\") " pod="openshift-marketplace/redhat-operators-qb6fx" Nov 25 11:39:03 crc kubenswrapper[4706]: I1125 11:39:03.805929 4706 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Nov 25 11:39:03 crc kubenswrapper[4706]: I1125 11:39:03.806878 4706 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Nov 25 11:39:03 crc kubenswrapper[4706]: I1125 11:39:03.811534 4706 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Nov 25 11:39:03 crc kubenswrapper[4706]: I1125 11:39:03.816287 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Nov 25 11:39:03 crc kubenswrapper[4706]: I1125 11:39:03.816605 4706 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Nov 25 11:39:03 crc kubenswrapper[4706]: I1125 11:39:03.854562 4706 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-tchjq"] Nov 25 11:39:03 crc kubenswrapper[4706]: I1125 11:39:03.856139 4706 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-tchjq" Nov 25 11:39:03 crc kubenswrapper[4706]: I1125 11:39:03.866440 4706 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-qb6fx" Nov 25 11:39:03 crc kubenswrapper[4706]: I1125 11:39:03.879924 4706 patch_prober.go:28] interesting pod/router-default-5444994796-22mnp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 25 11:39:03 crc kubenswrapper[4706]: [-]has-synced failed: reason withheld Nov 25 11:39:03 crc kubenswrapper[4706]: [+]process-running ok Nov 25 11:39:03 crc kubenswrapper[4706]: healthz check failed Nov 25 11:39:03 crc kubenswrapper[4706]: I1125 11:39:03.879989 4706 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-22mnp" podUID="ab6319ba-e125-4775-83c3-c5624951d634" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 25 11:39:03 crc kubenswrapper[4706]: I1125 11:39:03.884669 4706 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-tchjq"] Nov 25 11:39:03 crc kubenswrapper[4706]: I1125 11:39:03.917965 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9d8344c5-e0b9-46b7-8ae1-b82c36588bbb-utilities\") pod \"redhat-operators-tchjq\" (UID: \"9d8344c5-e0b9-46b7-8ae1-b82c36588bbb\") " pod="openshift-marketplace/redhat-operators-tchjq" Nov 25 11:39:03 crc kubenswrapper[4706]: I1125 11:39:03.918030 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2k66l\" (UniqueName: \"kubernetes.io/projected/9d8344c5-e0b9-46b7-8ae1-b82c36588bbb-kube-api-access-2k66l\") pod \"redhat-operators-tchjq\" (UID: \"9d8344c5-e0b9-46b7-8ae1-b82c36588bbb\") " pod="openshift-marketplace/redhat-operators-tchjq" Nov 25 11:39:03 crc kubenswrapper[4706]: I1125 11:39:03.918069 4706 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e6503703-bea5-49eb-84df-72a3fc483cfb-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"e6503703-bea5-49eb-84df-72a3fc483cfb\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Nov 25 11:39:03 crc kubenswrapper[4706]: I1125 11:39:03.918117 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9d8344c5-e0b9-46b7-8ae1-b82c36588bbb-catalog-content\") pod \"redhat-operators-tchjq\" (UID: \"9d8344c5-e0b9-46b7-8ae1-b82c36588bbb\") " pod="openshift-marketplace/redhat-operators-tchjq" Nov 25 11:39:03 crc kubenswrapper[4706]: I1125 11:39:03.918182 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e6503703-bea5-49eb-84df-72a3fc483cfb-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"e6503703-bea5-49eb-84df-72a3fc483cfb\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Nov 25 11:39:03 crc kubenswrapper[4706]: I1125 11:39:03.936111 4706 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f668bae-612b-4b75-9490-919e737c6a3b" path="/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes" Nov 25 11:39:04 crc kubenswrapper[4706]: I1125 11:39:04.019609 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9d8344c5-e0b9-46b7-8ae1-b82c36588bbb-utilities\") pod \"redhat-operators-tchjq\" (UID: \"9d8344c5-e0b9-46b7-8ae1-b82c36588bbb\") " pod="openshift-marketplace/redhat-operators-tchjq" Nov 25 11:39:04 crc kubenswrapper[4706]: I1125 11:39:04.019671 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2k66l\" (UniqueName: 
\"kubernetes.io/projected/9d8344c5-e0b9-46b7-8ae1-b82c36588bbb-kube-api-access-2k66l\") pod \"redhat-operators-tchjq\" (UID: \"9d8344c5-e0b9-46b7-8ae1-b82c36588bbb\") " pod="openshift-marketplace/redhat-operators-tchjq" Nov 25 11:39:04 crc kubenswrapper[4706]: I1125 11:39:04.019806 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e6503703-bea5-49eb-84df-72a3fc483cfb-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"e6503703-bea5-49eb-84df-72a3fc483cfb\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Nov 25 11:39:04 crc kubenswrapper[4706]: I1125 11:39:04.019859 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9d8344c5-e0b9-46b7-8ae1-b82c36588bbb-catalog-content\") pod \"redhat-operators-tchjq\" (UID: \"9d8344c5-e0b9-46b7-8ae1-b82c36588bbb\") " pod="openshift-marketplace/redhat-operators-tchjq" Nov 25 11:39:04 crc kubenswrapper[4706]: I1125 11:39:04.020106 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e6503703-bea5-49eb-84df-72a3fc483cfb-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"e6503703-bea5-49eb-84df-72a3fc483cfb\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Nov 25 11:39:04 crc kubenswrapper[4706]: I1125 11:39:04.021038 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e6503703-bea5-49eb-84df-72a3fc483cfb-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"e6503703-bea5-49eb-84df-72a3fc483cfb\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Nov 25 11:39:04 crc kubenswrapper[4706]: I1125 11:39:04.021904 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/9d8344c5-e0b9-46b7-8ae1-b82c36588bbb-catalog-content\") pod \"redhat-operators-tchjq\" (UID: \"9d8344c5-e0b9-46b7-8ae1-b82c36588bbb\") " pod="openshift-marketplace/redhat-operators-tchjq" Nov 25 11:39:04 crc kubenswrapper[4706]: I1125 11:39:04.022266 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9d8344c5-e0b9-46b7-8ae1-b82c36588bbb-utilities\") pod \"redhat-operators-tchjq\" (UID: \"9d8344c5-e0b9-46b7-8ae1-b82c36588bbb\") " pod="openshift-marketplace/redhat-operators-tchjq" Nov 25 11:39:04 crc kubenswrapper[4706]: I1125 11:39:04.038704 4706 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-7qf2c"] Nov 25 11:39:04 crc kubenswrapper[4706]: I1125 11:39:04.059438 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2k66l\" (UniqueName: \"kubernetes.io/projected/9d8344c5-e0b9-46b7-8ae1-b82c36588bbb-kube-api-access-2k66l\") pod \"redhat-operators-tchjq\" (UID: \"9d8344c5-e0b9-46b7-8ae1-b82c36588bbb\") " pod="openshift-marketplace/redhat-operators-tchjq" Nov 25 11:39:04 crc kubenswrapper[4706]: I1125 11:39:04.060454 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e6503703-bea5-49eb-84df-72a3fc483cfb-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"e6503703-bea5-49eb-84df-72a3fc483cfb\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Nov 25 11:39:04 crc kubenswrapper[4706]: I1125 11:39:04.196465 4706 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-qb6fx"] Nov 25 11:39:04 crc kubenswrapper[4706]: I1125 11:39:04.212466 4706 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Nov 25 11:39:04 crc kubenswrapper[4706]: W1125 11:39:04.232112 4706 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod815eca00_0648_4421_8b14_0eb14056161b.slice/crio-dc650eef70a93c07c8f236139bd933242eb8caafd96b2386b573034b2d6894a3 WatchSource:0}: Error finding container dc650eef70a93c07c8f236139bd933242eb8caafd96b2386b573034b2d6894a3: Status 404 returned error can't find the container with id dc650eef70a93c07c8f236139bd933242eb8caafd96b2386b573034b2d6894a3 Nov 25 11:39:04 crc kubenswrapper[4706]: I1125 11:39:04.232429 4706 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-tchjq" Nov 25 11:39:04 crc kubenswrapper[4706]: I1125 11:39:04.673947 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-7qf2c" event={"ID":"f3c75c9b-c79d-4a63-8b0f-c7b474ad4b66","Type":"ContainerStarted","Data":"e09b2f41097ce87e6433a7578157815b27efdb16c8ac3f81e5f1c2096f58d9bf"} Nov 25 11:39:04 crc kubenswrapper[4706]: I1125 11:39:04.674014 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-7qf2c" event={"ID":"f3c75c9b-c79d-4a63-8b0f-c7b474ad4b66","Type":"ContainerStarted","Data":"ec4d3097cdfc938345526a8823e6067012aba794681db1b4087ce1794e5886e4"} Nov 25 11:39:04 crc kubenswrapper[4706]: I1125 11:39:04.685194 4706 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Nov 25 11:39:04 crc kubenswrapper[4706]: I1125 11:39:04.689940 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qb6fx" event={"ID":"815eca00-0648-4421-8b14-0eb14056161b","Type":"ContainerStarted","Data":"dc650eef70a93c07c8f236139bd933242eb8caafd96b2386b573034b2d6894a3"} Nov 25 11:39:04 crc 
kubenswrapper[4706]: I1125 11:39:04.698988 4706 generic.go:334] "Generic (PLEG): container finished" podID="c134187c-5e1c-4da1-be12-e5273da1b5f3" containerID="dfc796e85ea655ad7062e01af807e6fd241c7820812306e437ace49c1819bfce" exitCode=0 Nov 25 11:39:04 crc kubenswrapper[4706]: I1125 11:39:04.699821 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"c134187c-5e1c-4da1-be12-e5273da1b5f3","Type":"ContainerDied","Data":"dfc796e85ea655ad7062e01af807e6fd241c7820812306e437ace49c1819bfce"} Nov 25 11:39:04 crc kubenswrapper[4706]: I1125 11:39:04.706946 4706 generic.go:334] "Generic (PLEG): container finished" podID="53b77c12-5969-4020-b040-f53ab95adaf3" containerID="c1a79ce2a1418a38773a2307b33402cfef47a2d242eeb27a7e8b9031c3f513e1" exitCode=0 Nov 25 11:39:04 crc kubenswrapper[4706]: I1125 11:39:04.708758 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-flshn" event={"ID":"53b77c12-5969-4020-b040-f53ab95adaf3","Type":"ContainerDied","Data":"c1a79ce2a1418a38773a2307b33402cfef47a2d242eeb27a7e8b9031c3f513e1"} Nov 25 11:39:04 crc kubenswrapper[4706]: I1125 11:39:04.708811 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-flshn" event={"ID":"53b77c12-5969-4020-b040-f53ab95adaf3","Type":"ContainerStarted","Data":"d73bb6bdda999bd303f02a5a2ca151651adbdc5b634cc50d670c11945098e0f1"} Nov 25 11:39:04 crc kubenswrapper[4706]: W1125 11:39:04.789435 4706 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pode6503703_bea5_49eb_84df_72a3fc483cfb.slice/crio-0d6ab53fc6d708d11e4576e7a844f887d4e9e897f3d19cb4fa92080c57bd4e71 WatchSource:0}: Error finding container 0d6ab53fc6d708d11e4576e7a844f887d4e9e897f3d19cb4fa92080c57bd4e71: Status 404 returned error can't find the container with id 0d6ab53fc6d708d11e4576e7a844f887d4e9e897f3d19cb4fa92080c57bd4e71 Nov 25 11:39:04 crc 
kubenswrapper[4706]: I1125 11:39:04.793371 4706 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-tchjq"] Nov 25 11:39:04 crc kubenswrapper[4706]: I1125 11:39:04.869248 4706 patch_prober.go:28] interesting pod/router-default-5444994796-22mnp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 25 11:39:04 crc kubenswrapper[4706]: [-]has-synced failed: reason withheld Nov 25 11:39:04 crc kubenswrapper[4706]: [+]process-running ok Nov 25 11:39:04 crc kubenswrapper[4706]: healthz check failed Nov 25 11:39:04 crc kubenswrapper[4706]: I1125 11:39:04.869344 4706 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-22mnp" podUID="ab6319ba-e125-4775-83c3-c5624951d634" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 25 11:39:05 crc kubenswrapper[4706]: I1125 11:39:05.739008 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"e6503703-bea5-49eb-84df-72a3fc483cfb","Type":"ContainerStarted","Data":"4e5472856377864f7272f4d52116c9c48854b513053f37aa9cb5c08cb04d97fc"} Nov 25 11:39:05 crc kubenswrapper[4706]: I1125 11:39:05.739486 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"e6503703-bea5-49eb-84df-72a3fc483cfb","Type":"ContainerStarted","Data":"0d6ab53fc6d708d11e4576e7a844f887d4e9e897f3d19cb4fa92080c57bd4e71"} Nov 25 11:39:05 crc kubenswrapper[4706]: I1125 11:39:05.741420 4706 generic.go:334] "Generic (PLEG): container finished" podID="815eca00-0648-4421-8b14-0eb14056161b" containerID="f01485fcf492d85ef54a1f990172ab6d37d9e221169b0bb4bc8faada3c9544e1" exitCode=0 Nov 25 11:39:05 crc kubenswrapper[4706]: I1125 11:39:05.741516 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-operators-qb6fx" event={"ID":"815eca00-0648-4421-8b14-0eb14056161b","Type":"ContainerDied","Data":"f01485fcf492d85ef54a1f990172ab6d37d9e221169b0bb4bc8faada3c9544e1"} Nov 25 11:39:05 crc kubenswrapper[4706]: I1125 11:39:05.757523 4706 generic.go:334] "Generic (PLEG): container finished" podID="9d8344c5-e0b9-46b7-8ae1-b82c36588bbb" containerID="40923999beaa55882f0fd504956e18153412e5fcef0004bbb60e420b52bee565" exitCode=0 Nov 25 11:39:05 crc kubenswrapper[4706]: I1125 11:39:05.757731 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tchjq" event={"ID":"9d8344c5-e0b9-46b7-8ae1-b82c36588bbb","Type":"ContainerDied","Data":"40923999beaa55882f0fd504956e18153412e5fcef0004bbb60e420b52bee565"} Nov 25 11:39:05 crc kubenswrapper[4706]: I1125 11:39:05.757801 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tchjq" event={"ID":"9d8344c5-e0b9-46b7-8ae1-b82c36588bbb","Type":"ContainerStarted","Data":"cc0fd32a7da972eb95928abd18ba8e3de0104d302ceb6d5a0d4d7f310be093f5"} Nov 25 11:39:05 crc kubenswrapper[4706]: I1125 11:39:05.758323 4706 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-697d97f7c8-7qf2c" Nov 25 11:39:05 crc kubenswrapper[4706]: I1125 11:39:05.886816 4706 patch_prober.go:28] interesting pod/router-default-5444994796-22mnp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 25 11:39:05 crc kubenswrapper[4706]: [-]has-synced failed: reason withheld Nov 25 11:39:05 crc kubenswrapper[4706]: [+]process-running ok Nov 25 11:39:05 crc kubenswrapper[4706]: healthz check failed Nov 25 11:39:05 crc kubenswrapper[4706]: I1125 11:39:05.886939 4706 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-22mnp" 
podUID="ab6319ba-e125-4775-83c3-c5624951d634" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 25 11:39:06 crc kubenswrapper[4706]: I1125 11:39:06.092191 4706 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Nov 25 11:39:06 crc kubenswrapper[4706]: I1125 11:39:06.107481 4706 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-697d97f7c8-7qf2c" podStartSLOduration=133.107454485 podStartE2EDuration="2m13.107454485s" podCreationTimestamp="2025-11-25 11:36:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 11:39:05.827528991 +0000 UTC m=+154.742086412" watchObservedRunningTime="2025-11-25 11:39:06.107454485 +0000 UTC m=+155.022011876" Nov 25 11:39:06 crc kubenswrapper[4706]: I1125 11:39:06.183681 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/c134187c-5e1c-4da1-be12-e5273da1b5f3-kubelet-dir\") pod \"c134187c-5e1c-4da1-be12-e5273da1b5f3\" (UID: \"c134187c-5e1c-4da1-be12-e5273da1b5f3\") " Nov 25 11:39:06 crc kubenswrapper[4706]: I1125 11:39:06.183750 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c134187c-5e1c-4da1-be12-e5273da1b5f3-kube-api-access\") pod \"c134187c-5e1c-4da1-be12-e5273da1b5f3\" (UID: \"c134187c-5e1c-4da1-be12-e5273da1b5f3\") " Nov 25 11:39:06 crc kubenswrapper[4706]: I1125 11:39:06.187510 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c134187c-5e1c-4da1-be12-e5273da1b5f3-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "c134187c-5e1c-4da1-be12-e5273da1b5f3" (UID: "c134187c-5e1c-4da1-be12-e5273da1b5f3"). 
InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 25 11:39:06 crc kubenswrapper[4706]: I1125 11:39:06.195399 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c134187c-5e1c-4da1-be12-e5273da1b5f3-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "c134187c-5e1c-4da1-be12-e5273da1b5f3" (UID: "c134187c-5e1c-4da1-be12-e5273da1b5f3"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 11:39:06 crc kubenswrapper[4706]: I1125 11:39:06.285285 4706 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/c134187c-5e1c-4da1-be12-e5273da1b5f3-kubelet-dir\") on node \"crc\" DevicePath \"\"" Nov 25 11:39:06 crc kubenswrapper[4706]: I1125 11:39:06.285346 4706 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c134187c-5e1c-4da1-be12-e5273da1b5f3-kube-api-access\") on node \"crc\" DevicePath \"\"" Nov 25 11:39:06 crc kubenswrapper[4706]: I1125 11:39:06.769618 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"c134187c-5e1c-4da1-be12-e5273da1b5f3","Type":"ContainerDied","Data":"118f0d8199a3172964455ab912a2e2cecae3508451c0d341d382b5dc3975d3eb"} Nov 25 11:39:06 crc kubenswrapper[4706]: I1125 11:39:06.769678 4706 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="118f0d8199a3172964455ab912a2e2cecae3508451c0d341d382b5dc3975d3eb" Nov 25 11:39:06 crc kubenswrapper[4706]: I1125 11:39:06.769761 4706 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Nov 25 11:39:06 crc kubenswrapper[4706]: I1125 11:39:06.791340 4706 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-8-crc" podStartSLOduration=3.791288889 podStartE2EDuration="3.791288889s" podCreationTimestamp="2025-11-25 11:39:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 11:39:06.790059266 +0000 UTC m=+155.704616667" watchObservedRunningTime="2025-11-25 11:39:06.791288889 +0000 UTC m=+155.705846280" Nov 25 11:39:06 crc kubenswrapper[4706]: I1125 11:39:06.866527 4706 patch_prober.go:28] interesting pod/router-default-5444994796-22mnp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 25 11:39:06 crc kubenswrapper[4706]: [-]has-synced failed: reason withheld Nov 25 11:39:06 crc kubenswrapper[4706]: [+]process-running ok Nov 25 11:39:06 crc kubenswrapper[4706]: healthz check failed Nov 25 11:39:06 crc kubenswrapper[4706]: I1125 11:39:06.866638 4706 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-22mnp" podUID="ab6319ba-e125-4775-83c3-c5624951d634" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 25 11:39:07 crc kubenswrapper[4706]: I1125 11:39:07.796727 4706 generic.go:334] "Generic (PLEG): container finished" podID="e6503703-bea5-49eb-84df-72a3fc483cfb" containerID="4e5472856377864f7272f4d52116c9c48854b513053f37aa9cb5c08cb04d97fc" exitCode=0 Nov 25 11:39:07 crc kubenswrapper[4706]: I1125 11:39:07.796805 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" 
event={"ID":"e6503703-bea5-49eb-84df-72a3fc483cfb","Type":"ContainerDied","Data":"4e5472856377864f7272f4d52116c9c48854b513053f37aa9cb5c08cb04d97fc"} Nov 25 11:39:07 crc kubenswrapper[4706]: I1125 11:39:07.864513 4706 patch_prober.go:28] interesting pod/router-default-5444994796-22mnp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 25 11:39:07 crc kubenswrapper[4706]: [-]has-synced failed: reason withheld Nov 25 11:39:07 crc kubenswrapper[4706]: [+]process-running ok Nov 25 11:39:07 crc kubenswrapper[4706]: healthz check failed Nov 25 11:39:07 crc kubenswrapper[4706]: I1125 11:39:07.864619 4706 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-22mnp" podUID="ab6319ba-e125-4775-83c3-c5624951d634" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 25 11:39:08 crc kubenswrapper[4706]: I1125 11:39:08.320083 4706 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-wswtg" Nov 25 11:39:08 crc kubenswrapper[4706]: I1125 11:39:08.442986 4706 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-apiserver/apiserver-76f77b778f-jsj27" Nov 25 11:39:08 crc kubenswrapper[4706]: I1125 11:39:08.448030 4706 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-apiserver/apiserver-76f77b778f-jsj27" Nov 25 11:39:08 crc kubenswrapper[4706]: I1125 11:39:08.869528 4706 patch_prober.go:28] interesting pod/router-default-5444994796-22mnp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 25 11:39:08 crc kubenswrapper[4706]: [-]has-synced failed: reason withheld Nov 25 11:39:08 crc kubenswrapper[4706]: [+]process-running ok Nov 25 
11:39:08 crc kubenswrapper[4706]: healthz check failed Nov 25 11:39:08 crc kubenswrapper[4706]: I1125 11:39:08.870097 4706 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-22mnp" podUID="ab6319ba-e125-4775-83c3-c5624951d634" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 25 11:39:09 crc kubenswrapper[4706]: I1125 11:39:09.866542 4706 patch_prober.go:28] interesting pod/router-default-5444994796-22mnp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 25 11:39:09 crc kubenswrapper[4706]: [-]has-synced failed: reason withheld Nov 25 11:39:09 crc kubenswrapper[4706]: [+]process-running ok Nov 25 11:39:09 crc kubenswrapper[4706]: healthz check failed Nov 25 11:39:09 crc kubenswrapper[4706]: I1125 11:39:09.866627 4706 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-22mnp" podUID="ab6319ba-e125-4775-83c3-c5624951d634" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 25 11:39:10 crc kubenswrapper[4706]: I1125 11:39:10.866506 4706 patch_prober.go:28] interesting pod/router-default-5444994796-22mnp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 25 11:39:10 crc kubenswrapper[4706]: [-]has-synced failed: reason withheld Nov 25 11:39:10 crc kubenswrapper[4706]: [+]process-running ok Nov 25 11:39:10 crc kubenswrapper[4706]: healthz check failed Nov 25 11:39:10 crc kubenswrapper[4706]: I1125 11:39:10.866926 4706 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-22mnp" podUID="ab6319ba-e125-4775-83c3-c5624951d634" containerName="router" probeResult="failure" output="HTTP probe 
failed with statuscode: 500" Nov 25 11:39:11 crc kubenswrapper[4706]: I1125 11:39:11.865256 4706 patch_prober.go:28] interesting pod/router-default-5444994796-22mnp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 25 11:39:11 crc kubenswrapper[4706]: [-]has-synced failed: reason withheld Nov 25 11:39:11 crc kubenswrapper[4706]: [+]process-running ok Nov 25 11:39:11 crc kubenswrapper[4706]: healthz check failed Nov 25 11:39:11 crc kubenswrapper[4706]: I1125 11:39:11.865349 4706 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-22mnp" podUID="ab6319ba-e125-4775-83c3-c5624951d634" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 25 11:39:12 crc kubenswrapper[4706]: I1125 11:39:12.396083 4706 patch_prober.go:28] interesting pod/console-f9d7485db-8f48m container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.11:8443/health\": dial tcp 10.217.0.11:8443: connect: connection refused" start-of-body= Nov 25 11:39:12 crc kubenswrapper[4706]: I1125 11:39:12.396168 4706 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-8f48m" podUID="028d4ff3-870d-4002-843f-5381587e28fc" containerName="console" probeResult="failure" output="Get \"https://10.217.0.11:8443/health\": dial tcp 10.217.0.11:8443: connect: connection refused" Nov 25 11:39:12 crc kubenswrapper[4706]: I1125 11:39:12.421732 4706 patch_prober.go:28] interesting pod/downloads-7954f5f757-jd66x container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.24:8080/\": dial tcp 10.217.0.24:8080: connect: connection refused" start-of-body= Nov 25 11:39:12 crc kubenswrapper[4706]: I1125 11:39:12.421732 4706 patch_prober.go:28] interesting 
pod/downloads-7954f5f757-jd66x container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.24:8080/\": dial tcp 10.217.0.24:8080: connect: connection refused" start-of-body= Nov 25 11:39:12 crc kubenswrapper[4706]: I1125 11:39:12.421827 4706 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-jd66x" podUID="bf1352d3-1ee8-4c51-8f45-b9fd8354fd07" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.24:8080/\": dial tcp 10.217.0.24:8080: connect: connection refused" Nov 25 11:39:12 crc kubenswrapper[4706]: I1125 11:39:12.421855 4706 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-jd66x" podUID="bf1352d3-1ee8-4c51-8f45-b9fd8354fd07" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.24:8080/\": dial tcp 10.217.0.24:8080: connect: connection refused" Nov 25 11:39:12 crc kubenswrapper[4706]: I1125 11:39:12.865981 4706 patch_prober.go:28] interesting pod/router-default-5444994796-22mnp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 25 11:39:12 crc kubenswrapper[4706]: [-]has-synced failed: reason withheld Nov 25 11:39:12 crc kubenswrapper[4706]: [+]process-running ok Nov 25 11:39:12 crc kubenswrapper[4706]: healthz check failed Nov 25 11:39:12 crc kubenswrapper[4706]: I1125 11:39:12.866098 4706 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-22mnp" podUID="ab6319ba-e125-4775-83c3-c5624951d634" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 25 11:39:13 crc kubenswrapper[4706]: I1125 11:39:13.869138 4706 patch_prober.go:28] interesting pod/router-default-5444994796-22mnp container/router namespace/openshift-ingress: Startup probe 
status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 25 11:39:13 crc kubenswrapper[4706]: [-]has-synced failed: reason withheld Nov 25 11:39:13 crc kubenswrapper[4706]: [+]process-running ok Nov 25 11:39:13 crc kubenswrapper[4706]: healthz check failed Nov 25 11:39:13 crc kubenswrapper[4706]: I1125 11:39:13.870001 4706 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-22mnp" podUID="ab6319ba-e125-4775-83c3-c5624951d634" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 25 11:39:14 crc kubenswrapper[4706]: I1125 11:39:14.866566 4706 patch_prober.go:28] interesting pod/router-default-5444994796-22mnp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 25 11:39:14 crc kubenswrapper[4706]: [-]has-synced failed: reason withheld Nov 25 11:39:14 crc kubenswrapper[4706]: [+]process-running ok Nov 25 11:39:14 crc kubenswrapper[4706]: healthz check failed Nov 25 11:39:14 crc kubenswrapper[4706]: I1125 11:39:14.866647 4706 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-22mnp" podUID="ab6319ba-e125-4775-83c3-c5624951d634" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 25 11:39:15 crc kubenswrapper[4706]: I1125 11:39:15.730858 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/14d69237-a4b7-43ea-ac81-f165eb532669-metrics-certs\") pod \"network-metrics-daemon-l99rd\" (UID: \"14d69237-a4b7-43ea-ac81-f165eb532669\") " pod="openshift-multus/network-metrics-daemon-l99rd" Nov 25 11:39:15 crc kubenswrapper[4706]: I1125 11:39:15.738586 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"metrics-certs\" (UniqueName: \"kubernetes.io/secret/14d69237-a4b7-43ea-ac81-f165eb532669-metrics-certs\") pod \"network-metrics-daemon-l99rd\" (UID: \"14d69237-a4b7-43ea-ac81-f165eb532669\") " pod="openshift-multus/network-metrics-daemon-l99rd" Nov 25 11:39:15 crc kubenswrapper[4706]: I1125 11:39:15.867601 4706 patch_prober.go:28] interesting pod/router-default-5444994796-22mnp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 25 11:39:15 crc kubenswrapper[4706]: [-]has-synced failed: reason withheld Nov 25 11:39:15 crc kubenswrapper[4706]: [+]process-running ok Nov 25 11:39:15 crc kubenswrapper[4706]: healthz check failed Nov 25 11:39:15 crc kubenswrapper[4706]: I1125 11:39:15.867684 4706 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-22mnp" podUID="ab6319ba-e125-4775-83c3-c5624951d634" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 25 11:39:15 crc kubenswrapper[4706]: I1125 11:39:15.939329 4706 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-l99rd" Nov 25 11:39:16 crc kubenswrapper[4706]: I1125 11:39:16.704075 4706 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Nov 25 11:39:16 crc kubenswrapper[4706]: I1125 11:39:16.744325 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e6503703-bea5-49eb-84df-72a3fc483cfb-kubelet-dir\") pod \"e6503703-bea5-49eb-84df-72a3fc483cfb\" (UID: \"e6503703-bea5-49eb-84df-72a3fc483cfb\") " Nov 25 11:39:16 crc kubenswrapper[4706]: I1125 11:39:16.744423 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e6503703-bea5-49eb-84df-72a3fc483cfb-kube-api-access\") pod \"e6503703-bea5-49eb-84df-72a3fc483cfb\" (UID: \"e6503703-bea5-49eb-84df-72a3fc483cfb\") " Nov 25 11:39:16 crc kubenswrapper[4706]: I1125 11:39:16.744384 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e6503703-bea5-49eb-84df-72a3fc483cfb-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "e6503703-bea5-49eb-84df-72a3fc483cfb" (UID: "e6503703-bea5-49eb-84df-72a3fc483cfb"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 25 11:39:16 crc kubenswrapper[4706]: I1125 11:39:16.744880 4706 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e6503703-bea5-49eb-84df-72a3fc483cfb-kubelet-dir\") on node \"crc\" DevicePath \"\"" Nov 25 11:39:16 crc kubenswrapper[4706]: I1125 11:39:16.748048 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e6503703-bea5-49eb-84df-72a3fc483cfb-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "e6503703-bea5-49eb-84df-72a3fc483cfb" (UID: "e6503703-bea5-49eb-84df-72a3fc483cfb"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 11:39:16 crc kubenswrapper[4706]: I1125 11:39:16.846357 4706 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e6503703-bea5-49eb-84df-72a3fc483cfb-kube-api-access\") on node \"crc\" DevicePath \"\"" Nov 25 11:39:16 crc kubenswrapper[4706]: I1125 11:39:16.859218 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"e6503703-bea5-49eb-84df-72a3fc483cfb","Type":"ContainerDied","Data":"0d6ab53fc6d708d11e4576e7a844f887d4e9e897f3d19cb4fa92080c57bd4e71"} Nov 25 11:39:16 crc kubenswrapper[4706]: I1125 11:39:16.859268 4706 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0d6ab53fc6d708d11e4576e7a844f887d4e9e897f3d19cb4fa92080c57bd4e71" Nov 25 11:39:16 crc kubenswrapper[4706]: I1125 11:39:16.859775 4706 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Nov 25 11:39:16 crc kubenswrapper[4706]: I1125 11:39:16.865292 4706 patch_prober.go:28] interesting pod/router-default-5444994796-22mnp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 25 11:39:16 crc kubenswrapper[4706]: [+]has-synced ok Nov 25 11:39:16 crc kubenswrapper[4706]: [+]process-running ok Nov 25 11:39:16 crc kubenswrapper[4706]: healthz check failed Nov 25 11:39:16 crc kubenswrapper[4706]: I1125 11:39:16.865378 4706 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-22mnp" podUID="ab6319ba-e125-4775-83c3-c5624951d634" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 25 11:39:17 crc kubenswrapper[4706]: I1125 11:39:17.865534 4706 kubelet.go:2542] "SyncLoop (probe)" probe="startup" 
status="started" pod="openshift-ingress/router-default-5444994796-22mnp" Nov 25 11:39:17 crc kubenswrapper[4706]: I1125 11:39:17.868195 4706 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ingress/router-default-5444994796-22mnp" Nov 25 11:39:22 crc kubenswrapper[4706]: I1125 11:39:22.439323 4706 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/downloads-7954f5f757-jd66x" Nov 25 11:39:22 crc kubenswrapper[4706]: I1125 11:39:22.481572 4706 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-f9d7485db-8f48m" Nov 25 11:39:22 crc kubenswrapper[4706]: I1125 11:39:22.490756 4706 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-f9d7485db-8f48m" Nov 25 11:39:23 crc kubenswrapper[4706]: I1125 11:39:23.589953 4706 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-697d97f7c8-7qf2c" Nov 25 11:39:31 crc kubenswrapper[4706]: I1125 11:39:31.125029 4706 patch_prober.go:28] interesting pod/machine-config-daemon-dhfpm container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 25 11:39:31 crc kubenswrapper[4706]: I1125 11:39:31.125901 4706 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dhfpm" podUID="0930887a-320c-4506-8c9c-f94d6d64516a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 25 11:39:31 crc kubenswrapper[4706]: E1125 11:39:31.978068 4706 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context 
canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Nov 25 11:39:31 crc kubenswrapper[4706]: E1125 11:39:31.979085 4706 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-gftdb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-xwg8t_openshift-marketplace(59c181cc-6505-4d92-ab04-eaaa72b4389c): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" 
Nov 25 11:39:31 crc kubenswrapper[4706]: E1125 11:39:31.980382 4706 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-xwg8t" podUID="59c181cc-6505-4d92-ab04-eaaa72b4389c" Nov 25 11:39:33 crc kubenswrapper[4706]: I1125 11:39:33.515818 4706 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-rs94g" Nov 25 11:39:34 crc kubenswrapper[4706]: E1125 11:39:34.161499 4706 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-xwg8t" podUID="59c181cc-6505-4d92-ab04-eaaa72b4389c" Nov 25 11:39:39 crc kubenswrapper[4706]: E1125 11:39:39.218731 4706 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Nov 25 11:39:39 crc kubenswrapper[4706]: E1125 11:39:39.219203 4706 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-k9c9w,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-flshn_openshift-marketplace(53b77c12-5969-4020-b040-f53ab95adaf3): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Nov 25 11:39:39 crc kubenswrapper[4706]: E1125 11:39:39.220491 4706 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-flshn" podUID="53b77c12-5969-4020-b040-f53ab95adaf3" Nov 25 11:39:40 crc 
kubenswrapper[4706]: I1125 11:39:40.258761 4706 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 11:39:41 crc kubenswrapper[4706]: E1125 11:39:41.722269 4706 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-flshn" podUID="53b77c12-5969-4020-b040-f53ab95adaf3" Nov 25 11:39:42 crc kubenswrapper[4706]: I1125 11:39:42.116071 4706 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-l99rd"] Nov 25 11:39:44 crc kubenswrapper[4706]: E1125 11:39:44.399596 4706 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Nov 25 11:39:44 crc kubenswrapper[4706]: E1125 11:39:44.399875 4706 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-v9bzh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-h8tj2_openshift-marketplace(e636fb64-6a73-4a3d-84d3-d933046a68e0): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Nov 25 11:39:44 crc kubenswrapper[4706]: E1125 11:39:44.401065 4706 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-h8tj2" podUID="e636fb64-6a73-4a3d-84d3-d933046a68e0" Nov 25 11:39:44 crc 
kubenswrapper[4706]: E1125 11:39:44.740178 4706 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Nov 25 11:39:44 crc kubenswrapper[4706]: E1125 11:39:44.740381 4706 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-f8g24,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
community-operators-mlg4m_openshift-marketplace(efdf993e-c4c2-4eff-877d-03df2af9d43c): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Nov 25 11:39:44 crc kubenswrapper[4706]: E1125 11:39:44.742271 4706 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-mlg4m" podUID="efdf993e-c4c2-4eff-877d-03df2af9d43c" Nov 25 11:39:45 crc kubenswrapper[4706]: E1125 11:39:45.822262 4706 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Nov 25 11:39:45 crc kubenswrapper[4706]: E1125 11:39:45.822512 4706 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-lwv2v,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-vfhr5_openshift-marketplace(c15a3609-095e-4cd9-ac60-1333da5a7f45): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Nov 25 11:39:45 crc kubenswrapper[4706]: E1125 11:39:45.823729 4706 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-vfhr5" podUID="c15a3609-095e-4cd9-ac60-1333da5a7f45" Nov 25 11:39:47 crc 
kubenswrapper[4706]: E1125 11:39:47.880430 4706 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Nov 25 11:39:47 crc kubenswrapper[4706]: E1125 11:39:47.880615 4706 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-vr9tf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
redhat-marketplace-jx6l5_openshift-marketplace(9ba1f6b2-ea89-4d9b-aad8-b18eaba9ed05): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Nov 25 11:39:47 crc kubenswrapper[4706]: E1125 11:39:47.881803 4706 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-jx6l5" podUID="9ba1f6b2-ea89-4d9b-aad8-b18eaba9ed05" Nov 25 11:39:48 crc kubenswrapper[4706]: E1125 11:39:48.186441 4706 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-h8tj2" podUID="e636fb64-6a73-4a3d-84d3-d933046a68e0" Nov 25 11:39:48 crc kubenswrapper[4706]: E1125 11:39:48.186482 4706 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-mlg4m" podUID="efdf993e-c4c2-4eff-877d-03df2af9d43c" Nov 25 11:39:48 crc kubenswrapper[4706]: E1125 11:39:48.186535 4706 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-jx6l5" podUID="9ba1f6b2-ea89-4d9b-aad8-b18eaba9ed05" Nov 25 11:39:48 crc kubenswrapper[4706]: E1125 11:39:48.187432 4706 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" 
with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-vfhr5" podUID="c15a3609-095e-4cd9-ac60-1333da5a7f45" Nov 25 11:39:48 crc kubenswrapper[4706]: W1125 11:39:48.193765 4706 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod14d69237_a4b7_43ea_ac81_f165eb532669.slice/crio-75b8090dfd049942c725d1976aa0b64b3ce647e434a7bf4042f25c0169a83970 WatchSource:0}: Error finding container 75b8090dfd049942c725d1976aa0b64b3ce647e434a7bf4042f25c0169a83970: Status 404 returned error can't find the container with id 75b8090dfd049942c725d1976aa0b64b3ce647e434a7bf4042f25c0169a83970 Nov 25 11:39:48 crc kubenswrapper[4706]: E1125 11:39:48.208866 4706 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Nov 25 11:39:48 crc kubenswrapper[4706]: E1125 11:39:48.209595 4706 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-tlkv9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-qb6fx_openshift-marketplace(815eca00-0648-4421-8b14-0eb14056161b): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Nov 25 11:39:48 crc kubenswrapper[4706]: E1125 11:39:48.210775 4706 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-qb6fx" podUID="815eca00-0648-4421-8b14-0eb14056161b" Nov 25 11:39:48 crc 
kubenswrapper[4706]: E1125 11:39:48.223599 4706 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Nov 25 11:39:48 crc kubenswrapper[4706]: E1125 11:39:48.223754 4706 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2k66l,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
redhat-operators-tchjq_openshift-marketplace(9d8344c5-e0b9-46b7-8ae1-b82c36588bbb): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Nov 25 11:39:48 crc kubenswrapper[4706]: E1125 11:39:48.225532 4706 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-tchjq" podUID="9d8344c5-e0b9-46b7-8ae1-b82c36588bbb" Nov 25 11:39:49 crc kubenswrapper[4706]: I1125 11:39:49.087320 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-l99rd" event={"ID":"14d69237-a4b7-43ea-ac81-f165eb532669","Type":"ContainerStarted","Data":"0ccdc7e6dbd4823bc48f793e560152149cc026d95888a5b74d186f0d72597dfd"} Nov 25 11:39:49 crc kubenswrapper[4706]: I1125 11:39:49.087392 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-l99rd" event={"ID":"14d69237-a4b7-43ea-ac81-f165eb532669","Type":"ContainerStarted","Data":"75b8090dfd049942c725d1976aa0b64b3ce647e434a7bf4042f25c0169a83970"} Nov 25 11:39:49 crc kubenswrapper[4706]: E1125 11:39:49.089243 4706 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-qb6fx" podUID="815eca00-0648-4421-8b14-0eb14056161b" Nov 25 11:39:49 crc kubenswrapper[4706]: E1125 11:39:49.089657 4706 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" 
pod="openshift-marketplace/redhat-operators-tchjq" podUID="9d8344c5-e0b9-46b7-8ae1-b82c36588bbb" Nov 25 11:39:50 crc kubenswrapper[4706]: I1125 11:39:50.096687 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-l99rd" event={"ID":"14d69237-a4b7-43ea-ac81-f165eb532669","Type":"ContainerStarted","Data":"f1346f937f3f353d8eac03b5a4d6fe29dfb437dc365275cd40e592a03bcf12f6"} Nov 25 11:39:51 crc kubenswrapper[4706]: I1125 11:39:51.123210 4706 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/network-metrics-daemon-l99rd" podStartSLOduration=178.123168384 podStartE2EDuration="2m58.123168384s" podCreationTimestamp="2025-11-25 11:36:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 11:39:51.119682727 +0000 UTC m=+200.034240108" watchObservedRunningTime="2025-11-25 11:39:51.123168384 +0000 UTC m=+200.037725765" Nov 25 11:40:01 crc kubenswrapper[4706]: I1125 11:40:01.125216 4706 patch_prober.go:28] interesting pod/machine-config-daemon-dhfpm container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 25 11:40:01 crc kubenswrapper[4706]: I1125 11:40:01.126021 4706 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dhfpm" podUID="0930887a-320c-4506-8c9c-f94d6d64516a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 25 11:40:01 crc kubenswrapper[4706]: I1125 11:40:01.126083 4706 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-dhfpm" Nov 25 11:40:01 crc kubenswrapper[4706]: 
I1125 11:40:01.126675 4706 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"86f4bfd310c27ea3b77c2f58c91e153db5f1794871a3fbeb5711cc119aa81e38"} pod="openshift-machine-config-operator/machine-config-daemon-dhfpm" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 25 11:40:01 crc kubenswrapper[4706]: I1125 11:40:01.126782 4706 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-dhfpm" podUID="0930887a-320c-4506-8c9c-f94d6d64516a" containerName="machine-config-daemon" containerID="cri-o://86f4bfd310c27ea3b77c2f58c91e153db5f1794871a3fbeb5711cc119aa81e38" gracePeriod=600 Nov 25 11:40:04 crc kubenswrapper[4706]: I1125 11:40:04.179178 4706 generic.go:334] "Generic (PLEG): container finished" podID="0930887a-320c-4506-8c9c-f94d6d64516a" containerID="86f4bfd310c27ea3b77c2f58c91e153db5f1794871a3fbeb5711cc119aa81e38" exitCode=0 Nov 25 11:40:04 crc kubenswrapper[4706]: I1125 11:40:04.179274 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-dhfpm" event={"ID":"0930887a-320c-4506-8c9c-f94d6d64516a","Type":"ContainerDied","Data":"86f4bfd310c27ea3b77c2f58c91e153db5f1794871a3fbeb5711cc119aa81e38"} Nov 25 11:40:05 crc kubenswrapper[4706]: I1125 11:40:05.196967 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-dhfpm" event={"ID":"0930887a-320c-4506-8c9c-f94d6d64516a","Type":"ContainerStarted","Data":"c43009691a1ca998131689b9f478affb1596618b922c6332af076407a2828da9"} Nov 25 11:40:05 crc kubenswrapper[4706]: I1125 11:40:05.200464 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xwg8t" 
event={"ID":"59c181cc-6505-4d92-ab04-eaaa72b4389c","Type":"ContainerStarted","Data":"63f04c26071ae7a262c75a4332d0a8b3eeba0032567931b4bbb34ac86c81784f"} Nov 25 11:40:06 crc kubenswrapper[4706]: I1125 11:40:06.213985 4706 generic.go:334] "Generic (PLEG): container finished" podID="59c181cc-6505-4d92-ab04-eaaa72b4389c" containerID="63f04c26071ae7a262c75a4332d0a8b3eeba0032567931b4bbb34ac86c81784f" exitCode=0 Nov 25 11:40:06 crc kubenswrapper[4706]: I1125 11:40:06.214062 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xwg8t" event={"ID":"59c181cc-6505-4d92-ab04-eaaa72b4389c","Type":"ContainerDied","Data":"63f04c26071ae7a262c75a4332d0a8b3eeba0032567931b4bbb34ac86c81784f"} Nov 25 11:40:27 crc kubenswrapper[4706]: I1125 11:40:27.329516 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xwg8t" event={"ID":"59c181cc-6505-4d92-ab04-eaaa72b4389c","Type":"ContainerStarted","Data":"0bdbdf5648d99443328256404107ecd03c3ccd322b8de780f100526992a41255"} Nov 25 11:40:27 crc kubenswrapper[4706]: I1125 11:40:27.332360 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-flshn" event={"ID":"53b77c12-5969-4020-b040-f53ab95adaf3","Type":"ContainerStarted","Data":"7774dba579cf9e255e66324b1a2d31c1dc5cb32452bcfc79c1b0c655035ec174"} Nov 25 11:40:27 crc kubenswrapper[4706]: I1125 11:40:27.338886 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jx6l5" event={"ID":"9ba1f6b2-ea89-4d9b-aad8-b18eaba9ed05","Type":"ContainerStarted","Data":"583951c291d09b4ed406d6dd4dfe30774f57214b98725ada6bf72913d2194118"} Nov 25 11:40:27 crc kubenswrapper[4706]: I1125 11:40:27.341128 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tchjq" 
event={"ID":"9d8344c5-e0b9-46b7-8ae1-b82c36588bbb","Type":"ContainerStarted","Data":"f98c06c0f2c2288bf8b1d01b56ce5b6dc1e85b1f0f8a30d32e8604449d58cc89"} Nov 25 11:40:27 crc kubenswrapper[4706]: I1125 11:40:27.342726 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vfhr5" event={"ID":"c15a3609-095e-4cd9-ac60-1333da5a7f45","Type":"ContainerStarted","Data":"c42de6f3a9875fa1b8b279c129b93c33e63a6a238c17926d5b476474b0c26133"} Nov 25 11:40:27 crc kubenswrapper[4706]: I1125 11:40:27.344231 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mlg4m" event={"ID":"efdf993e-c4c2-4eff-877d-03df2af9d43c","Type":"ContainerStarted","Data":"467370b7fa0c392998f8fa597d67bc6089ee8572b45eac38169fd17a8eb6f01a"} Nov 25 11:40:27 crc kubenswrapper[4706]: I1125 11:40:27.349507 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-h8tj2" event={"ID":"e636fb64-6a73-4a3d-84d3-d933046a68e0","Type":"ContainerStarted","Data":"69ff74230ad41cff40ec5b7cf0e47f2b7a058935276609882c7535bfbd09f273"} Nov 25 11:40:27 crc kubenswrapper[4706]: I1125 11:40:27.353510 4706 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-xwg8t" podStartSLOduration=2.977956582 podStartE2EDuration="1m27.353491326s" podCreationTimestamp="2025-11-25 11:39:00 +0000 UTC" firstStartedPulling="2025-11-25 11:39:02.520687279 +0000 UTC m=+151.435244660" lastFinishedPulling="2025-11-25 11:40:26.896222023 +0000 UTC m=+235.810779404" observedRunningTime="2025-11-25 11:40:27.352101508 +0000 UTC m=+236.266658909" watchObservedRunningTime="2025-11-25 11:40:27.353491326 +0000 UTC m=+236.268048707" Nov 25 11:40:27 crc kubenswrapper[4706]: I1125 11:40:27.356756 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qb6fx" 
event={"ID":"815eca00-0648-4421-8b14-0eb14056161b","Type":"ContainerStarted","Data":"1f7eefe90709b30a55c2e963a42ec856b229ed16c653ca620f60f9b556822691"} Nov 25 11:40:28 crc kubenswrapper[4706]: I1125 11:40:28.364752 4706 generic.go:334] "Generic (PLEG): container finished" podID="9d8344c5-e0b9-46b7-8ae1-b82c36588bbb" containerID="f98c06c0f2c2288bf8b1d01b56ce5b6dc1e85b1f0f8a30d32e8604449d58cc89" exitCode=0 Nov 25 11:40:28 crc kubenswrapper[4706]: I1125 11:40:28.365228 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tchjq" event={"ID":"9d8344c5-e0b9-46b7-8ae1-b82c36588bbb","Type":"ContainerDied","Data":"f98c06c0f2c2288bf8b1d01b56ce5b6dc1e85b1f0f8a30d32e8604449d58cc89"} Nov 25 11:40:28 crc kubenswrapper[4706]: I1125 11:40:28.370249 4706 generic.go:334] "Generic (PLEG): container finished" podID="c15a3609-095e-4cd9-ac60-1333da5a7f45" containerID="c42de6f3a9875fa1b8b279c129b93c33e63a6a238c17926d5b476474b0c26133" exitCode=0 Nov 25 11:40:28 crc kubenswrapper[4706]: I1125 11:40:28.370377 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vfhr5" event={"ID":"c15a3609-095e-4cd9-ac60-1333da5a7f45","Type":"ContainerDied","Data":"c42de6f3a9875fa1b8b279c129b93c33e63a6a238c17926d5b476474b0c26133"} Nov 25 11:40:28 crc kubenswrapper[4706]: I1125 11:40:28.382710 4706 generic.go:334] "Generic (PLEG): container finished" podID="efdf993e-c4c2-4eff-877d-03df2af9d43c" containerID="467370b7fa0c392998f8fa597d67bc6089ee8572b45eac38169fd17a8eb6f01a" exitCode=0 Nov 25 11:40:28 crc kubenswrapper[4706]: I1125 11:40:28.382814 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mlg4m" event={"ID":"efdf993e-c4c2-4eff-877d-03df2af9d43c","Type":"ContainerDied","Data":"467370b7fa0c392998f8fa597d67bc6089ee8572b45eac38169fd17a8eb6f01a"} Nov 25 11:40:28 crc kubenswrapper[4706]: I1125 11:40:28.392735 4706 generic.go:334] "Generic (PLEG): container 
finished" podID="e636fb64-6a73-4a3d-84d3-d933046a68e0" containerID="69ff74230ad41cff40ec5b7cf0e47f2b7a058935276609882c7535bfbd09f273" exitCode=0 Nov 25 11:40:28 crc kubenswrapper[4706]: I1125 11:40:28.392824 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-h8tj2" event={"ID":"e636fb64-6a73-4a3d-84d3-d933046a68e0","Type":"ContainerDied","Data":"69ff74230ad41cff40ec5b7cf0e47f2b7a058935276609882c7535bfbd09f273"} Nov 25 11:40:28 crc kubenswrapper[4706]: I1125 11:40:28.397671 4706 generic.go:334] "Generic (PLEG): container finished" podID="815eca00-0648-4421-8b14-0eb14056161b" containerID="1f7eefe90709b30a55c2e963a42ec856b229ed16c653ca620f60f9b556822691" exitCode=0 Nov 25 11:40:28 crc kubenswrapper[4706]: I1125 11:40:28.397737 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qb6fx" event={"ID":"815eca00-0648-4421-8b14-0eb14056161b","Type":"ContainerDied","Data":"1f7eefe90709b30a55c2e963a42ec856b229ed16c653ca620f60f9b556822691"} Nov 25 11:40:28 crc kubenswrapper[4706]: I1125 11:40:28.401809 4706 generic.go:334] "Generic (PLEG): container finished" podID="53b77c12-5969-4020-b040-f53ab95adaf3" containerID="7774dba579cf9e255e66324b1a2d31c1dc5cb32452bcfc79c1b0c655035ec174" exitCode=0 Nov 25 11:40:28 crc kubenswrapper[4706]: I1125 11:40:28.401882 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-flshn" event={"ID":"53b77c12-5969-4020-b040-f53ab95adaf3","Type":"ContainerDied","Data":"7774dba579cf9e255e66324b1a2d31c1dc5cb32452bcfc79c1b0c655035ec174"} Nov 25 11:40:28 crc kubenswrapper[4706]: I1125 11:40:28.413651 4706 generic.go:334] "Generic (PLEG): container finished" podID="9ba1f6b2-ea89-4d9b-aad8-b18eaba9ed05" containerID="583951c291d09b4ed406d6dd4dfe30774f57214b98725ada6bf72913d2194118" exitCode=0 Nov 25 11:40:28 crc kubenswrapper[4706]: I1125 11:40:28.413750 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-marketplace-jx6l5" event={"ID":"9ba1f6b2-ea89-4d9b-aad8-b18eaba9ed05","Type":"ContainerDied","Data":"583951c291d09b4ed406d6dd4dfe30774f57214b98725ada6bf72913d2194118"} Nov 25 11:40:29 crc kubenswrapper[4706]: I1125 11:40:29.422193 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tchjq" event={"ID":"9d8344c5-e0b9-46b7-8ae1-b82c36588bbb","Type":"ContainerStarted","Data":"7e753458a354064d4321f779a4c719d02f5cdf8aba2fba124bc94ef471d9bf30"} Nov 25 11:40:29 crc kubenswrapper[4706]: I1125 11:40:29.426049 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vfhr5" event={"ID":"c15a3609-095e-4cd9-ac60-1333da5a7f45","Type":"ContainerStarted","Data":"58775998f83aa5b7f26011b2b755ccce8f67b57099b26c1babbaa3d41bd41150"} Nov 25 11:40:29 crc kubenswrapper[4706]: I1125 11:40:29.428812 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mlg4m" event={"ID":"efdf993e-c4c2-4eff-877d-03df2af9d43c","Type":"ContainerStarted","Data":"7da57e8e131a4bc2ca553fae2ec9034b55706ee63e4b9975717ee1758a3beca1"} Nov 25 11:40:29 crc kubenswrapper[4706]: I1125 11:40:29.432254 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-h8tj2" event={"ID":"e636fb64-6a73-4a3d-84d3-d933046a68e0","Type":"ContainerStarted","Data":"0c2ca8bb53141a7272695b9963d4aea3ea3329aa2f7b6ab873904a25d0211997"} Nov 25 11:40:29 crc kubenswrapper[4706]: I1125 11:40:29.436084 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qb6fx" event={"ID":"815eca00-0648-4421-8b14-0eb14056161b","Type":"ContainerStarted","Data":"ffbc72cbf8c7c250bb4c30e3ede421c474934e8e926882dc57dc32473807d031"} Nov 25 11:40:29 crc kubenswrapper[4706]: I1125 11:40:29.438904 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-flshn" 
event={"ID":"53b77c12-5969-4020-b040-f53ab95adaf3","Type":"ContainerStarted","Data":"803bc3b8d086e6e7af64a13c691f077ab0cc0468c42aafb446208b694523445c"} Nov 25 11:40:29 crc kubenswrapper[4706]: I1125 11:40:29.441999 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jx6l5" event={"ID":"9ba1f6b2-ea89-4d9b-aad8-b18eaba9ed05","Type":"ContainerStarted","Data":"3d0d6b37b1f6286c17cbde7d73aedbae98a877212a8a9f7323b0cb51be3f88df"} Nov 25 11:40:29 crc kubenswrapper[4706]: I1125 11:40:29.448888 4706 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-tchjq" podStartSLOduration=3.388734656 podStartE2EDuration="1m26.44886963s" podCreationTimestamp="2025-11-25 11:39:03 +0000 UTC" firstStartedPulling="2025-11-25 11:39:05.764735665 +0000 UTC m=+154.679293046" lastFinishedPulling="2025-11-25 11:40:28.824870639 +0000 UTC m=+237.739428020" observedRunningTime="2025-11-25 11:40:29.447836611 +0000 UTC m=+238.362393992" watchObservedRunningTime="2025-11-25 11:40:29.44886963 +0000 UTC m=+238.363427011" Nov 25 11:40:29 crc kubenswrapper[4706]: I1125 11:40:29.467339 4706 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-mlg4m" podStartSLOduration=3.172922627 podStartE2EDuration="1m29.467321912s" podCreationTimestamp="2025-11-25 11:39:00 +0000 UTC" firstStartedPulling="2025-11-25 11:39:02.549568124 +0000 UTC m=+151.464125505" lastFinishedPulling="2025-11-25 11:40:28.843967399 +0000 UTC m=+237.758524790" observedRunningTime="2025-11-25 11:40:29.464915195 +0000 UTC m=+238.379472576" watchObservedRunningTime="2025-11-25 11:40:29.467321912 +0000 UTC m=+238.381879293" Nov 25 11:40:29 crc kubenswrapper[4706]: I1125 11:40:29.487692 4706 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-flshn" podStartSLOduration=3.083032922 podStartE2EDuration="1m27.487674277s" 
podCreationTimestamp="2025-11-25 11:39:02 +0000 UTC" firstStartedPulling="2025-11-25 11:39:04.723518223 +0000 UTC m=+153.638075604" lastFinishedPulling="2025-11-25 11:40:29.128159578 +0000 UTC m=+238.042716959" observedRunningTime="2025-11-25 11:40:29.484876209 +0000 UTC m=+238.399433580" watchObservedRunningTime="2025-11-25 11:40:29.487674277 +0000 UTC m=+238.402231658" Nov 25 11:40:29 crc kubenswrapper[4706]: I1125 11:40:29.513516 4706 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-h8tj2" podStartSLOduration=3.278621481 podStartE2EDuration="1m29.513492544s" podCreationTimestamp="2025-11-25 11:39:00 +0000 UTC" firstStartedPulling="2025-11-25 11:39:02.566467733 +0000 UTC m=+151.481025124" lastFinishedPulling="2025-11-25 11:40:28.801338806 +0000 UTC m=+237.715896187" observedRunningTime="2025-11-25 11:40:29.51120834 +0000 UTC m=+238.425765721" watchObservedRunningTime="2025-11-25 11:40:29.513492544 +0000 UTC m=+238.428049925" Nov 25 11:40:29 crc kubenswrapper[4706]: I1125 11:40:29.532055 4706 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-vfhr5" podStartSLOduration=3.130661674 podStartE2EDuration="1m29.532031518s" podCreationTimestamp="2025-11-25 11:39:00 +0000 UTC" firstStartedPulling="2025-11-25 11:39:02.53321782 +0000 UTC m=+151.447775201" lastFinishedPulling="2025-11-25 11:40:28.934587664 +0000 UTC m=+237.849145045" observedRunningTime="2025-11-25 11:40:29.529253531 +0000 UTC m=+238.443810912" watchObservedRunningTime="2025-11-25 11:40:29.532031518 +0000 UTC m=+238.446588899" Nov 25 11:40:29 crc kubenswrapper[4706]: I1125 11:40:29.558103 4706 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-qb6fx" podStartSLOduration=3.295063977 podStartE2EDuration="1m26.558084551s" podCreationTimestamp="2025-11-25 11:39:03 +0000 UTC" firstStartedPulling="2025-11-25 
11:39:05.744993309 +0000 UTC m=+154.659550690" lastFinishedPulling="2025-11-25 11:40:29.008013883 +0000 UTC m=+237.922571264" observedRunningTime="2025-11-25 11:40:29.557062013 +0000 UTC m=+238.471619414" watchObservedRunningTime="2025-11-25 11:40:29.558084551 +0000 UTC m=+238.472641932" Nov 25 11:40:29 crc kubenswrapper[4706]: I1125 11:40:29.582259 4706 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-jx6l5" podStartSLOduration=2.157642323 podStartE2EDuration="1m27.582231112s" podCreationTimestamp="2025-11-25 11:39:02 +0000 UTC" firstStartedPulling="2025-11-25 11:39:03.60456644 +0000 UTC m=+152.519123821" lastFinishedPulling="2025-11-25 11:40:29.029155229 +0000 UTC m=+237.943712610" observedRunningTime="2025-11-25 11:40:29.577892871 +0000 UTC m=+238.492450252" watchObservedRunningTime="2025-11-25 11:40:29.582231112 +0000 UTC m=+238.496788503" Nov 25 11:40:30 crc kubenswrapper[4706]: I1125 11:40:30.816076 4706 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-h8tj2" Nov 25 11:40:30 crc kubenswrapper[4706]: I1125 11:40:30.816131 4706 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-h8tj2" Nov 25 11:40:31 crc kubenswrapper[4706]: I1125 11:40:31.012911 4706 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-h8tj2" Nov 25 11:40:31 crc kubenswrapper[4706]: I1125 11:40:31.087800 4706 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-mlg4m" Nov 25 11:40:31 crc kubenswrapper[4706]: I1125 11:40:31.087885 4706 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-mlg4m" Nov 25 11:40:31 crc kubenswrapper[4706]: I1125 11:40:31.089479 4706 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="" pod="openshift-marketplace/community-operators-xwg8t" Nov 25 11:40:31 crc kubenswrapper[4706]: I1125 11:40:31.089506 4706 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-xwg8t" Nov 25 11:40:31 crc kubenswrapper[4706]: I1125 11:40:31.136064 4706 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-xwg8t" Nov 25 11:40:31 crc kubenswrapper[4706]: I1125 11:40:31.225843 4706 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-vfhr5" Nov 25 11:40:31 crc kubenswrapper[4706]: I1125 11:40:31.225927 4706 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-vfhr5" Nov 25 11:40:31 crc kubenswrapper[4706]: I1125 11:40:31.265055 4706 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-vfhr5" Nov 25 11:40:32 crc kubenswrapper[4706]: I1125 11:40:32.121844 4706 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-mlg4m" podUID="efdf993e-c4c2-4eff-877d-03df2af9d43c" containerName="registry-server" probeResult="failure" output=< Nov 25 11:40:32 crc kubenswrapper[4706]: timeout: failed to connect service ":50051" within 1s Nov 25 11:40:32 crc kubenswrapper[4706]: > Nov 25 11:40:32 crc kubenswrapper[4706]: I1125 11:40:32.396479 4706 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-ss2xd"] Nov 25 11:40:32 crc kubenswrapper[4706]: I1125 11:40:32.786951 4706 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-jx6l5" Nov 25 11:40:32 crc kubenswrapper[4706]: I1125 11:40:32.787024 4706 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-jx6l5" Nov 
25 11:40:32 crc kubenswrapper[4706]: I1125 11:40:32.832891 4706 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-jx6l5" Nov 25 11:40:33 crc kubenswrapper[4706]: I1125 11:40:33.206595 4706 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-flshn" Nov 25 11:40:33 crc kubenswrapper[4706]: I1125 11:40:33.206640 4706 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-flshn" Nov 25 11:40:33 crc kubenswrapper[4706]: I1125 11:40:33.258481 4706 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-flshn" Nov 25 11:40:33 crc kubenswrapper[4706]: I1125 11:40:33.866759 4706 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-qb6fx" Nov 25 11:40:33 crc kubenswrapper[4706]: I1125 11:40:33.866822 4706 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-qb6fx" Nov 25 11:40:34 crc kubenswrapper[4706]: I1125 11:40:34.233435 4706 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-tchjq" Nov 25 11:40:34 crc kubenswrapper[4706]: I1125 11:40:34.233833 4706 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-tchjq" Nov 25 11:40:34 crc kubenswrapper[4706]: I1125 11:40:34.906163 4706 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-qb6fx" podUID="815eca00-0648-4421-8b14-0eb14056161b" containerName="registry-server" probeResult="failure" output=< Nov 25 11:40:34 crc kubenswrapper[4706]: timeout: failed to connect service ":50051" within 1s Nov 25 11:40:34 crc kubenswrapper[4706]: > Nov 25 11:40:35 crc kubenswrapper[4706]: I1125 11:40:35.276387 4706 prober.go:107] 
"Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-tchjq" podUID="9d8344c5-e0b9-46b7-8ae1-b82c36588bbb" containerName="registry-server" probeResult="failure" output=< Nov 25 11:40:35 crc kubenswrapper[4706]: timeout: failed to connect service ":50051" within 1s Nov 25 11:40:35 crc kubenswrapper[4706]: > Nov 25 11:40:40 crc kubenswrapper[4706]: I1125 11:40:40.861072 4706 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-h8tj2" Nov 25 11:40:41 crc kubenswrapper[4706]: I1125 11:40:41.126819 4706 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-xwg8t" Nov 25 11:40:41 crc kubenswrapper[4706]: I1125 11:40:41.128646 4706 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-mlg4m" Nov 25 11:40:41 crc kubenswrapper[4706]: I1125 11:40:41.169471 4706 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-xwg8t"] Nov 25 11:40:41 crc kubenswrapper[4706]: I1125 11:40:41.172801 4706 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-mlg4m" Nov 25 11:40:41 crc kubenswrapper[4706]: I1125 11:40:41.264700 4706 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-vfhr5" Nov 25 11:40:41 crc kubenswrapper[4706]: I1125 11:40:41.501594 4706 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-xwg8t" podUID="59c181cc-6505-4d92-ab04-eaaa72b4389c" containerName="registry-server" containerID="cri-o://0bdbdf5648d99443328256404107ecd03c3ccd322b8de780f100526992a41255" gracePeriod=2 Nov 25 11:40:42 crc kubenswrapper[4706]: I1125 11:40:42.833866 4706 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-marketplace/redhat-marketplace-jx6l5" Nov 25 11:40:43 crc kubenswrapper[4706]: I1125 11:40:43.250513 4706 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-flshn" Nov 25 11:40:43 crc kubenswrapper[4706]: I1125 11:40:43.491832 4706 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-vfhr5"] Nov 25 11:40:43 crc kubenswrapper[4706]: I1125 11:40:43.492094 4706 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-vfhr5" podUID="c15a3609-095e-4cd9-ac60-1333da5a7f45" containerName="registry-server" containerID="cri-o://58775998f83aa5b7f26011b2b755ccce8f67b57099b26c1babbaa3d41bd41150" gracePeriod=2 Nov 25 11:40:43 crc kubenswrapper[4706]: I1125 11:40:43.514647 4706 generic.go:334] "Generic (PLEG): container finished" podID="59c181cc-6505-4d92-ab04-eaaa72b4389c" containerID="0bdbdf5648d99443328256404107ecd03c3ccd322b8de780f100526992a41255" exitCode=0 Nov 25 11:40:43 crc kubenswrapper[4706]: I1125 11:40:43.514718 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xwg8t" event={"ID":"59c181cc-6505-4d92-ab04-eaaa72b4389c","Type":"ContainerDied","Data":"0bdbdf5648d99443328256404107ecd03c3ccd322b8de780f100526992a41255"} Nov 25 11:40:43 crc kubenswrapper[4706]: I1125 11:40:43.884903 4706 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-xwg8t" Nov 25 11:40:43 crc kubenswrapper[4706]: I1125 11:40:43.913701 4706 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-qb6fx" Nov 25 11:40:43 crc kubenswrapper[4706]: I1125 11:40:43.955943 4706 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-qb6fx" Nov 25 11:40:44 crc kubenswrapper[4706]: I1125 11:40:44.021715 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gftdb\" (UniqueName: \"kubernetes.io/projected/59c181cc-6505-4d92-ab04-eaaa72b4389c-kube-api-access-gftdb\") pod \"59c181cc-6505-4d92-ab04-eaaa72b4389c\" (UID: \"59c181cc-6505-4d92-ab04-eaaa72b4389c\") " Nov 25 11:40:44 crc kubenswrapper[4706]: I1125 11:40:44.021829 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/59c181cc-6505-4d92-ab04-eaaa72b4389c-catalog-content\") pod \"59c181cc-6505-4d92-ab04-eaaa72b4389c\" (UID: \"59c181cc-6505-4d92-ab04-eaaa72b4389c\") " Nov 25 11:40:44 crc kubenswrapper[4706]: I1125 11:40:44.021875 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/59c181cc-6505-4d92-ab04-eaaa72b4389c-utilities\") pod \"59c181cc-6505-4d92-ab04-eaaa72b4389c\" (UID: \"59c181cc-6505-4d92-ab04-eaaa72b4389c\") " Nov 25 11:40:44 crc kubenswrapper[4706]: I1125 11:40:44.022997 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/59c181cc-6505-4d92-ab04-eaaa72b4389c-utilities" (OuterVolumeSpecName: "utilities") pod "59c181cc-6505-4d92-ab04-eaaa72b4389c" (UID: "59c181cc-6505-4d92-ab04-eaaa72b4389c"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 11:40:44 crc kubenswrapper[4706]: I1125 11:40:44.043349 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/59c181cc-6505-4d92-ab04-eaaa72b4389c-kube-api-access-gftdb" (OuterVolumeSpecName: "kube-api-access-gftdb") pod "59c181cc-6505-4d92-ab04-eaaa72b4389c" (UID: "59c181cc-6505-4d92-ab04-eaaa72b4389c"). InnerVolumeSpecName "kube-api-access-gftdb". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 11:40:44 crc kubenswrapper[4706]: I1125 11:40:44.072349 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/59c181cc-6505-4d92-ab04-eaaa72b4389c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "59c181cc-6505-4d92-ab04-eaaa72b4389c" (UID: "59c181cc-6505-4d92-ab04-eaaa72b4389c"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 11:40:44 crc kubenswrapper[4706]: I1125 11:40:44.122988 4706 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/59c181cc-6505-4d92-ab04-eaaa72b4389c-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 25 11:40:44 crc kubenswrapper[4706]: I1125 11:40:44.123032 4706 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/59c181cc-6505-4d92-ab04-eaaa72b4389c-utilities\") on node \"crc\" DevicePath \"\"" Nov 25 11:40:44 crc kubenswrapper[4706]: I1125 11:40:44.123046 4706 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gftdb\" (UniqueName: \"kubernetes.io/projected/59c181cc-6505-4d92-ab04-eaaa72b4389c-kube-api-access-gftdb\") on node \"crc\" DevicePath \"\"" Nov 25 11:40:44 crc kubenswrapper[4706]: I1125 11:40:44.269113 4706 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-tchjq" Nov 25 11:40:44 crc 
kubenswrapper[4706]: I1125 11:40:44.312416 4706 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-tchjq" Nov 25 11:40:44 crc kubenswrapper[4706]: I1125 11:40:44.521923 4706 generic.go:334] "Generic (PLEG): container finished" podID="c15a3609-095e-4cd9-ac60-1333da5a7f45" containerID="58775998f83aa5b7f26011b2b755ccce8f67b57099b26c1babbaa3d41bd41150" exitCode=0 Nov 25 11:40:44 crc kubenswrapper[4706]: I1125 11:40:44.521991 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vfhr5" event={"ID":"c15a3609-095e-4cd9-ac60-1333da5a7f45","Type":"ContainerDied","Data":"58775998f83aa5b7f26011b2b755ccce8f67b57099b26c1babbaa3d41bd41150"} Nov 25 11:40:44 crc kubenswrapper[4706]: I1125 11:40:44.524466 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xwg8t" event={"ID":"59c181cc-6505-4d92-ab04-eaaa72b4389c","Type":"ContainerDied","Data":"4ae7320175c8f0cf5828bc18da3d92aa6564a9019f7fbb5aef541b1824c85002"} Nov 25 11:40:44 crc kubenswrapper[4706]: I1125 11:40:44.524546 4706 scope.go:117] "RemoveContainer" containerID="0bdbdf5648d99443328256404107ecd03c3ccd322b8de780f100526992a41255" Nov 25 11:40:44 crc kubenswrapper[4706]: I1125 11:40:44.524567 4706 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-xwg8t" Nov 25 11:40:44 crc kubenswrapper[4706]: I1125 11:40:44.539606 4706 scope.go:117] "RemoveContainer" containerID="63f04c26071ae7a262c75a4332d0a8b3eeba0032567931b4bbb34ac86c81784f" Nov 25 11:40:44 crc kubenswrapper[4706]: I1125 11:40:44.559983 4706 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-xwg8t"] Nov 25 11:40:44 crc kubenswrapper[4706]: I1125 11:40:44.566753 4706 scope.go:117] "RemoveContainer" containerID="2a974ec205669803dd6ae20eebe266b7f793fcd16b71de61403b57d3e43d0a12" Nov 25 11:40:44 crc kubenswrapper[4706]: I1125 11:40:44.569486 4706 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-xwg8t"] Nov 25 11:40:45 crc kubenswrapper[4706]: I1125 11:40:45.691990 4706 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-flshn"] Nov 25 11:40:45 crc kubenswrapper[4706]: I1125 11:40:45.692364 4706 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-flshn" podUID="53b77c12-5969-4020-b040-f53ab95adaf3" containerName="registry-server" containerID="cri-o://803bc3b8d086e6e7af64a13c691f077ab0cc0468c42aafb446208b694523445c" gracePeriod=2 Nov 25 11:40:45 crc kubenswrapper[4706]: I1125 11:40:45.889605 4706 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-vfhr5" Nov 25 11:40:45 crc kubenswrapper[4706]: I1125 11:40:45.929170 4706 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="59c181cc-6505-4d92-ab04-eaaa72b4389c" path="/var/lib/kubelet/pods/59c181cc-6505-4d92-ab04-eaaa72b4389c/volumes" Nov 25 11:40:46 crc kubenswrapper[4706]: I1125 11:40:46.049013 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c15a3609-095e-4cd9-ac60-1333da5a7f45-utilities\") pod \"c15a3609-095e-4cd9-ac60-1333da5a7f45\" (UID: \"c15a3609-095e-4cd9-ac60-1333da5a7f45\") " Nov 25 11:40:46 crc kubenswrapper[4706]: I1125 11:40:46.049184 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c15a3609-095e-4cd9-ac60-1333da5a7f45-catalog-content\") pod \"c15a3609-095e-4cd9-ac60-1333da5a7f45\" (UID: \"c15a3609-095e-4cd9-ac60-1333da5a7f45\") " Nov 25 11:40:46 crc kubenswrapper[4706]: I1125 11:40:46.049220 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lwv2v\" (UniqueName: \"kubernetes.io/projected/c15a3609-095e-4cd9-ac60-1333da5a7f45-kube-api-access-lwv2v\") pod \"c15a3609-095e-4cd9-ac60-1333da5a7f45\" (UID: \"c15a3609-095e-4cd9-ac60-1333da5a7f45\") " Nov 25 11:40:46 crc kubenswrapper[4706]: I1125 11:40:46.050339 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c15a3609-095e-4cd9-ac60-1333da5a7f45-utilities" (OuterVolumeSpecName: "utilities") pod "c15a3609-095e-4cd9-ac60-1333da5a7f45" (UID: "c15a3609-095e-4cd9-ac60-1333da5a7f45"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 11:40:46 crc kubenswrapper[4706]: I1125 11:40:46.055632 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c15a3609-095e-4cd9-ac60-1333da5a7f45-kube-api-access-lwv2v" (OuterVolumeSpecName: "kube-api-access-lwv2v") pod "c15a3609-095e-4cd9-ac60-1333da5a7f45" (UID: "c15a3609-095e-4cd9-ac60-1333da5a7f45"). InnerVolumeSpecName "kube-api-access-lwv2v". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 11:40:46 crc kubenswrapper[4706]: I1125 11:40:46.098144 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c15a3609-095e-4cd9-ac60-1333da5a7f45-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c15a3609-095e-4cd9-ac60-1333da5a7f45" (UID: "c15a3609-095e-4cd9-ac60-1333da5a7f45"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 11:40:46 crc kubenswrapper[4706]: I1125 11:40:46.151287 4706 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c15a3609-095e-4cd9-ac60-1333da5a7f45-utilities\") on node \"crc\" DevicePath \"\"" Nov 25 11:40:46 crc kubenswrapper[4706]: I1125 11:40:46.151420 4706 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c15a3609-095e-4cd9-ac60-1333da5a7f45-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 25 11:40:46 crc kubenswrapper[4706]: I1125 11:40:46.151444 4706 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lwv2v\" (UniqueName: \"kubernetes.io/projected/c15a3609-095e-4cd9-ac60-1333da5a7f45-kube-api-access-lwv2v\") on node \"crc\" DevicePath \"\"" Nov 25 11:40:46 crc kubenswrapper[4706]: I1125 11:40:46.538969 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vfhr5" 
event={"ID":"c15a3609-095e-4cd9-ac60-1333da5a7f45","Type":"ContainerDied","Data":"f9717e5d106a91076b552b6bf905bfb8a33c3faf193953b8f308b9f06a7ef33c"} Nov 25 11:40:46 crc kubenswrapper[4706]: I1125 11:40:46.539054 4706 scope.go:117] "RemoveContainer" containerID="58775998f83aa5b7f26011b2b755ccce8f67b57099b26c1babbaa3d41bd41150" Nov 25 11:40:46 crc kubenswrapper[4706]: I1125 11:40:46.539476 4706 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-vfhr5" Nov 25 11:40:46 crc kubenswrapper[4706]: I1125 11:40:46.562357 4706 scope.go:117] "RemoveContainer" containerID="c42de6f3a9875fa1b8b279c129b93c33e63a6a238c17926d5b476474b0c26133" Nov 25 11:40:46 crc kubenswrapper[4706]: I1125 11:40:46.568069 4706 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-vfhr5"] Nov 25 11:40:46 crc kubenswrapper[4706]: I1125 11:40:46.572204 4706 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-vfhr5"] Nov 25 11:40:46 crc kubenswrapper[4706]: I1125 11:40:46.598744 4706 scope.go:117] "RemoveContainer" containerID="2d164a47397d0b89c02c25552ccf71dac7a3cbe89710373c7966766782a0a727" Nov 25 11:40:47 crc kubenswrapper[4706]: I1125 11:40:47.547816 4706 generic.go:334] "Generic (PLEG): container finished" podID="53b77c12-5969-4020-b040-f53ab95adaf3" containerID="803bc3b8d086e6e7af64a13c691f077ab0cc0468c42aafb446208b694523445c" exitCode=0 Nov 25 11:40:47 crc kubenswrapper[4706]: I1125 11:40:47.547881 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-flshn" event={"ID":"53b77c12-5969-4020-b040-f53ab95adaf3","Type":"ContainerDied","Data":"803bc3b8d086e6e7af64a13c691f077ab0cc0468c42aafb446208b694523445c"} Nov 25 11:40:47 crc kubenswrapper[4706]: I1125 11:40:47.915079 4706 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-flshn" Nov 25 11:40:47 crc kubenswrapper[4706]: I1125 11:40:47.934661 4706 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c15a3609-095e-4cd9-ac60-1333da5a7f45" path="/var/lib/kubelet/pods/c15a3609-095e-4cd9-ac60-1333da5a7f45/volumes" Nov 25 11:40:48 crc kubenswrapper[4706]: I1125 11:40:48.076098 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/53b77c12-5969-4020-b040-f53ab95adaf3-utilities\") pod \"53b77c12-5969-4020-b040-f53ab95adaf3\" (UID: \"53b77c12-5969-4020-b040-f53ab95adaf3\") " Nov 25 11:40:48 crc kubenswrapper[4706]: I1125 11:40:48.076212 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k9c9w\" (UniqueName: \"kubernetes.io/projected/53b77c12-5969-4020-b040-f53ab95adaf3-kube-api-access-k9c9w\") pod \"53b77c12-5969-4020-b040-f53ab95adaf3\" (UID: \"53b77c12-5969-4020-b040-f53ab95adaf3\") " Nov 25 11:40:48 crc kubenswrapper[4706]: I1125 11:40:48.076339 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/53b77c12-5969-4020-b040-f53ab95adaf3-catalog-content\") pod \"53b77c12-5969-4020-b040-f53ab95adaf3\" (UID: \"53b77c12-5969-4020-b040-f53ab95adaf3\") " Nov 25 11:40:48 crc kubenswrapper[4706]: I1125 11:40:48.077261 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/53b77c12-5969-4020-b040-f53ab95adaf3-utilities" (OuterVolumeSpecName: "utilities") pod "53b77c12-5969-4020-b040-f53ab95adaf3" (UID: "53b77c12-5969-4020-b040-f53ab95adaf3"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 11:40:48 crc kubenswrapper[4706]: I1125 11:40:48.081270 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/53b77c12-5969-4020-b040-f53ab95adaf3-kube-api-access-k9c9w" (OuterVolumeSpecName: "kube-api-access-k9c9w") pod "53b77c12-5969-4020-b040-f53ab95adaf3" (UID: "53b77c12-5969-4020-b040-f53ab95adaf3"). InnerVolumeSpecName "kube-api-access-k9c9w". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 11:40:48 crc kubenswrapper[4706]: I1125 11:40:48.095853 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/53b77c12-5969-4020-b040-f53ab95adaf3-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "53b77c12-5969-4020-b040-f53ab95adaf3" (UID: "53b77c12-5969-4020-b040-f53ab95adaf3"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 11:40:48 crc kubenswrapper[4706]: I1125 11:40:48.104016 4706 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-tchjq"] Nov 25 11:40:48 crc kubenswrapper[4706]: I1125 11:40:48.104960 4706 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-tchjq" podUID="9d8344c5-e0b9-46b7-8ae1-b82c36588bbb" containerName="registry-server" containerID="cri-o://7e753458a354064d4321f779a4c719d02f5cdf8aba2fba124bc94ef471d9bf30" gracePeriod=2 Nov 25 11:40:48 crc kubenswrapper[4706]: I1125 11:40:48.178455 4706 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/53b77c12-5969-4020-b040-f53ab95adaf3-utilities\") on node \"crc\" DevicePath \"\"" Nov 25 11:40:48 crc kubenswrapper[4706]: I1125 11:40:48.178799 4706 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k9c9w\" (UniqueName: 
\"kubernetes.io/projected/53b77c12-5969-4020-b040-f53ab95adaf3-kube-api-access-k9c9w\") on node \"crc\" DevicePath \"\"" Nov 25 11:40:48 crc kubenswrapper[4706]: I1125 11:40:48.178898 4706 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/53b77c12-5969-4020-b040-f53ab95adaf3-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 25 11:40:48 crc kubenswrapper[4706]: I1125 11:40:48.445590 4706 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-tchjq" Nov 25 11:40:48 crc kubenswrapper[4706]: I1125 11:40:48.556948 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-flshn" event={"ID":"53b77c12-5969-4020-b040-f53ab95adaf3","Type":"ContainerDied","Data":"d73bb6bdda999bd303f02a5a2ca151651adbdc5b634cc50d670c11945098e0f1"} Nov 25 11:40:48 crc kubenswrapper[4706]: I1125 11:40:48.557014 4706 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-flshn" Nov 25 11:40:48 crc kubenswrapper[4706]: I1125 11:40:48.557025 4706 scope.go:117] "RemoveContainer" containerID="803bc3b8d086e6e7af64a13c691f077ab0cc0468c42aafb446208b694523445c" Nov 25 11:40:48 crc kubenswrapper[4706]: I1125 11:40:48.559753 4706 generic.go:334] "Generic (PLEG): container finished" podID="9d8344c5-e0b9-46b7-8ae1-b82c36588bbb" containerID="7e753458a354064d4321f779a4c719d02f5cdf8aba2fba124bc94ef471d9bf30" exitCode=0 Nov 25 11:40:48 crc kubenswrapper[4706]: I1125 11:40:48.559937 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tchjq" event={"ID":"9d8344c5-e0b9-46b7-8ae1-b82c36588bbb","Type":"ContainerDied","Data":"7e753458a354064d4321f779a4c719d02f5cdf8aba2fba124bc94ef471d9bf30"} Nov 25 11:40:48 crc kubenswrapper[4706]: I1125 11:40:48.560127 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tchjq" event={"ID":"9d8344c5-e0b9-46b7-8ae1-b82c36588bbb","Type":"ContainerDied","Data":"cc0fd32a7da972eb95928abd18ba8e3de0104d302ceb6d5a0d4d7f310be093f5"} Nov 25 11:40:48 crc kubenswrapper[4706]: I1125 11:40:48.559983 4706 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-tchjq" Nov 25 11:40:48 crc kubenswrapper[4706]: I1125 11:40:48.576761 4706 scope.go:117] "RemoveContainer" containerID="7774dba579cf9e255e66324b1a2d31c1dc5cb32452bcfc79c1b0c655035ec174" Nov 25 11:40:48 crc kubenswrapper[4706]: I1125 11:40:48.586334 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2k66l\" (UniqueName: \"kubernetes.io/projected/9d8344c5-e0b9-46b7-8ae1-b82c36588bbb-kube-api-access-2k66l\") pod \"9d8344c5-e0b9-46b7-8ae1-b82c36588bbb\" (UID: \"9d8344c5-e0b9-46b7-8ae1-b82c36588bbb\") " Nov 25 11:40:48 crc kubenswrapper[4706]: I1125 11:40:48.586652 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9d8344c5-e0b9-46b7-8ae1-b82c36588bbb-catalog-content\") pod \"9d8344c5-e0b9-46b7-8ae1-b82c36588bbb\" (UID: \"9d8344c5-e0b9-46b7-8ae1-b82c36588bbb\") " Nov 25 11:40:48 crc kubenswrapper[4706]: I1125 11:40:48.586756 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9d8344c5-e0b9-46b7-8ae1-b82c36588bbb-utilities\") pod \"9d8344c5-e0b9-46b7-8ae1-b82c36588bbb\" (UID: \"9d8344c5-e0b9-46b7-8ae1-b82c36588bbb\") " Nov 25 11:40:48 crc kubenswrapper[4706]: I1125 11:40:48.587681 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9d8344c5-e0b9-46b7-8ae1-b82c36588bbb-utilities" (OuterVolumeSpecName: "utilities") pod "9d8344c5-e0b9-46b7-8ae1-b82c36588bbb" (UID: "9d8344c5-e0b9-46b7-8ae1-b82c36588bbb"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 11:40:48 crc kubenswrapper[4706]: I1125 11:40:48.592396 4706 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-flshn"] Nov 25 11:40:48 crc kubenswrapper[4706]: I1125 11:40:48.595837 4706 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-flshn"] Nov 25 11:40:48 crc kubenswrapper[4706]: I1125 11:40:48.606211 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d8344c5-e0b9-46b7-8ae1-b82c36588bbb-kube-api-access-2k66l" (OuterVolumeSpecName: "kube-api-access-2k66l") pod "9d8344c5-e0b9-46b7-8ae1-b82c36588bbb" (UID: "9d8344c5-e0b9-46b7-8ae1-b82c36588bbb"). InnerVolumeSpecName "kube-api-access-2k66l". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 11:40:48 crc kubenswrapper[4706]: I1125 11:40:48.625229 4706 scope.go:117] "RemoveContainer" containerID="c1a79ce2a1418a38773a2307b33402cfef47a2d242eeb27a7e8b9031c3f513e1" Nov 25 11:40:48 crc kubenswrapper[4706]: I1125 11:40:48.643934 4706 scope.go:117] "RemoveContainer" containerID="7e753458a354064d4321f779a4c719d02f5cdf8aba2fba124bc94ef471d9bf30" Nov 25 11:40:48 crc kubenswrapper[4706]: I1125 11:40:48.660092 4706 scope.go:117] "RemoveContainer" containerID="f98c06c0f2c2288bf8b1d01b56ce5b6dc1e85b1f0f8a30d32e8604449d58cc89" Nov 25 11:40:48 crc kubenswrapper[4706]: I1125 11:40:48.674779 4706 scope.go:117] "RemoveContainer" containerID="40923999beaa55882f0fd504956e18153412e5fcef0004bbb60e420b52bee565" Nov 25 11:40:48 crc kubenswrapper[4706]: I1125 11:40:48.687615 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9d8344c5-e0b9-46b7-8ae1-b82c36588bbb-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "9d8344c5-e0b9-46b7-8ae1-b82c36588bbb" (UID: "9d8344c5-e0b9-46b7-8ae1-b82c36588bbb"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 11:40:48 crc kubenswrapper[4706]: I1125 11:40:48.688460 4706 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9d8344c5-e0b9-46b7-8ae1-b82c36588bbb-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 25 11:40:48 crc kubenswrapper[4706]: I1125 11:40:48.688494 4706 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9d8344c5-e0b9-46b7-8ae1-b82c36588bbb-utilities\") on node \"crc\" DevicePath \"\"" Nov 25 11:40:48 crc kubenswrapper[4706]: I1125 11:40:48.688507 4706 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2k66l\" (UniqueName: \"kubernetes.io/projected/9d8344c5-e0b9-46b7-8ae1-b82c36588bbb-kube-api-access-2k66l\") on node \"crc\" DevicePath \"\"" Nov 25 11:40:48 crc kubenswrapper[4706]: I1125 11:40:48.695431 4706 scope.go:117] "RemoveContainer" containerID="7e753458a354064d4321f779a4c719d02f5cdf8aba2fba124bc94ef471d9bf30" Nov 25 11:40:48 crc kubenswrapper[4706]: E1125 11:40:48.696222 4706 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7e753458a354064d4321f779a4c719d02f5cdf8aba2fba124bc94ef471d9bf30\": container with ID starting with 7e753458a354064d4321f779a4c719d02f5cdf8aba2fba124bc94ef471d9bf30 not found: ID does not exist" containerID="7e753458a354064d4321f779a4c719d02f5cdf8aba2fba124bc94ef471d9bf30" Nov 25 11:40:48 crc kubenswrapper[4706]: I1125 11:40:48.696466 4706 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7e753458a354064d4321f779a4c719d02f5cdf8aba2fba124bc94ef471d9bf30"} err="failed to get container status \"7e753458a354064d4321f779a4c719d02f5cdf8aba2fba124bc94ef471d9bf30\": rpc error: code = NotFound desc = could not find container \"7e753458a354064d4321f779a4c719d02f5cdf8aba2fba124bc94ef471d9bf30\": container with ID 
starting with 7e753458a354064d4321f779a4c719d02f5cdf8aba2fba124bc94ef471d9bf30 not found: ID does not exist" Nov 25 11:40:48 crc kubenswrapper[4706]: I1125 11:40:48.696540 4706 scope.go:117] "RemoveContainer" containerID="f98c06c0f2c2288bf8b1d01b56ce5b6dc1e85b1f0f8a30d32e8604449d58cc89" Nov 25 11:40:48 crc kubenswrapper[4706]: E1125 11:40:48.697271 4706 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f98c06c0f2c2288bf8b1d01b56ce5b6dc1e85b1f0f8a30d32e8604449d58cc89\": container with ID starting with f98c06c0f2c2288bf8b1d01b56ce5b6dc1e85b1f0f8a30d32e8604449d58cc89 not found: ID does not exist" containerID="f98c06c0f2c2288bf8b1d01b56ce5b6dc1e85b1f0f8a30d32e8604449d58cc89" Nov 25 11:40:48 crc kubenswrapper[4706]: I1125 11:40:48.697333 4706 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f98c06c0f2c2288bf8b1d01b56ce5b6dc1e85b1f0f8a30d32e8604449d58cc89"} err="failed to get container status \"f98c06c0f2c2288bf8b1d01b56ce5b6dc1e85b1f0f8a30d32e8604449d58cc89\": rpc error: code = NotFound desc = could not find container \"f98c06c0f2c2288bf8b1d01b56ce5b6dc1e85b1f0f8a30d32e8604449d58cc89\": container with ID starting with f98c06c0f2c2288bf8b1d01b56ce5b6dc1e85b1f0f8a30d32e8604449d58cc89 not found: ID does not exist" Nov 25 11:40:48 crc kubenswrapper[4706]: I1125 11:40:48.697373 4706 scope.go:117] "RemoveContainer" containerID="40923999beaa55882f0fd504956e18153412e5fcef0004bbb60e420b52bee565" Nov 25 11:40:48 crc kubenswrapper[4706]: E1125 11:40:48.697711 4706 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"40923999beaa55882f0fd504956e18153412e5fcef0004bbb60e420b52bee565\": container with ID starting with 40923999beaa55882f0fd504956e18153412e5fcef0004bbb60e420b52bee565 not found: ID does not exist" containerID="40923999beaa55882f0fd504956e18153412e5fcef0004bbb60e420b52bee565" Nov 25 
11:40:48 crc kubenswrapper[4706]: I1125 11:40:48.697748 4706 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"40923999beaa55882f0fd504956e18153412e5fcef0004bbb60e420b52bee565"} err="failed to get container status \"40923999beaa55882f0fd504956e18153412e5fcef0004bbb60e420b52bee565\": rpc error: code = NotFound desc = could not find container \"40923999beaa55882f0fd504956e18153412e5fcef0004bbb60e420b52bee565\": container with ID starting with 40923999beaa55882f0fd504956e18153412e5fcef0004bbb60e420b52bee565 not found: ID does not exist" Nov 25 11:40:48 crc kubenswrapper[4706]: I1125 11:40:48.906100 4706 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-tchjq"] Nov 25 11:40:48 crc kubenswrapper[4706]: I1125 11:40:48.910759 4706 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-tchjq"] Nov 25 11:40:49 crc kubenswrapper[4706]: I1125 11:40:49.931579 4706 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="53b77c12-5969-4020-b040-f53ab95adaf3" path="/var/lib/kubelet/pods/53b77c12-5969-4020-b040-f53ab95adaf3/volumes" Nov 25 11:40:49 crc kubenswrapper[4706]: I1125 11:40:49.932437 4706 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d8344c5-e0b9-46b7-8ae1-b82c36588bbb" path="/var/lib/kubelet/pods/9d8344c5-e0b9-46b7-8ae1-b82c36588bbb/volumes" Nov 25 11:40:57 crc kubenswrapper[4706]: I1125 11:40:57.422488 4706 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-authentication/oauth-openshift-558db77b4-ss2xd" podUID="239de662-d89b-4e6e-a970-56811041192f" containerName="oauth-openshift" containerID="cri-o://40945b717e08512d258602a1271a882fb8523358c4730c45304ef511f37b7dcb" gracePeriod=15 Nov 25 11:40:58 crc kubenswrapper[4706]: I1125 11:40:58.633150 4706 generic.go:334] "Generic (PLEG): container finished" podID="239de662-d89b-4e6e-a970-56811041192f" 
containerID="40945b717e08512d258602a1271a882fb8523358c4730c45304ef511f37b7dcb" exitCode=0 Nov 25 11:40:58 crc kubenswrapper[4706]: I1125 11:40:58.633245 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-ss2xd" event={"ID":"239de662-d89b-4e6e-a970-56811041192f","Type":"ContainerDied","Data":"40945b717e08512d258602a1271a882fb8523358c4730c45304ef511f37b7dcb"} Nov 25 11:40:58 crc kubenswrapper[4706]: I1125 11:40:58.671150 4706 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-ss2xd" Nov 25 11:40:58 crc kubenswrapper[4706]: I1125 11:40:58.715174 4706 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-7bfdc754df-fw48t"] Nov 25 11:40:58 crc kubenswrapper[4706]: E1125 11:40:58.715493 4706 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c15a3609-095e-4cd9-ac60-1333da5a7f45" containerName="extract-content" Nov 25 11:40:58 crc kubenswrapper[4706]: I1125 11:40:58.715514 4706 state_mem.go:107] "Deleted CPUSet assignment" podUID="c15a3609-095e-4cd9-ac60-1333da5a7f45" containerName="extract-content" Nov 25 11:40:58 crc kubenswrapper[4706]: E1125 11:40:58.715537 4706 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c15a3609-095e-4cd9-ac60-1333da5a7f45" containerName="registry-server" Nov 25 11:40:58 crc kubenswrapper[4706]: I1125 11:40:58.715548 4706 state_mem.go:107] "Deleted CPUSet assignment" podUID="c15a3609-095e-4cd9-ac60-1333da5a7f45" containerName="registry-server" Nov 25 11:40:58 crc kubenswrapper[4706]: E1125 11:40:58.715561 4706 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e6503703-bea5-49eb-84df-72a3fc483cfb" containerName="pruner" Nov 25 11:40:58 crc kubenswrapper[4706]: I1125 11:40:58.715571 4706 state_mem.go:107] "Deleted CPUSet assignment" podUID="e6503703-bea5-49eb-84df-72a3fc483cfb" containerName="pruner" Nov 25 11:40:58 
crc kubenswrapper[4706]: E1125 11:40:58.715592 4706 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="59c181cc-6505-4d92-ab04-eaaa72b4389c" containerName="extract-content" Nov 25 11:40:58 crc kubenswrapper[4706]: I1125 11:40:58.715602 4706 state_mem.go:107] "Deleted CPUSet assignment" podUID="59c181cc-6505-4d92-ab04-eaaa72b4389c" containerName="extract-content" Nov 25 11:40:58 crc kubenswrapper[4706]: E1125 11:40:58.715616 4706 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c15a3609-095e-4cd9-ac60-1333da5a7f45" containerName="extract-utilities" Nov 25 11:40:58 crc kubenswrapper[4706]: I1125 11:40:58.715626 4706 state_mem.go:107] "Deleted CPUSet assignment" podUID="c15a3609-095e-4cd9-ac60-1333da5a7f45" containerName="extract-utilities" Nov 25 11:40:58 crc kubenswrapper[4706]: E1125 11:40:58.715638 4706 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9d8344c5-e0b9-46b7-8ae1-b82c36588bbb" containerName="extract-content" Nov 25 11:40:58 crc kubenswrapper[4706]: I1125 11:40:58.715645 4706 state_mem.go:107] "Deleted CPUSet assignment" podUID="9d8344c5-e0b9-46b7-8ae1-b82c36588bbb" containerName="extract-content" Nov 25 11:40:58 crc kubenswrapper[4706]: E1125 11:40:58.715656 4706 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="53b77c12-5969-4020-b040-f53ab95adaf3" containerName="extract-content" Nov 25 11:40:58 crc kubenswrapper[4706]: I1125 11:40:58.715664 4706 state_mem.go:107] "Deleted CPUSet assignment" podUID="53b77c12-5969-4020-b040-f53ab95adaf3" containerName="extract-content" Nov 25 11:40:58 crc kubenswrapper[4706]: E1125 11:40:58.715676 4706 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="59c181cc-6505-4d92-ab04-eaaa72b4389c" containerName="extract-utilities" Nov 25 11:40:58 crc kubenswrapper[4706]: I1125 11:40:58.715684 4706 state_mem.go:107] "Deleted CPUSet assignment" podUID="59c181cc-6505-4d92-ab04-eaaa72b4389c" containerName="extract-utilities" Nov 25 11:40:58 crc 
kubenswrapper[4706]: E1125 11:40:58.715695 4706 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="59c181cc-6505-4d92-ab04-eaaa72b4389c" containerName="registry-server" Nov 25 11:40:58 crc kubenswrapper[4706]: I1125 11:40:58.715702 4706 state_mem.go:107] "Deleted CPUSet assignment" podUID="59c181cc-6505-4d92-ab04-eaaa72b4389c" containerName="registry-server" Nov 25 11:40:58 crc kubenswrapper[4706]: E1125 11:40:58.715714 4706 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9d8344c5-e0b9-46b7-8ae1-b82c36588bbb" containerName="extract-utilities" Nov 25 11:40:58 crc kubenswrapper[4706]: I1125 11:40:58.715721 4706 state_mem.go:107] "Deleted CPUSet assignment" podUID="9d8344c5-e0b9-46b7-8ae1-b82c36588bbb" containerName="extract-utilities" Nov 25 11:40:58 crc kubenswrapper[4706]: E1125 11:40:58.715732 4706 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="53b77c12-5969-4020-b040-f53ab95adaf3" containerName="registry-server" Nov 25 11:40:58 crc kubenswrapper[4706]: I1125 11:40:58.715740 4706 state_mem.go:107] "Deleted CPUSet assignment" podUID="53b77c12-5969-4020-b040-f53ab95adaf3" containerName="registry-server" Nov 25 11:40:58 crc kubenswrapper[4706]: E1125 11:40:58.715750 4706 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c134187c-5e1c-4da1-be12-e5273da1b5f3" containerName="pruner" Nov 25 11:40:58 crc kubenswrapper[4706]: I1125 11:40:58.715757 4706 state_mem.go:107] "Deleted CPUSet assignment" podUID="c134187c-5e1c-4da1-be12-e5273da1b5f3" containerName="pruner" Nov 25 11:40:58 crc kubenswrapper[4706]: E1125 11:40:58.715766 4706 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="239de662-d89b-4e6e-a970-56811041192f" containerName="oauth-openshift" Nov 25 11:40:58 crc kubenswrapper[4706]: I1125 11:40:58.715775 4706 state_mem.go:107] "Deleted CPUSet assignment" podUID="239de662-d89b-4e6e-a970-56811041192f" containerName="oauth-openshift" Nov 25 11:40:58 crc kubenswrapper[4706]: E1125 
11:40:58.715787 4706 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="53b77c12-5969-4020-b040-f53ab95adaf3" containerName="extract-utilities" Nov 25 11:40:58 crc kubenswrapper[4706]: I1125 11:40:58.715795 4706 state_mem.go:107] "Deleted CPUSet assignment" podUID="53b77c12-5969-4020-b040-f53ab95adaf3" containerName="extract-utilities" Nov 25 11:40:58 crc kubenswrapper[4706]: E1125 11:40:58.715806 4706 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9d8344c5-e0b9-46b7-8ae1-b82c36588bbb" containerName="registry-server" Nov 25 11:40:58 crc kubenswrapper[4706]: I1125 11:40:58.715813 4706 state_mem.go:107] "Deleted CPUSet assignment" podUID="9d8344c5-e0b9-46b7-8ae1-b82c36588bbb" containerName="registry-server" Nov 25 11:40:58 crc kubenswrapper[4706]: I1125 11:40:58.715930 4706 memory_manager.go:354] "RemoveStaleState removing state" podUID="e6503703-bea5-49eb-84df-72a3fc483cfb" containerName="pruner" Nov 25 11:40:58 crc kubenswrapper[4706]: I1125 11:40:58.715946 4706 memory_manager.go:354] "RemoveStaleState removing state" podUID="c134187c-5e1c-4da1-be12-e5273da1b5f3" containerName="pruner" Nov 25 11:40:58 crc kubenswrapper[4706]: I1125 11:40:58.715957 4706 memory_manager.go:354] "RemoveStaleState removing state" podUID="c15a3609-095e-4cd9-ac60-1333da5a7f45" containerName="registry-server" Nov 25 11:40:58 crc kubenswrapper[4706]: I1125 11:40:58.715970 4706 memory_manager.go:354] "RemoveStaleState removing state" podUID="9d8344c5-e0b9-46b7-8ae1-b82c36588bbb" containerName="registry-server" Nov 25 11:40:58 crc kubenswrapper[4706]: I1125 11:40:58.715981 4706 memory_manager.go:354] "RemoveStaleState removing state" podUID="59c181cc-6505-4d92-ab04-eaaa72b4389c" containerName="registry-server" Nov 25 11:40:58 crc kubenswrapper[4706]: I1125 11:40:58.715988 4706 memory_manager.go:354] "RemoveStaleState removing state" podUID="239de662-d89b-4e6e-a970-56811041192f" containerName="oauth-openshift" Nov 25 11:40:58 crc kubenswrapper[4706]: I1125 
11:40:58.715997 4706 memory_manager.go:354] "RemoveStaleState removing state" podUID="53b77c12-5969-4020-b040-f53ab95adaf3" containerName="registry-server" Nov 25 11:40:58 crc kubenswrapper[4706]: I1125 11:40:58.716538 4706 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-7bfdc754df-fw48t" Nov 25 11:40:58 crc kubenswrapper[4706]: I1125 11:40:58.731221 4706 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-7bfdc754df-fw48t"] Nov 25 11:40:58 crc kubenswrapper[4706]: I1125 11:40:58.830276 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/239de662-d89b-4e6e-a970-56811041192f-v4-0-config-user-idp-0-file-data\") pod \"239de662-d89b-4e6e-a970-56811041192f\" (UID: \"239de662-d89b-4e6e-a970-56811041192f\") " Nov 25 11:40:58 crc kubenswrapper[4706]: I1125 11:40:58.830385 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/239de662-d89b-4e6e-a970-56811041192f-v4-0-config-user-template-provider-selection\") pod \"239de662-d89b-4e6e-a970-56811041192f\" (UID: \"239de662-d89b-4e6e-a970-56811041192f\") " Nov 25 11:40:58 crc kubenswrapper[4706]: I1125 11:40:58.830417 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dcv28\" (UniqueName: \"kubernetes.io/projected/239de662-d89b-4e6e-a970-56811041192f-kube-api-access-dcv28\") pod \"239de662-d89b-4e6e-a970-56811041192f\" (UID: \"239de662-d89b-4e6e-a970-56811041192f\") " Nov 25 11:40:58 crc kubenswrapper[4706]: I1125 11:40:58.830435 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/239de662-d89b-4e6e-a970-56811041192f-v4-0-config-system-trusted-ca-bundle\") pod \"239de662-d89b-4e6e-a970-56811041192f\" (UID: \"239de662-d89b-4e6e-a970-56811041192f\") " Nov 25 11:40:58 crc kubenswrapper[4706]: I1125 11:40:58.830456 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/239de662-d89b-4e6e-a970-56811041192f-v4-0-config-system-router-certs\") pod \"239de662-d89b-4e6e-a970-56811041192f\" (UID: \"239de662-d89b-4e6e-a970-56811041192f\") " Nov 25 11:40:58 crc kubenswrapper[4706]: I1125 11:40:58.830479 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/239de662-d89b-4e6e-a970-56811041192f-v4-0-config-system-serving-cert\") pod \"239de662-d89b-4e6e-a970-56811041192f\" (UID: \"239de662-d89b-4e6e-a970-56811041192f\") " Nov 25 11:40:58 crc kubenswrapper[4706]: I1125 11:40:58.830494 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/239de662-d89b-4e6e-a970-56811041192f-audit-policies\") pod \"239de662-d89b-4e6e-a970-56811041192f\" (UID: \"239de662-d89b-4e6e-a970-56811041192f\") " Nov 25 11:40:58 crc kubenswrapper[4706]: I1125 11:40:58.830534 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/239de662-d89b-4e6e-a970-56811041192f-v4-0-config-user-template-login\") pod \"239de662-d89b-4e6e-a970-56811041192f\" (UID: \"239de662-d89b-4e6e-a970-56811041192f\") " Nov 25 11:40:58 crc kubenswrapper[4706]: I1125 11:40:58.830561 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/239de662-d89b-4e6e-a970-56811041192f-v4-0-config-user-template-error\") 
pod \"239de662-d89b-4e6e-a970-56811041192f\" (UID: \"239de662-d89b-4e6e-a970-56811041192f\") " Nov 25 11:40:58 crc kubenswrapper[4706]: I1125 11:40:58.830586 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/239de662-d89b-4e6e-a970-56811041192f-v4-0-config-system-service-ca\") pod \"239de662-d89b-4e6e-a970-56811041192f\" (UID: \"239de662-d89b-4e6e-a970-56811041192f\") " Nov 25 11:40:58 crc kubenswrapper[4706]: I1125 11:40:58.830618 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/239de662-d89b-4e6e-a970-56811041192f-v4-0-config-system-cliconfig\") pod \"239de662-d89b-4e6e-a970-56811041192f\" (UID: \"239de662-d89b-4e6e-a970-56811041192f\") " Nov 25 11:40:58 crc kubenswrapper[4706]: I1125 11:40:58.830641 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/239de662-d89b-4e6e-a970-56811041192f-v4-0-config-system-session\") pod \"239de662-d89b-4e6e-a970-56811041192f\" (UID: \"239de662-d89b-4e6e-a970-56811041192f\") " Nov 25 11:40:58 crc kubenswrapper[4706]: I1125 11:40:58.830671 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/239de662-d89b-4e6e-a970-56811041192f-v4-0-config-system-ocp-branding-template\") pod \"239de662-d89b-4e6e-a970-56811041192f\" (UID: \"239de662-d89b-4e6e-a970-56811041192f\") " Nov 25 11:40:58 crc kubenswrapper[4706]: I1125 11:40:58.830711 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/239de662-d89b-4e6e-a970-56811041192f-audit-dir\") pod \"239de662-d89b-4e6e-a970-56811041192f\" (UID: \"239de662-d89b-4e6e-a970-56811041192f\") " Nov 25 11:40:58 
crc kubenswrapper[4706]: I1125 11:40:58.830886 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a9574798-c4b8-4a78-ba9c-df9be9b4005b-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-7bfdc754df-fw48t\" (UID: \"a9574798-c4b8-4a78-ba9c-df9be9b4005b\") " pod="openshift-authentication/oauth-openshift-7bfdc754df-fw48t" Nov 25 11:40:58 crc kubenswrapper[4706]: I1125 11:40:58.830929 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/a9574798-c4b8-4a78-ba9c-df9be9b4005b-v4-0-config-system-cliconfig\") pod \"oauth-openshift-7bfdc754df-fw48t\" (UID: \"a9574798-c4b8-4a78-ba9c-df9be9b4005b\") " pod="openshift-authentication/oauth-openshift-7bfdc754df-fw48t" Nov 25 11:40:58 crc kubenswrapper[4706]: I1125 11:40:58.830953 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/a9574798-c4b8-4a78-ba9c-df9be9b4005b-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-7bfdc754df-fw48t\" (UID: \"a9574798-c4b8-4a78-ba9c-df9be9b4005b\") " pod="openshift-authentication/oauth-openshift-7bfdc754df-fw48t" Nov 25 11:40:58 crc kubenswrapper[4706]: I1125 11:40:58.830982 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/a9574798-c4b8-4a78-ba9c-df9be9b4005b-v4-0-config-system-service-ca\") pod \"oauth-openshift-7bfdc754df-fw48t\" (UID: \"a9574798-c4b8-4a78-ba9c-df9be9b4005b\") " pod="openshift-authentication/oauth-openshift-7bfdc754df-fw48t" Nov 25 11:40:58 crc kubenswrapper[4706]: I1125 11:40:58.830997 4706 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/a9574798-c4b8-4a78-ba9c-df9be9b4005b-v4-0-config-user-template-error\") pod \"oauth-openshift-7bfdc754df-fw48t\" (UID: \"a9574798-c4b8-4a78-ba9c-df9be9b4005b\") " pod="openshift-authentication/oauth-openshift-7bfdc754df-fw48t" Nov 25 11:40:58 crc kubenswrapper[4706]: I1125 11:40:58.831017 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/a9574798-c4b8-4a78-ba9c-df9be9b4005b-audit-dir\") pod \"oauth-openshift-7bfdc754df-fw48t\" (UID: \"a9574798-c4b8-4a78-ba9c-df9be9b4005b\") " pod="openshift-authentication/oauth-openshift-7bfdc754df-fw48t" Nov 25 11:40:58 crc kubenswrapper[4706]: I1125 11:40:58.831046 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/a9574798-c4b8-4a78-ba9c-df9be9b4005b-v4-0-config-system-router-certs\") pod \"oauth-openshift-7bfdc754df-fw48t\" (UID: \"a9574798-c4b8-4a78-ba9c-df9be9b4005b\") " pod="openshift-authentication/oauth-openshift-7bfdc754df-fw48t" Nov 25 11:40:58 crc kubenswrapper[4706]: I1125 11:40:58.831067 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/a9574798-c4b8-4a78-ba9c-df9be9b4005b-v4-0-config-system-serving-cert\") pod \"oauth-openshift-7bfdc754df-fw48t\" (UID: \"a9574798-c4b8-4a78-ba9c-df9be9b4005b\") " pod="openshift-authentication/oauth-openshift-7bfdc754df-fw48t" Nov 25 11:40:58 crc kubenswrapper[4706]: I1125 11:40:58.831090 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/a9574798-c4b8-4a78-ba9c-df9be9b4005b-audit-policies\") pod 
\"oauth-openshift-7bfdc754df-fw48t\" (UID: \"a9574798-c4b8-4a78-ba9c-df9be9b4005b\") " pod="openshift-authentication/oauth-openshift-7bfdc754df-fw48t" Nov 25 11:40:58 crc kubenswrapper[4706]: I1125 11:40:58.831106 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/a9574798-c4b8-4a78-ba9c-df9be9b4005b-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-7bfdc754df-fw48t\" (UID: \"a9574798-c4b8-4a78-ba9c-df9be9b4005b\") " pod="openshift-authentication/oauth-openshift-7bfdc754df-fw48t" Nov 25 11:40:58 crc kubenswrapper[4706]: I1125 11:40:58.831127 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/a9574798-c4b8-4a78-ba9c-df9be9b4005b-v4-0-config-system-session\") pod \"oauth-openshift-7bfdc754df-fw48t\" (UID: \"a9574798-c4b8-4a78-ba9c-df9be9b4005b\") " pod="openshift-authentication/oauth-openshift-7bfdc754df-fw48t" Nov 25 11:40:58 crc kubenswrapper[4706]: I1125 11:40:58.831149 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dwgrg\" (UniqueName: \"kubernetes.io/projected/a9574798-c4b8-4a78-ba9c-df9be9b4005b-kube-api-access-dwgrg\") pod \"oauth-openshift-7bfdc754df-fw48t\" (UID: \"a9574798-c4b8-4a78-ba9c-df9be9b4005b\") " pod="openshift-authentication/oauth-openshift-7bfdc754df-fw48t" Nov 25 11:40:58 crc kubenswrapper[4706]: I1125 11:40:58.831169 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/a9574798-c4b8-4a78-ba9c-df9be9b4005b-v4-0-config-user-template-login\") pod \"oauth-openshift-7bfdc754df-fw48t\" (UID: \"a9574798-c4b8-4a78-ba9c-df9be9b4005b\") " pod="openshift-authentication/oauth-openshift-7bfdc754df-fw48t" Nov 25 
11:40:58 crc kubenswrapper[4706]: I1125 11:40:58.831186 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/a9574798-c4b8-4a78-ba9c-df9be9b4005b-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-7bfdc754df-fw48t\" (UID: \"a9574798-c4b8-4a78-ba9c-df9be9b4005b\") " pod="openshift-authentication/oauth-openshift-7bfdc754df-fw48t" Nov 25 11:40:58 crc kubenswrapper[4706]: I1125 11:40:58.833063 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/239de662-d89b-4e6e-a970-56811041192f-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "239de662-d89b-4e6e-a970-56811041192f" (UID: "239de662-d89b-4e6e-a970-56811041192f"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 11:40:58 crc kubenswrapper[4706]: I1125 11:40:58.836126 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/239de662-d89b-4e6e-a970-56811041192f-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "239de662-d89b-4e6e-a970-56811041192f" (UID: "239de662-d89b-4e6e-a970-56811041192f"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 25 11:40:58 crc kubenswrapper[4706]: I1125 11:40:58.836703 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/239de662-d89b-4e6e-a970-56811041192f-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "239de662-d89b-4e6e-a970-56811041192f" (UID: "239de662-d89b-4e6e-a970-56811041192f"). InnerVolumeSpecName "v4-0-config-system-service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 11:40:58 crc kubenswrapper[4706]: I1125 11:40:58.837073 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/239de662-d89b-4e6e-a970-56811041192f-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "239de662-d89b-4e6e-a970-56811041192f" (UID: "239de662-d89b-4e6e-a970-56811041192f"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 11:40:58 crc kubenswrapper[4706]: I1125 11:40:58.837089 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/239de662-d89b-4e6e-a970-56811041192f-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "239de662-d89b-4e6e-a970-56811041192f" (UID: "239de662-d89b-4e6e-a970-56811041192f"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 11:40:58 crc kubenswrapper[4706]: I1125 11:40:58.837944 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/239de662-d89b-4e6e-a970-56811041192f-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "239de662-d89b-4e6e-a970-56811041192f" (UID: "239de662-d89b-4e6e-a970-56811041192f"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 11:40:58 crc kubenswrapper[4706]: I1125 11:40:58.838897 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/239de662-d89b-4e6e-a970-56811041192f-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "239de662-d89b-4e6e-a970-56811041192f" (UID: "239de662-d89b-4e6e-a970-56811041192f"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 11:40:58 crc kubenswrapper[4706]: I1125 11:40:58.839445 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/239de662-d89b-4e6e-a970-56811041192f-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "239de662-d89b-4e6e-a970-56811041192f" (UID: "239de662-d89b-4e6e-a970-56811041192f"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 11:40:58 crc kubenswrapper[4706]: I1125 11:40:58.840423 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/239de662-d89b-4e6e-a970-56811041192f-kube-api-access-dcv28" (OuterVolumeSpecName: "kube-api-access-dcv28") pod "239de662-d89b-4e6e-a970-56811041192f" (UID: "239de662-d89b-4e6e-a970-56811041192f"). InnerVolumeSpecName "kube-api-access-dcv28". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 11:40:58 crc kubenswrapper[4706]: I1125 11:40:58.850716 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/239de662-d89b-4e6e-a970-56811041192f-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "239de662-d89b-4e6e-a970-56811041192f" (UID: "239de662-d89b-4e6e-a970-56811041192f"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 11:40:58 crc kubenswrapper[4706]: I1125 11:40:58.851517 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/239de662-d89b-4e6e-a970-56811041192f-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "239de662-d89b-4e6e-a970-56811041192f" (UID: "239de662-d89b-4e6e-a970-56811041192f"). InnerVolumeSpecName "v4-0-config-system-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 11:40:58 crc kubenswrapper[4706]: I1125 11:40:58.852179 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/239de662-d89b-4e6e-a970-56811041192f-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "239de662-d89b-4e6e-a970-56811041192f" (UID: "239de662-d89b-4e6e-a970-56811041192f"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 11:40:58 crc kubenswrapper[4706]: I1125 11:40:58.853405 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/239de662-d89b-4e6e-a970-56811041192f-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "239de662-d89b-4e6e-a970-56811041192f" (UID: "239de662-d89b-4e6e-a970-56811041192f"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 11:40:58 crc kubenswrapper[4706]: I1125 11:40:58.859240 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/239de662-d89b-4e6e-a970-56811041192f-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "239de662-d89b-4e6e-a970-56811041192f" (UID: "239de662-d89b-4e6e-a970-56811041192f"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 11:40:58 crc kubenswrapper[4706]: I1125 11:40:58.932371 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/a9574798-c4b8-4a78-ba9c-df9be9b4005b-v4-0-config-system-cliconfig\") pod \"oauth-openshift-7bfdc754df-fw48t\" (UID: \"a9574798-c4b8-4a78-ba9c-df9be9b4005b\") " pod="openshift-authentication/oauth-openshift-7bfdc754df-fw48t" Nov 25 11:40:58 crc kubenswrapper[4706]: I1125 11:40:58.932434 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/a9574798-c4b8-4a78-ba9c-df9be9b4005b-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-7bfdc754df-fw48t\" (UID: \"a9574798-c4b8-4a78-ba9c-df9be9b4005b\") " pod="openshift-authentication/oauth-openshift-7bfdc754df-fw48t" Nov 25 11:40:58 crc kubenswrapper[4706]: I1125 11:40:58.932461 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/a9574798-c4b8-4a78-ba9c-df9be9b4005b-v4-0-config-system-service-ca\") pod \"oauth-openshift-7bfdc754df-fw48t\" (UID: \"a9574798-c4b8-4a78-ba9c-df9be9b4005b\") " pod="openshift-authentication/oauth-openshift-7bfdc754df-fw48t" Nov 25 11:40:58 crc kubenswrapper[4706]: I1125 11:40:58.932478 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/a9574798-c4b8-4a78-ba9c-df9be9b4005b-v4-0-config-user-template-error\") pod \"oauth-openshift-7bfdc754df-fw48t\" (UID: \"a9574798-c4b8-4a78-ba9c-df9be9b4005b\") " pod="openshift-authentication/oauth-openshift-7bfdc754df-fw48t" Nov 25 11:40:58 crc kubenswrapper[4706]: I1125 11:40:58.932504 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"audit-dir\" (UniqueName: \"kubernetes.io/host-path/a9574798-c4b8-4a78-ba9c-df9be9b4005b-audit-dir\") pod \"oauth-openshift-7bfdc754df-fw48t\" (UID: \"a9574798-c4b8-4a78-ba9c-df9be9b4005b\") " pod="openshift-authentication/oauth-openshift-7bfdc754df-fw48t" Nov 25 11:40:58 crc kubenswrapper[4706]: I1125 11:40:58.932524 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/a9574798-c4b8-4a78-ba9c-df9be9b4005b-v4-0-config-system-router-certs\") pod \"oauth-openshift-7bfdc754df-fw48t\" (UID: \"a9574798-c4b8-4a78-ba9c-df9be9b4005b\") " pod="openshift-authentication/oauth-openshift-7bfdc754df-fw48t" Nov 25 11:40:58 crc kubenswrapper[4706]: I1125 11:40:58.932545 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/a9574798-c4b8-4a78-ba9c-df9be9b4005b-v4-0-config-system-serving-cert\") pod \"oauth-openshift-7bfdc754df-fw48t\" (UID: \"a9574798-c4b8-4a78-ba9c-df9be9b4005b\") " pod="openshift-authentication/oauth-openshift-7bfdc754df-fw48t" Nov 25 11:40:58 crc kubenswrapper[4706]: I1125 11:40:58.932575 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/a9574798-c4b8-4a78-ba9c-df9be9b4005b-audit-policies\") pod \"oauth-openshift-7bfdc754df-fw48t\" (UID: \"a9574798-c4b8-4a78-ba9c-df9be9b4005b\") " pod="openshift-authentication/oauth-openshift-7bfdc754df-fw48t" Nov 25 11:40:58 crc kubenswrapper[4706]: I1125 11:40:58.932606 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/a9574798-c4b8-4a78-ba9c-df9be9b4005b-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-7bfdc754df-fw48t\" (UID: \"a9574798-c4b8-4a78-ba9c-df9be9b4005b\") " 
pod="openshift-authentication/oauth-openshift-7bfdc754df-fw48t" Nov 25 11:40:58 crc kubenswrapper[4706]: I1125 11:40:58.932641 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/a9574798-c4b8-4a78-ba9c-df9be9b4005b-v4-0-config-system-session\") pod \"oauth-openshift-7bfdc754df-fw48t\" (UID: \"a9574798-c4b8-4a78-ba9c-df9be9b4005b\") " pod="openshift-authentication/oauth-openshift-7bfdc754df-fw48t" Nov 25 11:40:58 crc kubenswrapper[4706]: I1125 11:40:58.932663 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dwgrg\" (UniqueName: \"kubernetes.io/projected/a9574798-c4b8-4a78-ba9c-df9be9b4005b-kube-api-access-dwgrg\") pod \"oauth-openshift-7bfdc754df-fw48t\" (UID: \"a9574798-c4b8-4a78-ba9c-df9be9b4005b\") " pod="openshift-authentication/oauth-openshift-7bfdc754df-fw48t" Nov 25 11:40:58 crc kubenswrapper[4706]: I1125 11:40:58.932687 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/a9574798-c4b8-4a78-ba9c-df9be9b4005b-v4-0-config-user-template-login\") pod \"oauth-openshift-7bfdc754df-fw48t\" (UID: \"a9574798-c4b8-4a78-ba9c-df9be9b4005b\") " pod="openshift-authentication/oauth-openshift-7bfdc754df-fw48t" Nov 25 11:40:58 crc kubenswrapper[4706]: I1125 11:40:58.932705 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/a9574798-c4b8-4a78-ba9c-df9be9b4005b-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-7bfdc754df-fw48t\" (UID: \"a9574798-c4b8-4a78-ba9c-df9be9b4005b\") " pod="openshift-authentication/oauth-openshift-7bfdc754df-fw48t" Nov 25 11:40:58 crc kubenswrapper[4706]: I1125 11:40:58.932734 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" 
(UniqueName: \"kubernetes.io/configmap/a9574798-c4b8-4a78-ba9c-df9be9b4005b-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-7bfdc754df-fw48t\" (UID: \"a9574798-c4b8-4a78-ba9c-df9be9b4005b\") " pod="openshift-authentication/oauth-openshift-7bfdc754df-fw48t" Nov 25 11:40:58 crc kubenswrapper[4706]: I1125 11:40:58.932783 4706 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/239de662-d89b-4e6e-a970-56811041192f-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Nov 25 11:40:58 crc kubenswrapper[4706]: I1125 11:40:58.932794 4706 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/239de662-d89b-4e6e-a970-56811041192f-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Nov 25 11:40:58 crc kubenswrapper[4706]: I1125 11:40:58.932804 4706 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/239de662-d89b-4e6e-a970-56811041192f-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Nov 25 11:40:58 crc kubenswrapper[4706]: I1125 11:40:58.932814 4706 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/239de662-d89b-4e6e-a970-56811041192f-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Nov 25 11:40:58 crc kubenswrapper[4706]: I1125 11:40:58.932823 4706 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/239de662-d89b-4e6e-a970-56811041192f-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Nov 25 11:40:58 crc kubenswrapper[4706]: I1125 11:40:58.932833 4706 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: 
\"kubernetes.io/secret/239de662-d89b-4e6e-a970-56811041192f-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Nov 25 11:40:58 crc kubenswrapper[4706]: I1125 11:40:58.932845 4706 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/239de662-d89b-4e6e-a970-56811041192f-audit-dir\") on node \"crc\" DevicePath \"\"" Nov 25 11:40:58 crc kubenswrapper[4706]: I1125 11:40:58.932854 4706 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/239de662-d89b-4e6e-a970-56811041192f-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Nov 25 11:40:58 crc kubenswrapper[4706]: I1125 11:40:58.932865 4706 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/239de662-d89b-4e6e-a970-56811041192f-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Nov 25 11:40:58 crc kubenswrapper[4706]: I1125 11:40:58.932874 4706 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dcv28\" (UniqueName: \"kubernetes.io/projected/239de662-d89b-4e6e-a970-56811041192f-kube-api-access-dcv28\") on node \"crc\" DevicePath \"\"" Nov 25 11:40:58 crc kubenswrapper[4706]: I1125 11:40:58.932883 4706 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/239de662-d89b-4e6e-a970-56811041192f-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 11:40:58 crc kubenswrapper[4706]: I1125 11:40:58.932891 4706 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/239de662-d89b-4e6e-a970-56811041192f-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Nov 25 11:40:58 crc kubenswrapper[4706]: I1125 11:40:58.932900 4706 
reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/239de662-d89b-4e6e-a970-56811041192f-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 25 11:40:58 crc kubenswrapper[4706]: I1125 11:40:58.932908 4706 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/239de662-d89b-4e6e-a970-56811041192f-audit-policies\") on node \"crc\" DevicePath \"\"" Nov 25 11:40:58 crc kubenswrapper[4706]: I1125 11:40:58.933653 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/a9574798-c4b8-4a78-ba9c-df9be9b4005b-v4-0-config-system-cliconfig\") pod \"oauth-openshift-7bfdc754df-fw48t\" (UID: \"a9574798-c4b8-4a78-ba9c-df9be9b4005b\") " pod="openshift-authentication/oauth-openshift-7bfdc754df-fw48t" Nov 25 11:40:58 crc kubenswrapper[4706]: I1125 11:40:58.933959 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a9574798-c4b8-4a78-ba9c-df9be9b4005b-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-7bfdc754df-fw48t\" (UID: \"a9574798-c4b8-4a78-ba9c-df9be9b4005b\") " pod="openshift-authentication/oauth-openshift-7bfdc754df-fw48t" Nov 25 11:40:58 crc kubenswrapper[4706]: I1125 11:40:58.934338 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/a9574798-c4b8-4a78-ba9c-df9be9b4005b-audit-dir\") pod \"oauth-openshift-7bfdc754df-fw48t\" (UID: \"a9574798-c4b8-4a78-ba9c-df9be9b4005b\") " pod="openshift-authentication/oauth-openshift-7bfdc754df-fw48t" Nov 25 11:40:58 crc kubenswrapper[4706]: I1125 11:40:58.934589 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: 
\"kubernetes.io/configmap/a9574798-c4b8-4a78-ba9c-df9be9b4005b-audit-policies\") pod \"oauth-openshift-7bfdc754df-fw48t\" (UID: \"a9574798-c4b8-4a78-ba9c-df9be9b4005b\") " pod="openshift-authentication/oauth-openshift-7bfdc754df-fw48t" Nov 25 11:40:58 crc kubenswrapper[4706]: I1125 11:40:58.935215 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/a9574798-c4b8-4a78-ba9c-df9be9b4005b-v4-0-config-system-service-ca\") pod \"oauth-openshift-7bfdc754df-fw48t\" (UID: \"a9574798-c4b8-4a78-ba9c-df9be9b4005b\") " pod="openshift-authentication/oauth-openshift-7bfdc754df-fw48t" Nov 25 11:40:58 crc kubenswrapper[4706]: I1125 11:40:58.943993 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/a9574798-c4b8-4a78-ba9c-df9be9b4005b-v4-0-config-user-template-error\") pod \"oauth-openshift-7bfdc754df-fw48t\" (UID: \"a9574798-c4b8-4a78-ba9c-df9be9b4005b\") " pod="openshift-authentication/oauth-openshift-7bfdc754df-fw48t" Nov 25 11:40:58 crc kubenswrapper[4706]: I1125 11:40:58.944044 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/a9574798-c4b8-4a78-ba9c-df9be9b4005b-v4-0-config-system-serving-cert\") pod \"oauth-openshift-7bfdc754df-fw48t\" (UID: \"a9574798-c4b8-4a78-ba9c-df9be9b4005b\") " pod="openshift-authentication/oauth-openshift-7bfdc754df-fw48t" Nov 25 11:40:58 crc kubenswrapper[4706]: I1125 11:40:58.946913 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/a9574798-c4b8-4a78-ba9c-df9be9b4005b-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-7bfdc754df-fw48t\" (UID: \"a9574798-c4b8-4a78-ba9c-df9be9b4005b\") " pod="openshift-authentication/oauth-openshift-7bfdc754df-fw48t" Nov 25 
11:40:58 crc kubenswrapper[4706]: I1125 11:40:58.948977 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/a9574798-c4b8-4a78-ba9c-df9be9b4005b-v4-0-config-user-template-login\") pod \"oauth-openshift-7bfdc754df-fw48t\" (UID: \"a9574798-c4b8-4a78-ba9c-df9be9b4005b\") " pod="openshift-authentication/oauth-openshift-7bfdc754df-fw48t" Nov 25 11:40:58 crc kubenswrapper[4706]: I1125 11:40:58.949117 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/a9574798-c4b8-4a78-ba9c-df9be9b4005b-v4-0-config-system-router-certs\") pod \"oauth-openshift-7bfdc754df-fw48t\" (UID: \"a9574798-c4b8-4a78-ba9c-df9be9b4005b\") " pod="openshift-authentication/oauth-openshift-7bfdc754df-fw48t" Nov 25 11:40:58 crc kubenswrapper[4706]: I1125 11:40:58.949427 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/a9574798-c4b8-4a78-ba9c-df9be9b4005b-v4-0-config-system-session\") pod \"oauth-openshift-7bfdc754df-fw48t\" (UID: \"a9574798-c4b8-4a78-ba9c-df9be9b4005b\") " pod="openshift-authentication/oauth-openshift-7bfdc754df-fw48t" Nov 25 11:40:58 crc kubenswrapper[4706]: I1125 11:40:58.951813 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/a9574798-c4b8-4a78-ba9c-df9be9b4005b-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-7bfdc754df-fw48t\" (UID: \"a9574798-c4b8-4a78-ba9c-df9be9b4005b\") " pod="openshift-authentication/oauth-openshift-7bfdc754df-fw48t" Nov 25 11:40:58 crc kubenswrapper[4706]: I1125 11:40:58.954624 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: 
\"kubernetes.io/secret/a9574798-c4b8-4a78-ba9c-df9be9b4005b-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-7bfdc754df-fw48t\" (UID: \"a9574798-c4b8-4a78-ba9c-df9be9b4005b\") " pod="openshift-authentication/oauth-openshift-7bfdc754df-fw48t" Nov 25 11:40:58 crc kubenswrapper[4706]: I1125 11:40:58.956845 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dwgrg\" (UniqueName: \"kubernetes.io/projected/a9574798-c4b8-4a78-ba9c-df9be9b4005b-kube-api-access-dwgrg\") pod \"oauth-openshift-7bfdc754df-fw48t\" (UID: \"a9574798-c4b8-4a78-ba9c-df9be9b4005b\") " pod="openshift-authentication/oauth-openshift-7bfdc754df-fw48t" Nov 25 11:40:59 crc kubenswrapper[4706]: I1125 11:40:59.036271 4706 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-7bfdc754df-fw48t" Nov 25 11:40:59 crc kubenswrapper[4706]: I1125 11:40:59.309877 4706 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-7bfdc754df-fw48t"] Nov 25 11:40:59 crc kubenswrapper[4706]: I1125 11:40:59.643871 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-ss2xd" event={"ID":"239de662-d89b-4e6e-a970-56811041192f","Type":"ContainerDied","Data":"daae90bb32680c0749960f3221bae7ee27ccf0dfdb8f8980f85c5620d83c1d00"} Nov 25 11:40:59 crc kubenswrapper[4706]: I1125 11:40:59.643898 4706 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-ss2xd" Nov 25 11:40:59 crc kubenswrapper[4706]: I1125 11:40:59.644476 4706 scope.go:117] "RemoveContainer" containerID="40945b717e08512d258602a1271a882fb8523358c4730c45304ef511f37b7dcb" Nov 25 11:40:59 crc kubenswrapper[4706]: I1125 11:40:59.653914 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-7bfdc754df-fw48t" event={"ID":"a9574798-c4b8-4a78-ba9c-df9be9b4005b","Type":"ContainerStarted","Data":"ac3a69daa9381ceb5fe830df4f50fa9852e7ee96440c39649dd2e826af9158ba"} Nov 25 11:40:59 crc kubenswrapper[4706]: I1125 11:40:59.653999 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-7bfdc754df-fw48t" event={"ID":"a9574798-c4b8-4a78-ba9c-df9be9b4005b","Type":"ContainerStarted","Data":"7d197259fefab63af0cd6c9ceaca7c3f0585a4889591d0c4155527b7d890f89e"} Nov 25 11:40:59 crc kubenswrapper[4706]: I1125 11:40:59.654440 4706 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-7bfdc754df-fw48t" Nov 25 11:40:59 crc kubenswrapper[4706]: I1125 11:40:59.657249 4706 patch_prober.go:28] interesting pod/oauth-openshift-7bfdc754df-fw48t container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.54:6443/healthz\": dial tcp 10.217.0.54:6443: connect: connection refused" start-of-body= Nov 25 11:40:59 crc kubenswrapper[4706]: I1125 11:40:59.657373 4706 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-7bfdc754df-fw48t" podUID="a9574798-c4b8-4a78-ba9c-df9be9b4005b" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.54:6443/healthz\": dial tcp 10.217.0.54:6443: connect: connection refused" Nov 25 11:40:59 crc kubenswrapper[4706]: I1125 11:40:59.710599 4706 pod_startup_latency_tracker.go:104] "Observed pod startup 
duration" pod="openshift-authentication/oauth-openshift-7bfdc754df-fw48t" podStartSLOduration=27.710575767 podStartE2EDuration="27.710575767s" podCreationTimestamp="2025-11-25 11:40:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 11:40:59.706812961 +0000 UTC m=+268.621370342" watchObservedRunningTime="2025-11-25 11:40:59.710575767 +0000 UTC m=+268.625133148" Nov 25 11:40:59 crc kubenswrapper[4706]: I1125 11:40:59.723362 4706 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-ss2xd"] Nov 25 11:40:59 crc kubenswrapper[4706]: I1125 11:40:59.727966 4706 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-ss2xd"] Nov 25 11:40:59 crc kubenswrapper[4706]: I1125 11:40:59.930998 4706 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="239de662-d89b-4e6e-a970-56811041192f" path="/var/lib/kubelet/pods/239de662-d89b-4e6e-a970-56811041192f/volumes" Nov 25 11:41:00 crc kubenswrapper[4706]: I1125 11:41:00.667782 4706 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-7bfdc754df-fw48t" Nov 25 11:41:19 crc kubenswrapper[4706]: I1125 11:41:19.840315 4706 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-h8tj2"] Nov 25 11:41:19 crc kubenswrapper[4706]: I1125 11:41:19.841377 4706 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-h8tj2" podUID="e636fb64-6a73-4a3d-84d3-d933046a68e0" containerName="registry-server" containerID="cri-o://0c2ca8bb53141a7272695b9963d4aea3ea3329aa2f7b6ab873904a25d0211997" gracePeriod=30 Nov 25 11:41:19 crc kubenswrapper[4706]: I1125 11:41:19.856995 4706 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-mlg4m"] Nov 
25 11:41:19 crc kubenswrapper[4706]: I1125 11:41:19.857410 4706 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-mlg4m" podUID="efdf993e-c4c2-4eff-877d-03df2af9d43c" containerName="registry-server" containerID="cri-o://7da57e8e131a4bc2ca553fae2ec9034b55706ee63e4b9975717ee1758a3beca1" gracePeriod=30 Nov 25 11:41:19 crc kubenswrapper[4706]: I1125 11:41:19.873445 4706 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-zn9dk"] Nov 25 11:41:19 crc kubenswrapper[4706]: I1125 11:41:19.873768 4706 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/marketplace-operator-79b997595-zn9dk" podUID="bd8d3bba-bf4e-4bda-94ff-ce2902b3299a" containerName="marketplace-operator" containerID="cri-o://e1d472d4907ff5bc21dee43ddf20267a8593cd34b3567fa36c0d083869575729" gracePeriod=30 Nov 25 11:41:19 crc kubenswrapper[4706]: I1125 11:41:19.887066 4706 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-jx6l5"] Nov 25 11:41:19 crc kubenswrapper[4706]: I1125 11:41:19.887540 4706 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-jx6l5" podUID="9ba1f6b2-ea89-4d9b-aad8-b18eaba9ed05" containerName="registry-server" containerID="cri-o://3d0d6b37b1f6286c17cbde7d73aedbae98a877212a8a9f7323b0cb51be3f88df" gracePeriod=30 Nov 25 11:41:19 crc kubenswrapper[4706]: I1125 11:41:19.890282 4706 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-qb6fx"] Nov 25 11:41:19 crc kubenswrapper[4706]: I1125 11:41:19.894507 4706 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-vnd8s"] Nov 25 11:41:19 crc kubenswrapper[4706]: I1125 11:41:19.895218 4706 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-marketplace/redhat-operators-qb6fx" podUID="815eca00-0648-4421-8b14-0eb14056161b" containerName="registry-server" containerID="cri-o://ffbc72cbf8c7c250bb4c30e3ede421c474934e8e926882dc57dc32473807d031" gracePeriod=30 Nov 25 11:41:19 crc kubenswrapper[4706]: I1125 11:41:19.895526 4706 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-vnd8s" Nov 25 11:41:19 crc kubenswrapper[4706]: I1125 11:41:19.904402 4706 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-vnd8s"] Nov 25 11:41:19 crc kubenswrapper[4706]: I1125 11:41:19.938655 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/57792378-6c0b-415c-aeb2-4cbb2c3c1702-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-vnd8s\" (UID: \"57792378-6c0b-415c-aeb2-4cbb2c3c1702\") " pod="openshift-marketplace/marketplace-operator-79b997595-vnd8s" Nov 25 11:41:19 crc kubenswrapper[4706]: I1125 11:41:19.938737 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4p297\" (UniqueName: \"kubernetes.io/projected/57792378-6c0b-415c-aeb2-4cbb2c3c1702-kube-api-access-4p297\") pod \"marketplace-operator-79b997595-vnd8s\" (UID: \"57792378-6c0b-415c-aeb2-4cbb2c3c1702\") " pod="openshift-marketplace/marketplace-operator-79b997595-vnd8s" Nov 25 11:41:19 crc kubenswrapper[4706]: I1125 11:41:19.938783 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/57792378-6c0b-415c-aeb2-4cbb2c3c1702-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-vnd8s\" (UID: \"57792378-6c0b-415c-aeb2-4cbb2c3c1702\") " pod="openshift-marketplace/marketplace-operator-79b997595-vnd8s" Nov 25 11:41:20 
crc kubenswrapper[4706]: I1125 11:41:20.039561 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/57792378-6c0b-415c-aeb2-4cbb2c3c1702-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-vnd8s\" (UID: \"57792378-6c0b-415c-aeb2-4cbb2c3c1702\") " pod="openshift-marketplace/marketplace-operator-79b997595-vnd8s" Nov 25 11:41:20 crc kubenswrapper[4706]: I1125 11:41:20.039637 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4p297\" (UniqueName: \"kubernetes.io/projected/57792378-6c0b-415c-aeb2-4cbb2c3c1702-kube-api-access-4p297\") pod \"marketplace-operator-79b997595-vnd8s\" (UID: \"57792378-6c0b-415c-aeb2-4cbb2c3c1702\") " pod="openshift-marketplace/marketplace-operator-79b997595-vnd8s" Nov 25 11:41:20 crc kubenswrapper[4706]: I1125 11:41:20.039675 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/57792378-6c0b-415c-aeb2-4cbb2c3c1702-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-vnd8s\" (UID: \"57792378-6c0b-415c-aeb2-4cbb2c3c1702\") " pod="openshift-marketplace/marketplace-operator-79b997595-vnd8s" Nov 25 11:41:20 crc kubenswrapper[4706]: I1125 11:41:20.041719 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/57792378-6c0b-415c-aeb2-4cbb2c3c1702-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-vnd8s\" (UID: \"57792378-6c0b-415c-aeb2-4cbb2c3c1702\") " pod="openshift-marketplace/marketplace-operator-79b997595-vnd8s" Nov 25 11:41:20 crc kubenswrapper[4706]: I1125 11:41:20.048955 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/57792378-6c0b-415c-aeb2-4cbb2c3c1702-marketplace-operator-metrics\") pod 
\"marketplace-operator-79b997595-vnd8s\" (UID: \"57792378-6c0b-415c-aeb2-4cbb2c3c1702\") " pod="openshift-marketplace/marketplace-operator-79b997595-vnd8s" Nov 25 11:41:20 crc kubenswrapper[4706]: I1125 11:41:20.059611 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4p297\" (UniqueName: \"kubernetes.io/projected/57792378-6c0b-415c-aeb2-4cbb2c3c1702-kube-api-access-4p297\") pod \"marketplace-operator-79b997595-vnd8s\" (UID: \"57792378-6c0b-415c-aeb2-4cbb2c3c1702\") " pod="openshift-marketplace/marketplace-operator-79b997595-vnd8s" Nov 25 11:41:20 crc kubenswrapper[4706]: I1125 11:41:20.227972 4706 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-vnd8s" Nov 25 11:41:20 crc kubenswrapper[4706]: I1125 11:41:20.449852 4706 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-vnd8s"] Nov 25 11:41:20 crc kubenswrapper[4706]: W1125 11:41:20.457239 4706 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod57792378_6c0b_415c_aeb2_4cbb2c3c1702.slice/crio-5ae037392ee89149d4c135732dcb12bf993eda917c88fe777eae74687dff0bd9 WatchSource:0}: Error finding container 5ae037392ee89149d4c135732dcb12bf993eda917c88fe777eae74687dff0bd9: Status 404 returned error can't find the container with id 5ae037392ee89149d4c135732dcb12bf993eda917c88fe777eae74687dff0bd9 Nov 25 11:41:20 crc kubenswrapper[4706]: I1125 11:41:20.699973 4706 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-h8tj2" Nov 25 11:41:20 crc kubenswrapper[4706]: I1125 11:41:20.758138 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v9bzh\" (UniqueName: \"kubernetes.io/projected/e636fb64-6a73-4a3d-84d3-d933046a68e0-kube-api-access-v9bzh\") pod \"e636fb64-6a73-4a3d-84d3-d933046a68e0\" (UID: \"e636fb64-6a73-4a3d-84d3-d933046a68e0\") " Nov 25 11:41:20 crc kubenswrapper[4706]: I1125 11:41:20.758436 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e636fb64-6a73-4a3d-84d3-d933046a68e0-utilities\") pod \"e636fb64-6a73-4a3d-84d3-d933046a68e0\" (UID: \"e636fb64-6a73-4a3d-84d3-d933046a68e0\") " Nov 25 11:41:20 crc kubenswrapper[4706]: I1125 11:41:20.758501 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e636fb64-6a73-4a3d-84d3-d933046a68e0-catalog-content\") pod \"e636fb64-6a73-4a3d-84d3-d933046a68e0\" (UID: \"e636fb64-6a73-4a3d-84d3-d933046a68e0\") " Nov 25 11:41:20 crc kubenswrapper[4706]: I1125 11:41:20.759937 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e636fb64-6a73-4a3d-84d3-d933046a68e0-utilities" (OuterVolumeSpecName: "utilities") pod "e636fb64-6a73-4a3d-84d3-d933046a68e0" (UID: "e636fb64-6a73-4a3d-84d3-d933046a68e0"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 11:41:20 crc kubenswrapper[4706]: I1125 11:41:20.763765 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e636fb64-6a73-4a3d-84d3-d933046a68e0-kube-api-access-v9bzh" (OuterVolumeSpecName: "kube-api-access-v9bzh") pod "e636fb64-6a73-4a3d-84d3-d933046a68e0" (UID: "e636fb64-6a73-4a3d-84d3-d933046a68e0"). InnerVolumeSpecName "kube-api-access-v9bzh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 11:41:20 crc kubenswrapper[4706]: I1125 11:41:20.779033 4706 generic.go:334] "Generic (PLEG): container finished" podID="bd8d3bba-bf4e-4bda-94ff-ce2902b3299a" containerID="e1d472d4907ff5bc21dee43ddf20267a8593cd34b3567fa36c0d083869575729" exitCode=0 Nov 25 11:41:20 crc kubenswrapper[4706]: I1125 11:41:20.779226 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-zn9dk" event={"ID":"bd8d3bba-bf4e-4bda-94ff-ce2902b3299a","Type":"ContainerDied","Data":"e1d472d4907ff5bc21dee43ddf20267a8593cd34b3567fa36c0d083869575729"} Nov 25 11:41:20 crc kubenswrapper[4706]: I1125 11:41:20.783898 4706 generic.go:334] "Generic (PLEG): container finished" podID="efdf993e-c4c2-4eff-877d-03df2af9d43c" containerID="7da57e8e131a4bc2ca553fae2ec9034b55706ee63e4b9975717ee1758a3beca1" exitCode=0 Nov 25 11:41:20 crc kubenswrapper[4706]: I1125 11:41:20.784081 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mlg4m" event={"ID":"efdf993e-c4c2-4eff-877d-03df2af9d43c","Type":"ContainerDied","Data":"7da57e8e131a4bc2ca553fae2ec9034b55706ee63e4b9975717ee1758a3beca1"} Nov 25 11:41:20 crc kubenswrapper[4706]: I1125 11:41:20.786443 4706 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-zn9dk" Nov 25 11:41:20 crc kubenswrapper[4706]: I1125 11:41:20.789526 4706 generic.go:334] "Generic (PLEG): container finished" podID="e636fb64-6a73-4a3d-84d3-d933046a68e0" containerID="0c2ca8bb53141a7272695b9963d4aea3ea3329aa2f7b6ab873904a25d0211997" exitCode=0 Nov 25 11:41:20 crc kubenswrapper[4706]: I1125 11:41:20.789583 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-h8tj2" event={"ID":"e636fb64-6a73-4a3d-84d3-d933046a68e0","Type":"ContainerDied","Data":"0c2ca8bb53141a7272695b9963d4aea3ea3329aa2f7b6ab873904a25d0211997"} Nov 25 11:41:20 crc kubenswrapper[4706]: I1125 11:41:20.790343 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-h8tj2" event={"ID":"e636fb64-6a73-4a3d-84d3-d933046a68e0","Type":"ContainerDied","Data":"2550fdcb1b25857124bf5bc2b13b18a76b7679e44616244a0ed5c1d3a1aefdf1"} Nov 25 11:41:20 crc kubenswrapper[4706]: I1125 11:41:20.790386 4706 scope.go:117] "RemoveContainer" containerID="0c2ca8bb53141a7272695b9963d4aea3ea3329aa2f7b6ab873904a25d0211997" Nov 25 11:41:20 crc kubenswrapper[4706]: I1125 11:41:20.790040 4706 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-h8tj2" Nov 25 11:41:20 crc kubenswrapper[4706]: I1125 11:41:20.798599 4706 generic.go:334] "Generic (PLEG): container finished" podID="815eca00-0648-4421-8b14-0eb14056161b" containerID="ffbc72cbf8c7c250bb4c30e3ede421c474934e8e926882dc57dc32473807d031" exitCode=0 Nov 25 11:41:20 crc kubenswrapper[4706]: I1125 11:41:20.798713 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qb6fx" event={"ID":"815eca00-0648-4421-8b14-0eb14056161b","Type":"ContainerDied","Data":"ffbc72cbf8c7c250bb4c30e3ede421c474934e8e926882dc57dc32473807d031"} Nov 25 11:41:20 crc kubenswrapper[4706]: I1125 11:41:20.798802 4706 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-mlg4m" Nov 25 11:41:20 crc kubenswrapper[4706]: I1125 11:41:20.800735 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-vnd8s" event={"ID":"57792378-6c0b-415c-aeb2-4cbb2c3c1702","Type":"ContainerStarted","Data":"cb4ce7fa14a007a7feaabd8d6235c2af8c200dea0c6747455b285424282951ca"} Nov 25 11:41:20 crc kubenswrapper[4706]: I1125 11:41:20.800766 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-vnd8s" event={"ID":"57792378-6c0b-415c-aeb2-4cbb2c3c1702","Type":"ContainerStarted","Data":"5ae037392ee89149d4c135732dcb12bf993eda917c88fe777eae74687dff0bd9"} Nov 25 11:41:20 crc kubenswrapper[4706]: I1125 11:41:20.802327 4706 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-vnd8s" Nov 25 11:41:20 crc kubenswrapper[4706]: I1125 11:41:20.806525 4706 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-jx6l5" Nov 25 11:41:20 crc kubenswrapper[4706]: I1125 11:41:20.811421 4706 generic.go:334] "Generic (PLEG): container finished" podID="9ba1f6b2-ea89-4d9b-aad8-b18eaba9ed05" containerID="3d0d6b37b1f6286c17cbde7d73aedbae98a877212a8a9f7323b0cb51be3f88df" exitCode=0 Nov 25 11:41:20 crc kubenswrapper[4706]: I1125 11:41:20.811471 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jx6l5" event={"ID":"9ba1f6b2-ea89-4d9b-aad8-b18eaba9ed05","Type":"ContainerDied","Data":"3d0d6b37b1f6286c17cbde7d73aedbae98a877212a8a9f7323b0cb51be3f88df"} Nov 25 11:41:20 crc kubenswrapper[4706]: I1125 11:41:20.813246 4706 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-vnd8s container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.55:8080/healthz\": dial tcp 10.217.0.55:8080: connect: connection refused" start-of-body= Nov 25 11:41:20 crc kubenswrapper[4706]: I1125 11:41:20.813293 4706 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-vnd8s" podUID="57792378-6c0b-415c-aeb2-4cbb2c3c1702" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.55:8080/healthz\": dial tcp 10.217.0.55:8080: connect: connection refused" Nov 25 11:41:20 crc kubenswrapper[4706]: I1125 11:41:20.834466 4706 scope.go:117] "RemoveContainer" containerID="69ff74230ad41cff40ec5b7cf0e47f2b7a058935276609882c7535bfbd09f273" Nov 25 11:41:20 crc kubenswrapper[4706]: I1125 11:41:20.856624 4706 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-vnd8s" podStartSLOduration=1.8566039810000001 podStartE2EDuration="1.856603981s" podCreationTimestamp="2025-11-25 11:41:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 
00:00:00 +0000 UTC" observedRunningTime="2025-11-25 11:41:20.846576679 +0000 UTC m=+289.761134070" watchObservedRunningTime="2025-11-25 11:41:20.856603981 +0000 UTC m=+289.771161362" Nov 25 11:41:20 crc kubenswrapper[4706]: I1125 11:41:20.862387 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9ba1f6b2-ea89-4d9b-aad8-b18eaba9ed05-utilities\") pod \"9ba1f6b2-ea89-4d9b-aad8-b18eaba9ed05\" (UID: \"9ba1f6b2-ea89-4d9b-aad8-b18eaba9ed05\") " Nov 25 11:41:20 crc kubenswrapper[4706]: I1125 11:41:20.862616 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/bd8d3bba-bf4e-4bda-94ff-ce2902b3299a-marketplace-operator-metrics\") pod \"bd8d3bba-bf4e-4bda-94ff-ce2902b3299a\" (UID: \"bd8d3bba-bf4e-4bda-94ff-ce2902b3299a\") " Nov 25 11:41:20 crc kubenswrapper[4706]: I1125 11:41:20.863378 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bd8d3bba-bf4e-4bda-94ff-ce2902b3299a-marketplace-trusted-ca\") pod \"bd8d3bba-bf4e-4bda-94ff-ce2902b3299a\" (UID: \"bd8d3bba-bf4e-4bda-94ff-ce2902b3299a\") " Nov 25 11:41:20 crc kubenswrapper[4706]: I1125 11:41:20.863422 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kcwc6\" (UniqueName: \"kubernetes.io/projected/bd8d3bba-bf4e-4bda-94ff-ce2902b3299a-kube-api-access-kcwc6\") pod \"bd8d3bba-bf4e-4bda-94ff-ce2902b3299a\" (UID: \"bd8d3bba-bf4e-4bda-94ff-ce2902b3299a\") " Nov 25 11:41:20 crc kubenswrapper[4706]: I1125 11:41:20.864220 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9ba1f6b2-ea89-4d9b-aad8-b18eaba9ed05-catalog-content\") pod \"9ba1f6b2-ea89-4d9b-aad8-b18eaba9ed05\" (UID: \"9ba1f6b2-ea89-4d9b-aad8-b18eaba9ed05\") " 
Nov 25 11:41:20 crc kubenswrapper[4706]: I1125 11:41:20.864257 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/efdf993e-c4c2-4eff-877d-03df2af9d43c-utilities\") pod \"efdf993e-c4c2-4eff-877d-03df2af9d43c\" (UID: \"efdf993e-c4c2-4eff-877d-03df2af9d43c\") " Nov 25 11:41:20 crc kubenswrapper[4706]: I1125 11:41:20.864542 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vr9tf\" (UniqueName: \"kubernetes.io/projected/9ba1f6b2-ea89-4d9b-aad8-b18eaba9ed05-kube-api-access-vr9tf\") pod \"9ba1f6b2-ea89-4d9b-aad8-b18eaba9ed05\" (UID: \"9ba1f6b2-ea89-4d9b-aad8-b18eaba9ed05\") " Nov 25 11:41:20 crc kubenswrapper[4706]: I1125 11:41:20.864787 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f8g24\" (UniqueName: \"kubernetes.io/projected/efdf993e-c4c2-4eff-877d-03df2af9d43c-kube-api-access-f8g24\") pod \"efdf993e-c4c2-4eff-877d-03df2af9d43c\" (UID: \"efdf993e-c4c2-4eff-877d-03df2af9d43c\") " Nov 25 11:41:20 crc kubenswrapper[4706]: I1125 11:41:20.865025 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/efdf993e-c4c2-4eff-877d-03df2af9d43c-catalog-content\") pod \"efdf993e-c4c2-4eff-877d-03df2af9d43c\" (UID: \"efdf993e-c4c2-4eff-877d-03df2af9d43c\") " Nov 25 11:41:20 crc kubenswrapper[4706]: I1125 11:41:20.865060 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e636fb64-6a73-4a3d-84d3-d933046a68e0-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e636fb64-6a73-4a3d-84d3-d933046a68e0" (UID: "e636fb64-6a73-4a3d-84d3-d933046a68e0"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 11:41:20 crc kubenswrapper[4706]: I1125 11:41:20.865511 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bd8d3bba-bf4e-4bda-94ff-ce2902b3299a-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "bd8d3bba-bf4e-4bda-94ff-ce2902b3299a" (UID: "bd8d3bba-bf4e-4bda-94ff-ce2902b3299a"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 11:41:20 crc kubenswrapper[4706]: I1125 11:41:20.866557 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/efdf993e-c4c2-4eff-877d-03df2af9d43c-utilities" (OuterVolumeSpecName: "utilities") pod "efdf993e-c4c2-4eff-877d-03df2af9d43c" (UID: "efdf993e-c4c2-4eff-877d-03df2af9d43c"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 11:41:20 crc kubenswrapper[4706]: I1125 11:41:20.866629 4706 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e636fb64-6a73-4a3d-84d3-d933046a68e0-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 25 11:41:20 crc kubenswrapper[4706]: I1125 11:41:20.867609 4706 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bd8d3bba-bf4e-4bda-94ff-ce2902b3299a-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Nov 25 11:41:20 crc kubenswrapper[4706]: I1125 11:41:20.867638 4706 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v9bzh\" (UniqueName: \"kubernetes.io/projected/e636fb64-6a73-4a3d-84d3-d933046a68e0-kube-api-access-v9bzh\") on node \"crc\" DevicePath \"\"" Nov 25 11:41:20 crc kubenswrapper[4706]: I1125 11:41:20.868385 4706 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/e636fb64-6a73-4a3d-84d3-d933046a68e0-utilities\") on node \"crc\" DevicePath \"\"" Nov 25 11:41:20 crc kubenswrapper[4706]: I1125 11:41:20.869070 4706 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/efdf993e-c4c2-4eff-877d-03df2af9d43c-utilities\") on node \"crc\" DevicePath \"\"" Nov 25 11:41:20 crc kubenswrapper[4706]: I1125 11:41:20.868019 4706 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-qb6fx" Nov 25 11:41:20 crc kubenswrapper[4706]: I1125 11:41:20.867635 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9ba1f6b2-ea89-4d9b-aad8-b18eaba9ed05-utilities" (OuterVolumeSpecName: "utilities") pod "9ba1f6b2-ea89-4d9b-aad8-b18eaba9ed05" (UID: "9ba1f6b2-ea89-4d9b-aad8-b18eaba9ed05"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 11:41:20 crc kubenswrapper[4706]: I1125 11:41:20.871415 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bd8d3bba-bf4e-4bda-94ff-ce2902b3299a-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "bd8d3bba-bf4e-4bda-94ff-ce2902b3299a" (UID: "bd8d3bba-bf4e-4bda-94ff-ce2902b3299a"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 11:41:20 crc kubenswrapper[4706]: I1125 11:41:20.874209 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9ba1f6b2-ea89-4d9b-aad8-b18eaba9ed05-kube-api-access-vr9tf" (OuterVolumeSpecName: "kube-api-access-vr9tf") pod "9ba1f6b2-ea89-4d9b-aad8-b18eaba9ed05" (UID: "9ba1f6b2-ea89-4d9b-aad8-b18eaba9ed05"). InnerVolumeSpecName "kube-api-access-vr9tf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 11:41:20 crc kubenswrapper[4706]: I1125 11:41:20.877520 4706 scope.go:117] "RemoveContainer" containerID="081d1be1ebca978535c05824cf0d9f66230b878a5df3d54b53d44c7756beec9d" Nov 25 11:41:20 crc kubenswrapper[4706]: I1125 11:41:20.878162 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd8d3bba-bf4e-4bda-94ff-ce2902b3299a-kube-api-access-kcwc6" (OuterVolumeSpecName: "kube-api-access-kcwc6") pod "bd8d3bba-bf4e-4bda-94ff-ce2902b3299a" (UID: "bd8d3bba-bf4e-4bda-94ff-ce2902b3299a"). InnerVolumeSpecName "kube-api-access-kcwc6". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 11:41:20 crc kubenswrapper[4706]: I1125 11:41:20.882287 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/efdf993e-c4c2-4eff-877d-03df2af9d43c-kube-api-access-f8g24" (OuterVolumeSpecName: "kube-api-access-f8g24") pod "efdf993e-c4c2-4eff-877d-03df2af9d43c" (UID: "efdf993e-c4c2-4eff-877d-03df2af9d43c"). InnerVolumeSpecName "kube-api-access-f8g24". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 11:41:20 crc kubenswrapper[4706]: I1125 11:41:20.906343 4706 scope.go:117] "RemoveContainer" containerID="0c2ca8bb53141a7272695b9963d4aea3ea3329aa2f7b6ab873904a25d0211997" Nov 25 11:41:20 crc kubenswrapper[4706]: E1125 11:41:20.907135 4706 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0c2ca8bb53141a7272695b9963d4aea3ea3329aa2f7b6ab873904a25d0211997\": container with ID starting with 0c2ca8bb53141a7272695b9963d4aea3ea3329aa2f7b6ab873904a25d0211997 not found: ID does not exist" containerID="0c2ca8bb53141a7272695b9963d4aea3ea3329aa2f7b6ab873904a25d0211997" Nov 25 11:41:20 crc kubenswrapper[4706]: I1125 11:41:20.907170 4706 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0c2ca8bb53141a7272695b9963d4aea3ea3329aa2f7b6ab873904a25d0211997"} err="failed to get container status \"0c2ca8bb53141a7272695b9963d4aea3ea3329aa2f7b6ab873904a25d0211997\": rpc error: code = NotFound desc = could not find container \"0c2ca8bb53141a7272695b9963d4aea3ea3329aa2f7b6ab873904a25d0211997\": container with ID starting with 0c2ca8bb53141a7272695b9963d4aea3ea3329aa2f7b6ab873904a25d0211997 not found: ID does not exist" Nov 25 11:41:20 crc kubenswrapper[4706]: I1125 11:41:20.907209 4706 scope.go:117] "RemoveContainer" containerID="69ff74230ad41cff40ec5b7cf0e47f2b7a058935276609882c7535bfbd09f273" Nov 25 11:41:20 crc kubenswrapper[4706]: E1125 11:41:20.911088 4706 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"69ff74230ad41cff40ec5b7cf0e47f2b7a058935276609882c7535bfbd09f273\": container with ID starting with 69ff74230ad41cff40ec5b7cf0e47f2b7a058935276609882c7535bfbd09f273 not found: ID does not exist" containerID="69ff74230ad41cff40ec5b7cf0e47f2b7a058935276609882c7535bfbd09f273" Nov 25 11:41:20 crc kubenswrapper[4706]: I1125 11:41:20.911132 
4706 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"69ff74230ad41cff40ec5b7cf0e47f2b7a058935276609882c7535bfbd09f273"} err="failed to get container status \"69ff74230ad41cff40ec5b7cf0e47f2b7a058935276609882c7535bfbd09f273\": rpc error: code = NotFound desc = could not find container \"69ff74230ad41cff40ec5b7cf0e47f2b7a058935276609882c7535bfbd09f273\": container with ID starting with 69ff74230ad41cff40ec5b7cf0e47f2b7a058935276609882c7535bfbd09f273 not found: ID does not exist" Nov 25 11:41:20 crc kubenswrapper[4706]: I1125 11:41:20.911165 4706 scope.go:117] "RemoveContainer" containerID="081d1be1ebca978535c05824cf0d9f66230b878a5df3d54b53d44c7756beec9d" Nov 25 11:41:20 crc kubenswrapper[4706]: E1125 11:41:20.911555 4706 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"081d1be1ebca978535c05824cf0d9f66230b878a5df3d54b53d44c7756beec9d\": container with ID starting with 081d1be1ebca978535c05824cf0d9f66230b878a5df3d54b53d44c7756beec9d not found: ID does not exist" containerID="081d1be1ebca978535c05824cf0d9f66230b878a5df3d54b53d44c7756beec9d" Nov 25 11:41:20 crc kubenswrapper[4706]: I1125 11:41:20.911585 4706 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"081d1be1ebca978535c05824cf0d9f66230b878a5df3d54b53d44c7756beec9d"} err="failed to get container status \"081d1be1ebca978535c05824cf0d9f66230b878a5df3d54b53d44c7756beec9d\": rpc error: code = NotFound desc = could not find container \"081d1be1ebca978535c05824cf0d9f66230b878a5df3d54b53d44c7756beec9d\": container with ID starting with 081d1be1ebca978535c05824cf0d9f66230b878a5df3d54b53d44c7756beec9d not found: ID does not exist" Nov 25 11:41:20 crc kubenswrapper[4706]: I1125 11:41:20.911601 4706 scope.go:117] "RemoveContainer" containerID="3d0d6b37b1f6286c17cbde7d73aedbae98a877212a8a9f7323b0cb51be3f88df" Nov 25 11:41:20 crc kubenswrapper[4706]: I1125 
11:41:20.927234 4706 scope.go:117] "RemoveContainer" containerID="583951c291d09b4ed406d6dd4dfe30774f57214b98725ada6bf72913d2194118" Nov 25 11:41:20 crc kubenswrapper[4706]: I1125 11:41:20.931183 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9ba1f6b2-ea89-4d9b-aad8-b18eaba9ed05-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "9ba1f6b2-ea89-4d9b-aad8-b18eaba9ed05" (UID: "9ba1f6b2-ea89-4d9b-aad8-b18eaba9ed05"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 11:41:20 crc kubenswrapper[4706]: I1125 11:41:20.941614 4706 scope.go:117] "RemoveContainer" containerID="49f3f8273b9ea886cbb6338982b4b332704503478980beb0dadbd6a23517f7d5" Nov 25 11:41:20 crc kubenswrapper[4706]: I1125 11:41:20.970076 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/815eca00-0648-4421-8b14-0eb14056161b-catalog-content\") pod \"815eca00-0648-4421-8b14-0eb14056161b\" (UID: \"815eca00-0648-4421-8b14-0eb14056161b\") " Nov 25 11:41:20 crc kubenswrapper[4706]: I1125 11:41:20.970152 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/815eca00-0648-4421-8b14-0eb14056161b-utilities\") pod \"815eca00-0648-4421-8b14-0eb14056161b\" (UID: \"815eca00-0648-4421-8b14-0eb14056161b\") " Nov 25 11:41:20 crc kubenswrapper[4706]: I1125 11:41:20.970222 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tlkv9\" (UniqueName: \"kubernetes.io/projected/815eca00-0648-4421-8b14-0eb14056161b-kube-api-access-tlkv9\") pod \"815eca00-0648-4421-8b14-0eb14056161b\" (UID: \"815eca00-0648-4421-8b14-0eb14056161b\") " Nov 25 11:41:20 crc kubenswrapper[4706]: I1125 11:41:20.970445 4706 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/9ba1f6b2-ea89-4d9b-aad8-b18eaba9ed05-utilities\") on node \"crc\" DevicePath \"\"" Nov 25 11:41:20 crc kubenswrapper[4706]: I1125 11:41:20.970456 4706 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/bd8d3bba-bf4e-4bda-94ff-ce2902b3299a-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Nov 25 11:41:20 crc kubenswrapper[4706]: I1125 11:41:20.970466 4706 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kcwc6\" (UniqueName: \"kubernetes.io/projected/bd8d3bba-bf4e-4bda-94ff-ce2902b3299a-kube-api-access-kcwc6\") on node \"crc\" DevicePath \"\"" Nov 25 11:41:20 crc kubenswrapper[4706]: I1125 11:41:20.970475 4706 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9ba1f6b2-ea89-4d9b-aad8-b18eaba9ed05-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 25 11:41:20 crc kubenswrapper[4706]: I1125 11:41:20.970489 4706 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vr9tf\" (UniqueName: \"kubernetes.io/projected/9ba1f6b2-ea89-4d9b-aad8-b18eaba9ed05-kube-api-access-vr9tf\") on node \"crc\" DevicePath \"\"" Nov 25 11:41:20 crc kubenswrapper[4706]: I1125 11:41:20.970498 4706 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f8g24\" (UniqueName: \"kubernetes.io/projected/efdf993e-c4c2-4eff-877d-03df2af9d43c-kube-api-access-f8g24\") on node \"crc\" DevicePath \"\"" Nov 25 11:41:20 crc kubenswrapper[4706]: I1125 11:41:20.971093 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/efdf993e-c4c2-4eff-877d-03df2af9d43c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "efdf993e-c4c2-4eff-877d-03df2af9d43c" (UID: "efdf993e-c4c2-4eff-877d-03df2af9d43c"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 11:41:20 crc kubenswrapper[4706]: I1125 11:41:20.971919 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/815eca00-0648-4421-8b14-0eb14056161b-utilities" (OuterVolumeSpecName: "utilities") pod "815eca00-0648-4421-8b14-0eb14056161b" (UID: "815eca00-0648-4421-8b14-0eb14056161b"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 11:41:20 crc kubenswrapper[4706]: I1125 11:41:20.973489 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/815eca00-0648-4421-8b14-0eb14056161b-kube-api-access-tlkv9" (OuterVolumeSpecName: "kube-api-access-tlkv9") pod "815eca00-0648-4421-8b14-0eb14056161b" (UID: "815eca00-0648-4421-8b14-0eb14056161b"). InnerVolumeSpecName "kube-api-access-tlkv9". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 11:41:21 crc kubenswrapper[4706]: I1125 11:41:21.073672 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/815eca00-0648-4421-8b14-0eb14056161b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "815eca00-0648-4421-8b14-0eb14056161b" (UID: "815eca00-0648-4421-8b14-0eb14056161b"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 11:41:21 crc kubenswrapper[4706]: I1125 11:41:21.074524 4706 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/815eca00-0648-4421-8b14-0eb14056161b-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 25 11:41:21 crc kubenswrapper[4706]: I1125 11:41:21.074557 4706 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/815eca00-0648-4421-8b14-0eb14056161b-utilities\") on node \"crc\" DevicePath \"\"" Nov 25 11:41:21 crc kubenswrapper[4706]: I1125 11:41:21.074570 4706 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/efdf993e-c4c2-4eff-877d-03df2af9d43c-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 25 11:41:21 crc kubenswrapper[4706]: I1125 11:41:21.074581 4706 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tlkv9\" (UniqueName: \"kubernetes.io/projected/815eca00-0648-4421-8b14-0eb14056161b-kube-api-access-tlkv9\") on node \"crc\" DevicePath \"\"" Nov 25 11:41:21 crc kubenswrapper[4706]: I1125 11:41:21.118795 4706 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-h8tj2"] Nov 25 11:41:21 crc kubenswrapper[4706]: I1125 11:41:21.123281 4706 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-h8tj2"] Nov 25 11:41:21 crc kubenswrapper[4706]: I1125 11:41:21.818331 4706 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-zn9dk" Nov 25 11:41:21 crc kubenswrapper[4706]: I1125 11:41:21.818483 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-zn9dk" event={"ID":"bd8d3bba-bf4e-4bda-94ff-ce2902b3299a","Type":"ContainerDied","Data":"ef21dbd530cf63f03ebee62da4115986447472a8cc4fabe1d9dfadb6f291a233"} Nov 25 11:41:21 crc kubenswrapper[4706]: I1125 11:41:21.818609 4706 scope.go:117] "RemoveContainer" containerID="e1d472d4907ff5bc21dee43ddf20267a8593cd34b3567fa36c0d083869575729" Nov 25 11:41:21 crc kubenswrapper[4706]: I1125 11:41:21.823267 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mlg4m" event={"ID":"efdf993e-c4c2-4eff-877d-03df2af9d43c","Type":"ContainerDied","Data":"f4aead7d5ef1bc8752fc92d2b7a2326b4b4fb1ad6fb45c05a7b16fc68e243458"} Nov 25 11:41:21 crc kubenswrapper[4706]: I1125 11:41:21.823331 4706 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-mlg4m" Nov 25 11:41:21 crc kubenswrapper[4706]: I1125 11:41:21.830082 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qb6fx" event={"ID":"815eca00-0648-4421-8b14-0eb14056161b","Type":"ContainerDied","Data":"dc650eef70a93c07c8f236139bd933242eb8caafd96b2386b573034b2d6894a3"} Nov 25 11:41:21 crc kubenswrapper[4706]: I1125 11:41:21.830261 4706 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-qb6fx" Nov 25 11:41:21 crc kubenswrapper[4706]: I1125 11:41:21.832589 4706 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-jx6l5" Nov 25 11:41:21 crc kubenswrapper[4706]: I1125 11:41:21.833055 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jx6l5" event={"ID":"9ba1f6b2-ea89-4d9b-aad8-b18eaba9ed05","Type":"ContainerDied","Data":"788bb0d564fd9bb151565b994bdae9610d6004ee5bf7cf0923037ccb47a32c8d"} Nov 25 11:41:21 crc kubenswrapper[4706]: I1125 11:41:21.836752 4706 scope.go:117] "RemoveContainer" containerID="7da57e8e131a4bc2ca553fae2ec9034b55706ee63e4b9975717ee1758a3beca1" Nov 25 11:41:21 crc kubenswrapper[4706]: I1125 11:41:21.839818 4706 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-vnd8s" Nov 25 11:41:21 crc kubenswrapper[4706]: I1125 11:41:21.863405 4706 scope.go:117] "RemoveContainer" containerID="467370b7fa0c392998f8fa597d67bc6089ee8572b45eac38169fd17a8eb6f01a" Nov 25 11:41:21 crc kubenswrapper[4706]: I1125 11:41:21.863968 4706 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-k7lhm"] Nov 25 11:41:21 crc kubenswrapper[4706]: E1125 11:41:21.864251 4706 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="815eca00-0648-4421-8b14-0eb14056161b" containerName="registry-server" Nov 25 11:41:21 crc kubenswrapper[4706]: I1125 11:41:21.864265 4706 state_mem.go:107] "Deleted CPUSet assignment" podUID="815eca00-0648-4421-8b14-0eb14056161b" containerName="registry-server" Nov 25 11:41:21 crc kubenswrapper[4706]: E1125 11:41:21.864281 4706 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9ba1f6b2-ea89-4d9b-aad8-b18eaba9ed05" containerName="extract-utilities" Nov 25 11:41:21 crc kubenswrapper[4706]: I1125 11:41:21.864287 4706 state_mem.go:107] "Deleted CPUSet assignment" podUID="9ba1f6b2-ea89-4d9b-aad8-b18eaba9ed05" containerName="extract-utilities" Nov 25 11:41:21 crc kubenswrapper[4706]: E1125 11:41:21.864295 4706 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="efdf993e-c4c2-4eff-877d-03df2af9d43c" containerName="extract-content" Nov 25 11:41:21 crc kubenswrapper[4706]: I1125 11:41:21.864323 4706 state_mem.go:107] "Deleted CPUSet assignment" podUID="efdf993e-c4c2-4eff-877d-03df2af9d43c" containerName="extract-content" Nov 25 11:41:21 crc kubenswrapper[4706]: E1125 11:41:21.864332 4706 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e636fb64-6a73-4a3d-84d3-d933046a68e0" containerName="registry-server" Nov 25 11:41:21 crc kubenswrapper[4706]: I1125 11:41:21.864339 4706 state_mem.go:107] "Deleted CPUSet assignment" podUID="e636fb64-6a73-4a3d-84d3-d933046a68e0" containerName="registry-server" Nov 25 11:41:21 crc kubenswrapper[4706]: E1125 11:41:21.864353 4706 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9ba1f6b2-ea89-4d9b-aad8-b18eaba9ed05" containerName="extract-content" Nov 25 11:41:21 crc kubenswrapper[4706]: I1125 11:41:21.864358 4706 state_mem.go:107] "Deleted CPUSet assignment" podUID="9ba1f6b2-ea89-4d9b-aad8-b18eaba9ed05" containerName="extract-content" Nov 25 11:41:21 crc kubenswrapper[4706]: E1125 11:41:21.864368 4706 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e636fb64-6a73-4a3d-84d3-d933046a68e0" containerName="extract-content" Nov 25 11:41:21 crc kubenswrapper[4706]: I1125 11:41:21.864375 4706 state_mem.go:107] "Deleted CPUSet assignment" podUID="e636fb64-6a73-4a3d-84d3-d933046a68e0" containerName="extract-content" Nov 25 11:41:21 crc kubenswrapper[4706]: E1125 11:41:21.864397 4706 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9ba1f6b2-ea89-4d9b-aad8-b18eaba9ed05" containerName="registry-server" Nov 25 11:41:21 crc kubenswrapper[4706]: I1125 11:41:21.864403 4706 state_mem.go:107] "Deleted CPUSet assignment" podUID="9ba1f6b2-ea89-4d9b-aad8-b18eaba9ed05" containerName="registry-server" Nov 25 11:41:21 crc kubenswrapper[4706]: E1125 11:41:21.864410 4706 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="efdf993e-c4c2-4eff-877d-03df2af9d43c" containerName="extract-utilities" Nov 25 11:41:21 crc kubenswrapper[4706]: I1125 11:41:21.864416 4706 state_mem.go:107] "Deleted CPUSet assignment" podUID="efdf993e-c4c2-4eff-877d-03df2af9d43c" containerName="extract-utilities" Nov 25 11:41:21 crc kubenswrapper[4706]: E1125 11:41:21.864427 4706 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="815eca00-0648-4421-8b14-0eb14056161b" containerName="extract-utilities" Nov 25 11:41:21 crc kubenswrapper[4706]: I1125 11:41:21.864434 4706 state_mem.go:107] "Deleted CPUSet assignment" podUID="815eca00-0648-4421-8b14-0eb14056161b" containerName="extract-utilities" Nov 25 11:41:21 crc kubenswrapper[4706]: E1125 11:41:21.864444 4706 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="efdf993e-c4c2-4eff-877d-03df2af9d43c" containerName="registry-server" Nov 25 11:41:21 crc kubenswrapper[4706]: I1125 11:41:21.864450 4706 state_mem.go:107] "Deleted CPUSet assignment" podUID="efdf993e-c4c2-4eff-877d-03df2af9d43c" containerName="registry-server" Nov 25 11:41:21 crc kubenswrapper[4706]: E1125 11:41:21.864472 4706 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bd8d3bba-bf4e-4bda-94ff-ce2902b3299a" containerName="marketplace-operator" Nov 25 11:41:21 crc kubenswrapper[4706]: I1125 11:41:21.864478 4706 state_mem.go:107] "Deleted CPUSet assignment" podUID="bd8d3bba-bf4e-4bda-94ff-ce2902b3299a" containerName="marketplace-operator" Nov 25 11:41:21 crc kubenswrapper[4706]: E1125 11:41:21.864486 4706 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="815eca00-0648-4421-8b14-0eb14056161b" containerName="extract-content" Nov 25 11:41:21 crc kubenswrapper[4706]: I1125 11:41:21.864492 4706 state_mem.go:107] "Deleted CPUSet assignment" podUID="815eca00-0648-4421-8b14-0eb14056161b" containerName="extract-content" Nov 25 11:41:21 crc kubenswrapper[4706]: E1125 11:41:21.864502 
4706 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e636fb64-6a73-4a3d-84d3-d933046a68e0" containerName="extract-utilities" Nov 25 11:41:21 crc kubenswrapper[4706]: I1125 11:41:21.864508 4706 state_mem.go:107] "Deleted CPUSet assignment" podUID="e636fb64-6a73-4a3d-84d3-d933046a68e0" containerName="extract-utilities" Nov 25 11:41:21 crc kubenswrapper[4706]: I1125 11:41:21.864606 4706 memory_manager.go:354] "RemoveStaleState removing state" podUID="9ba1f6b2-ea89-4d9b-aad8-b18eaba9ed05" containerName="registry-server" Nov 25 11:41:21 crc kubenswrapper[4706]: I1125 11:41:21.864629 4706 memory_manager.go:354] "RemoveStaleState removing state" podUID="bd8d3bba-bf4e-4bda-94ff-ce2902b3299a" containerName="marketplace-operator" Nov 25 11:41:21 crc kubenswrapper[4706]: I1125 11:41:21.864635 4706 memory_manager.go:354] "RemoveStaleState removing state" podUID="815eca00-0648-4421-8b14-0eb14056161b" containerName="registry-server" Nov 25 11:41:21 crc kubenswrapper[4706]: I1125 11:41:21.864646 4706 memory_manager.go:354] "RemoveStaleState removing state" podUID="efdf993e-c4c2-4eff-877d-03df2af9d43c" containerName="registry-server" Nov 25 11:41:21 crc kubenswrapper[4706]: I1125 11:41:21.864656 4706 memory_manager.go:354] "RemoveStaleState removing state" podUID="e636fb64-6a73-4a3d-84d3-d933046a68e0" containerName="registry-server" Nov 25 11:41:21 crc kubenswrapper[4706]: I1125 11:41:21.866514 4706 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-k7lhm" Nov 25 11:41:21 crc kubenswrapper[4706]: I1125 11:41:21.870526 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Nov 25 11:41:21 crc kubenswrapper[4706]: I1125 11:41:21.879375 4706 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-k7lhm"] Nov 25 11:41:21 crc kubenswrapper[4706]: I1125 11:41:21.907032 4706 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-zn9dk"] Nov 25 11:41:21 crc kubenswrapper[4706]: I1125 11:41:21.909104 4706 scope.go:117] "RemoveContainer" containerID="9679f13319a663db6791ff433b25a3757b4c7799b8f52b1f54e03e0e8a6fcf1b" Nov 25 11:41:21 crc kubenswrapper[4706]: I1125 11:41:21.913046 4706 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-zn9dk"] Nov 25 11:41:21 crc kubenswrapper[4706]: I1125 11:41:21.933105 4706 scope.go:117] "RemoveContainer" containerID="ffbc72cbf8c7c250bb4c30e3ede421c474934e8e926882dc57dc32473807d031" Nov 25 11:41:21 crc kubenswrapper[4706]: I1125 11:41:21.933890 4706 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd8d3bba-bf4e-4bda-94ff-ce2902b3299a" path="/var/lib/kubelet/pods/bd8d3bba-bf4e-4bda-94ff-ce2902b3299a/volumes" Nov 25 11:41:21 crc kubenswrapper[4706]: I1125 11:41:21.934418 4706 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e636fb64-6a73-4a3d-84d3-d933046a68e0" path="/var/lib/kubelet/pods/e636fb64-6a73-4a3d-84d3-d933046a68e0/volumes" Nov 25 11:41:21 crc kubenswrapper[4706]: I1125 11:41:21.934962 4706 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-qb6fx"] Nov 25 11:41:21 crc kubenswrapper[4706]: I1125 11:41:21.942230 4706 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openshift-marketplace/redhat-operators-qb6fx"] Nov 25 11:41:21 crc kubenswrapper[4706]: I1125 11:41:21.950715 4706 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-jx6l5"] Nov 25 11:41:21 crc kubenswrapper[4706]: I1125 11:41:21.953369 4706 scope.go:117] "RemoveContainer" containerID="1f7eefe90709b30a55c2e963a42ec856b229ed16c653ca620f60f9b556822691" Nov 25 11:41:21 crc kubenswrapper[4706]: I1125 11:41:21.953720 4706 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-jx6l5"] Nov 25 11:41:21 crc kubenswrapper[4706]: I1125 11:41:21.961509 4706 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-mlg4m"] Nov 25 11:41:21 crc kubenswrapper[4706]: I1125 11:41:21.966145 4706 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-mlg4m"] Nov 25 11:41:21 crc kubenswrapper[4706]: I1125 11:41:21.982546 4706 scope.go:117] "RemoveContainer" containerID="f01485fcf492d85ef54a1f990172ab6d37d9e221169b0bb4bc8faada3c9544e1" Nov 25 11:41:21 crc kubenswrapper[4706]: I1125 11:41:21.986372 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f25c7d8b-b341-4fb2-bef0-e43d83905a9b-catalog-content\") pod \"certified-operators-k7lhm\" (UID: \"f25c7d8b-b341-4fb2-bef0-e43d83905a9b\") " pod="openshift-marketplace/certified-operators-k7lhm" Nov 25 11:41:21 crc kubenswrapper[4706]: I1125 11:41:21.986595 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jh7qg\" (UniqueName: \"kubernetes.io/projected/f25c7d8b-b341-4fb2-bef0-e43d83905a9b-kube-api-access-jh7qg\") pod \"certified-operators-k7lhm\" (UID: \"f25c7d8b-b341-4fb2-bef0-e43d83905a9b\") " pod="openshift-marketplace/certified-operators-k7lhm" Nov 25 11:41:21 crc kubenswrapper[4706]: I1125 
11:41:21.986673 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f25c7d8b-b341-4fb2-bef0-e43d83905a9b-utilities\") pod \"certified-operators-k7lhm\" (UID: \"f25c7d8b-b341-4fb2-bef0-e43d83905a9b\") " pod="openshift-marketplace/certified-operators-k7lhm" Nov 25 11:41:22 crc kubenswrapper[4706]: I1125 11:41:22.087444 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jh7qg\" (UniqueName: \"kubernetes.io/projected/f25c7d8b-b341-4fb2-bef0-e43d83905a9b-kube-api-access-jh7qg\") pod \"certified-operators-k7lhm\" (UID: \"f25c7d8b-b341-4fb2-bef0-e43d83905a9b\") " pod="openshift-marketplace/certified-operators-k7lhm" Nov 25 11:41:22 crc kubenswrapper[4706]: I1125 11:41:22.087493 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f25c7d8b-b341-4fb2-bef0-e43d83905a9b-utilities\") pod \"certified-operators-k7lhm\" (UID: \"f25c7d8b-b341-4fb2-bef0-e43d83905a9b\") " pod="openshift-marketplace/certified-operators-k7lhm" Nov 25 11:41:22 crc kubenswrapper[4706]: I1125 11:41:22.087533 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f25c7d8b-b341-4fb2-bef0-e43d83905a9b-catalog-content\") pod \"certified-operators-k7lhm\" (UID: \"f25c7d8b-b341-4fb2-bef0-e43d83905a9b\") " pod="openshift-marketplace/certified-operators-k7lhm" Nov 25 11:41:22 crc kubenswrapper[4706]: I1125 11:41:22.088022 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f25c7d8b-b341-4fb2-bef0-e43d83905a9b-catalog-content\") pod \"certified-operators-k7lhm\" (UID: \"f25c7d8b-b341-4fb2-bef0-e43d83905a9b\") " pod="openshift-marketplace/certified-operators-k7lhm" Nov 25 11:41:22 crc kubenswrapper[4706]: I1125 11:41:22.088085 
4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f25c7d8b-b341-4fb2-bef0-e43d83905a9b-utilities\") pod \"certified-operators-k7lhm\" (UID: \"f25c7d8b-b341-4fb2-bef0-e43d83905a9b\") " pod="openshift-marketplace/certified-operators-k7lhm" Nov 25 11:41:22 crc kubenswrapper[4706]: I1125 11:41:22.107405 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jh7qg\" (UniqueName: \"kubernetes.io/projected/f25c7d8b-b341-4fb2-bef0-e43d83905a9b-kube-api-access-jh7qg\") pod \"certified-operators-k7lhm\" (UID: \"f25c7d8b-b341-4fb2-bef0-e43d83905a9b\") " pod="openshift-marketplace/certified-operators-k7lhm" Nov 25 11:41:22 crc kubenswrapper[4706]: I1125 11:41:22.208488 4706 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-k7lhm" Nov 25 11:41:22 crc kubenswrapper[4706]: I1125 11:41:22.431115 4706 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-k7lhm"] Nov 25 11:41:22 crc kubenswrapper[4706]: W1125 11:41:22.440852 4706 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf25c7d8b_b341_4fb2_bef0_e43d83905a9b.slice/crio-32af05d38efc368aaa54c22d90ea76d760eb508cdb3bbd58bf09b0d1741bcee1 WatchSource:0}: Error finding container 32af05d38efc368aaa54c22d90ea76d760eb508cdb3bbd58bf09b0d1741bcee1: Status 404 returned error can't find the container with id 32af05d38efc368aaa54c22d90ea76d760eb508cdb3bbd58bf09b0d1741bcee1 Nov 25 11:41:22 crc kubenswrapper[4706]: I1125 11:41:22.460496 4706 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-q9pfj"] Nov 25 11:41:22 crc kubenswrapper[4706]: I1125 11:41:22.461673 4706 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-q9pfj" Nov 25 11:41:22 crc kubenswrapper[4706]: I1125 11:41:22.468183 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Nov 25 11:41:22 crc kubenswrapper[4706]: I1125 11:41:22.471278 4706 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-q9pfj"] Nov 25 11:41:22 crc kubenswrapper[4706]: I1125 11:41:22.596843 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ade36961-cf56-40fd-9d5b-202d3e937bfd-utilities\") pod \"redhat-marketplace-q9pfj\" (UID: \"ade36961-cf56-40fd-9d5b-202d3e937bfd\") " pod="openshift-marketplace/redhat-marketplace-q9pfj" Nov 25 11:41:22 crc kubenswrapper[4706]: I1125 11:41:22.597244 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ade36961-cf56-40fd-9d5b-202d3e937bfd-catalog-content\") pod \"redhat-marketplace-q9pfj\" (UID: \"ade36961-cf56-40fd-9d5b-202d3e937bfd\") " pod="openshift-marketplace/redhat-marketplace-q9pfj" Nov 25 11:41:22 crc kubenswrapper[4706]: I1125 11:41:22.597265 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mjk65\" (UniqueName: \"kubernetes.io/projected/ade36961-cf56-40fd-9d5b-202d3e937bfd-kube-api-access-mjk65\") pod \"redhat-marketplace-q9pfj\" (UID: \"ade36961-cf56-40fd-9d5b-202d3e937bfd\") " pod="openshift-marketplace/redhat-marketplace-q9pfj" Nov 25 11:41:22 crc kubenswrapper[4706]: I1125 11:41:22.698645 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ade36961-cf56-40fd-9d5b-202d3e937bfd-utilities\") pod \"redhat-marketplace-q9pfj\" (UID: 
\"ade36961-cf56-40fd-9d5b-202d3e937bfd\") " pod="openshift-marketplace/redhat-marketplace-q9pfj" Nov 25 11:41:22 crc kubenswrapper[4706]: I1125 11:41:22.698920 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ade36961-cf56-40fd-9d5b-202d3e937bfd-catalog-content\") pod \"redhat-marketplace-q9pfj\" (UID: \"ade36961-cf56-40fd-9d5b-202d3e937bfd\") " pod="openshift-marketplace/redhat-marketplace-q9pfj" Nov 25 11:41:22 crc kubenswrapper[4706]: I1125 11:41:22.698963 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mjk65\" (UniqueName: \"kubernetes.io/projected/ade36961-cf56-40fd-9d5b-202d3e937bfd-kube-api-access-mjk65\") pod \"redhat-marketplace-q9pfj\" (UID: \"ade36961-cf56-40fd-9d5b-202d3e937bfd\") " pod="openshift-marketplace/redhat-marketplace-q9pfj" Nov 25 11:41:22 crc kubenswrapper[4706]: I1125 11:41:22.699207 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ade36961-cf56-40fd-9d5b-202d3e937bfd-utilities\") pod \"redhat-marketplace-q9pfj\" (UID: \"ade36961-cf56-40fd-9d5b-202d3e937bfd\") " pod="openshift-marketplace/redhat-marketplace-q9pfj" Nov 25 11:41:22 crc kubenswrapper[4706]: I1125 11:41:22.699601 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ade36961-cf56-40fd-9d5b-202d3e937bfd-catalog-content\") pod \"redhat-marketplace-q9pfj\" (UID: \"ade36961-cf56-40fd-9d5b-202d3e937bfd\") " pod="openshift-marketplace/redhat-marketplace-q9pfj" Nov 25 11:41:22 crc kubenswrapper[4706]: I1125 11:41:22.720181 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mjk65\" (UniqueName: \"kubernetes.io/projected/ade36961-cf56-40fd-9d5b-202d3e937bfd-kube-api-access-mjk65\") pod \"redhat-marketplace-q9pfj\" (UID: 
\"ade36961-cf56-40fd-9d5b-202d3e937bfd\") " pod="openshift-marketplace/redhat-marketplace-q9pfj" Nov 25 11:41:22 crc kubenswrapper[4706]: I1125 11:41:22.786203 4706 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-q9pfj" Nov 25 11:41:22 crc kubenswrapper[4706]: I1125 11:41:22.843557 4706 generic.go:334] "Generic (PLEG): container finished" podID="f25c7d8b-b341-4fb2-bef0-e43d83905a9b" containerID="e9d5bfc359dafd77cdb589c3d083f199125151fb7a3c4671eb8d4d748b3a791e" exitCode=0 Nov 25 11:41:22 crc kubenswrapper[4706]: I1125 11:41:22.843673 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-k7lhm" event={"ID":"f25c7d8b-b341-4fb2-bef0-e43d83905a9b","Type":"ContainerDied","Data":"e9d5bfc359dafd77cdb589c3d083f199125151fb7a3c4671eb8d4d748b3a791e"} Nov 25 11:41:22 crc kubenswrapper[4706]: I1125 11:41:22.843718 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-k7lhm" event={"ID":"f25c7d8b-b341-4fb2-bef0-e43d83905a9b","Type":"ContainerStarted","Data":"32af05d38efc368aaa54c22d90ea76d760eb508cdb3bbd58bf09b0d1741bcee1"} Nov 25 11:41:22 crc kubenswrapper[4706]: I1125 11:41:22.985902 4706 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-q9pfj"] Nov 25 11:41:23 crc kubenswrapper[4706]: W1125 11:41:23.000667 4706 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podade36961_cf56_40fd_9d5b_202d3e937bfd.slice/crio-2ff056ba7c102a025a2e6e463ddfc46c9707f588c262cd6f9475c18a7c734bdf WatchSource:0}: Error finding container 2ff056ba7c102a025a2e6e463ddfc46c9707f588c262cd6f9475c18a7c734bdf: Status 404 returned error can't find the container with id 2ff056ba7c102a025a2e6e463ddfc46c9707f588c262cd6f9475c18a7c734bdf Nov 25 11:41:23 crc kubenswrapper[4706]: I1125 11:41:23.853795 4706 generic.go:334] "Generic 
(PLEG): container finished" podID="ade36961-cf56-40fd-9d5b-202d3e937bfd" containerID="f519a690b9d6e019a9602a98d455ee5b3fbdf19d345de90c829da588acedf54d" exitCode=0 Nov 25 11:41:23 crc kubenswrapper[4706]: I1125 11:41:23.853907 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-q9pfj" event={"ID":"ade36961-cf56-40fd-9d5b-202d3e937bfd","Type":"ContainerDied","Data":"f519a690b9d6e019a9602a98d455ee5b3fbdf19d345de90c829da588acedf54d"} Nov 25 11:41:23 crc kubenswrapper[4706]: I1125 11:41:23.854144 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-q9pfj" event={"ID":"ade36961-cf56-40fd-9d5b-202d3e937bfd","Type":"ContainerStarted","Data":"2ff056ba7c102a025a2e6e463ddfc46c9707f588c262cd6f9475c18a7c734bdf"} Nov 25 11:41:23 crc kubenswrapper[4706]: I1125 11:41:23.856106 4706 generic.go:334] "Generic (PLEG): container finished" podID="f25c7d8b-b341-4fb2-bef0-e43d83905a9b" containerID="9870dd2d94497a0583186ca7d26da1723fc51e51b4f5615f9a786551a0b147f4" exitCode=0 Nov 25 11:41:23 crc kubenswrapper[4706]: I1125 11:41:23.856132 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-k7lhm" event={"ID":"f25c7d8b-b341-4fb2-bef0-e43d83905a9b","Type":"ContainerDied","Data":"9870dd2d94497a0583186ca7d26da1723fc51e51b4f5615f9a786551a0b147f4"} Nov 25 11:41:23 crc kubenswrapper[4706]: I1125 11:41:23.929164 4706 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="815eca00-0648-4421-8b14-0eb14056161b" path="/var/lib/kubelet/pods/815eca00-0648-4421-8b14-0eb14056161b/volumes" Nov 25 11:41:23 crc kubenswrapper[4706]: I1125 11:41:23.930051 4706 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9ba1f6b2-ea89-4d9b-aad8-b18eaba9ed05" path="/var/lib/kubelet/pods/9ba1f6b2-ea89-4d9b-aad8-b18eaba9ed05/volumes" Nov 25 11:41:23 crc kubenswrapper[4706]: I1125 11:41:23.930644 4706 kubelet_volumes.go:163] "Cleaned up orphaned pod 
volumes dir" podUID="efdf993e-c4c2-4eff-877d-03df2af9d43c" path="/var/lib/kubelet/pods/efdf993e-c4c2-4eff-877d-03df2af9d43c/volumes" Nov 25 11:41:24 crc kubenswrapper[4706]: I1125 11:41:24.267219 4706 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-942d2"] Nov 25 11:41:24 crc kubenswrapper[4706]: I1125 11:41:24.268728 4706 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-942d2" Nov 25 11:41:24 crc kubenswrapper[4706]: I1125 11:41:24.274095 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Nov 25 11:41:24 crc kubenswrapper[4706]: I1125 11:41:24.274529 4706 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-942d2"] Nov 25 11:41:24 crc kubenswrapper[4706]: I1125 11:41:24.320073 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/35b0ea9c-5ad8-4d74-a2ce-8d59e3a60f49-utilities\") pod \"redhat-operators-942d2\" (UID: \"35b0ea9c-5ad8-4d74-a2ce-8d59e3a60f49\") " pod="openshift-marketplace/redhat-operators-942d2" Nov 25 11:41:24 crc kubenswrapper[4706]: I1125 11:41:24.320188 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hrr4s\" (UniqueName: \"kubernetes.io/projected/35b0ea9c-5ad8-4d74-a2ce-8d59e3a60f49-kube-api-access-hrr4s\") pod \"redhat-operators-942d2\" (UID: \"35b0ea9c-5ad8-4d74-a2ce-8d59e3a60f49\") " pod="openshift-marketplace/redhat-operators-942d2" Nov 25 11:41:24 crc kubenswrapper[4706]: I1125 11:41:24.320216 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/35b0ea9c-5ad8-4d74-a2ce-8d59e3a60f49-catalog-content\") pod \"redhat-operators-942d2\" (UID: 
\"35b0ea9c-5ad8-4d74-a2ce-8d59e3a60f49\") " pod="openshift-marketplace/redhat-operators-942d2" Nov 25 11:41:24 crc kubenswrapper[4706]: I1125 11:41:24.422151 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hrr4s\" (UniqueName: \"kubernetes.io/projected/35b0ea9c-5ad8-4d74-a2ce-8d59e3a60f49-kube-api-access-hrr4s\") pod \"redhat-operators-942d2\" (UID: \"35b0ea9c-5ad8-4d74-a2ce-8d59e3a60f49\") " pod="openshift-marketplace/redhat-operators-942d2" Nov 25 11:41:24 crc kubenswrapper[4706]: I1125 11:41:24.422250 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/35b0ea9c-5ad8-4d74-a2ce-8d59e3a60f49-catalog-content\") pod \"redhat-operators-942d2\" (UID: \"35b0ea9c-5ad8-4d74-a2ce-8d59e3a60f49\") " pod="openshift-marketplace/redhat-operators-942d2" Nov 25 11:41:24 crc kubenswrapper[4706]: I1125 11:41:24.422331 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/35b0ea9c-5ad8-4d74-a2ce-8d59e3a60f49-utilities\") pod \"redhat-operators-942d2\" (UID: \"35b0ea9c-5ad8-4d74-a2ce-8d59e3a60f49\") " pod="openshift-marketplace/redhat-operators-942d2" Nov 25 11:41:24 crc kubenswrapper[4706]: I1125 11:41:24.422978 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/35b0ea9c-5ad8-4d74-a2ce-8d59e3a60f49-catalog-content\") pod \"redhat-operators-942d2\" (UID: \"35b0ea9c-5ad8-4d74-a2ce-8d59e3a60f49\") " pod="openshift-marketplace/redhat-operators-942d2" Nov 25 11:41:24 crc kubenswrapper[4706]: I1125 11:41:24.423026 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/35b0ea9c-5ad8-4d74-a2ce-8d59e3a60f49-utilities\") pod \"redhat-operators-942d2\" (UID: \"35b0ea9c-5ad8-4d74-a2ce-8d59e3a60f49\") " 
pod="openshift-marketplace/redhat-operators-942d2" Nov 25 11:41:24 crc kubenswrapper[4706]: I1125 11:41:24.446942 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hrr4s\" (UniqueName: \"kubernetes.io/projected/35b0ea9c-5ad8-4d74-a2ce-8d59e3a60f49-kube-api-access-hrr4s\") pod \"redhat-operators-942d2\" (UID: \"35b0ea9c-5ad8-4d74-a2ce-8d59e3a60f49\") " pod="openshift-marketplace/redhat-operators-942d2" Nov 25 11:41:24 crc kubenswrapper[4706]: I1125 11:41:24.591780 4706 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-942d2" Nov 25 11:41:24 crc kubenswrapper[4706]: I1125 11:41:24.882224 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-k7lhm" event={"ID":"f25c7d8b-b341-4fb2-bef0-e43d83905a9b","Type":"ContainerStarted","Data":"23c716f56d065e93a08a84e481d81802552724bb5773f323a56a534a4d6cd58b"} Nov 25 11:41:24 crc kubenswrapper[4706]: I1125 11:41:24.883004 4706 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-fq7cn"] Nov 25 11:41:24 crc kubenswrapper[4706]: I1125 11:41:24.885147 4706 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-fq7cn" Nov 25 11:41:24 crc kubenswrapper[4706]: I1125 11:41:24.889021 4706 generic.go:334] "Generic (PLEG): container finished" podID="ade36961-cf56-40fd-9d5b-202d3e937bfd" containerID="415f226bae66bbd47eab2b0557532d622db4dfc3ed09e1c1e78d6e756f277cf8" exitCode=0 Nov 25 11:41:24 crc kubenswrapper[4706]: I1125 11:41:24.889099 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-q9pfj" event={"ID":"ade36961-cf56-40fd-9d5b-202d3e937bfd","Type":"ContainerDied","Data":"415f226bae66bbd47eab2b0557532d622db4dfc3ed09e1c1e78d6e756f277cf8"} Nov 25 11:41:24 crc kubenswrapper[4706]: I1125 11:41:24.889869 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Nov 25 11:41:24 crc kubenswrapper[4706]: I1125 11:41:24.895249 4706 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-942d2"] Nov 25 11:41:24 crc kubenswrapper[4706]: I1125 11:41:24.898157 4706 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-fq7cn"] Nov 25 11:41:24 crc kubenswrapper[4706]: W1125 11:41:24.902072 4706 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod35b0ea9c_5ad8_4d74_a2ce_8d59e3a60f49.slice/crio-ca269415ac5d0dda76bd0c7102e4a0f44004d4516854177ee0f77c4b04006b1b WatchSource:0}: Error finding container ca269415ac5d0dda76bd0c7102e4a0f44004d4516854177ee0f77c4b04006b1b: Status 404 returned error can't find the container with id ca269415ac5d0dda76bd0c7102e4a0f44004d4516854177ee0f77c4b04006b1b Nov 25 11:41:24 crc kubenswrapper[4706]: I1125 11:41:24.912669 4706 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-k7lhm" podStartSLOduration=2.437387381 podStartE2EDuration="3.912638676s" 
podCreationTimestamp="2025-11-25 11:41:21 +0000 UTC" firstStartedPulling="2025-11-25 11:41:22.845121278 +0000 UTC m=+291.759678649" lastFinishedPulling="2025-11-25 11:41:24.320372563 +0000 UTC m=+293.234929944" observedRunningTime="2025-11-25 11:41:24.908701616 +0000 UTC m=+293.823259027" watchObservedRunningTime="2025-11-25 11:41:24.912638676 +0000 UTC m=+293.827196057" Nov 25 11:41:24 crc kubenswrapper[4706]: I1125 11:41:24.929756 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p2mtk\" (UniqueName: \"kubernetes.io/projected/8e544967-24c9-4190-a1d7-5ed07fdaaeef-kube-api-access-p2mtk\") pod \"community-operators-fq7cn\" (UID: \"8e544967-24c9-4190-a1d7-5ed07fdaaeef\") " pod="openshift-marketplace/community-operators-fq7cn" Nov 25 11:41:24 crc kubenswrapper[4706]: I1125 11:41:24.929823 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8e544967-24c9-4190-a1d7-5ed07fdaaeef-catalog-content\") pod \"community-operators-fq7cn\" (UID: \"8e544967-24c9-4190-a1d7-5ed07fdaaeef\") " pod="openshift-marketplace/community-operators-fq7cn" Nov 25 11:41:24 crc kubenswrapper[4706]: I1125 11:41:24.929958 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8e544967-24c9-4190-a1d7-5ed07fdaaeef-utilities\") pod \"community-operators-fq7cn\" (UID: \"8e544967-24c9-4190-a1d7-5ed07fdaaeef\") " pod="openshift-marketplace/community-operators-fq7cn" Nov 25 11:41:25 crc kubenswrapper[4706]: I1125 11:41:25.031376 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p2mtk\" (UniqueName: \"kubernetes.io/projected/8e544967-24c9-4190-a1d7-5ed07fdaaeef-kube-api-access-p2mtk\") pod \"community-operators-fq7cn\" (UID: \"8e544967-24c9-4190-a1d7-5ed07fdaaeef\") " 
pod="openshift-marketplace/community-operators-fq7cn" Nov 25 11:41:25 crc kubenswrapper[4706]: I1125 11:41:25.031856 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8e544967-24c9-4190-a1d7-5ed07fdaaeef-catalog-content\") pod \"community-operators-fq7cn\" (UID: \"8e544967-24c9-4190-a1d7-5ed07fdaaeef\") " pod="openshift-marketplace/community-operators-fq7cn" Nov 25 11:41:25 crc kubenswrapper[4706]: I1125 11:41:25.031944 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8e544967-24c9-4190-a1d7-5ed07fdaaeef-utilities\") pod \"community-operators-fq7cn\" (UID: \"8e544967-24c9-4190-a1d7-5ed07fdaaeef\") " pod="openshift-marketplace/community-operators-fq7cn" Nov 25 11:41:25 crc kubenswrapper[4706]: I1125 11:41:25.032850 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8e544967-24c9-4190-a1d7-5ed07fdaaeef-catalog-content\") pod \"community-operators-fq7cn\" (UID: \"8e544967-24c9-4190-a1d7-5ed07fdaaeef\") " pod="openshift-marketplace/community-operators-fq7cn" Nov 25 11:41:25 crc kubenswrapper[4706]: I1125 11:41:25.032926 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8e544967-24c9-4190-a1d7-5ed07fdaaeef-utilities\") pod \"community-operators-fq7cn\" (UID: \"8e544967-24c9-4190-a1d7-5ed07fdaaeef\") " pod="openshift-marketplace/community-operators-fq7cn" Nov 25 11:41:25 crc kubenswrapper[4706]: I1125 11:41:25.057031 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p2mtk\" (UniqueName: \"kubernetes.io/projected/8e544967-24c9-4190-a1d7-5ed07fdaaeef-kube-api-access-p2mtk\") pod \"community-operators-fq7cn\" (UID: \"8e544967-24c9-4190-a1d7-5ed07fdaaeef\") " 
pod="openshift-marketplace/community-operators-fq7cn" Nov 25 11:41:25 crc kubenswrapper[4706]: I1125 11:41:25.215941 4706 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-fq7cn" Nov 25 11:41:25 crc kubenswrapper[4706]: I1125 11:41:25.438640 4706 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-fq7cn"] Nov 25 11:41:25 crc kubenswrapper[4706]: I1125 11:41:25.897379 4706 generic.go:334] "Generic (PLEG): container finished" podID="8e544967-24c9-4190-a1d7-5ed07fdaaeef" containerID="7ea6b201e01f7e1f9bd23d54ba790b5fcf0923f6fe9ee76261539448aebe471b" exitCode=0 Nov 25 11:41:25 crc kubenswrapper[4706]: I1125 11:41:25.897459 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fq7cn" event={"ID":"8e544967-24c9-4190-a1d7-5ed07fdaaeef","Type":"ContainerDied","Data":"7ea6b201e01f7e1f9bd23d54ba790b5fcf0923f6fe9ee76261539448aebe471b"} Nov 25 11:41:25 crc kubenswrapper[4706]: I1125 11:41:25.897492 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fq7cn" event={"ID":"8e544967-24c9-4190-a1d7-5ed07fdaaeef","Type":"ContainerStarted","Data":"ca0899d664de1944a31be235d6d3a94066c1bcc2a3dc1e6eef0939734b897dea"} Nov 25 11:41:25 crc kubenswrapper[4706]: I1125 11:41:25.906510 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-q9pfj" event={"ID":"ade36961-cf56-40fd-9d5b-202d3e937bfd","Type":"ContainerStarted","Data":"61c8ef4ee11eba1c90c548177b05b64c19f75ccca44ac446cc7c0bca53a2e31f"} Nov 25 11:41:25 crc kubenswrapper[4706]: I1125 11:41:25.911441 4706 generic.go:334] "Generic (PLEG): container finished" podID="35b0ea9c-5ad8-4d74-a2ce-8d59e3a60f49" containerID="942e0b26ce986512a943c232ef66f8b6af87f039ae5d3111ce7113ed03a8afcc" exitCode=0 Nov 25 11:41:25 crc kubenswrapper[4706]: I1125 11:41:25.911511 4706 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-marketplace/redhat-operators-942d2" event={"ID":"35b0ea9c-5ad8-4d74-a2ce-8d59e3a60f49","Type":"ContainerDied","Data":"942e0b26ce986512a943c232ef66f8b6af87f039ae5d3111ce7113ed03a8afcc"} Nov 25 11:41:25 crc kubenswrapper[4706]: I1125 11:41:25.911600 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-942d2" event={"ID":"35b0ea9c-5ad8-4d74-a2ce-8d59e3a60f49","Type":"ContainerStarted","Data":"ca269415ac5d0dda76bd0c7102e4a0f44004d4516854177ee0f77c4b04006b1b"} Nov 25 11:41:25 crc kubenswrapper[4706]: I1125 11:41:25.951042 4706 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-q9pfj" podStartSLOduration=2.464365184 podStartE2EDuration="3.951019441s" podCreationTimestamp="2025-11-25 11:41:22 +0000 UTC" firstStartedPulling="2025-11-25 11:41:23.855402571 +0000 UTC m=+292.769959952" lastFinishedPulling="2025-11-25 11:41:25.342056808 +0000 UTC m=+294.256614209" observedRunningTime="2025-11-25 11:41:25.94919914 +0000 UTC m=+294.863756521" watchObservedRunningTime="2025-11-25 11:41:25.951019441 +0000 UTC m=+294.865576832" Nov 25 11:41:26 crc kubenswrapper[4706]: I1125 11:41:26.941645 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fq7cn" event={"ID":"8e544967-24c9-4190-a1d7-5ed07fdaaeef","Type":"ContainerStarted","Data":"9c85979f71b5cc22976c368c726c203551543a7af36d72687de8991c2af56273"} Nov 25 11:41:27 crc kubenswrapper[4706]: I1125 11:41:27.955822 4706 generic.go:334] "Generic (PLEG): container finished" podID="8e544967-24c9-4190-a1d7-5ed07fdaaeef" containerID="9c85979f71b5cc22976c368c726c203551543a7af36d72687de8991c2af56273" exitCode=0 Nov 25 11:41:27 crc kubenswrapper[4706]: I1125 11:41:27.955901 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fq7cn" 
event={"ID":"8e544967-24c9-4190-a1d7-5ed07fdaaeef","Type":"ContainerDied","Data":"9c85979f71b5cc22976c368c726c203551543a7af36d72687de8991c2af56273"} Nov 25 11:41:27 crc kubenswrapper[4706]: I1125 11:41:27.957881 4706 generic.go:334] "Generic (PLEG): container finished" podID="35b0ea9c-5ad8-4d74-a2ce-8d59e3a60f49" containerID="d2276bdce9a2332424fbe4c644b9174b3576145aa2defe52212632625b5cf6d3" exitCode=0 Nov 25 11:41:27 crc kubenswrapper[4706]: I1125 11:41:27.957935 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-942d2" event={"ID":"35b0ea9c-5ad8-4d74-a2ce-8d59e3a60f49","Type":"ContainerDied","Data":"d2276bdce9a2332424fbe4c644b9174b3576145aa2defe52212632625b5cf6d3"} Nov 25 11:41:28 crc kubenswrapper[4706]: I1125 11:41:28.966102 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fq7cn" event={"ID":"8e544967-24c9-4190-a1d7-5ed07fdaaeef","Type":"ContainerStarted","Data":"ebd0c6a3315a4ce541f20bcda3c3d4b4b983f04d906c14489da2104a351159cc"} Nov 25 11:41:28 crc kubenswrapper[4706]: I1125 11:41:28.967791 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-942d2" event={"ID":"35b0ea9c-5ad8-4d74-a2ce-8d59e3a60f49","Type":"ContainerStarted","Data":"e133ff4c9a278dd34918625a1aca782c284818404f5841b1037dca0777466304"} Nov 25 11:41:28 crc kubenswrapper[4706]: I1125 11:41:28.985879 4706 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-fq7cn" podStartSLOduration=2.459139197 podStartE2EDuration="4.985857575s" podCreationTimestamp="2025-11-25 11:41:24 +0000 UTC" firstStartedPulling="2025-11-25 11:41:25.902534147 +0000 UTC m=+294.817091528" lastFinishedPulling="2025-11-25 11:41:28.429252515 +0000 UTC m=+297.343809906" observedRunningTime="2025-11-25 11:41:28.98427905 +0000 UTC m=+297.898836451" watchObservedRunningTime="2025-11-25 11:41:28.985857575 +0000 UTC m=+297.900414966" 
Nov 25 11:41:29 crc kubenswrapper[4706]: I1125 11:41:29.011150 4706 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-942d2" podStartSLOduration=2.523126257 podStartE2EDuration="5.011124546s" podCreationTimestamp="2025-11-25 11:41:24 +0000 UTC" firstStartedPulling="2025-11-25 11:41:25.913773213 +0000 UTC m=+294.828330584" lastFinishedPulling="2025-11-25 11:41:28.401771492 +0000 UTC m=+297.316328873" observedRunningTime="2025-11-25 11:41:29.00701301 +0000 UTC m=+297.921570401" watchObservedRunningTime="2025-11-25 11:41:29.011124546 +0000 UTC m=+297.925681927" Nov 25 11:41:32 crc kubenswrapper[4706]: I1125 11:41:32.208913 4706 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-k7lhm" Nov 25 11:41:32 crc kubenswrapper[4706]: I1125 11:41:32.209901 4706 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-k7lhm" Nov 25 11:41:32 crc kubenswrapper[4706]: I1125 11:41:32.262323 4706 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-k7lhm" Nov 25 11:41:32 crc kubenswrapper[4706]: I1125 11:41:32.787239 4706 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-q9pfj" Nov 25 11:41:32 crc kubenswrapper[4706]: I1125 11:41:32.787342 4706 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-q9pfj" Nov 25 11:41:32 crc kubenswrapper[4706]: I1125 11:41:32.832349 4706 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-q9pfj" Nov 25 11:41:33 crc kubenswrapper[4706]: I1125 11:41:33.035173 4706 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-k7lhm" Nov 25 11:41:33 crc kubenswrapper[4706]: 
I1125 11:41:33.037164 4706 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-q9pfj" Nov 25 11:41:34 crc kubenswrapper[4706]: I1125 11:41:34.592849 4706 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-942d2" Nov 25 11:41:34 crc kubenswrapper[4706]: I1125 11:41:34.593422 4706 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-942d2" Nov 25 11:41:34 crc kubenswrapper[4706]: I1125 11:41:34.639975 4706 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-942d2" Nov 25 11:41:35 crc kubenswrapper[4706]: I1125 11:41:35.063437 4706 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-942d2" Nov 25 11:41:35 crc kubenswrapper[4706]: I1125 11:41:35.218232 4706 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-fq7cn" Nov 25 11:41:35 crc kubenswrapper[4706]: I1125 11:41:35.218314 4706 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-fq7cn" Nov 25 11:41:35 crc kubenswrapper[4706]: I1125 11:41:35.255172 4706 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-fq7cn" Nov 25 11:41:36 crc kubenswrapper[4706]: I1125 11:41:36.052496 4706 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-fq7cn" Nov 25 11:42:31 crc kubenswrapper[4706]: I1125 11:42:31.125764 4706 patch_prober.go:28] interesting pod/machine-config-daemon-dhfpm container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" 
start-of-body= Nov 25 11:42:31 crc kubenswrapper[4706]: I1125 11:42:31.126554 4706 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dhfpm" podUID="0930887a-320c-4506-8c9c-f94d6d64516a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 25 11:43:01 crc kubenswrapper[4706]: I1125 11:43:01.125087 4706 patch_prober.go:28] interesting pod/machine-config-daemon-dhfpm container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 25 11:43:01 crc kubenswrapper[4706]: I1125 11:43:01.125865 4706 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dhfpm" podUID="0930887a-320c-4506-8c9c-f94d6d64516a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 25 11:43:31 crc kubenswrapper[4706]: I1125 11:43:31.124821 4706 patch_prober.go:28] interesting pod/machine-config-daemon-dhfpm container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 25 11:43:31 crc kubenswrapper[4706]: I1125 11:43:31.125667 4706 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dhfpm" podUID="0930887a-320c-4506-8c9c-f94d6d64516a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 25 11:43:31 crc kubenswrapper[4706]: I1125 11:43:31.125731 4706 kubelet.go:2542] "SyncLoop 
(probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-dhfpm" Nov 25 11:43:31 crc kubenswrapper[4706]: I1125 11:43:31.126521 4706 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"c43009691a1ca998131689b9f478affb1596618b922c6332af076407a2828da9"} pod="openshift-machine-config-operator/machine-config-daemon-dhfpm" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 25 11:43:31 crc kubenswrapper[4706]: I1125 11:43:31.126601 4706 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-dhfpm" podUID="0930887a-320c-4506-8c9c-f94d6d64516a" containerName="machine-config-daemon" containerID="cri-o://c43009691a1ca998131689b9f478affb1596618b922c6332af076407a2828da9" gracePeriod=600 Nov 25 11:43:31 crc kubenswrapper[4706]: I1125 11:43:31.746641 4706 generic.go:334] "Generic (PLEG): container finished" podID="0930887a-320c-4506-8c9c-f94d6d64516a" containerID="c43009691a1ca998131689b9f478affb1596618b922c6332af076407a2828da9" exitCode=0 Nov 25 11:43:31 crc kubenswrapper[4706]: I1125 11:43:31.746743 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-dhfpm" event={"ID":"0930887a-320c-4506-8c9c-f94d6d64516a","Type":"ContainerDied","Data":"c43009691a1ca998131689b9f478affb1596618b922c6332af076407a2828da9"} Nov 25 11:43:31 crc kubenswrapper[4706]: I1125 11:43:31.747494 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-dhfpm" event={"ID":"0930887a-320c-4506-8c9c-f94d6d64516a","Type":"ContainerStarted","Data":"0dd63e85870564c9c1e19ba8f686c8d7b197f9c962efb9def7912bf046e425dd"} Nov 25 11:43:31 crc kubenswrapper[4706]: I1125 11:43:31.747539 4706 scope.go:117] "RemoveContainer" 
containerID="86f4bfd310c27ea3b77c2f58c91e153db5f1794871a3fbeb5711cc119aa81e38" Nov 25 11:44:04 crc kubenswrapper[4706]: I1125 11:44:04.635564 4706 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-2csd2"] Nov 25 11:44:04 crc kubenswrapper[4706]: I1125 11:44:04.637540 4706 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-2csd2" Nov 25 11:44:04 crc kubenswrapper[4706]: I1125 11:44:04.648779 4706 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-2csd2"] Nov 25 11:44:04 crc kubenswrapper[4706]: I1125 11:44:04.711900 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/f078a954-b189-4c04-a72d-21f5d6d1b782-installation-pull-secrets\") pod \"image-registry-66df7c8f76-2csd2\" (UID: \"f078a954-b189-4c04-a72d-21f5d6d1b782\") " pod="openshift-image-registry/image-registry-66df7c8f76-2csd2" Nov 25 11:44:04 crc kubenswrapper[4706]: I1125 11:44:04.711970 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/f078a954-b189-4c04-a72d-21f5d6d1b782-ca-trust-extracted\") pod \"image-registry-66df7c8f76-2csd2\" (UID: \"f078a954-b189-4c04-a72d-21f5d6d1b782\") " pod="openshift-image-registry/image-registry-66df7c8f76-2csd2" Nov 25 11:44:04 crc kubenswrapper[4706]: I1125 11:44:04.711996 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/f078a954-b189-4c04-a72d-21f5d6d1b782-registry-certificates\") pod \"image-registry-66df7c8f76-2csd2\" (UID: \"f078a954-b189-4c04-a72d-21f5d6d1b782\") " pod="openshift-image-registry/image-registry-66df7c8f76-2csd2" Nov 25 11:44:04 crc 
kubenswrapper[4706]: I1125 11:44:04.712026 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/f078a954-b189-4c04-a72d-21f5d6d1b782-registry-tls\") pod \"image-registry-66df7c8f76-2csd2\" (UID: \"f078a954-b189-4c04-a72d-21f5d6d1b782\") " pod="openshift-image-registry/image-registry-66df7c8f76-2csd2" Nov 25 11:44:04 crc kubenswrapper[4706]: I1125 11:44:04.712047 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/f078a954-b189-4c04-a72d-21f5d6d1b782-bound-sa-token\") pod \"image-registry-66df7c8f76-2csd2\" (UID: \"f078a954-b189-4c04-a72d-21f5d6d1b782\") " pod="openshift-image-registry/image-registry-66df7c8f76-2csd2" Nov 25 11:44:04 crc kubenswrapper[4706]: I1125 11:44:04.712097 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jgbpg\" (UniqueName: \"kubernetes.io/projected/f078a954-b189-4c04-a72d-21f5d6d1b782-kube-api-access-jgbpg\") pod \"image-registry-66df7c8f76-2csd2\" (UID: \"f078a954-b189-4c04-a72d-21f5d6d1b782\") " pod="openshift-image-registry/image-registry-66df7c8f76-2csd2" Nov 25 11:44:04 crc kubenswrapper[4706]: I1125 11:44:04.712133 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/f078a954-b189-4c04-a72d-21f5d6d1b782-trusted-ca\") pod \"image-registry-66df7c8f76-2csd2\" (UID: \"f078a954-b189-4c04-a72d-21f5d6d1b782\") " pod="openshift-image-registry/image-registry-66df7c8f76-2csd2" Nov 25 11:44:04 crc kubenswrapper[4706]: I1125 11:44:04.712169 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") 
pod \"image-registry-66df7c8f76-2csd2\" (UID: \"f078a954-b189-4c04-a72d-21f5d6d1b782\") " pod="openshift-image-registry/image-registry-66df7c8f76-2csd2" Nov 25 11:44:04 crc kubenswrapper[4706]: I1125 11:44:04.771375 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-2csd2\" (UID: \"f078a954-b189-4c04-a72d-21f5d6d1b782\") " pod="openshift-image-registry/image-registry-66df7c8f76-2csd2" Nov 25 11:44:04 crc kubenswrapper[4706]: I1125 11:44:04.814167 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/f078a954-b189-4c04-a72d-21f5d6d1b782-registry-tls\") pod \"image-registry-66df7c8f76-2csd2\" (UID: \"f078a954-b189-4c04-a72d-21f5d6d1b782\") " pod="openshift-image-registry/image-registry-66df7c8f76-2csd2" Nov 25 11:44:04 crc kubenswrapper[4706]: I1125 11:44:04.814214 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/f078a954-b189-4c04-a72d-21f5d6d1b782-bound-sa-token\") pod \"image-registry-66df7c8f76-2csd2\" (UID: \"f078a954-b189-4c04-a72d-21f5d6d1b782\") " pod="openshift-image-registry/image-registry-66df7c8f76-2csd2" Nov 25 11:44:04 crc kubenswrapper[4706]: I1125 11:44:04.814259 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jgbpg\" (UniqueName: \"kubernetes.io/projected/f078a954-b189-4c04-a72d-21f5d6d1b782-kube-api-access-jgbpg\") pod \"image-registry-66df7c8f76-2csd2\" (UID: \"f078a954-b189-4c04-a72d-21f5d6d1b782\") " pod="openshift-image-registry/image-registry-66df7c8f76-2csd2" Nov 25 11:44:04 crc kubenswrapper[4706]: I1125 11:44:04.814284 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" 
(UniqueName: \"kubernetes.io/configmap/f078a954-b189-4c04-a72d-21f5d6d1b782-trusted-ca\") pod \"image-registry-66df7c8f76-2csd2\" (UID: \"f078a954-b189-4c04-a72d-21f5d6d1b782\") " pod="openshift-image-registry/image-registry-66df7c8f76-2csd2" Nov 25 11:44:04 crc kubenswrapper[4706]: I1125 11:44:04.814340 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/f078a954-b189-4c04-a72d-21f5d6d1b782-installation-pull-secrets\") pod \"image-registry-66df7c8f76-2csd2\" (UID: \"f078a954-b189-4c04-a72d-21f5d6d1b782\") " pod="openshift-image-registry/image-registry-66df7c8f76-2csd2" Nov 25 11:44:04 crc kubenswrapper[4706]: I1125 11:44:04.814364 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/f078a954-b189-4c04-a72d-21f5d6d1b782-ca-trust-extracted\") pod \"image-registry-66df7c8f76-2csd2\" (UID: \"f078a954-b189-4c04-a72d-21f5d6d1b782\") " pod="openshift-image-registry/image-registry-66df7c8f76-2csd2" Nov 25 11:44:04 crc kubenswrapper[4706]: I1125 11:44:04.814382 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/f078a954-b189-4c04-a72d-21f5d6d1b782-registry-certificates\") pod \"image-registry-66df7c8f76-2csd2\" (UID: \"f078a954-b189-4c04-a72d-21f5d6d1b782\") " pod="openshift-image-registry/image-registry-66df7c8f76-2csd2" Nov 25 11:44:04 crc kubenswrapper[4706]: I1125 11:44:04.815074 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/f078a954-b189-4c04-a72d-21f5d6d1b782-ca-trust-extracted\") pod \"image-registry-66df7c8f76-2csd2\" (UID: \"f078a954-b189-4c04-a72d-21f5d6d1b782\") " pod="openshift-image-registry/image-registry-66df7c8f76-2csd2" Nov 25 11:44:04 crc kubenswrapper[4706]: I1125 11:44:04.815798 4706 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/f078a954-b189-4c04-a72d-21f5d6d1b782-registry-certificates\") pod \"image-registry-66df7c8f76-2csd2\" (UID: \"f078a954-b189-4c04-a72d-21f5d6d1b782\") " pod="openshift-image-registry/image-registry-66df7c8f76-2csd2" Nov 25 11:44:04 crc kubenswrapper[4706]: I1125 11:44:04.815938 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/f078a954-b189-4c04-a72d-21f5d6d1b782-trusted-ca\") pod \"image-registry-66df7c8f76-2csd2\" (UID: \"f078a954-b189-4c04-a72d-21f5d6d1b782\") " pod="openshift-image-registry/image-registry-66df7c8f76-2csd2" Nov 25 11:44:04 crc kubenswrapper[4706]: I1125 11:44:04.821159 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/f078a954-b189-4c04-a72d-21f5d6d1b782-registry-tls\") pod \"image-registry-66df7c8f76-2csd2\" (UID: \"f078a954-b189-4c04-a72d-21f5d6d1b782\") " pod="openshift-image-registry/image-registry-66df7c8f76-2csd2" Nov 25 11:44:04 crc kubenswrapper[4706]: I1125 11:44:04.821217 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/f078a954-b189-4c04-a72d-21f5d6d1b782-installation-pull-secrets\") pod \"image-registry-66df7c8f76-2csd2\" (UID: \"f078a954-b189-4c04-a72d-21f5d6d1b782\") " pod="openshift-image-registry/image-registry-66df7c8f76-2csd2" Nov 25 11:44:04 crc kubenswrapper[4706]: I1125 11:44:04.831599 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/f078a954-b189-4c04-a72d-21f5d6d1b782-bound-sa-token\") pod \"image-registry-66df7c8f76-2csd2\" (UID: \"f078a954-b189-4c04-a72d-21f5d6d1b782\") " pod="openshift-image-registry/image-registry-66df7c8f76-2csd2" Nov 25 11:44:04 crc 
kubenswrapper[4706]: I1125 11:44:04.838406 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jgbpg\" (UniqueName: \"kubernetes.io/projected/f078a954-b189-4c04-a72d-21f5d6d1b782-kube-api-access-jgbpg\") pod \"image-registry-66df7c8f76-2csd2\" (UID: \"f078a954-b189-4c04-a72d-21f5d6d1b782\") " pod="openshift-image-registry/image-registry-66df7c8f76-2csd2" Nov 25 11:44:04 crc kubenswrapper[4706]: I1125 11:44:04.958842 4706 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-2csd2" Nov 25 11:44:05 crc kubenswrapper[4706]: I1125 11:44:05.146984 4706 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-2csd2"] Nov 25 11:44:05 crc kubenswrapper[4706]: I1125 11:44:05.969602 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-2csd2" event={"ID":"f078a954-b189-4c04-a72d-21f5d6d1b782","Type":"ContainerStarted","Data":"9bac4dba7a638cf7ae79026cb07e393243f2b2232f1277d9b31cfbc317260101"} Nov 25 11:44:06 crc kubenswrapper[4706]: I1125 11:44:06.978372 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-2csd2" event={"ID":"f078a954-b189-4c04-a72d-21f5d6d1b782","Type":"ContainerStarted","Data":"205c999d7a5b3eb6e782020dc4e7f9c1b83399227a871f4b9a41cafe4ce0dfd6"} Nov 25 11:44:06 crc kubenswrapper[4706]: I1125 11:44:06.978917 4706 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-66df7c8f76-2csd2" Nov 25 11:44:07 crc kubenswrapper[4706]: I1125 11:44:07.001757 4706 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-66df7c8f76-2csd2" podStartSLOduration=3.001721688 podStartE2EDuration="3.001721688s" podCreationTimestamp="2025-11-25 11:44:04 +0000 UTC" firstStartedPulling="0001-01-01 
00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 11:44:06.997030031 +0000 UTC m=+455.911587412" watchObservedRunningTime="2025-11-25 11:44:07.001721688 +0000 UTC m=+455.916279109" Nov 25 11:44:24 crc kubenswrapper[4706]: I1125 11:44:24.964034 4706 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-66df7c8f76-2csd2" Nov 25 11:44:25 crc kubenswrapper[4706]: I1125 11:44:25.014124 4706 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-7qf2c"] Nov 25 11:44:50 crc kubenswrapper[4706]: I1125 11:44:50.055276 4706 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-image-registry/image-registry-697d97f7c8-7qf2c" podUID="f3c75c9b-c79d-4a63-8b0f-c7b474ad4b66" containerName="registry" containerID="cri-o://e09b2f41097ce87e6433a7578157815b27efdb16c8ac3f81e5f1c2096f58d9bf" gracePeriod=30 Nov 25 11:44:50 crc kubenswrapper[4706]: I1125 11:44:50.228218 4706 generic.go:334] "Generic (PLEG): container finished" podID="f3c75c9b-c79d-4a63-8b0f-c7b474ad4b66" containerID="e09b2f41097ce87e6433a7578157815b27efdb16c8ac3f81e5f1c2096f58d9bf" exitCode=0 Nov 25 11:44:50 crc kubenswrapper[4706]: I1125 11:44:50.228275 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-7qf2c" event={"ID":"f3c75c9b-c79d-4a63-8b0f-c7b474ad4b66","Type":"ContainerDied","Data":"e09b2f41097ce87e6433a7578157815b27efdb16c8ac3f81e5f1c2096f58d9bf"} Nov 25 11:44:50 crc kubenswrapper[4706]: I1125 11:44:50.383340 4706 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-7qf2c" Nov 25 11:44:50 crc kubenswrapper[4706]: I1125 11:44:50.446073 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5hfdx\" (UniqueName: \"kubernetes.io/projected/f3c75c9b-c79d-4a63-8b0f-c7b474ad4b66-kube-api-access-5hfdx\") pod \"f3c75c9b-c79d-4a63-8b0f-c7b474ad4b66\" (UID: \"f3c75c9b-c79d-4a63-8b0f-c7b474ad4b66\") " Nov 25 11:44:50 crc kubenswrapper[4706]: I1125 11:44:50.446136 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/f3c75c9b-c79d-4a63-8b0f-c7b474ad4b66-registry-tls\") pod \"f3c75c9b-c79d-4a63-8b0f-c7b474ad4b66\" (UID: \"f3c75c9b-c79d-4a63-8b0f-c7b474ad4b66\") " Nov 25 11:44:50 crc kubenswrapper[4706]: I1125 11:44:50.446167 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/f3c75c9b-c79d-4a63-8b0f-c7b474ad4b66-registry-certificates\") pod \"f3c75c9b-c79d-4a63-8b0f-c7b474ad4b66\" (UID: \"f3c75c9b-c79d-4a63-8b0f-c7b474ad4b66\") " Nov 25 11:44:50 crc kubenswrapper[4706]: I1125 11:44:50.446426 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-storage\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"f3c75c9b-c79d-4a63-8b0f-c7b474ad4b66\" (UID: \"f3c75c9b-c79d-4a63-8b0f-c7b474ad4b66\") " Nov 25 11:44:50 crc kubenswrapper[4706]: I1125 11:44:50.446455 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/f3c75c9b-c79d-4a63-8b0f-c7b474ad4b66-ca-trust-extracted\") pod \"f3c75c9b-c79d-4a63-8b0f-c7b474ad4b66\" (UID: \"f3c75c9b-c79d-4a63-8b0f-c7b474ad4b66\") " Nov 25 11:44:50 crc kubenswrapper[4706]: I1125 11:44:50.446482 4706 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/f3c75c9b-c79d-4a63-8b0f-c7b474ad4b66-trusted-ca\") pod \"f3c75c9b-c79d-4a63-8b0f-c7b474ad4b66\" (UID: \"f3c75c9b-c79d-4a63-8b0f-c7b474ad4b66\") " Nov 25 11:44:50 crc kubenswrapper[4706]: I1125 11:44:50.447973 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f3c75c9b-c79d-4a63-8b0f-c7b474ad4b66-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "f3c75c9b-c79d-4a63-8b0f-c7b474ad4b66" (UID: "f3c75c9b-c79d-4a63-8b0f-c7b474ad4b66"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 11:44:50 crc kubenswrapper[4706]: I1125 11:44:50.448068 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f3c75c9b-c79d-4a63-8b0f-c7b474ad4b66-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "f3c75c9b-c79d-4a63-8b0f-c7b474ad4b66" (UID: "f3c75c9b-c79d-4a63-8b0f-c7b474ad4b66"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 11:44:50 crc kubenswrapper[4706]: I1125 11:44:50.460000 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f3c75c9b-c79d-4a63-8b0f-c7b474ad4b66-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "f3c75c9b-c79d-4a63-8b0f-c7b474ad4b66" (UID: "f3c75c9b-c79d-4a63-8b0f-c7b474ad4b66"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 11:44:50 crc kubenswrapper[4706]: I1125 11:44:50.460567 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f3c75c9b-c79d-4a63-8b0f-c7b474ad4b66-kube-api-access-5hfdx" (OuterVolumeSpecName: "kube-api-access-5hfdx") pod "f3c75c9b-c79d-4a63-8b0f-c7b474ad4b66" (UID: "f3c75c9b-c79d-4a63-8b0f-c7b474ad4b66"). 
InnerVolumeSpecName "kube-api-access-5hfdx". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 11:44:50 crc kubenswrapper[4706]: I1125 11:44:50.463829 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f3c75c9b-c79d-4a63-8b0f-c7b474ad4b66-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "f3c75c9b-c79d-4a63-8b0f-c7b474ad4b66" (UID: "f3c75c9b-c79d-4a63-8b0f-c7b474ad4b66"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 11:44:50 crc kubenswrapper[4706]: I1125 11:44:50.466628 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "registry-storage") pod "f3c75c9b-c79d-4a63-8b0f-c7b474ad4b66" (UID: "f3c75c9b-c79d-4a63-8b0f-c7b474ad4b66"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue "" Nov 25 11:44:50 crc kubenswrapper[4706]: I1125 11:44:50.547958 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/f3c75c9b-c79d-4a63-8b0f-c7b474ad4b66-installation-pull-secrets\") pod \"f3c75c9b-c79d-4a63-8b0f-c7b474ad4b66\" (UID: \"f3c75c9b-c79d-4a63-8b0f-c7b474ad4b66\") " Nov 25 11:44:50 crc kubenswrapper[4706]: I1125 11:44:50.548380 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/f3c75c9b-c79d-4a63-8b0f-c7b474ad4b66-bound-sa-token\") pod \"f3c75c9b-c79d-4a63-8b0f-c7b474ad4b66\" (UID: \"f3c75c9b-c79d-4a63-8b0f-c7b474ad4b66\") " Nov 25 11:44:50 crc kubenswrapper[4706]: I1125 11:44:50.548530 4706 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5hfdx\" (UniqueName: 
\"kubernetes.io/projected/f3c75c9b-c79d-4a63-8b0f-c7b474ad4b66-kube-api-access-5hfdx\") on node \"crc\" DevicePath \"\"" Nov 25 11:44:50 crc kubenswrapper[4706]: I1125 11:44:50.548543 4706 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/f3c75c9b-c79d-4a63-8b0f-c7b474ad4b66-registry-tls\") on node \"crc\" DevicePath \"\"" Nov 25 11:44:50 crc kubenswrapper[4706]: I1125 11:44:50.548555 4706 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/f3c75c9b-c79d-4a63-8b0f-c7b474ad4b66-registry-certificates\") on node \"crc\" DevicePath \"\"" Nov 25 11:44:50 crc kubenswrapper[4706]: I1125 11:44:50.548564 4706 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/f3c75c9b-c79d-4a63-8b0f-c7b474ad4b66-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Nov 25 11:44:50 crc kubenswrapper[4706]: I1125 11:44:50.548572 4706 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/f3c75c9b-c79d-4a63-8b0f-c7b474ad4b66-trusted-ca\") on node \"crc\" DevicePath \"\"" Nov 25 11:44:50 crc kubenswrapper[4706]: I1125 11:44:50.551812 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f3c75c9b-c79d-4a63-8b0f-c7b474ad4b66-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "f3c75c9b-c79d-4a63-8b0f-c7b474ad4b66" (UID: "f3c75c9b-c79d-4a63-8b0f-c7b474ad4b66"). InnerVolumeSpecName "installation-pull-secrets". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 11:44:50 crc kubenswrapper[4706]: I1125 11:44:50.552847 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f3c75c9b-c79d-4a63-8b0f-c7b474ad4b66-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "f3c75c9b-c79d-4a63-8b0f-c7b474ad4b66" (UID: "f3c75c9b-c79d-4a63-8b0f-c7b474ad4b66"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 11:44:50 crc kubenswrapper[4706]: I1125 11:44:50.649161 4706 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/f3c75c9b-c79d-4a63-8b0f-c7b474ad4b66-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Nov 25 11:44:50 crc kubenswrapper[4706]: I1125 11:44:50.649197 4706 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/f3c75c9b-c79d-4a63-8b0f-c7b474ad4b66-bound-sa-token\") on node \"crc\" DevicePath \"\"" Nov 25 11:44:51 crc kubenswrapper[4706]: I1125 11:44:51.236342 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-7qf2c" event={"ID":"f3c75c9b-c79d-4a63-8b0f-c7b474ad4b66","Type":"ContainerDied","Data":"ec4d3097cdfc938345526a8823e6067012aba794681db1b4087ce1794e5886e4"} Nov 25 11:44:51 crc kubenswrapper[4706]: I1125 11:44:51.236426 4706 scope.go:117] "RemoveContainer" containerID="e09b2f41097ce87e6433a7578157815b27efdb16c8ac3f81e5f1c2096f58d9bf" Nov 25 11:44:51 crc kubenswrapper[4706]: I1125 11:44:51.236468 4706 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-7qf2c" Nov 25 11:44:51 crc kubenswrapper[4706]: I1125 11:44:51.275053 4706 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-7qf2c"] Nov 25 11:44:51 crc kubenswrapper[4706]: I1125 11:44:51.281232 4706 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-7qf2c"] Nov 25 11:44:51 crc kubenswrapper[4706]: I1125 11:44:51.931626 4706 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f3c75c9b-c79d-4a63-8b0f-c7b474ad4b66" path="/var/lib/kubelet/pods/f3c75c9b-c79d-4a63-8b0f-c7b474ad4b66/volumes" Nov 25 11:45:00 crc kubenswrapper[4706]: I1125 11:45:00.136692 4706 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29401185-2mzsm"] Nov 25 11:45:00 crc kubenswrapper[4706]: E1125 11:45:00.137704 4706 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f3c75c9b-c79d-4a63-8b0f-c7b474ad4b66" containerName="registry" Nov 25 11:45:00 crc kubenswrapper[4706]: I1125 11:45:00.137724 4706 state_mem.go:107] "Deleted CPUSet assignment" podUID="f3c75c9b-c79d-4a63-8b0f-c7b474ad4b66" containerName="registry" Nov 25 11:45:00 crc kubenswrapper[4706]: I1125 11:45:00.137874 4706 memory_manager.go:354] "RemoveStaleState removing state" podUID="f3c75c9b-c79d-4a63-8b0f-c7b474ad4b66" containerName="registry" Nov 25 11:45:00 crc kubenswrapper[4706]: I1125 11:45:00.138403 4706 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29401185-2mzsm" Nov 25 11:45:00 crc kubenswrapper[4706]: I1125 11:45:00.141193 4706 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Nov 25 11:45:00 crc kubenswrapper[4706]: I1125 11:45:00.141408 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Nov 25 11:45:00 crc kubenswrapper[4706]: I1125 11:45:00.146777 4706 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29401185-2mzsm"] Nov 25 11:45:00 crc kubenswrapper[4706]: I1125 11:45:00.271794 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/44769f3f-2fd2-4cfa-8837-e723aabd08b4-secret-volume\") pod \"collect-profiles-29401185-2mzsm\" (UID: \"44769f3f-2fd2-4cfa-8837-e723aabd08b4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29401185-2mzsm" Nov 25 11:45:00 crc kubenswrapper[4706]: I1125 11:45:00.271844 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/44769f3f-2fd2-4cfa-8837-e723aabd08b4-config-volume\") pod \"collect-profiles-29401185-2mzsm\" (UID: \"44769f3f-2fd2-4cfa-8837-e723aabd08b4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29401185-2mzsm" Nov 25 11:45:00 crc kubenswrapper[4706]: I1125 11:45:00.271883 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k8gzc\" (UniqueName: \"kubernetes.io/projected/44769f3f-2fd2-4cfa-8837-e723aabd08b4-kube-api-access-k8gzc\") pod \"collect-profiles-29401185-2mzsm\" (UID: \"44769f3f-2fd2-4cfa-8837-e723aabd08b4\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29401185-2mzsm" Nov 25 11:45:00 crc kubenswrapper[4706]: I1125 11:45:00.373472 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/44769f3f-2fd2-4cfa-8837-e723aabd08b4-secret-volume\") pod \"collect-profiles-29401185-2mzsm\" (UID: \"44769f3f-2fd2-4cfa-8837-e723aabd08b4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29401185-2mzsm" Nov 25 11:45:00 crc kubenswrapper[4706]: I1125 11:45:00.373524 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/44769f3f-2fd2-4cfa-8837-e723aabd08b4-config-volume\") pod \"collect-profiles-29401185-2mzsm\" (UID: \"44769f3f-2fd2-4cfa-8837-e723aabd08b4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29401185-2mzsm" Nov 25 11:45:00 crc kubenswrapper[4706]: I1125 11:45:00.373563 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k8gzc\" (UniqueName: \"kubernetes.io/projected/44769f3f-2fd2-4cfa-8837-e723aabd08b4-kube-api-access-k8gzc\") pod \"collect-profiles-29401185-2mzsm\" (UID: \"44769f3f-2fd2-4cfa-8837-e723aabd08b4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29401185-2mzsm" Nov 25 11:45:00 crc kubenswrapper[4706]: I1125 11:45:00.375087 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/44769f3f-2fd2-4cfa-8837-e723aabd08b4-config-volume\") pod \"collect-profiles-29401185-2mzsm\" (UID: \"44769f3f-2fd2-4cfa-8837-e723aabd08b4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29401185-2mzsm" Nov 25 11:45:00 crc kubenswrapper[4706]: I1125 11:45:00.381935 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/44769f3f-2fd2-4cfa-8837-e723aabd08b4-secret-volume\") pod \"collect-profiles-29401185-2mzsm\" (UID: \"44769f3f-2fd2-4cfa-8837-e723aabd08b4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29401185-2mzsm" Nov 25 11:45:00 crc kubenswrapper[4706]: I1125 11:45:00.390695 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k8gzc\" (UniqueName: \"kubernetes.io/projected/44769f3f-2fd2-4cfa-8837-e723aabd08b4-kube-api-access-k8gzc\") pod \"collect-profiles-29401185-2mzsm\" (UID: \"44769f3f-2fd2-4cfa-8837-e723aabd08b4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29401185-2mzsm" Nov 25 11:45:00 crc kubenswrapper[4706]: I1125 11:45:00.467531 4706 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29401185-2mzsm" Nov 25 11:45:00 crc kubenswrapper[4706]: I1125 11:45:00.890412 4706 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29401185-2mzsm"] Nov 25 11:45:01 crc kubenswrapper[4706]: I1125 11:45:01.298766 4706 generic.go:334] "Generic (PLEG): container finished" podID="44769f3f-2fd2-4cfa-8837-e723aabd08b4" containerID="05f50853f28e786210d1b81136d591816b6ac6d1ac0d687a23933c18ce35e154" exitCode=0 Nov 25 11:45:01 crc kubenswrapper[4706]: I1125 11:45:01.298901 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29401185-2mzsm" event={"ID":"44769f3f-2fd2-4cfa-8837-e723aabd08b4","Type":"ContainerDied","Data":"05f50853f28e786210d1b81136d591816b6ac6d1ac0d687a23933c18ce35e154"} Nov 25 11:45:01 crc kubenswrapper[4706]: I1125 11:45:01.300130 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29401185-2mzsm" 
event={"ID":"44769f3f-2fd2-4cfa-8837-e723aabd08b4","Type":"ContainerStarted","Data":"2693f54429ca9566d745e145b175b6cd94faaedd32a99147c96b7ccc2aa9d088"} Nov 25 11:45:02 crc kubenswrapper[4706]: I1125 11:45:02.527365 4706 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29401185-2mzsm" Nov 25 11:45:02 crc kubenswrapper[4706]: I1125 11:45:02.705887 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/44769f3f-2fd2-4cfa-8837-e723aabd08b4-config-volume\") pod \"44769f3f-2fd2-4cfa-8837-e723aabd08b4\" (UID: \"44769f3f-2fd2-4cfa-8837-e723aabd08b4\") " Nov 25 11:45:02 crc kubenswrapper[4706]: I1125 11:45:02.705956 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/44769f3f-2fd2-4cfa-8837-e723aabd08b4-secret-volume\") pod \"44769f3f-2fd2-4cfa-8837-e723aabd08b4\" (UID: \"44769f3f-2fd2-4cfa-8837-e723aabd08b4\") " Nov 25 11:45:02 crc kubenswrapper[4706]: I1125 11:45:02.705982 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k8gzc\" (UniqueName: \"kubernetes.io/projected/44769f3f-2fd2-4cfa-8837-e723aabd08b4-kube-api-access-k8gzc\") pod \"44769f3f-2fd2-4cfa-8837-e723aabd08b4\" (UID: \"44769f3f-2fd2-4cfa-8837-e723aabd08b4\") " Nov 25 11:45:02 crc kubenswrapper[4706]: I1125 11:45:02.706849 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/44769f3f-2fd2-4cfa-8837-e723aabd08b4-config-volume" (OuterVolumeSpecName: "config-volume") pod "44769f3f-2fd2-4cfa-8837-e723aabd08b4" (UID: "44769f3f-2fd2-4cfa-8837-e723aabd08b4"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 11:45:02 crc kubenswrapper[4706]: I1125 11:45:02.712034 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44769f3f-2fd2-4cfa-8837-e723aabd08b4-kube-api-access-k8gzc" (OuterVolumeSpecName: "kube-api-access-k8gzc") pod "44769f3f-2fd2-4cfa-8837-e723aabd08b4" (UID: "44769f3f-2fd2-4cfa-8837-e723aabd08b4"). InnerVolumeSpecName "kube-api-access-k8gzc". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 11:45:02 crc kubenswrapper[4706]: I1125 11:45:02.712483 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/44769f3f-2fd2-4cfa-8837-e723aabd08b4-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "44769f3f-2fd2-4cfa-8837-e723aabd08b4" (UID: "44769f3f-2fd2-4cfa-8837-e723aabd08b4"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 11:45:02 crc kubenswrapper[4706]: I1125 11:45:02.813758 4706 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/44769f3f-2fd2-4cfa-8837-e723aabd08b4-config-volume\") on node \"crc\" DevicePath \"\"" Nov 25 11:45:02 crc kubenswrapper[4706]: I1125 11:45:02.813810 4706 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/44769f3f-2fd2-4cfa-8837-e723aabd08b4-secret-volume\") on node \"crc\" DevicePath \"\"" Nov 25 11:45:02 crc kubenswrapper[4706]: I1125 11:45:02.813823 4706 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k8gzc\" (UniqueName: \"kubernetes.io/projected/44769f3f-2fd2-4cfa-8837-e723aabd08b4-kube-api-access-k8gzc\") on node \"crc\" DevicePath \"\"" Nov 25 11:45:03 crc kubenswrapper[4706]: I1125 11:45:03.312330 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29401185-2mzsm" 
event={"ID":"44769f3f-2fd2-4cfa-8837-e723aabd08b4","Type":"ContainerDied","Data":"2693f54429ca9566d745e145b175b6cd94faaedd32a99147c96b7ccc2aa9d088"} Nov 25 11:45:03 crc kubenswrapper[4706]: I1125 11:45:03.312380 4706 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2693f54429ca9566d745e145b175b6cd94faaedd32a99147c96b7ccc2aa9d088" Nov 25 11:45:03 crc kubenswrapper[4706]: I1125 11:45:03.312438 4706 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29401185-2mzsm" Nov 25 11:45:31 crc kubenswrapper[4706]: I1125 11:45:31.125614 4706 patch_prober.go:28] interesting pod/machine-config-daemon-dhfpm container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 25 11:45:31 crc kubenswrapper[4706]: I1125 11:45:31.126253 4706 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dhfpm" podUID="0930887a-320c-4506-8c9c-f94d6d64516a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 25 11:46:01 crc kubenswrapper[4706]: I1125 11:46:01.125158 4706 patch_prober.go:28] interesting pod/machine-config-daemon-dhfpm container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 25 11:46:01 crc kubenswrapper[4706]: I1125 11:46:01.125894 4706 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dhfpm" podUID="0930887a-320c-4506-8c9c-f94d6d64516a" containerName="machine-config-daemon" probeResult="failure" output="Get 
\"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 25 11:46:31 crc kubenswrapper[4706]: I1125 11:46:31.124761 4706 patch_prober.go:28] interesting pod/machine-config-daemon-dhfpm container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 25 11:46:31 crc kubenswrapper[4706]: I1125 11:46:31.125562 4706 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dhfpm" podUID="0930887a-320c-4506-8c9c-f94d6d64516a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 25 11:46:31 crc kubenswrapper[4706]: I1125 11:46:31.125634 4706 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-dhfpm" Nov 25 11:46:31 crc kubenswrapper[4706]: I1125 11:46:31.126568 4706 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"0dd63e85870564c9c1e19ba8f686c8d7b197f9c962efb9def7912bf046e425dd"} pod="openshift-machine-config-operator/machine-config-daemon-dhfpm" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 25 11:46:31 crc kubenswrapper[4706]: I1125 11:46:31.126671 4706 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-dhfpm" podUID="0930887a-320c-4506-8c9c-f94d6d64516a" containerName="machine-config-daemon" containerID="cri-o://0dd63e85870564c9c1e19ba8f686c8d7b197f9c962efb9def7912bf046e425dd" gracePeriod=600 Nov 25 11:46:31 crc kubenswrapper[4706]: I1125 11:46:31.825052 4706 generic.go:334] "Generic (PLEG): container finished" 
podID="0930887a-320c-4506-8c9c-f94d6d64516a" containerID="0dd63e85870564c9c1e19ba8f686c8d7b197f9c962efb9def7912bf046e425dd" exitCode=0 Nov 25 11:46:31 crc kubenswrapper[4706]: I1125 11:46:31.825136 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-dhfpm" event={"ID":"0930887a-320c-4506-8c9c-f94d6d64516a","Type":"ContainerDied","Data":"0dd63e85870564c9c1e19ba8f686c8d7b197f9c962efb9def7912bf046e425dd"} Nov 25 11:46:31 crc kubenswrapper[4706]: I1125 11:46:31.825782 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-dhfpm" event={"ID":"0930887a-320c-4506-8c9c-f94d6d64516a","Type":"ContainerStarted","Data":"683756e714349294998bf9e4fc9b79c9b932ba51c675e9492a76d30885edc873"} Nov 25 11:46:31 crc kubenswrapper[4706]: I1125 11:46:31.825811 4706 scope.go:117] "RemoveContainer" containerID="c43009691a1ca998131689b9f478affb1596618b922c6332af076407a2828da9" Nov 25 11:48:01 crc kubenswrapper[4706]: I1125 11:48:01.522891 4706 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-cainjector-7f985d654d-8qfjm"] Nov 25 11:48:01 crc kubenswrapper[4706]: E1125 11:48:01.523771 4706 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="44769f3f-2fd2-4cfa-8837-e723aabd08b4" containerName="collect-profiles" Nov 25 11:48:01 crc kubenswrapper[4706]: I1125 11:48:01.523784 4706 state_mem.go:107] "Deleted CPUSet assignment" podUID="44769f3f-2fd2-4cfa-8837-e723aabd08b4" containerName="collect-profiles" Nov 25 11:48:01 crc kubenswrapper[4706]: I1125 11:48:01.523885 4706 memory_manager.go:354] "RemoveStaleState removing state" podUID="44769f3f-2fd2-4cfa-8837-e723aabd08b4" containerName="collect-profiles" Nov 25 11:48:01 crc kubenswrapper[4706]: I1125 11:48:01.524364 4706 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-cainjector-7f985d654d-8qfjm" Nov 25 11:48:01 crc kubenswrapper[4706]: I1125 11:48:01.526787 4706 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"openshift-service-ca.crt" Nov 25 11:48:01 crc kubenswrapper[4706]: I1125 11:48:01.527637 4706 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-cainjector-dockercfg-8v9gh" Nov 25 11:48:01 crc kubenswrapper[4706]: I1125 11:48:01.529883 4706 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-5b446d88c5-qv4vk"] Nov 25 11:48:01 crc kubenswrapper[4706]: I1125 11:48:01.531408 4706 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-5b446d88c5-qv4vk" Nov 25 11:48:01 crc kubenswrapper[4706]: I1125 11:48:01.530087 4706 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"kube-root-ca.crt" Nov 25 11:48:01 crc kubenswrapper[4706]: I1125 11:48:01.533152 4706 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-dockercfg-n25zr" Nov 25 11:48:01 crc kubenswrapper[4706]: I1125 11:48:01.541245 4706 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-7f985d654d-8qfjm"] Nov 25 11:48:01 crc kubenswrapper[4706]: I1125 11:48:01.546262 4706 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-webhook-5655c58dd6-bk58z"] Nov 25 11:48:01 crc kubenswrapper[4706]: I1125 11:48:01.547331 4706 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-webhook-5655c58dd6-bk58z" Nov 25 11:48:01 crc kubenswrapper[4706]: I1125 11:48:01.549418 4706 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-5b446d88c5-qv4vk"] Nov 25 11:48:01 crc kubenswrapper[4706]: I1125 11:48:01.566762 4706 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-webhook-dockercfg-79wv8" Nov 25 11:48:01 crc kubenswrapper[4706]: I1125 11:48:01.566898 4706 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-5655c58dd6-bk58z"] Nov 25 11:48:01 crc kubenswrapper[4706]: I1125 11:48:01.698922 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w6wlk\" (UniqueName: \"kubernetes.io/projected/96496646-6a16-483a-a71d-c6debd0e44d7-kube-api-access-w6wlk\") pod \"cert-manager-cainjector-7f985d654d-8qfjm\" (UID: \"96496646-6a16-483a-a71d-c6debd0e44d7\") " pod="cert-manager/cert-manager-cainjector-7f985d654d-8qfjm" Nov 25 11:48:01 crc kubenswrapper[4706]: I1125 11:48:01.699191 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h77n2\" (UniqueName: \"kubernetes.io/projected/3a171d39-2023-41e0-b928-710c5b9eff19-kube-api-access-h77n2\") pod \"cert-manager-webhook-5655c58dd6-bk58z\" (UID: \"3a171d39-2023-41e0-b928-710c5b9eff19\") " pod="cert-manager/cert-manager-webhook-5655c58dd6-bk58z" Nov 25 11:48:01 crc kubenswrapper[4706]: I1125 11:48:01.699284 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vzhk6\" (UniqueName: \"kubernetes.io/projected/a9733b54-d1c6-48b7-9e7f-4c09ed97b604-kube-api-access-vzhk6\") pod \"cert-manager-5b446d88c5-qv4vk\" (UID: \"a9733b54-d1c6-48b7-9e7f-4c09ed97b604\") " pod="cert-manager/cert-manager-5b446d88c5-qv4vk" Nov 25 11:48:01 crc kubenswrapper[4706]: I1125 11:48:01.800671 4706 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w6wlk\" (UniqueName: \"kubernetes.io/projected/96496646-6a16-483a-a71d-c6debd0e44d7-kube-api-access-w6wlk\") pod \"cert-manager-cainjector-7f985d654d-8qfjm\" (UID: \"96496646-6a16-483a-a71d-c6debd0e44d7\") " pod="cert-manager/cert-manager-cainjector-7f985d654d-8qfjm" Nov 25 11:48:01 crc kubenswrapper[4706]: I1125 11:48:01.800989 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h77n2\" (UniqueName: \"kubernetes.io/projected/3a171d39-2023-41e0-b928-710c5b9eff19-kube-api-access-h77n2\") pod \"cert-manager-webhook-5655c58dd6-bk58z\" (UID: \"3a171d39-2023-41e0-b928-710c5b9eff19\") " pod="cert-manager/cert-manager-webhook-5655c58dd6-bk58z" Nov 25 11:48:01 crc kubenswrapper[4706]: I1125 11:48:01.801117 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vzhk6\" (UniqueName: \"kubernetes.io/projected/a9733b54-d1c6-48b7-9e7f-4c09ed97b604-kube-api-access-vzhk6\") pod \"cert-manager-5b446d88c5-qv4vk\" (UID: \"a9733b54-d1c6-48b7-9e7f-4c09ed97b604\") " pod="cert-manager/cert-manager-5b446d88c5-qv4vk" Nov 25 11:48:01 crc kubenswrapper[4706]: I1125 11:48:01.818489 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vzhk6\" (UniqueName: \"kubernetes.io/projected/a9733b54-d1c6-48b7-9e7f-4c09ed97b604-kube-api-access-vzhk6\") pod \"cert-manager-5b446d88c5-qv4vk\" (UID: \"a9733b54-d1c6-48b7-9e7f-4c09ed97b604\") " pod="cert-manager/cert-manager-5b446d88c5-qv4vk" Nov 25 11:48:01 crc kubenswrapper[4706]: I1125 11:48:01.822702 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h77n2\" (UniqueName: \"kubernetes.io/projected/3a171d39-2023-41e0-b928-710c5b9eff19-kube-api-access-h77n2\") pod \"cert-manager-webhook-5655c58dd6-bk58z\" (UID: \"3a171d39-2023-41e0-b928-710c5b9eff19\") " 
pod="cert-manager/cert-manager-webhook-5655c58dd6-bk58z" Nov 25 11:48:01 crc kubenswrapper[4706]: I1125 11:48:01.824210 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w6wlk\" (UniqueName: \"kubernetes.io/projected/96496646-6a16-483a-a71d-c6debd0e44d7-kube-api-access-w6wlk\") pod \"cert-manager-cainjector-7f985d654d-8qfjm\" (UID: \"96496646-6a16-483a-a71d-c6debd0e44d7\") " pod="cert-manager/cert-manager-cainjector-7f985d654d-8qfjm" Nov 25 11:48:01 crc kubenswrapper[4706]: I1125 11:48:01.869748 4706 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-5b446d88c5-qv4vk" Nov 25 11:48:01 crc kubenswrapper[4706]: I1125 11:48:01.869837 4706 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-7f985d654d-8qfjm" Nov 25 11:48:01 crc kubenswrapper[4706]: I1125 11:48:01.872284 4706 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-5655c58dd6-bk58z" Nov 25 11:48:02 crc kubenswrapper[4706]: I1125 11:48:02.096815 4706 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-5655c58dd6-bk58z"] Nov 25 11:48:02 crc kubenswrapper[4706]: I1125 11:48:02.111286 4706 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 25 11:48:02 crc kubenswrapper[4706]: I1125 11:48:02.344061 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-5655c58dd6-bk58z" event={"ID":"3a171d39-2023-41e0-b928-710c5b9eff19","Type":"ContainerStarted","Data":"c65c9b06602a074755cd407445c049ef7055ebbdc33012d1c73c56d14167fd06"} Nov 25 11:48:02 crc kubenswrapper[4706]: I1125 11:48:02.345241 4706 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-5b446d88c5-qv4vk"] Nov 25 11:48:02 crc kubenswrapper[4706]: I1125 11:48:02.355259 4706 kubelet.go:2428] "SyncLoop UPDATE" 
source="api" pods=["cert-manager/cert-manager-cainjector-7f985d654d-8qfjm"] Nov 25 11:48:02 crc kubenswrapper[4706]: W1125 11:48:02.357091 4706 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda9733b54_d1c6_48b7_9e7f_4c09ed97b604.slice/crio-5c20c79f7bf029aeb1631b959464446d2f722473398a932e06e6693f08e2373c WatchSource:0}: Error finding container 5c20c79f7bf029aeb1631b959464446d2f722473398a932e06e6693f08e2373c: Status 404 returned error can't find the container with id 5c20c79f7bf029aeb1631b959464446d2f722473398a932e06e6693f08e2373c Nov 25 11:48:02 crc kubenswrapper[4706]: W1125 11:48:02.361372 4706 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod96496646_6a16_483a_a71d_c6debd0e44d7.slice/crio-ce9e296c270101ec45b033ea839156d138adcbf0f28ae57a4030642c720d0338 WatchSource:0}: Error finding container ce9e296c270101ec45b033ea839156d138adcbf0f28ae57a4030642c720d0338: Status 404 returned error can't find the container with id ce9e296c270101ec45b033ea839156d138adcbf0f28ae57a4030642c720d0338 Nov 25 11:48:03 crc kubenswrapper[4706]: I1125 11:48:03.357271 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-5b446d88c5-qv4vk" event={"ID":"a9733b54-d1c6-48b7-9e7f-4c09ed97b604","Type":"ContainerStarted","Data":"5c20c79f7bf029aeb1631b959464446d2f722473398a932e06e6693f08e2373c"} Nov 25 11:48:03 crc kubenswrapper[4706]: I1125 11:48:03.358545 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-7f985d654d-8qfjm" event={"ID":"96496646-6a16-483a-a71d-c6debd0e44d7","Type":"ContainerStarted","Data":"ce9e296c270101ec45b033ea839156d138adcbf0f28ae57a4030642c720d0338"} Nov 25 11:48:07 crc kubenswrapper[4706]: I1125 11:48:07.381064 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-5b446d88c5-qv4vk" 
event={"ID":"a9733b54-d1c6-48b7-9e7f-4c09ed97b604","Type":"ContainerStarted","Data":"cbdf225c40a3d27e6692741343a64bcf15dae7cdcf9cb2782d7402f86bb157d1"} Nov 25 11:48:07 crc kubenswrapper[4706]: I1125 11:48:07.383203 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-7f985d654d-8qfjm" event={"ID":"96496646-6a16-483a-a71d-c6debd0e44d7","Type":"ContainerStarted","Data":"568ca306273f03e025745073408084c5a299477454e8f0ba138144d6768bf8cc"} Nov 25 11:48:07 crc kubenswrapper[4706]: I1125 11:48:07.387352 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-5655c58dd6-bk58z" event={"ID":"3a171d39-2023-41e0-b928-710c5b9eff19","Type":"ContainerStarted","Data":"ac4f99dee105b80dcba23c4cb005170cd3f5cfcbc99c0f08e2ea3d8cf0391809"} Nov 25 11:48:07 crc kubenswrapper[4706]: I1125 11:48:07.387611 4706 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="cert-manager/cert-manager-webhook-5655c58dd6-bk58z" Nov 25 11:48:07 crc kubenswrapper[4706]: I1125 11:48:07.398248 4706 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-5b446d88c5-qv4vk" podStartSLOduration=2.5511696710000002 podStartE2EDuration="6.398227928s" podCreationTimestamp="2025-11-25 11:48:01 +0000 UTC" firstStartedPulling="2025-11-25 11:48:02.360498331 +0000 UTC m=+691.275055702" lastFinishedPulling="2025-11-25 11:48:06.207556578 +0000 UTC m=+695.122113959" observedRunningTime="2025-11-25 11:48:07.395775166 +0000 UTC m=+696.310332547" watchObservedRunningTime="2025-11-25 11:48:07.398227928 +0000 UTC m=+696.312785309" Nov 25 11:48:07 crc kubenswrapper[4706]: I1125 11:48:07.460530 4706 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-cainjector-7f985d654d-8qfjm" podStartSLOduration=2.618089022 podStartE2EDuration="6.460506031s" podCreationTimestamp="2025-11-25 11:48:01 +0000 UTC" firstStartedPulling="2025-11-25 11:48:02.364525023 
+0000 UTC m=+691.279082404" lastFinishedPulling="2025-11-25 11:48:06.206942032 +0000 UTC m=+695.121499413" observedRunningTime="2025-11-25 11:48:07.456604973 +0000 UTC m=+696.371162374" watchObservedRunningTime="2025-11-25 11:48:07.460506031 +0000 UTC m=+696.375063412" Nov 25 11:48:07 crc kubenswrapper[4706]: I1125 11:48:07.477968 4706 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-webhook-5655c58dd6-bk58z" podStartSLOduration=2.318332481 podStartE2EDuration="6.477941631s" podCreationTimestamp="2025-11-25 11:48:01 +0000 UTC" firstStartedPulling="2025-11-25 11:48:02.11100417 +0000 UTC m=+691.025561551" lastFinishedPulling="2025-11-25 11:48:06.27061332 +0000 UTC m=+695.185170701" observedRunningTime="2025-11-25 11:48:07.476570267 +0000 UTC m=+696.391127638" watchObservedRunningTime="2025-11-25 11:48:07.477941631 +0000 UTC m=+696.392499012" Nov 25 11:48:11 crc kubenswrapper[4706]: I1125 11:48:11.436821 4706 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-q9rpr"] Nov 25 11:48:11 crc kubenswrapper[4706]: I1125 11:48:11.437851 4706 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-q9rpr" podUID="f1218bae-4153-4490-8847-ab2d07ca0ab6" containerName="ovn-controller" containerID="cri-o://96aa7fcebdc88f01d2260f95d255244e28c30d422f954da2222a5b7c17d05b96" gracePeriod=30 Nov 25 11:48:11 crc kubenswrapper[4706]: I1125 11:48:11.438498 4706 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-q9rpr" podUID="f1218bae-4153-4490-8847-ab2d07ca0ab6" containerName="sbdb" containerID="cri-o://62c923d955013808a55d99cb73f4239900fc83a2f53e1e8cceff3e9bc5768188" gracePeriod=30 Nov 25 11:48:11 crc kubenswrapper[4706]: I1125 11:48:11.438560 4706 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-q9rpr" 
podUID="f1218bae-4153-4490-8847-ab2d07ca0ab6" containerName="nbdb" containerID="cri-o://ca28080773ed8c026159b2309297e1c8ccd7cf79c4c19e3a62d89bc5a95851fe" gracePeriod=30 Nov 25 11:48:11 crc kubenswrapper[4706]: I1125 11:48:11.438624 4706 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-q9rpr" podUID="f1218bae-4153-4490-8847-ab2d07ca0ab6" containerName="northd" containerID="cri-o://86d79d5837993b0bfb40c7114fd69f45a9bfd2e956b5b0fe062706e920fecd48" gracePeriod=30 Nov 25 11:48:11 crc kubenswrapper[4706]: I1125 11:48:11.438676 4706 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-q9rpr" podUID="f1218bae-4153-4490-8847-ab2d07ca0ab6" containerName="kube-rbac-proxy-ovn-metrics" containerID="cri-o://e92e9ade6889e5400b3c3ddff066aa544d425cf0637b75071678b8c63f8e35f7" gracePeriod=30 Nov 25 11:48:11 crc kubenswrapper[4706]: I1125 11:48:11.438728 4706 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-q9rpr" podUID="f1218bae-4153-4490-8847-ab2d07ca0ab6" containerName="kube-rbac-proxy-node" containerID="cri-o://da5cea02464a703174faaa2a8a7dc6ba3c26bca96be0219f7304d81aba5be54e" gracePeriod=30 Nov 25 11:48:11 crc kubenswrapper[4706]: I1125 11:48:11.438776 4706 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-q9rpr" podUID="f1218bae-4153-4490-8847-ab2d07ca0ab6" containerName="ovn-acl-logging" containerID="cri-o://f7df3bf6c507e0fd5fb0f32a8785d67c96f47255fdc5d2aafb8838260ac334d0" gracePeriod=30 Nov 25 11:48:11 crc kubenswrapper[4706]: I1125 11:48:11.477217 4706 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-q9rpr" podUID="f1218bae-4153-4490-8847-ab2d07ca0ab6" containerName="ovnkube-controller" 
containerID="cri-o://1d86458011d93f6fe7285fb2f2cf484e62c79cf7a6171f9223b43b6413689879" gracePeriod=30 Nov 25 11:48:11 crc kubenswrapper[4706]: I1125 11:48:11.875744 4706 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="cert-manager/cert-manager-webhook-5655c58dd6-bk58z" Nov 25 11:48:12 crc kubenswrapper[4706]: I1125 11:48:12.184061 4706 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-q9rpr_f1218bae-4153-4490-8847-ab2d07ca0ab6/ovnkube-controller/3.log" Nov 25 11:48:12 crc kubenswrapper[4706]: I1125 11:48:12.187933 4706 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-q9rpr_f1218bae-4153-4490-8847-ab2d07ca0ab6/ovn-acl-logging/0.log" Nov 25 11:48:12 crc kubenswrapper[4706]: I1125 11:48:12.188913 4706 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-q9rpr_f1218bae-4153-4490-8847-ab2d07ca0ab6/ovn-controller/0.log" Nov 25 11:48:12 crc kubenswrapper[4706]: I1125 11:48:12.189572 4706 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-q9rpr" Nov 25 11:48:12 crc kubenswrapper[4706]: I1125 11:48:12.248569 4706 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-jpm5s"] Nov 25 11:48:12 crc kubenswrapper[4706]: E1125 11:48:12.248828 4706 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f1218bae-4153-4490-8847-ab2d07ca0ab6" containerName="ovn-acl-logging" Nov 25 11:48:12 crc kubenswrapper[4706]: I1125 11:48:12.248844 4706 state_mem.go:107] "Deleted CPUSet assignment" podUID="f1218bae-4153-4490-8847-ab2d07ca0ab6" containerName="ovn-acl-logging" Nov 25 11:48:12 crc kubenswrapper[4706]: E1125 11:48:12.248853 4706 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f1218bae-4153-4490-8847-ab2d07ca0ab6" containerName="ovnkube-controller" Nov 25 11:48:12 crc kubenswrapper[4706]: I1125 11:48:12.248862 4706 state_mem.go:107] "Deleted CPUSet assignment" podUID="f1218bae-4153-4490-8847-ab2d07ca0ab6" containerName="ovnkube-controller" Nov 25 11:48:12 crc kubenswrapper[4706]: E1125 11:48:12.248872 4706 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f1218bae-4153-4490-8847-ab2d07ca0ab6" containerName="northd" Nov 25 11:48:12 crc kubenswrapper[4706]: I1125 11:48:12.248879 4706 state_mem.go:107] "Deleted CPUSet assignment" podUID="f1218bae-4153-4490-8847-ab2d07ca0ab6" containerName="northd" Nov 25 11:48:12 crc kubenswrapper[4706]: E1125 11:48:12.248887 4706 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f1218bae-4153-4490-8847-ab2d07ca0ab6" containerName="kube-rbac-proxy-ovn-metrics" Nov 25 11:48:12 crc kubenswrapper[4706]: I1125 11:48:12.248893 4706 state_mem.go:107] "Deleted CPUSet assignment" podUID="f1218bae-4153-4490-8847-ab2d07ca0ab6" containerName="kube-rbac-proxy-ovn-metrics" Nov 25 11:48:12 crc kubenswrapper[4706]: E1125 11:48:12.248903 4706 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="f1218bae-4153-4490-8847-ab2d07ca0ab6" containerName="ovnkube-controller" Nov 25 11:48:12 crc kubenswrapper[4706]: I1125 11:48:12.248909 4706 state_mem.go:107] "Deleted CPUSet assignment" podUID="f1218bae-4153-4490-8847-ab2d07ca0ab6" containerName="ovnkube-controller" Nov 25 11:48:12 crc kubenswrapper[4706]: E1125 11:48:12.248917 4706 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f1218bae-4153-4490-8847-ab2d07ca0ab6" containerName="ovnkube-controller" Nov 25 11:48:12 crc kubenswrapper[4706]: I1125 11:48:12.248922 4706 state_mem.go:107] "Deleted CPUSet assignment" podUID="f1218bae-4153-4490-8847-ab2d07ca0ab6" containerName="ovnkube-controller" Nov 25 11:48:12 crc kubenswrapper[4706]: E1125 11:48:12.248931 4706 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f1218bae-4153-4490-8847-ab2d07ca0ab6" containerName="kubecfg-setup" Nov 25 11:48:12 crc kubenswrapper[4706]: I1125 11:48:12.248937 4706 state_mem.go:107] "Deleted CPUSet assignment" podUID="f1218bae-4153-4490-8847-ab2d07ca0ab6" containerName="kubecfg-setup" Nov 25 11:48:12 crc kubenswrapper[4706]: E1125 11:48:12.248946 4706 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f1218bae-4153-4490-8847-ab2d07ca0ab6" containerName="nbdb" Nov 25 11:48:12 crc kubenswrapper[4706]: I1125 11:48:12.248952 4706 state_mem.go:107] "Deleted CPUSet assignment" podUID="f1218bae-4153-4490-8847-ab2d07ca0ab6" containerName="nbdb" Nov 25 11:48:12 crc kubenswrapper[4706]: E1125 11:48:12.248961 4706 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f1218bae-4153-4490-8847-ab2d07ca0ab6" containerName="kube-rbac-proxy-node" Nov 25 11:48:12 crc kubenswrapper[4706]: I1125 11:48:12.248966 4706 state_mem.go:107] "Deleted CPUSet assignment" podUID="f1218bae-4153-4490-8847-ab2d07ca0ab6" containerName="kube-rbac-proxy-node" Nov 25 11:48:12 crc kubenswrapper[4706]: E1125 11:48:12.248975 4706 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="f1218bae-4153-4490-8847-ab2d07ca0ab6" containerName="sbdb" Nov 25 11:48:12 crc kubenswrapper[4706]: I1125 11:48:12.248981 4706 state_mem.go:107] "Deleted CPUSet assignment" podUID="f1218bae-4153-4490-8847-ab2d07ca0ab6" containerName="sbdb" Nov 25 11:48:12 crc kubenswrapper[4706]: E1125 11:48:12.248991 4706 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f1218bae-4153-4490-8847-ab2d07ca0ab6" containerName="ovnkube-controller" Nov 25 11:48:12 crc kubenswrapper[4706]: I1125 11:48:12.248996 4706 state_mem.go:107] "Deleted CPUSet assignment" podUID="f1218bae-4153-4490-8847-ab2d07ca0ab6" containerName="ovnkube-controller" Nov 25 11:48:12 crc kubenswrapper[4706]: E1125 11:48:12.249003 4706 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f1218bae-4153-4490-8847-ab2d07ca0ab6" containerName="ovnkube-controller" Nov 25 11:48:12 crc kubenswrapper[4706]: I1125 11:48:12.249009 4706 state_mem.go:107] "Deleted CPUSet assignment" podUID="f1218bae-4153-4490-8847-ab2d07ca0ab6" containerName="ovnkube-controller" Nov 25 11:48:12 crc kubenswrapper[4706]: E1125 11:48:12.249023 4706 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f1218bae-4153-4490-8847-ab2d07ca0ab6" containerName="ovn-controller" Nov 25 11:48:12 crc kubenswrapper[4706]: I1125 11:48:12.249028 4706 state_mem.go:107] "Deleted CPUSet assignment" podUID="f1218bae-4153-4490-8847-ab2d07ca0ab6" containerName="ovn-controller" Nov 25 11:48:12 crc kubenswrapper[4706]: I1125 11:48:12.249128 4706 memory_manager.go:354] "RemoveStaleState removing state" podUID="f1218bae-4153-4490-8847-ab2d07ca0ab6" containerName="sbdb" Nov 25 11:48:12 crc kubenswrapper[4706]: I1125 11:48:12.249140 4706 memory_manager.go:354] "RemoveStaleState removing state" podUID="f1218bae-4153-4490-8847-ab2d07ca0ab6" containerName="northd" Nov 25 11:48:12 crc kubenswrapper[4706]: I1125 11:48:12.249150 4706 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="f1218bae-4153-4490-8847-ab2d07ca0ab6" containerName="ovn-acl-logging" Nov 25 11:48:12 crc kubenswrapper[4706]: I1125 11:48:12.249159 4706 memory_manager.go:354] "RemoveStaleState removing state" podUID="f1218bae-4153-4490-8847-ab2d07ca0ab6" containerName="ovnkube-controller" Nov 25 11:48:12 crc kubenswrapper[4706]: I1125 11:48:12.249165 4706 memory_manager.go:354] "RemoveStaleState removing state" podUID="f1218bae-4153-4490-8847-ab2d07ca0ab6" containerName="kube-rbac-proxy-ovn-metrics" Nov 25 11:48:12 crc kubenswrapper[4706]: I1125 11:48:12.249175 4706 memory_manager.go:354] "RemoveStaleState removing state" podUID="f1218bae-4153-4490-8847-ab2d07ca0ab6" containerName="ovnkube-controller" Nov 25 11:48:12 crc kubenswrapper[4706]: I1125 11:48:12.249345 4706 memory_manager.go:354] "RemoveStaleState removing state" podUID="f1218bae-4153-4490-8847-ab2d07ca0ab6" containerName="ovnkube-controller" Nov 25 11:48:12 crc kubenswrapper[4706]: I1125 11:48:12.249356 4706 memory_manager.go:354] "RemoveStaleState removing state" podUID="f1218bae-4153-4490-8847-ab2d07ca0ab6" containerName="ovnkube-controller" Nov 25 11:48:12 crc kubenswrapper[4706]: I1125 11:48:12.249365 4706 memory_manager.go:354] "RemoveStaleState removing state" podUID="f1218bae-4153-4490-8847-ab2d07ca0ab6" containerName="kube-rbac-proxy-node" Nov 25 11:48:12 crc kubenswrapper[4706]: I1125 11:48:12.249371 4706 memory_manager.go:354] "RemoveStaleState removing state" podUID="f1218bae-4153-4490-8847-ab2d07ca0ab6" containerName="ovn-controller" Nov 25 11:48:12 crc kubenswrapper[4706]: I1125 11:48:12.249381 4706 memory_manager.go:354] "RemoveStaleState removing state" podUID="f1218bae-4153-4490-8847-ab2d07ca0ab6" containerName="nbdb" Nov 25 11:48:12 crc kubenswrapper[4706]: I1125 11:48:12.249550 4706 memory_manager.go:354] "RemoveStaleState removing state" podUID="f1218bae-4153-4490-8847-ab2d07ca0ab6" containerName="ovnkube-controller" Nov 25 11:48:12 crc kubenswrapper[4706]: I1125 11:48:12.251092 4706 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-jpm5s" Nov 25 11:48:12 crc kubenswrapper[4706]: I1125 11:48:12.366788 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/f1218bae-4153-4490-8847-ab2d07ca0ab6-env-overrides\") pod \"f1218bae-4153-4490-8847-ab2d07ca0ab6\" (UID: \"f1218bae-4153-4490-8847-ab2d07ca0ab6\") " Nov 25 11:48:12 crc kubenswrapper[4706]: I1125 11:48:12.367233 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/f1218bae-4153-4490-8847-ab2d07ca0ab6-host-var-lib-cni-networks-ovn-kubernetes\") pod \"f1218bae-4153-4490-8847-ab2d07ca0ab6\" (UID: \"f1218bae-4153-4490-8847-ab2d07ca0ab6\") " Nov 25 11:48:12 crc kubenswrapper[4706]: I1125 11:48:12.367330 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/f1218bae-4153-4490-8847-ab2d07ca0ab6-etc-openvswitch\") pod \"f1218bae-4153-4490-8847-ab2d07ca0ab6\" (UID: \"f1218bae-4153-4490-8847-ab2d07ca0ab6\") " Nov 25 11:48:12 crc kubenswrapper[4706]: I1125 11:48:12.367403 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/f1218bae-4153-4490-8847-ab2d07ca0ab6-var-lib-openvswitch\") pod \"f1218bae-4153-4490-8847-ab2d07ca0ab6\" (UID: \"f1218bae-4153-4490-8847-ab2d07ca0ab6\") " Nov 25 11:48:12 crc kubenswrapper[4706]: I1125 11:48:12.367436 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/f1218bae-4153-4490-8847-ab2d07ca0ab6-systemd-units\") pod \"f1218bae-4153-4490-8847-ab2d07ca0ab6\" (UID: \"f1218bae-4153-4490-8847-ab2d07ca0ab6\") " Nov 25 11:48:12 crc kubenswrapper[4706]: I1125 
11:48:12.367436 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f1218bae-4153-4490-8847-ab2d07ca0ab6-host-var-lib-cni-networks-ovn-kubernetes" (OuterVolumeSpecName: "host-var-lib-cni-networks-ovn-kubernetes") pod "f1218bae-4153-4490-8847-ab2d07ca0ab6" (UID: "f1218bae-4153-4490-8847-ab2d07ca0ab6"). InnerVolumeSpecName "host-var-lib-cni-networks-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 25 11:48:12 crc kubenswrapper[4706]: I1125 11:48:12.367465 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/f1218bae-4153-4490-8847-ab2d07ca0ab6-node-log\") pod \"f1218bae-4153-4490-8847-ab2d07ca0ab6\" (UID: \"f1218bae-4153-4490-8847-ab2d07ca0ab6\") " Nov 25 11:48:12 crc kubenswrapper[4706]: I1125 11:48:12.367549 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f1218bae-4153-4490-8847-ab2d07ca0ab6-node-log" (OuterVolumeSpecName: "node-log") pod "f1218bae-4153-4490-8847-ab2d07ca0ab6" (UID: "f1218bae-4153-4490-8847-ab2d07ca0ab6"). InnerVolumeSpecName "node-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 25 11:48:12 crc kubenswrapper[4706]: I1125 11:48:12.367550 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f1218bae-4153-4490-8847-ab2d07ca0ab6-var-lib-openvswitch" (OuterVolumeSpecName: "var-lib-openvswitch") pod "f1218bae-4153-4490-8847-ab2d07ca0ab6" (UID: "f1218bae-4153-4490-8847-ab2d07ca0ab6"). InnerVolumeSpecName "var-lib-openvswitch". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 25 11:48:12 crc kubenswrapper[4706]: I1125 11:48:12.367575 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f1218bae-4153-4490-8847-ab2d07ca0ab6-systemd-units" (OuterVolumeSpecName: "systemd-units") pod "f1218bae-4153-4490-8847-ab2d07ca0ab6" (UID: "f1218bae-4153-4490-8847-ab2d07ca0ab6"). InnerVolumeSpecName "systemd-units". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 25 11:48:12 crc kubenswrapper[4706]: I1125 11:48:12.367589 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/f1218bae-4153-4490-8847-ab2d07ca0ab6-ovnkube-script-lib\") pod \"f1218bae-4153-4490-8847-ab2d07ca0ab6\" (UID: \"f1218bae-4153-4490-8847-ab2d07ca0ab6\") " Nov 25 11:48:12 crc kubenswrapper[4706]: I1125 11:48:12.367619 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f1218bae-4153-4490-8847-ab2d07ca0ab6-etc-openvswitch" (OuterVolumeSpecName: "etc-openvswitch") pod "f1218bae-4153-4490-8847-ab2d07ca0ab6" (UID: "f1218bae-4153-4490-8847-ab2d07ca0ab6"). InnerVolumeSpecName "etc-openvswitch". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 25 11:48:12 crc kubenswrapper[4706]: I1125 11:48:12.367675 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/f1218bae-4153-4490-8847-ab2d07ca0ab6-host-run-ovn-kubernetes\") pod \"f1218bae-4153-4490-8847-ab2d07ca0ab6\" (UID: \"f1218bae-4153-4490-8847-ab2d07ca0ab6\") " Nov 25 11:48:12 crc kubenswrapper[4706]: I1125 11:48:12.367727 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/f1218bae-4153-4490-8847-ab2d07ca0ab6-run-systemd\") pod \"f1218bae-4153-4490-8847-ab2d07ca0ab6\" (UID: \"f1218bae-4153-4490-8847-ab2d07ca0ab6\") " Nov 25 11:48:12 crc kubenswrapper[4706]: I1125 11:48:12.367774 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f1218bae-4153-4490-8847-ab2d07ca0ab6-host-run-ovn-kubernetes" (OuterVolumeSpecName: "host-run-ovn-kubernetes") pod "f1218bae-4153-4490-8847-ab2d07ca0ab6" (UID: "f1218bae-4153-4490-8847-ab2d07ca0ab6"). InnerVolumeSpecName "host-run-ovn-kubernetes". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 25 11:48:12 crc kubenswrapper[4706]: I1125 11:48:12.367789 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b55sf\" (UniqueName: \"kubernetes.io/projected/f1218bae-4153-4490-8847-ab2d07ca0ab6-kube-api-access-b55sf\") pod \"f1218bae-4153-4490-8847-ab2d07ca0ab6\" (UID: \"f1218bae-4153-4490-8847-ab2d07ca0ab6\") " Nov 25 11:48:12 crc kubenswrapper[4706]: I1125 11:48:12.367821 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/f1218bae-4153-4490-8847-ab2d07ca0ab6-ovnkube-config\") pod \"f1218bae-4153-4490-8847-ab2d07ca0ab6\" (UID: \"f1218bae-4153-4490-8847-ab2d07ca0ab6\") " Nov 25 11:48:12 crc kubenswrapper[4706]: I1125 11:48:12.367854 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/f1218bae-4153-4490-8847-ab2d07ca0ab6-host-cni-bin\") pod \"f1218bae-4153-4490-8847-ab2d07ca0ab6\" (UID: \"f1218bae-4153-4490-8847-ab2d07ca0ab6\") " Nov 25 11:48:12 crc kubenswrapper[4706]: I1125 11:48:12.367877 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/f1218bae-4153-4490-8847-ab2d07ca0ab6-run-openvswitch\") pod \"f1218bae-4153-4490-8847-ab2d07ca0ab6\" (UID: \"f1218bae-4153-4490-8847-ab2d07ca0ab6\") " Nov 25 11:48:12 crc kubenswrapper[4706]: I1125 11:48:12.367907 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/f1218bae-4153-4490-8847-ab2d07ca0ab6-host-kubelet\") pod \"f1218bae-4153-4490-8847-ab2d07ca0ab6\" (UID: \"f1218bae-4153-4490-8847-ab2d07ca0ab6\") " Nov 25 11:48:12 crc kubenswrapper[4706]: I1125 11:48:12.367939 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-netns\" 
(UniqueName: \"kubernetes.io/host-path/f1218bae-4153-4490-8847-ab2d07ca0ab6-host-run-netns\") pod \"f1218bae-4153-4490-8847-ab2d07ca0ab6\" (UID: \"f1218bae-4153-4490-8847-ab2d07ca0ab6\") " Nov 25 11:48:12 crc kubenswrapper[4706]: I1125 11:48:12.367954 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f1218bae-4153-4490-8847-ab2d07ca0ab6-host-cni-bin" (OuterVolumeSpecName: "host-cni-bin") pod "f1218bae-4153-4490-8847-ab2d07ca0ab6" (UID: "f1218bae-4153-4490-8847-ab2d07ca0ab6"). InnerVolumeSpecName "host-cni-bin". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 25 11:48:12 crc kubenswrapper[4706]: I1125 11:48:12.367983 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f1218bae-4153-4490-8847-ab2d07ca0ab6-host-run-netns" (OuterVolumeSpecName: "host-run-netns") pod "f1218bae-4153-4490-8847-ab2d07ca0ab6" (UID: "f1218bae-4153-4490-8847-ab2d07ca0ab6"). InnerVolumeSpecName "host-run-netns". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 25 11:48:12 crc kubenswrapper[4706]: I1125 11:48:12.367995 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f1218bae-4153-4490-8847-ab2d07ca0ab6-run-openvswitch" (OuterVolumeSpecName: "run-openvswitch") pod "f1218bae-4153-4490-8847-ab2d07ca0ab6" (UID: "f1218bae-4153-4490-8847-ab2d07ca0ab6"). InnerVolumeSpecName "run-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 25 11:48:12 crc kubenswrapper[4706]: I1125 11:48:12.368019 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f1218bae-4153-4490-8847-ab2d07ca0ab6-host-kubelet" (OuterVolumeSpecName: "host-kubelet") pod "f1218bae-4153-4490-8847-ab2d07ca0ab6" (UID: "f1218bae-4153-4490-8847-ab2d07ca0ab6"). InnerVolumeSpecName "host-kubelet". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 25 11:48:12 crc kubenswrapper[4706]: I1125 11:48:12.368047 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/f1218bae-4153-4490-8847-ab2d07ca0ab6-ovn-node-metrics-cert\") pod \"f1218bae-4153-4490-8847-ab2d07ca0ab6\" (UID: \"f1218bae-4153-4490-8847-ab2d07ca0ab6\") " Nov 25 11:48:12 crc kubenswrapper[4706]: I1125 11:48:12.368137 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/f1218bae-4153-4490-8847-ab2d07ca0ab6-host-cni-netd\") pod \"f1218bae-4153-4490-8847-ab2d07ca0ab6\" (UID: \"f1218bae-4153-4490-8847-ab2d07ca0ab6\") " Nov 25 11:48:12 crc kubenswrapper[4706]: I1125 11:48:12.368165 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f1218bae-4153-4490-8847-ab2d07ca0ab6-host-cni-netd" (OuterVolumeSpecName: "host-cni-netd") pod "f1218bae-4153-4490-8847-ab2d07ca0ab6" (UID: "f1218bae-4153-4490-8847-ab2d07ca0ab6"). InnerVolumeSpecName "host-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 25 11:48:12 crc kubenswrapper[4706]: I1125 11:48:12.368415 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/f1218bae-4153-4490-8847-ab2d07ca0ab6-log-socket\") pod \"f1218bae-4153-4490-8847-ab2d07ca0ab6\" (UID: \"f1218bae-4153-4490-8847-ab2d07ca0ab6\") " Nov 25 11:48:12 crc kubenswrapper[4706]: I1125 11:48:12.368458 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f1218bae-4153-4490-8847-ab2d07ca0ab6-log-socket" (OuterVolumeSpecName: "log-socket") pod "f1218bae-4153-4490-8847-ab2d07ca0ab6" (UID: "f1218bae-4153-4490-8847-ab2d07ca0ab6"). InnerVolumeSpecName "log-socket". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 25 11:48:12 crc kubenswrapper[4706]: I1125 11:48:12.368595 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/f1218bae-4153-4490-8847-ab2d07ca0ab6-run-ovn\") pod \"f1218bae-4153-4490-8847-ab2d07ca0ab6\" (UID: \"f1218bae-4153-4490-8847-ab2d07ca0ab6\") " Nov 25 11:48:12 crc kubenswrapper[4706]: I1125 11:48:12.368617 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f1218bae-4153-4490-8847-ab2d07ca0ab6-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "f1218bae-4153-4490-8847-ab2d07ca0ab6" (UID: "f1218bae-4153-4490-8847-ab2d07ca0ab6"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 11:48:12 crc kubenswrapper[4706]: I1125 11:48:12.368644 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/f1218bae-4153-4490-8847-ab2d07ca0ab6-host-slash\") pod \"f1218bae-4153-4490-8847-ab2d07ca0ab6\" (UID: \"f1218bae-4153-4490-8847-ab2d07ca0ab6\") " Nov 25 11:48:12 crc kubenswrapper[4706]: I1125 11:48:12.368677 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f1218bae-4153-4490-8847-ab2d07ca0ab6-run-ovn" (OuterVolumeSpecName: "run-ovn") pod "f1218bae-4153-4490-8847-ab2d07ca0ab6" (UID: "f1218bae-4153-4490-8847-ab2d07ca0ab6"). InnerVolumeSpecName "run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 25 11:48:12 crc kubenswrapper[4706]: I1125 11:48:12.368781 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f1218bae-4153-4490-8847-ab2d07ca0ab6-host-slash" (OuterVolumeSpecName: "host-slash") pod "f1218bae-4153-4490-8847-ab2d07ca0ab6" (UID: "f1218bae-4153-4490-8847-ab2d07ca0ab6"). InnerVolumeSpecName "host-slash". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 25 11:48:12 crc kubenswrapper[4706]: I1125 11:48:12.368879 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/7d7014be-b45a-4b6d-ae16-ba5f61b48a23-ovn-node-metrics-cert\") pod \"ovnkube-node-jpm5s\" (UID: \"7d7014be-b45a-4b6d-ae16-ba5f61b48a23\") " pod="openshift-ovn-kubernetes/ovnkube-node-jpm5s" Nov 25 11:48:12 crc kubenswrapper[4706]: I1125 11:48:12.368937 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/7d7014be-b45a-4b6d-ae16-ba5f61b48a23-node-log\") pod \"ovnkube-node-jpm5s\" (UID: \"7d7014be-b45a-4b6d-ae16-ba5f61b48a23\") " pod="openshift-ovn-kubernetes/ovnkube-node-jpm5s" Nov 25 11:48:12 crc kubenswrapper[4706]: I1125 11:48:12.368884 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f1218bae-4153-4490-8847-ab2d07ca0ab6-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "f1218bae-4153-4490-8847-ab2d07ca0ab6" (UID: "f1218bae-4153-4490-8847-ab2d07ca0ab6"). InnerVolumeSpecName "env-overrides". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 11:48:12 crc kubenswrapper[4706]: I1125 11:48:12.368981 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/7d7014be-b45a-4b6d-ae16-ba5f61b48a23-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-jpm5s\" (UID: \"7d7014be-b45a-4b6d-ae16-ba5f61b48a23\") " pod="openshift-ovn-kubernetes/ovnkube-node-jpm5s" Nov 25 11:48:12 crc kubenswrapper[4706]: I1125 11:48:12.369088 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/7d7014be-b45a-4b6d-ae16-ba5f61b48a23-host-slash\") pod \"ovnkube-node-jpm5s\" (UID: \"7d7014be-b45a-4b6d-ae16-ba5f61b48a23\") " pod="openshift-ovn-kubernetes/ovnkube-node-jpm5s" Nov 25 11:48:12 crc kubenswrapper[4706]: I1125 11:48:12.369129 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f1218bae-4153-4490-8847-ab2d07ca0ab6-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "f1218bae-4153-4490-8847-ab2d07ca0ab6" (UID: "f1218bae-4153-4490-8847-ab2d07ca0ab6"). InnerVolumeSpecName "ovnkube-script-lib". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 11:48:12 crc kubenswrapper[4706]: I1125 11:48:12.369150 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/7d7014be-b45a-4b6d-ae16-ba5f61b48a23-ovnkube-script-lib\") pod \"ovnkube-node-jpm5s\" (UID: \"7d7014be-b45a-4b6d-ae16-ba5f61b48a23\") " pod="openshift-ovn-kubernetes/ovnkube-node-jpm5s" Nov 25 11:48:12 crc kubenswrapper[4706]: I1125 11:48:12.369296 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-npp8w\" (UniqueName: \"kubernetes.io/projected/7d7014be-b45a-4b6d-ae16-ba5f61b48a23-kube-api-access-npp8w\") pod \"ovnkube-node-jpm5s\" (UID: \"7d7014be-b45a-4b6d-ae16-ba5f61b48a23\") " pod="openshift-ovn-kubernetes/ovnkube-node-jpm5s" Nov 25 11:48:12 crc kubenswrapper[4706]: I1125 11:48:12.369390 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/7d7014be-b45a-4b6d-ae16-ba5f61b48a23-host-cni-netd\") pod \"ovnkube-node-jpm5s\" (UID: \"7d7014be-b45a-4b6d-ae16-ba5f61b48a23\") " pod="openshift-ovn-kubernetes/ovnkube-node-jpm5s" Nov 25 11:48:12 crc kubenswrapper[4706]: I1125 11:48:12.369425 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/7d7014be-b45a-4b6d-ae16-ba5f61b48a23-log-socket\") pod \"ovnkube-node-jpm5s\" (UID: \"7d7014be-b45a-4b6d-ae16-ba5f61b48a23\") " pod="openshift-ovn-kubernetes/ovnkube-node-jpm5s" Nov 25 11:48:12 crc kubenswrapper[4706]: I1125 11:48:12.369451 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/7d7014be-b45a-4b6d-ae16-ba5f61b48a23-etc-openvswitch\") pod \"ovnkube-node-jpm5s\" (UID: 
\"7d7014be-b45a-4b6d-ae16-ba5f61b48a23\") " pod="openshift-ovn-kubernetes/ovnkube-node-jpm5s" Nov 25 11:48:12 crc kubenswrapper[4706]: I1125 11:48:12.369473 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/7d7014be-b45a-4b6d-ae16-ba5f61b48a23-host-cni-bin\") pod \"ovnkube-node-jpm5s\" (UID: \"7d7014be-b45a-4b6d-ae16-ba5f61b48a23\") " pod="openshift-ovn-kubernetes/ovnkube-node-jpm5s" Nov 25 11:48:12 crc kubenswrapper[4706]: I1125 11:48:12.369516 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/7d7014be-b45a-4b6d-ae16-ba5f61b48a23-run-ovn\") pod \"ovnkube-node-jpm5s\" (UID: \"7d7014be-b45a-4b6d-ae16-ba5f61b48a23\") " pod="openshift-ovn-kubernetes/ovnkube-node-jpm5s" Nov 25 11:48:12 crc kubenswrapper[4706]: I1125 11:48:12.369546 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/7d7014be-b45a-4b6d-ae16-ba5f61b48a23-env-overrides\") pod \"ovnkube-node-jpm5s\" (UID: \"7d7014be-b45a-4b6d-ae16-ba5f61b48a23\") " pod="openshift-ovn-kubernetes/ovnkube-node-jpm5s" Nov 25 11:48:12 crc kubenswrapper[4706]: I1125 11:48:12.369575 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/7d7014be-b45a-4b6d-ae16-ba5f61b48a23-var-lib-openvswitch\") pod \"ovnkube-node-jpm5s\" (UID: \"7d7014be-b45a-4b6d-ae16-ba5f61b48a23\") " pod="openshift-ovn-kubernetes/ovnkube-node-jpm5s" Nov 25 11:48:12 crc kubenswrapper[4706]: I1125 11:48:12.369590 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/7d7014be-b45a-4b6d-ae16-ba5f61b48a23-ovnkube-config\") pod \"ovnkube-node-jpm5s\" 
(UID: \"7d7014be-b45a-4b6d-ae16-ba5f61b48a23\") " pod="openshift-ovn-kubernetes/ovnkube-node-jpm5s" Nov 25 11:48:12 crc kubenswrapper[4706]: I1125 11:48:12.369619 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/7d7014be-b45a-4b6d-ae16-ba5f61b48a23-host-run-netns\") pod \"ovnkube-node-jpm5s\" (UID: \"7d7014be-b45a-4b6d-ae16-ba5f61b48a23\") " pod="openshift-ovn-kubernetes/ovnkube-node-jpm5s" Nov 25 11:48:12 crc kubenswrapper[4706]: I1125 11:48:12.369634 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/7d7014be-b45a-4b6d-ae16-ba5f61b48a23-run-openvswitch\") pod \"ovnkube-node-jpm5s\" (UID: \"7d7014be-b45a-4b6d-ae16-ba5f61b48a23\") " pod="openshift-ovn-kubernetes/ovnkube-node-jpm5s" Nov 25 11:48:12 crc kubenswrapper[4706]: I1125 11:48:12.369649 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/7d7014be-b45a-4b6d-ae16-ba5f61b48a23-run-systemd\") pod \"ovnkube-node-jpm5s\" (UID: \"7d7014be-b45a-4b6d-ae16-ba5f61b48a23\") " pod="openshift-ovn-kubernetes/ovnkube-node-jpm5s" Nov 25 11:48:12 crc kubenswrapper[4706]: I1125 11:48:12.369673 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/7d7014be-b45a-4b6d-ae16-ba5f61b48a23-host-run-ovn-kubernetes\") pod \"ovnkube-node-jpm5s\" (UID: \"7d7014be-b45a-4b6d-ae16-ba5f61b48a23\") " pod="openshift-ovn-kubernetes/ovnkube-node-jpm5s" Nov 25 11:48:12 crc kubenswrapper[4706]: I1125 11:48:12.369699 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/7d7014be-b45a-4b6d-ae16-ba5f61b48a23-host-kubelet\") 
pod \"ovnkube-node-jpm5s\" (UID: \"7d7014be-b45a-4b6d-ae16-ba5f61b48a23\") " pod="openshift-ovn-kubernetes/ovnkube-node-jpm5s" Nov 25 11:48:12 crc kubenswrapper[4706]: I1125 11:48:12.369722 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/7d7014be-b45a-4b6d-ae16-ba5f61b48a23-systemd-units\") pod \"ovnkube-node-jpm5s\" (UID: \"7d7014be-b45a-4b6d-ae16-ba5f61b48a23\") " pod="openshift-ovn-kubernetes/ovnkube-node-jpm5s" Nov 25 11:48:12 crc kubenswrapper[4706]: I1125 11:48:12.369799 4706 reconciler_common.go:293] "Volume detached for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/f1218bae-4153-4490-8847-ab2d07ca0ab6-var-lib-openvswitch\") on node \"crc\" DevicePath \"\"" Nov 25 11:48:12 crc kubenswrapper[4706]: I1125 11:48:12.369811 4706 reconciler_common.go:293] "Volume detached for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/f1218bae-4153-4490-8847-ab2d07ca0ab6-node-log\") on node \"crc\" DevicePath \"\"" Nov 25 11:48:12 crc kubenswrapper[4706]: I1125 11:48:12.369820 4706 reconciler_common.go:293] "Volume detached for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/f1218bae-4153-4490-8847-ab2d07ca0ab6-systemd-units\") on node \"crc\" DevicePath \"\"" Nov 25 11:48:12 crc kubenswrapper[4706]: I1125 11:48:12.369828 4706 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/f1218bae-4153-4490-8847-ab2d07ca0ab6-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Nov 25 11:48:12 crc kubenswrapper[4706]: I1125 11:48:12.369838 4706 reconciler_common.go:293] "Volume detached for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/f1218bae-4153-4490-8847-ab2d07ca0ab6-host-run-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Nov 25 11:48:12 crc kubenswrapper[4706]: I1125 11:48:12.369850 4706 reconciler_common.go:293] "Volume 
detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/f1218bae-4153-4490-8847-ab2d07ca0ab6-ovnkube-config\") on node \"crc\" DevicePath \"\"" Nov 25 11:48:12 crc kubenswrapper[4706]: I1125 11:48:12.369860 4706 reconciler_common.go:293] "Volume detached for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/f1218bae-4153-4490-8847-ab2d07ca0ab6-host-cni-bin\") on node \"crc\" DevicePath \"\"" Nov 25 11:48:12 crc kubenswrapper[4706]: I1125 11:48:12.369868 4706 reconciler_common.go:293] "Volume detached for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/f1218bae-4153-4490-8847-ab2d07ca0ab6-run-openvswitch\") on node \"crc\" DevicePath \"\"" Nov 25 11:48:12 crc kubenswrapper[4706]: I1125 11:48:12.369876 4706 reconciler_common.go:293] "Volume detached for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/f1218bae-4153-4490-8847-ab2d07ca0ab6-host-kubelet\") on node \"crc\" DevicePath \"\"" Nov 25 11:48:12 crc kubenswrapper[4706]: I1125 11:48:12.369884 4706 reconciler_common.go:293] "Volume detached for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/f1218bae-4153-4490-8847-ab2d07ca0ab6-host-cni-netd\") on node \"crc\" DevicePath \"\"" Nov 25 11:48:12 crc kubenswrapper[4706]: I1125 11:48:12.369892 4706 reconciler_common.go:293] "Volume detached for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/f1218bae-4153-4490-8847-ab2d07ca0ab6-host-run-netns\") on node \"crc\" DevicePath \"\"" Nov 25 11:48:12 crc kubenswrapper[4706]: I1125 11:48:12.369901 4706 reconciler_common.go:293] "Volume detached for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/f1218bae-4153-4490-8847-ab2d07ca0ab6-log-socket\") on node \"crc\" DevicePath \"\"" Nov 25 11:48:12 crc kubenswrapper[4706]: I1125 11:48:12.369910 4706 reconciler_common.go:293] "Volume detached for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/f1218bae-4153-4490-8847-ab2d07ca0ab6-run-ovn\") on node \"crc\" 
DevicePath \"\"" Nov 25 11:48:12 crc kubenswrapper[4706]: I1125 11:48:12.369926 4706 reconciler_common.go:293] "Volume detached for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/f1218bae-4153-4490-8847-ab2d07ca0ab6-host-slash\") on node \"crc\" DevicePath \"\"" Nov 25 11:48:12 crc kubenswrapper[4706]: I1125 11:48:12.369942 4706 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/f1218bae-4153-4490-8847-ab2d07ca0ab6-env-overrides\") on node \"crc\" DevicePath \"\"" Nov 25 11:48:12 crc kubenswrapper[4706]: I1125 11:48:12.369955 4706 reconciler_common.go:293] "Volume detached for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/f1218bae-4153-4490-8847-ab2d07ca0ab6-host-var-lib-cni-networks-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Nov 25 11:48:12 crc kubenswrapper[4706]: I1125 11:48:12.369968 4706 reconciler_common.go:293] "Volume detached for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/f1218bae-4153-4490-8847-ab2d07ca0ab6-etc-openvswitch\") on node \"crc\" DevicePath \"\"" Nov 25 11:48:12 crc kubenswrapper[4706]: I1125 11:48:12.373838 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f1218bae-4153-4490-8847-ab2d07ca0ab6-kube-api-access-b55sf" (OuterVolumeSpecName: "kube-api-access-b55sf") pod "f1218bae-4153-4490-8847-ab2d07ca0ab6" (UID: "f1218bae-4153-4490-8847-ab2d07ca0ab6"). InnerVolumeSpecName "kube-api-access-b55sf". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 11:48:12 crc kubenswrapper[4706]: I1125 11:48:12.374331 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f1218bae-4153-4490-8847-ab2d07ca0ab6-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "f1218bae-4153-4490-8847-ab2d07ca0ab6" (UID: "f1218bae-4153-4490-8847-ab2d07ca0ab6"). InnerVolumeSpecName "ovn-node-metrics-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 11:48:12 crc kubenswrapper[4706]: I1125 11:48:12.383770 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f1218bae-4153-4490-8847-ab2d07ca0ab6-run-systemd" (OuterVolumeSpecName: "run-systemd") pod "f1218bae-4153-4490-8847-ab2d07ca0ab6" (UID: "f1218bae-4153-4490-8847-ab2d07ca0ab6"). InnerVolumeSpecName "run-systemd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 25 11:48:12 crc kubenswrapper[4706]: I1125 11:48:12.433387 4706 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-s47nr_9912058e-28f5-4cec-9eeb-03e37e0dc5c1/kube-multus/2.log" Nov 25 11:48:12 crc kubenswrapper[4706]: I1125 11:48:12.434029 4706 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-s47nr_9912058e-28f5-4cec-9eeb-03e37e0dc5c1/kube-multus/1.log" Nov 25 11:48:12 crc kubenswrapper[4706]: I1125 11:48:12.434093 4706 generic.go:334] "Generic (PLEG): container finished" podID="9912058e-28f5-4cec-9eeb-03e37e0dc5c1" containerID="198cfd82640633cc783bf590d5743bed75f93473c1ccd934ea506aef32ea6201" exitCode=2 Nov 25 11:48:12 crc kubenswrapper[4706]: I1125 11:48:12.434213 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-s47nr" event={"ID":"9912058e-28f5-4cec-9eeb-03e37e0dc5c1","Type":"ContainerDied","Data":"198cfd82640633cc783bf590d5743bed75f93473c1ccd934ea506aef32ea6201"} Nov 25 11:48:12 crc kubenswrapper[4706]: I1125 11:48:12.434349 4706 scope.go:117] "RemoveContainer" containerID="8831e77983548cfffd56f81ff9f25b90d70dfb71b47b545af370b0a813fa19a9" Nov 25 11:48:12 crc kubenswrapper[4706]: I1125 11:48:12.434965 4706 scope.go:117] "RemoveContainer" containerID="198cfd82640633cc783bf590d5743bed75f93473c1ccd934ea506aef32ea6201" Nov 25 11:48:12 crc kubenswrapper[4706]: E1125 11:48:12.435176 4706 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"kube-multus\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-multus pod=multus-s47nr_openshift-multus(9912058e-28f5-4cec-9eeb-03e37e0dc5c1)\"" pod="openshift-multus/multus-s47nr" podUID="9912058e-28f5-4cec-9eeb-03e37e0dc5c1" Nov 25 11:48:12 crc kubenswrapper[4706]: I1125 11:48:12.438997 4706 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-q9rpr_f1218bae-4153-4490-8847-ab2d07ca0ab6/ovnkube-controller/3.log" Nov 25 11:48:12 crc kubenswrapper[4706]: I1125 11:48:12.449595 4706 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-q9rpr_f1218bae-4153-4490-8847-ab2d07ca0ab6/ovn-acl-logging/0.log" Nov 25 11:48:12 crc kubenswrapper[4706]: I1125 11:48:12.450224 4706 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-q9rpr_f1218bae-4153-4490-8847-ab2d07ca0ab6/ovn-controller/0.log" Nov 25 11:48:12 crc kubenswrapper[4706]: I1125 11:48:12.450741 4706 generic.go:334] "Generic (PLEG): container finished" podID="f1218bae-4153-4490-8847-ab2d07ca0ab6" containerID="1d86458011d93f6fe7285fb2f2cf484e62c79cf7a6171f9223b43b6413689879" exitCode=0 Nov 25 11:48:12 crc kubenswrapper[4706]: I1125 11:48:12.450771 4706 generic.go:334] "Generic (PLEG): container finished" podID="f1218bae-4153-4490-8847-ab2d07ca0ab6" containerID="62c923d955013808a55d99cb73f4239900fc83a2f53e1e8cceff3e9bc5768188" exitCode=0 Nov 25 11:48:12 crc kubenswrapper[4706]: I1125 11:48:12.450782 4706 generic.go:334] "Generic (PLEG): container finished" podID="f1218bae-4153-4490-8847-ab2d07ca0ab6" containerID="ca28080773ed8c026159b2309297e1c8ccd7cf79c4c19e3a62d89bc5a95851fe" exitCode=0 Nov 25 11:48:12 crc kubenswrapper[4706]: I1125 11:48:12.450791 4706 generic.go:334] "Generic (PLEG): container finished" podID="f1218bae-4153-4490-8847-ab2d07ca0ab6" containerID="86d79d5837993b0bfb40c7114fd69f45a9bfd2e956b5b0fe062706e920fecd48" exitCode=0 Nov 25 11:48:12 crc 
kubenswrapper[4706]: I1125 11:48:12.450804 4706 generic.go:334] "Generic (PLEG): container finished" podID="f1218bae-4153-4490-8847-ab2d07ca0ab6" containerID="e92e9ade6889e5400b3c3ddff066aa544d425cf0637b75071678b8c63f8e35f7" exitCode=0 Nov 25 11:48:12 crc kubenswrapper[4706]: I1125 11:48:12.450814 4706 generic.go:334] "Generic (PLEG): container finished" podID="f1218bae-4153-4490-8847-ab2d07ca0ab6" containerID="da5cea02464a703174faaa2a8a7dc6ba3c26bca96be0219f7304d81aba5be54e" exitCode=0 Nov 25 11:48:12 crc kubenswrapper[4706]: I1125 11:48:12.450822 4706 generic.go:334] "Generic (PLEG): container finished" podID="f1218bae-4153-4490-8847-ab2d07ca0ab6" containerID="f7df3bf6c507e0fd5fb0f32a8785d67c96f47255fdc5d2aafb8838260ac334d0" exitCode=143 Nov 25 11:48:12 crc kubenswrapper[4706]: I1125 11:48:12.450829 4706 generic.go:334] "Generic (PLEG): container finished" podID="f1218bae-4153-4490-8847-ab2d07ca0ab6" containerID="96aa7fcebdc88f01d2260f95d255244e28c30d422f954da2222a5b7c17d05b96" exitCode=143 Nov 25 11:48:12 crc kubenswrapper[4706]: I1125 11:48:12.450816 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-q9rpr" event={"ID":"f1218bae-4153-4490-8847-ab2d07ca0ab6","Type":"ContainerDied","Data":"1d86458011d93f6fe7285fb2f2cf484e62c79cf7a6171f9223b43b6413689879"} Nov 25 11:48:12 crc kubenswrapper[4706]: I1125 11:48:12.450873 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-q9rpr" event={"ID":"f1218bae-4153-4490-8847-ab2d07ca0ab6","Type":"ContainerDied","Data":"62c923d955013808a55d99cb73f4239900fc83a2f53e1e8cceff3e9bc5768188"} Nov 25 11:48:12 crc kubenswrapper[4706]: I1125 11:48:12.450890 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-q9rpr" event={"ID":"f1218bae-4153-4490-8847-ab2d07ca0ab6","Type":"ContainerDied","Data":"ca28080773ed8c026159b2309297e1c8ccd7cf79c4c19e3a62d89bc5a95851fe"} Nov 25 11:48:12 crc kubenswrapper[4706]: 
I1125 11:48:12.450902 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-q9rpr" event={"ID":"f1218bae-4153-4490-8847-ab2d07ca0ab6","Type":"ContainerDied","Data":"86d79d5837993b0bfb40c7114fd69f45a9bfd2e956b5b0fe062706e920fecd48"} Nov 25 11:48:12 crc kubenswrapper[4706]: I1125 11:48:12.450916 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-q9rpr" event={"ID":"f1218bae-4153-4490-8847-ab2d07ca0ab6","Type":"ContainerDied","Data":"e92e9ade6889e5400b3c3ddff066aa544d425cf0637b75071678b8c63f8e35f7"} Nov 25 11:48:12 crc kubenswrapper[4706]: I1125 11:48:12.450925 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-q9rpr" event={"ID":"f1218bae-4153-4490-8847-ab2d07ca0ab6","Type":"ContainerDied","Data":"da5cea02464a703174faaa2a8a7dc6ba3c26bca96be0219f7304d81aba5be54e"} Nov 25 11:48:12 crc kubenswrapper[4706]: I1125 11:48:12.450937 4706 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"1d86458011d93f6fe7285fb2f2cf484e62c79cf7a6171f9223b43b6413689879"} Nov 25 11:48:12 crc kubenswrapper[4706]: I1125 11:48:12.450950 4706 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"a1dfdc34e2de4aa061b93f1227bc4e3076853848aa13d8122c69d84f2a3c9bb5"} Nov 25 11:48:12 crc kubenswrapper[4706]: I1125 11:48:12.450872 4706 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-q9rpr" Nov 25 11:48:12 crc kubenswrapper[4706]: I1125 11:48:12.450956 4706 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"62c923d955013808a55d99cb73f4239900fc83a2f53e1e8cceff3e9bc5768188"} Nov 25 11:48:12 crc kubenswrapper[4706]: I1125 11:48:12.451519 4706 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"ca28080773ed8c026159b2309297e1c8ccd7cf79c4c19e3a62d89bc5a95851fe"} Nov 25 11:48:12 crc kubenswrapper[4706]: I1125 11:48:12.451530 4706 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"86d79d5837993b0bfb40c7114fd69f45a9bfd2e956b5b0fe062706e920fecd48"} Nov 25 11:48:12 crc kubenswrapper[4706]: I1125 11:48:12.451537 4706 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"e92e9ade6889e5400b3c3ddff066aa544d425cf0637b75071678b8c63f8e35f7"} Nov 25 11:48:12 crc kubenswrapper[4706]: I1125 11:48:12.451543 4706 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"da5cea02464a703174faaa2a8a7dc6ba3c26bca96be0219f7304d81aba5be54e"} Nov 25 11:48:12 crc kubenswrapper[4706]: I1125 11:48:12.451548 4706 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"f7df3bf6c507e0fd5fb0f32a8785d67c96f47255fdc5d2aafb8838260ac334d0"} Nov 25 11:48:12 crc kubenswrapper[4706]: I1125 11:48:12.451553 4706 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"96aa7fcebdc88f01d2260f95d255244e28c30d422f954da2222a5b7c17d05b96"} Nov 25 11:48:12 crc kubenswrapper[4706]: I1125 11:48:12.451558 4706 pod_container_deletor.go:114] "Failed to issue the request to remove container" 
containerID={"Type":"cri-o","ID":"56474d5374e1047078d38c60dcbd00f4495bcc0d651a9e75fa70d64e34b10baa"} Nov 25 11:48:12 crc kubenswrapper[4706]: I1125 11:48:12.451568 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-q9rpr" event={"ID":"f1218bae-4153-4490-8847-ab2d07ca0ab6","Type":"ContainerDied","Data":"f7df3bf6c507e0fd5fb0f32a8785d67c96f47255fdc5d2aafb8838260ac334d0"} Nov 25 11:48:12 crc kubenswrapper[4706]: I1125 11:48:12.451583 4706 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"1d86458011d93f6fe7285fb2f2cf484e62c79cf7a6171f9223b43b6413689879"} Nov 25 11:48:12 crc kubenswrapper[4706]: I1125 11:48:12.451591 4706 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"a1dfdc34e2de4aa061b93f1227bc4e3076853848aa13d8122c69d84f2a3c9bb5"} Nov 25 11:48:12 crc kubenswrapper[4706]: I1125 11:48:12.451597 4706 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"62c923d955013808a55d99cb73f4239900fc83a2f53e1e8cceff3e9bc5768188"} Nov 25 11:48:12 crc kubenswrapper[4706]: I1125 11:48:12.451604 4706 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"ca28080773ed8c026159b2309297e1c8ccd7cf79c4c19e3a62d89bc5a95851fe"} Nov 25 11:48:12 crc kubenswrapper[4706]: I1125 11:48:12.451610 4706 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"86d79d5837993b0bfb40c7114fd69f45a9bfd2e956b5b0fe062706e920fecd48"} Nov 25 11:48:12 crc kubenswrapper[4706]: I1125 11:48:12.451615 4706 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"e92e9ade6889e5400b3c3ddff066aa544d425cf0637b75071678b8c63f8e35f7"} Nov 25 11:48:12 crc kubenswrapper[4706]: I1125 11:48:12.451621 4706 
pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"da5cea02464a703174faaa2a8a7dc6ba3c26bca96be0219f7304d81aba5be54e"} Nov 25 11:48:12 crc kubenswrapper[4706]: I1125 11:48:12.451626 4706 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"f7df3bf6c507e0fd5fb0f32a8785d67c96f47255fdc5d2aafb8838260ac334d0"} Nov 25 11:48:12 crc kubenswrapper[4706]: I1125 11:48:12.451631 4706 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"96aa7fcebdc88f01d2260f95d255244e28c30d422f954da2222a5b7c17d05b96"} Nov 25 11:48:12 crc kubenswrapper[4706]: I1125 11:48:12.451636 4706 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"56474d5374e1047078d38c60dcbd00f4495bcc0d651a9e75fa70d64e34b10baa"} Nov 25 11:48:12 crc kubenswrapper[4706]: I1125 11:48:12.451643 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-q9rpr" event={"ID":"f1218bae-4153-4490-8847-ab2d07ca0ab6","Type":"ContainerDied","Data":"96aa7fcebdc88f01d2260f95d255244e28c30d422f954da2222a5b7c17d05b96"} Nov 25 11:48:12 crc kubenswrapper[4706]: I1125 11:48:12.451652 4706 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"1d86458011d93f6fe7285fb2f2cf484e62c79cf7a6171f9223b43b6413689879"} Nov 25 11:48:12 crc kubenswrapper[4706]: I1125 11:48:12.451658 4706 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"a1dfdc34e2de4aa061b93f1227bc4e3076853848aa13d8122c69d84f2a3c9bb5"} Nov 25 11:48:12 crc kubenswrapper[4706]: I1125 11:48:12.451663 4706 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"62c923d955013808a55d99cb73f4239900fc83a2f53e1e8cceff3e9bc5768188"} Nov 25 
11:48:12 crc kubenswrapper[4706]: I1125 11:48:12.451668 4706 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"ca28080773ed8c026159b2309297e1c8ccd7cf79c4c19e3a62d89bc5a95851fe"} Nov 25 11:48:12 crc kubenswrapper[4706]: I1125 11:48:12.451673 4706 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"86d79d5837993b0bfb40c7114fd69f45a9bfd2e956b5b0fe062706e920fecd48"} Nov 25 11:48:12 crc kubenswrapper[4706]: I1125 11:48:12.451678 4706 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"e92e9ade6889e5400b3c3ddff066aa544d425cf0637b75071678b8c63f8e35f7"} Nov 25 11:48:12 crc kubenswrapper[4706]: I1125 11:48:12.451685 4706 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"da5cea02464a703174faaa2a8a7dc6ba3c26bca96be0219f7304d81aba5be54e"} Nov 25 11:48:12 crc kubenswrapper[4706]: I1125 11:48:12.451690 4706 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"f7df3bf6c507e0fd5fb0f32a8785d67c96f47255fdc5d2aafb8838260ac334d0"} Nov 25 11:48:12 crc kubenswrapper[4706]: I1125 11:48:12.451696 4706 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"96aa7fcebdc88f01d2260f95d255244e28c30d422f954da2222a5b7c17d05b96"} Nov 25 11:48:12 crc kubenswrapper[4706]: I1125 11:48:12.451701 4706 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"56474d5374e1047078d38c60dcbd00f4495bcc0d651a9e75fa70d64e34b10baa"} Nov 25 11:48:12 crc kubenswrapper[4706]: I1125 11:48:12.451709 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-q9rpr" 
event={"ID":"f1218bae-4153-4490-8847-ab2d07ca0ab6","Type":"ContainerDied","Data":"d4c2fd5e63390b82da0cc1d6cff993551805081effa000d965be7b08e4c5e95c"} Nov 25 11:48:12 crc kubenswrapper[4706]: I1125 11:48:12.451718 4706 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"1d86458011d93f6fe7285fb2f2cf484e62c79cf7a6171f9223b43b6413689879"} Nov 25 11:48:12 crc kubenswrapper[4706]: I1125 11:48:12.451724 4706 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"a1dfdc34e2de4aa061b93f1227bc4e3076853848aa13d8122c69d84f2a3c9bb5"} Nov 25 11:48:12 crc kubenswrapper[4706]: I1125 11:48:12.451730 4706 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"62c923d955013808a55d99cb73f4239900fc83a2f53e1e8cceff3e9bc5768188"} Nov 25 11:48:12 crc kubenswrapper[4706]: I1125 11:48:12.451735 4706 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"ca28080773ed8c026159b2309297e1c8ccd7cf79c4c19e3a62d89bc5a95851fe"} Nov 25 11:48:12 crc kubenswrapper[4706]: I1125 11:48:12.451740 4706 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"86d79d5837993b0bfb40c7114fd69f45a9bfd2e956b5b0fe062706e920fecd48"} Nov 25 11:48:12 crc kubenswrapper[4706]: I1125 11:48:12.451747 4706 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"e92e9ade6889e5400b3c3ddff066aa544d425cf0637b75071678b8c63f8e35f7"} Nov 25 11:48:12 crc kubenswrapper[4706]: I1125 11:48:12.451756 4706 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"da5cea02464a703174faaa2a8a7dc6ba3c26bca96be0219f7304d81aba5be54e"} Nov 25 11:48:12 crc kubenswrapper[4706]: I1125 11:48:12.451769 4706 pod_container_deletor.go:114] "Failed 
to issue the request to remove container" containerID={"Type":"cri-o","ID":"f7df3bf6c507e0fd5fb0f32a8785d67c96f47255fdc5d2aafb8838260ac334d0"} Nov 25 11:48:12 crc kubenswrapper[4706]: I1125 11:48:12.451776 4706 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"96aa7fcebdc88f01d2260f95d255244e28c30d422f954da2222a5b7c17d05b96"} Nov 25 11:48:12 crc kubenswrapper[4706]: I1125 11:48:12.451783 4706 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"56474d5374e1047078d38c60dcbd00f4495bcc0d651a9e75fa70d64e34b10baa"} Nov 25 11:48:12 crc kubenswrapper[4706]: I1125 11:48:12.471141 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/7d7014be-b45a-4b6d-ae16-ba5f61b48a23-node-log\") pod \"ovnkube-node-jpm5s\" (UID: \"7d7014be-b45a-4b6d-ae16-ba5f61b48a23\") " pod="openshift-ovn-kubernetes/ovnkube-node-jpm5s" Nov 25 11:48:12 crc kubenswrapper[4706]: I1125 11:48:12.471205 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/7d7014be-b45a-4b6d-ae16-ba5f61b48a23-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-jpm5s\" (UID: \"7d7014be-b45a-4b6d-ae16-ba5f61b48a23\") " pod="openshift-ovn-kubernetes/ovnkube-node-jpm5s" Nov 25 11:48:12 crc kubenswrapper[4706]: I1125 11:48:12.471233 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/7d7014be-b45a-4b6d-ae16-ba5f61b48a23-host-slash\") pod \"ovnkube-node-jpm5s\" (UID: \"7d7014be-b45a-4b6d-ae16-ba5f61b48a23\") " pod="openshift-ovn-kubernetes/ovnkube-node-jpm5s" Nov 25 11:48:12 crc kubenswrapper[4706]: I1125 11:48:12.471360 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-npp8w\" 
(UniqueName: \"kubernetes.io/projected/7d7014be-b45a-4b6d-ae16-ba5f61b48a23-kube-api-access-npp8w\") pod \"ovnkube-node-jpm5s\" (UID: \"7d7014be-b45a-4b6d-ae16-ba5f61b48a23\") " pod="openshift-ovn-kubernetes/ovnkube-node-jpm5s" Nov 25 11:48:12 crc kubenswrapper[4706]: I1125 11:48:12.471411 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/7d7014be-b45a-4b6d-ae16-ba5f61b48a23-host-slash\") pod \"ovnkube-node-jpm5s\" (UID: \"7d7014be-b45a-4b6d-ae16-ba5f61b48a23\") " pod="openshift-ovn-kubernetes/ovnkube-node-jpm5s" Nov 25 11:48:12 crc kubenswrapper[4706]: I1125 11:48:12.471431 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/7d7014be-b45a-4b6d-ae16-ba5f61b48a23-ovnkube-script-lib\") pod \"ovnkube-node-jpm5s\" (UID: \"7d7014be-b45a-4b6d-ae16-ba5f61b48a23\") " pod="openshift-ovn-kubernetes/ovnkube-node-jpm5s" Nov 25 11:48:12 crc kubenswrapper[4706]: I1125 11:48:12.471457 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/7d7014be-b45a-4b6d-ae16-ba5f61b48a23-host-cni-netd\") pod \"ovnkube-node-jpm5s\" (UID: \"7d7014be-b45a-4b6d-ae16-ba5f61b48a23\") " pod="openshift-ovn-kubernetes/ovnkube-node-jpm5s" Nov 25 11:48:12 crc kubenswrapper[4706]: I1125 11:48:12.471509 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/7d7014be-b45a-4b6d-ae16-ba5f61b48a23-log-socket\") pod \"ovnkube-node-jpm5s\" (UID: \"7d7014be-b45a-4b6d-ae16-ba5f61b48a23\") " pod="openshift-ovn-kubernetes/ovnkube-node-jpm5s" Nov 25 11:48:12 crc kubenswrapper[4706]: I1125 11:48:12.471605 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: 
\"kubernetes.io/host-path/7d7014be-b45a-4b6d-ae16-ba5f61b48a23-etc-openvswitch\") pod \"ovnkube-node-jpm5s\" (UID: \"7d7014be-b45a-4b6d-ae16-ba5f61b48a23\") " pod="openshift-ovn-kubernetes/ovnkube-node-jpm5s" Nov 25 11:48:12 crc kubenswrapper[4706]: I1125 11:48:12.471650 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/7d7014be-b45a-4b6d-ae16-ba5f61b48a23-host-cni-bin\") pod \"ovnkube-node-jpm5s\" (UID: \"7d7014be-b45a-4b6d-ae16-ba5f61b48a23\") " pod="openshift-ovn-kubernetes/ovnkube-node-jpm5s" Nov 25 11:48:12 crc kubenswrapper[4706]: I1125 11:48:12.471675 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/7d7014be-b45a-4b6d-ae16-ba5f61b48a23-node-log\") pod \"ovnkube-node-jpm5s\" (UID: \"7d7014be-b45a-4b6d-ae16-ba5f61b48a23\") " pod="openshift-ovn-kubernetes/ovnkube-node-jpm5s" Nov 25 11:48:12 crc kubenswrapper[4706]: I1125 11:48:12.471691 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/7d7014be-b45a-4b6d-ae16-ba5f61b48a23-run-ovn\") pod \"ovnkube-node-jpm5s\" (UID: \"7d7014be-b45a-4b6d-ae16-ba5f61b48a23\") " pod="openshift-ovn-kubernetes/ovnkube-node-jpm5s" Nov 25 11:48:12 crc kubenswrapper[4706]: I1125 11:48:12.471709 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/7d7014be-b45a-4b6d-ae16-ba5f61b48a23-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-jpm5s\" (UID: \"7d7014be-b45a-4b6d-ae16-ba5f61b48a23\") " pod="openshift-ovn-kubernetes/ovnkube-node-jpm5s" Nov 25 11:48:12 crc kubenswrapper[4706]: I1125 11:48:12.471726 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: 
\"kubernetes.io/configmap/7d7014be-b45a-4b6d-ae16-ba5f61b48a23-env-overrides\") pod \"ovnkube-node-jpm5s\" (UID: \"7d7014be-b45a-4b6d-ae16-ba5f61b48a23\") " pod="openshift-ovn-kubernetes/ovnkube-node-jpm5s" Nov 25 11:48:12 crc kubenswrapper[4706]: I1125 11:48:12.471731 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/7d7014be-b45a-4b6d-ae16-ba5f61b48a23-run-ovn\") pod \"ovnkube-node-jpm5s\" (UID: \"7d7014be-b45a-4b6d-ae16-ba5f61b48a23\") " pod="openshift-ovn-kubernetes/ovnkube-node-jpm5s" Nov 25 11:48:12 crc kubenswrapper[4706]: I1125 11:48:12.471779 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/7d7014be-b45a-4b6d-ae16-ba5f61b48a23-var-lib-openvswitch\") pod \"ovnkube-node-jpm5s\" (UID: \"7d7014be-b45a-4b6d-ae16-ba5f61b48a23\") " pod="openshift-ovn-kubernetes/ovnkube-node-jpm5s" Nov 25 11:48:12 crc kubenswrapper[4706]: I1125 11:48:12.471803 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/7d7014be-b45a-4b6d-ae16-ba5f61b48a23-ovnkube-config\") pod \"ovnkube-node-jpm5s\" (UID: \"7d7014be-b45a-4b6d-ae16-ba5f61b48a23\") " pod="openshift-ovn-kubernetes/ovnkube-node-jpm5s" Nov 25 11:48:12 crc kubenswrapper[4706]: I1125 11:48:12.471832 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/7d7014be-b45a-4b6d-ae16-ba5f61b48a23-host-run-netns\") pod \"ovnkube-node-jpm5s\" (UID: \"7d7014be-b45a-4b6d-ae16-ba5f61b48a23\") " pod="openshift-ovn-kubernetes/ovnkube-node-jpm5s" Nov 25 11:48:12 crc kubenswrapper[4706]: I1125 11:48:12.471855 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/7d7014be-b45a-4b6d-ae16-ba5f61b48a23-run-openvswitch\") pod 
\"ovnkube-node-jpm5s\" (UID: \"7d7014be-b45a-4b6d-ae16-ba5f61b48a23\") " pod="openshift-ovn-kubernetes/ovnkube-node-jpm5s" Nov 25 11:48:12 crc kubenswrapper[4706]: I1125 11:48:12.471877 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/7d7014be-b45a-4b6d-ae16-ba5f61b48a23-run-systemd\") pod \"ovnkube-node-jpm5s\" (UID: \"7d7014be-b45a-4b6d-ae16-ba5f61b48a23\") " pod="openshift-ovn-kubernetes/ovnkube-node-jpm5s" Nov 25 11:48:12 crc kubenswrapper[4706]: I1125 11:48:12.471901 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/7d7014be-b45a-4b6d-ae16-ba5f61b48a23-host-run-ovn-kubernetes\") pod \"ovnkube-node-jpm5s\" (UID: \"7d7014be-b45a-4b6d-ae16-ba5f61b48a23\") " pod="openshift-ovn-kubernetes/ovnkube-node-jpm5s" Nov 25 11:48:12 crc kubenswrapper[4706]: I1125 11:48:12.471923 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/7d7014be-b45a-4b6d-ae16-ba5f61b48a23-host-kubelet\") pod \"ovnkube-node-jpm5s\" (UID: \"7d7014be-b45a-4b6d-ae16-ba5f61b48a23\") " pod="openshift-ovn-kubernetes/ovnkube-node-jpm5s" Nov 25 11:48:12 crc kubenswrapper[4706]: I1125 11:48:12.471946 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/7d7014be-b45a-4b6d-ae16-ba5f61b48a23-systemd-units\") pod \"ovnkube-node-jpm5s\" (UID: \"7d7014be-b45a-4b6d-ae16-ba5f61b48a23\") " pod="openshift-ovn-kubernetes/ovnkube-node-jpm5s" Nov 25 11:48:12 crc kubenswrapper[4706]: I1125 11:48:12.471982 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/7d7014be-b45a-4b6d-ae16-ba5f61b48a23-ovn-node-metrics-cert\") pod \"ovnkube-node-jpm5s\" (UID: 
\"7d7014be-b45a-4b6d-ae16-ba5f61b48a23\") " pod="openshift-ovn-kubernetes/ovnkube-node-jpm5s" Nov 25 11:48:12 crc kubenswrapper[4706]: I1125 11:48:12.472035 4706 reconciler_common.go:293] "Volume detached for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/f1218bae-4153-4490-8847-ab2d07ca0ab6-run-systemd\") on node \"crc\" DevicePath \"\"" Nov 25 11:48:12 crc kubenswrapper[4706]: I1125 11:48:12.472047 4706 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b55sf\" (UniqueName: \"kubernetes.io/projected/f1218bae-4153-4490-8847-ab2d07ca0ab6-kube-api-access-b55sf\") on node \"crc\" DevicePath \"\"" Nov 25 11:48:12 crc kubenswrapper[4706]: I1125 11:48:12.472064 4706 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/f1218bae-4153-4490-8847-ab2d07ca0ab6-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Nov 25 11:48:12 crc kubenswrapper[4706]: I1125 11:48:12.472436 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/7d7014be-b45a-4b6d-ae16-ba5f61b48a23-env-overrides\") pod \"ovnkube-node-jpm5s\" (UID: \"7d7014be-b45a-4b6d-ae16-ba5f61b48a23\") " pod="openshift-ovn-kubernetes/ovnkube-node-jpm5s" Nov 25 11:48:12 crc kubenswrapper[4706]: I1125 11:48:12.472484 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/7d7014be-b45a-4b6d-ae16-ba5f61b48a23-run-openvswitch\") pod \"ovnkube-node-jpm5s\" (UID: \"7d7014be-b45a-4b6d-ae16-ba5f61b48a23\") " pod="openshift-ovn-kubernetes/ovnkube-node-jpm5s" Nov 25 11:48:12 crc kubenswrapper[4706]: I1125 11:48:12.472519 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/7d7014be-b45a-4b6d-ae16-ba5f61b48a23-var-lib-openvswitch\") pod \"ovnkube-node-jpm5s\" (UID: 
\"7d7014be-b45a-4b6d-ae16-ba5f61b48a23\") " pod="openshift-ovn-kubernetes/ovnkube-node-jpm5s" Nov 25 11:48:12 crc kubenswrapper[4706]: I1125 11:48:12.472871 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/7d7014be-b45a-4b6d-ae16-ba5f61b48a23-ovnkube-script-lib\") pod \"ovnkube-node-jpm5s\" (UID: \"7d7014be-b45a-4b6d-ae16-ba5f61b48a23\") " pod="openshift-ovn-kubernetes/ovnkube-node-jpm5s" Nov 25 11:48:12 crc kubenswrapper[4706]: I1125 11:48:12.472955 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/7d7014be-b45a-4b6d-ae16-ba5f61b48a23-ovnkube-config\") pod \"ovnkube-node-jpm5s\" (UID: \"7d7014be-b45a-4b6d-ae16-ba5f61b48a23\") " pod="openshift-ovn-kubernetes/ovnkube-node-jpm5s" Nov 25 11:48:12 crc kubenswrapper[4706]: I1125 11:48:12.472966 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/7d7014be-b45a-4b6d-ae16-ba5f61b48a23-host-cni-netd\") pod \"ovnkube-node-jpm5s\" (UID: \"7d7014be-b45a-4b6d-ae16-ba5f61b48a23\") " pod="openshift-ovn-kubernetes/ovnkube-node-jpm5s" Nov 25 11:48:12 crc kubenswrapper[4706]: I1125 11:48:12.472999 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/7d7014be-b45a-4b6d-ae16-ba5f61b48a23-host-run-netns\") pod \"ovnkube-node-jpm5s\" (UID: \"7d7014be-b45a-4b6d-ae16-ba5f61b48a23\") " pod="openshift-ovn-kubernetes/ovnkube-node-jpm5s" Nov 25 11:48:12 crc kubenswrapper[4706]: I1125 11:48:12.473027 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/7d7014be-b45a-4b6d-ae16-ba5f61b48a23-host-run-ovn-kubernetes\") pod \"ovnkube-node-jpm5s\" (UID: \"7d7014be-b45a-4b6d-ae16-ba5f61b48a23\") " pod="openshift-ovn-kubernetes/ovnkube-node-jpm5s" 
Nov 25 11:48:12 crc kubenswrapper[4706]: I1125 11:48:12.473027 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/7d7014be-b45a-4b6d-ae16-ba5f61b48a23-log-socket\") pod \"ovnkube-node-jpm5s\" (UID: \"7d7014be-b45a-4b6d-ae16-ba5f61b48a23\") " pod="openshift-ovn-kubernetes/ovnkube-node-jpm5s" Nov 25 11:48:12 crc kubenswrapper[4706]: I1125 11:48:12.473064 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/7d7014be-b45a-4b6d-ae16-ba5f61b48a23-run-systemd\") pod \"ovnkube-node-jpm5s\" (UID: \"7d7014be-b45a-4b6d-ae16-ba5f61b48a23\") " pod="openshift-ovn-kubernetes/ovnkube-node-jpm5s" Nov 25 11:48:12 crc kubenswrapper[4706]: I1125 11:48:12.473248 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/7d7014be-b45a-4b6d-ae16-ba5f61b48a23-etc-openvswitch\") pod \"ovnkube-node-jpm5s\" (UID: \"7d7014be-b45a-4b6d-ae16-ba5f61b48a23\") " pod="openshift-ovn-kubernetes/ovnkube-node-jpm5s" Nov 25 11:48:12 crc kubenswrapper[4706]: I1125 11:48:12.473286 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/7d7014be-b45a-4b6d-ae16-ba5f61b48a23-host-kubelet\") pod \"ovnkube-node-jpm5s\" (UID: \"7d7014be-b45a-4b6d-ae16-ba5f61b48a23\") " pod="openshift-ovn-kubernetes/ovnkube-node-jpm5s" Nov 25 11:48:12 crc kubenswrapper[4706]: I1125 11:48:12.473334 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/7d7014be-b45a-4b6d-ae16-ba5f61b48a23-systemd-units\") pod \"ovnkube-node-jpm5s\" (UID: \"7d7014be-b45a-4b6d-ae16-ba5f61b48a23\") " pod="openshift-ovn-kubernetes/ovnkube-node-jpm5s" Nov 25 11:48:12 crc kubenswrapper[4706]: I1125 11:48:12.473347 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/7d7014be-b45a-4b6d-ae16-ba5f61b48a23-host-cni-bin\") pod \"ovnkube-node-jpm5s\" (UID: \"7d7014be-b45a-4b6d-ae16-ba5f61b48a23\") " pod="openshift-ovn-kubernetes/ovnkube-node-jpm5s" Nov 25 11:48:12 crc kubenswrapper[4706]: I1125 11:48:12.476394 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/7d7014be-b45a-4b6d-ae16-ba5f61b48a23-ovn-node-metrics-cert\") pod \"ovnkube-node-jpm5s\" (UID: \"7d7014be-b45a-4b6d-ae16-ba5f61b48a23\") " pod="openshift-ovn-kubernetes/ovnkube-node-jpm5s" Nov 25 11:48:12 crc kubenswrapper[4706]: I1125 11:48:12.480516 4706 scope.go:117] "RemoveContainer" containerID="1d86458011d93f6fe7285fb2f2cf484e62c79cf7a6171f9223b43b6413689879" Nov 25 11:48:12 crc kubenswrapper[4706]: I1125 11:48:12.489127 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-npp8w\" (UniqueName: \"kubernetes.io/projected/7d7014be-b45a-4b6d-ae16-ba5f61b48a23-kube-api-access-npp8w\") pod \"ovnkube-node-jpm5s\" (UID: \"7d7014be-b45a-4b6d-ae16-ba5f61b48a23\") " pod="openshift-ovn-kubernetes/ovnkube-node-jpm5s" Nov 25 11:48:12 crc kubenswrapper[4706]: I1125 11:48:12.504265 4706 scope.go:117] "RemoveContainer" containerID="a1dfdc34e2de4aa061b93f1227bc4e3076853848aa13d8122c69d84f2a3c9bb5" Nov 25 11:48:12 crc kubenswrapper[4706]: I1125 11:48:12.509921 4706 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-q9rpr"] Nov 25 11:48:12 crc kubenswrapper[4706]: I1125 11:48:12.512793 4706 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-q9rpr"] Nov 25 11:48:12 crc kubenswrapper[4706]: I1125 11:48:12.522838 4706 scope.go:117] "RemoveContainer" containerID="62c923d955013808a55d99cb73f4239900fc83a2f53e1e8cceff3e9bc5768188" Nov 25 11:48:12 crc kubenswrapper[4706]: I1125 11:48:12.535205 4706 scope.go:117] "RemoveContainer" 
containerID="ca28080773ed8c026159b2309297e1c8ccd7cf79c4c19e3a62d89bc5a95851fe" Nov 25 11:48:12 crc kubenswrapper[4706]: I1125 11:48:12.552574 4706 scope.go:117] "RemoveContainer" containerID="86d79d5837993b0bfb40c7114fd69f45a9bfd2e956b5b0fe062706e920fecd48" Nov 25 11:48:12 crc kubenswrapper[4706]: I1125 11:48:12.570048 4706 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-jpm5s" Nov 25 11:48:12 crc kubenswrapper[4706]: I1125 11:48:12.589101 4706 scope.go:117] "RemoveContainer" containerID="e92e9ade6889e5400b3c3ddff066aa544d425cf0637b75071678b8c63f8e35f7" Nov 25 11:48:12 crc kubenswrapper[4706]: I1125 11:48:12.621173 4706 scope.go:117] "RemoveContainer" containerID="da5cea02464a703174faaa2a8a7dc6ba3c26bca96be0219f7304d81aba5be54e" Nov 25 11:48:12 crc kubenswrapper[4706]: I1125 11:48:12.649650 4706 scope.go:117] "RemoveContainer" containerID="f7df3bf6c507e0fd5fb0f32a8785d67c96f47255fdc5d2aafb8838260ac334d0" Nov 25 11:48:12 crc kubenswrapper[4706]: I1125 11:48:12.664943 4706 scope.go:117] "RemoveContainer" containerID="96aa7fcebdc88f01d2260f95d255244e28c30d422f954da2222a5b7c17d05b96" Nov 25 11:48:12 crc kubenswrapper[4706]: I1125 11:48:12.694175 4706 scope.go:117] "RemoveContainer" containerID="56474d5374e1047078d38c60dcbd00f4495bcc0d651a9e75fa70d64e34b10baa" Nov 25 11:48:12 crc kubenswrapper[4706]: I1125 11:48:12.707910 4706 scope.go:117] "RemoveContainer" containerID="1d86458011d93f6fe7285fb2f2cf484e62c79cf7a6171f9223b43b6413689879" Nov 25 11:48:12 crc kubenswrapper[4706]: E1125 11:48:12.708482 4706 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1d86458011d93f6fe7285fb2f2cf484e62c79cf7a6171f9223b43b6413689879\": container with ID starting with 1d86458011d93f6fe7285fb2f2cf484e62c79cf7a6171f9223b43b6413689879 not found: ID does not exist" containerID="1d86458011d93f6fe7285fb2f2cf484e62c79cf7a6171f9223b43b6413689879" Nov 25 
11:48:12 crc kubenswrapper[4706]: I1125 11:48:12.708521 4706 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1d86458011d93f6fe7285fb2f2cf484e62c79cf7a6171f9223b43b6413689879"} err="failed to get container status \"1d86458011d93f6fe7285fb2f2cf484e62c79cf7a6171f9223b43b6413689879\": rpc error: code = NotFound desc = could not find container \"1d86458011d93f6fe7285fb2f2cf484e62c79cf7a6171f9223b43b6413689879\": container with ID starting with 1d86458011d93f6fe7285fb2f2cf484e62c79cf7a6171f9223b43b6413689879 not found: ID does not exist" Nov 25 11:48:12 crc kubenswrapper[4706]: I1125 11:48:12.708551 4706 scope.go:117] "RemoveContainer" containerID="a1dfdc34e2de4aa061b93f1227bc4e3076853848aa13d8122c69d84f2a3c9bb5" Nov 25 11:48:12 crc kubenswrapper[4706]: E1125 11:48:12.709017 4706 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a1dfdc34e2de4aa061b93f1227bc4e3076853848aa13d8122c69d84f2a3c9bb5\": container with ID starting with a1dfdc34e2de4aa061b93f1227bc4e3076853848aa13d8122c69d84f2a3c9bb5 not found: ID does not exist" containerID="a1dfdc34e2de4aa061b93f1227bc4e3076853848aa13d8122c69d84f2a3c9bb5" Nov 25 11:48:12 crc kubenswrapper[4706]: I1125 11:48:12.709063 4706 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a1dfdc34e2de4aa061b93f1227bc4e3076853848aa13d8122c69d84f2a3c9bb5"} err="failed to get container status \"a1dfdc34e2de4aa061b93f1227bc4e3076853848aa13d8122c69d84f2a3c9bb5\": rpc error: code = NotFound desc = could not find container \"a1dfdc34e2de4aa061b93f1227bc4e3076853848aa13d8122c69d84f2a3c9bb5\": container with ID starting with a1dfdc34e2de4aa061b93f1227bc4e3076853848aa13d8122c69d84f2a3c9bb5 not found: ID does not exist" Nov 25 11:48:12 crc kubenswrapper[4706]: I1125 11:48:12.709097 4706 scope.go:117] "RemoveContainer" 
containerID="62c923d955013808a55d99cb73f4239900fc83a2f53e1e8cceff3e9bc5768188" Nov 25 11:48:12 crc kubenswrapper[4706]: E1125 11:48:12.709496 4706 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"62c923d955013808a55d99cb73f4239900fc83a2f53e1e8cceff3e9bc5768188\": container with ID starting with 62c923d955013808a55d99cb73f4239900fc83a2f53e1e8cceff3e9bc5768188 not found: ID does not exist" containerID="62c923d955013808a55d99cb73f4239900fc83a2f53e1e8cceff3e9bc5768188" Nov 25 11:48:12 crc kubenswrapper[4706]: I1125 11:48:12.709531 4706 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"62c923d955013808a55d99cb73f4239900fc83a2f53e1e8cceff3e9bc5768188"} err="failed to get container status \"62c923d955013808a55d99cb73f4239900fc83a2f53e1e8cceff3e9bc5768188\": rpc error: code = NotFound desc = could not find container \"62c923d955013808a55d99cb73f4239900fc83a2f53e1e8cceff3e9bc5768188\": container with ID starting with 62c923d955013808a55d99cb73f4239900fc83a2f53e1e8cceff3e9bc5768188 not found: ID does not exist" Nov 25 11:48:12 crc kubenswrapper[4706]: I1125 11:48:12.709554 4706 scope.go:117] "RemoveContainer" containerID="ca28080773ed8c026159b2309297e1c8ccd7cf79c4c19e3a62d89bc5a95851fe" Nov 25 11:48:12 crc kubenswrapper[4706]: E1125 11:48:12.709840 4706 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ca28080773ed8c026159b2309297e1c8ccd7cf79c4c19e3a62d89bc5a95851fe\": container with ID starting with ca28080773ed8c026159b2309297e1c8ccd7cf79c4c19e3a62d89bc5a95851fe not found: ID does not exist" containerID="ca28080773ed8c026159b2309297e1c8ccd7cf79c4c19e3a62d89bc5a95851fe" Nov 25 11:48:12 crc kubenswrapper[4706]: I1125 11:48:12.709884 4706 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"ca28080773ed8c026159b2309297e1c8ccd7cf79c4c19e3a62d89bc5a95851fe"} err="failed to get container status \"ca28080773ed8c026159b2309297e1c8ccd7cf79c4c19e3a62d89bc5a95851fe\": rpc error: code = NotFound desc = could not find container \"ca28080773ed8c026159b2309297e1c8ccd7cf79c4c19e3a62d89bc5a95851fe\": container with ID starting with ca28080773ed8c026159b2309297e1c8ccd7cf79c4c19e3a62d89bc5a95851fe not found: ID does not exist" Nov 25 11:48:12 crc kubenswrapper[4706]: I1125 11:48:12.709917 4706 scope.go:117] "RemoveContainer" containerID="86d79d5837993b0bfb40c7114fd69f45a9bfd2e956b5b0fe062706e920fecd48" Nov 25 11:48:12 crc kubenswrapper[4706]: E1125 11:48:12.710227 4706 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"86d79d5837993b0bfb40c7114fd69f45a9bfd2e956b5b0fe062706e920fecd48\": container with ID starting with 86d79d5837993b0bfb40c7114fd69f45a9bfd2e956b5b0fe062706e920fecd48 not found: ID does not exist" containerID="86d79d5837993b0bfb40c7114fd69f45a9bfd2e956b5b0fe062706e920fecd48" Nov 25 11:48:12 crc kubenswrapper[4706]: I1125 11:48:12.710250 4706 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"86d79d5837993b0bfb40c7114fd69f45a9bfd2e956b5b0fe062706e920fecd48"} err="failed to get container status \"86d79d5837993b0bfb40c7114fd69f45a9bfd2e956b5b0fe062706e920fecd48\": rpc error: code = NotFound desc = could not find container \"86d79d5837993b0bfb40c7114fd69f45a9bfd2e956b5b0fe062706e920fecd48\": container with ID starting with 86d79d5837993b0bfb40c7114fd69f45a9bfd2e956b5b0fe062706e920fecd48 not found: ID does not exist" Nov 25 11:48:12 crc kubenswrapper[4706]: I1125 11:48:12.710264 4706 scope.go:117] "RemoveContainer" containerID="e92e9ade6889e5400b3c3ddff066aa544d425cf0637b75071678b8c63f8e35f7" Nov 25 11:48:12 crc kubenswrapper[4706]: E1125 11:48:12.710466 4706 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"e92e9ade6889e5400b3c3ddff066aa544d425cf0637b75071678b8c63f8e35f7\": container with ID starting with e92e9ade6889e5400b3c3ddff066aa544d425cf0637b75071678b8c63f8e35f7 not found: ID does not exist" containerID="e92e9ade6889e5400b3c3ddff066aa544d425cf0637b75071678b8c63f8e35f7" Nov 25 11:48:12 crc kubenswrapper[4706]: I1125 11:48:12.710490 4706 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e92e9ade6889e5400b3c3ddff066aa544d425cf0637b75071678b8c63f8e35f7"} err="failed to get container status \"e92e9ade6889e5400b3c3ddff066aa544d425cf0637b75071678b8c63f8e35f7\": rpc error: code = NotFound desc = could not find container \"e92e9ade6889e5400b3c3ddff066aa544d425cf0637b75071678b8c63f8e35f7\": container with ID starting with e92e9ade6889e5400b3c3ddff066aa544d425cf0637b75071678b8c63f8e35f7 not found: ID does not exist" Nov 25 11:48:12 crc kubenswrapper[4706]: I1125 11:48:12.710507 4706 scope.go:117] "RemoveContainer" containerID="da5cea02464a703174faaa2a8a7dc6ba3c26bca96be0219f7304d81aba5be54e" Nov 25 11:48:12 crc kubenswrapper[4706]: E1125 11:48:12.710747 4706 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"da5cea02464a703174faaa2a8a7dc6ba3c26bca96be0219f7304d81aba5be54e\": container with ID starting with da5cea02464a703174faaa2a8a7dc6ba3c26bca96be0219f7304d81aba5be54e not found: ID does not exist" containerID="da5cea02464a703174faaa2a8a7dc6ba3c26bca96be0219f7304d81aba5be54e" Nov 25 11:48:12 crc kubenswrapper[4706]: I1125 11:48:12.710768 4706 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"da5cea02464a703174faaa2a8a7dc6ba3c26bca96be0219f7304d81aba5be54e"} err="failed to get container status \"da5cea02464a703174faaa2a8a7dc6ba3c26bca96be0219f7304d81aba5be54e\": rpc error: code = NotFound desc = could not find container 
\"da5cea02464a703174faaa2a8a7dc6ba3c26bca96be0219f7304d81aba5be54e\": container with ID starting with da5cea02464a703174faaa2a8a7dc6ba3c26bca96be0219f7304d81aba5be54e not found: ID does not exist" Nov 25 11:48:12 crc kubenswrapper[4706]: I1125 11:48:12.710781 4706 scope.go:117] "RemoveContainer" containerID="f7df3bf6c507e0fd5fb0f32a8785d67c96f47255fdc5d2aafb8838260ac334d0" Nov 25 11:48:12 crc kubenswrapper[4706]: E1125 11:48:12.710974 4706 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f7df3bf6c507e0fd5fb0f32a8785d67c96f47255fdc5d2aafb8838260ac334d0\": container with ID starting with f7df3bf6c507e0fd5fb0f32a8785d67c96f47255fdc5d2aafb8838260ac334d0 not found: ID does not exist" containerID="f7df3bf6c507e0fd5fb0f32a8785d67c96f47255fdc5d2aafb8838260ac334d0" Nov 25 11:48:12 crc kubenswrapper[4706]: I1125 11:48:12.711007 4706 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f7df3bf6c507e0fd5fb0f32a8785d67c96f47255fdc5d2aafb8838260ac334d0"} err="failed to get container status \"f7df3bf6c507e0fd5fb0f32a8785d67c96f47255fdc5d2aafb8838260ac334d0\": rpc error: code = NotFound desc = could not find container \"f7df3bf6c507e0fd5fb0f32a8785d67c96f47255fdc5d2aafb8838260ac334d0\": container with ID starting with f7df3bf6c507e0fd5fb0f32a8785d67c96f47255fdc5d2aafb8838260ac334d0 not found: ID does not exist" Nov 25 11:48:12 crc kubenswrapper[4706]: I1125 11:48:12.711026 4706 scope.go:117] "RemoveContainer" containerID="96aa7fcebdc88f01d2260f95d255244e28c30d422f954da2222a5b7c17d05b96" Nov 25 11:48:12 crc kubenswrapper[4706]: E1125 11:48:12.711380 4706 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"96aa7fcebdc88f01d2260f95d255244e28c30d422f954da2222a5b7c17d05b96\": container with ID starting with 96aa7fcebdc88f01d2260f95d255244e28c30d422f954da2222a5b7c17d05b96 not found: ID does not exist" 
containerID="96aa7fcebdc88f01d2260f95d255244e28c30d422f954da2222a5b7c17d05b96" Nov 25 11:48:12 crc kubenswrapper[4706]: I1125 11:48:12.711407 4706 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"96aa7fcebdc88f01d2260f95d255244e28c30d422f954da2222a5b7c17d05b96"} err="failed to get container status \"96aa7fcebdc88f01d2260f95d255244e28c30d422f954da2222a5b7c17d05b96\": rpc error: code = NotFound desc = could not find container \"96aa7fcebdc88f01d2260f95d255244e28c30d422f954da2222a5b7c17d05b96\": container with ID starting with 96aa7fcebdc88f01d2260f95d255244e28c30d422f954da2222a5b7c17d05b96 not found: ID does not exist" Nov 25 11:48:12 crc kubenswrapper[4706]: I1125 11:48:12.711423 4706 scope.go:117] "RemoveContainer" containerID="56474d5374e1047078d38c60dcbd00f4495bcc0d651a9e75fa70d64e34b10baa" Nov 25 11:48:12 crc kubenswrapper[4706]: E1125 11:48:12.711683 4706 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"56474d5374e1047078d38c60dcbd00f4495bcc0d651a9e75fa70d64e34b10baa\": container with ID starting with 56474d5374e1047078d38c60dcbd00f4495bcc0d651a9e75fa70d64e34b10baa not found: ID does not exist" containerID="56474d5374e1047078d38c60dcbd00f4495bcc0d651a9e75fa70d64e34b10baa" Nov 25 11:48:12 crc kubenswrapper[4706]: I1125 11:48:12.711707 4706 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"56474d5374e1047078d38c60dcbd00f4495bcc0d651a9e75fa70d64e34b10baa"} err="failed to get container status \"56474d5374e1047078d38c60dcbd00f4495bcc0d651a9e75fa70d64e34b10baa\": rpc error: code = NotFound desc = could not find container \"56474d5374e1047078d38c60dcbd00f4495bcc0d651a9e75fa70d64e34b10baa\": container with ID starting with 56474d5374e1047078d38c60dcbd00f4495bcc0d651a9e75fa70d64e34b10baa not found: ID does not exist" Nov 25 11:48:12 crc kubenswrapper[4706]: I1125 11:48:12.711720 4706 scope.go:117] 
"RemoveContainer" containerID="1d86458011d93f6fe7285fb2f2cf484e62c79cf7a6171f9223b43b6413689879" Nov 25 11:48:12 crc kubenswrapper[4706]: I1125 11:48:12.711901 4706 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1d86458011d93f6fe7285fb2f2cf484e62c79cf7a6171f9223b43b6413689879"} err="failed to get container status \"1d86458011d93f6fe7285fb2f2cf484e62c79cf7a6171f9223b43b6413689879\": rpc error: code = NotFound desc = could not find container \"1d86458011d93f6fe7285fb2f2cf484e62c79cf7a6171f9223b43b6413689879\": container with ID starting with 1d86458011d93f6fe7285fb2f2cf484e62c79cf7a6171f9223b43b6413689879 not found: ID does not exist" Nov 25 11:48:12 crc kubenswrapper[4706]: I1125 11:48:12.711920 4706 scope.go:117] "RemoveContainer" containerID="a1dfdc34e2de4aa061b93f1227bc4e3076853848aa13d8122c69d84f2a3c9bb5" Nov 25 11:48:12 crc kubenswrapper[4706]: I1125 11:48:12.712130 4706 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a1dfdc34e2de4aa061b93f1227bc4e3076853848aa13d8122c69d84f2a3c9bb5"} err="failed to get container status \"a1dfdc34e2de4aa061b93f1227bc4e3076853848aa13d8122c69d84f2a3c9bb5\": rpc error: code = NotFound desc = could not find container \"a1dfdc34e2de4aa061b93f1227bc4e3076853848aa13d8122c69d84f2a3c9bb5\": container with ID starting with a1dfdc34e2de4aa061b93f1227bc4e3076853848aa13d8122c69d84f2a3c9bb5 not found: ID does not exist" Nov 25 11:48:12 crc kubenswrapper[4706]: I1125 11:48:12.712150 4706 scope.go:117] "RemoveContainer" containerID="62c923d955013808a55d99cb73f4239900fc83a2f53e1e8cceff3e9bc5768188" Nov 25 11:48:12 crc kubenswrapper[4706]: I1125 11:48:12.713185 4706 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"62c923d955013808a55d99cb73f4239900fc83a2f53e1e8cceff3e9bc5768188"} err="failed to get container status \"62c923d955013808a55d99cb73f4239900fc83a2f53e1e8cceff3e9bc5768188\": rpc error: code = 
NotFound desc = could not find container \"62c923d955013808a55d99cb73f4239900fc83a2f53e1e8cceff3e9bc5768188\": container with ID starting with 62c923d955013808a55d99cb73f4239900fc83a2f53e1e8cceff3e9bc5768188 not found: ID does not exist" Nov 25 11:48:12 crc kubenswrapper[4706]: I1125 11:48:12.713211 4706 scope.go:117] "RemoveContainer" containerID="ca28080773ed8c026159b2309297e1c8ccd7cf79c4c19e3a62d89bc5a95851fe" Nov 25 11:48:12 crc kubenswrapper[4706]: I1125 11:48:12.713472 4706 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ca28080773ed8c026159b2309297e1c8ccd7cf79c4c19e3a62d89bc5a95851fe"} err="failed to get container status \"ca28080773ed8c026159b2309297e1c8ccd7cf79c4c19e3a62d89bc5a95851fe\": rpc error: code = NotFound desc = could not find container \"ca28080773ed8c026159b2309297e1c8ccd7cf79c4c19e3a62d89bc5a95851fe\": container with ID starting with ca28080773ed8c026159b2309297e1c8ccd7cf79c4c19e3a62d89bc5a95851fe not found: ID does not exist" Nov 25 11:48:12 crc kubenswrapper[4706]: I1125 11:48:12.713495 4706 scope.go:117] "RemoveContainer" containerID="86d79d5837993b0bfb40c7114fd69f45a9bfd2e956b5b0fe062706e920fecd48" Nov 25 11:48:12 crc kubenswrapper[4706]: I1125 11:48:12.713759 4706 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"86d79d5837993b0bfb40c7114fd69f45a9bfd2e956b5b0fe062706e920fecd48"} err="failed to get container status \"86d79d5837993b0bfb40c7114fd69f45a9bfd2e956b5b0fe062706e920fecd48\": rpc error: code = NotFound desc = could not find container \"86d79d5837993b0bfb40c7114fd69f45a9bfd2e956b5b0fe062706e920fecd48\": container with ID starting with 86d79d5837993b0bfb40c7114fd69f45a9bfd2e956b5b0fe062706e920fecd48 not found: ID does not exist" Nov 25 11:48:12 crc kubenswrapper[4706]: I1125 11:48:12.713801 4706 scope.go:117] "RemoveContainer" containerID="e92e9ade6889e5400b3c3ddff066aa544d425cf0637b75071678b8c63f8e35f7" Nov 25 11:48:12 crc 
kubenswrapper[4706]: I1125 11:48:12.714084 4706 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e92e9ade6889e5400b3c3ddff066aa544d425cf0637b75071678b8c63f8e35f7"} err="failed to get container status \"e92e9ade6889e5400b3c3ddff066aa544d425cf0637b75071678b8c63f8e35f7\": rpc error: code = NotFound desc = could not find container \"e92e9ade6889e5400b3c3ddff066aa544d425cf0637b75071678b8c63f8e35f7\": container with ID starting with e92e9ade6889e5400b3c3ddff066aa544d425cf0637b75071678b8c63f8e35f7 not found: ID does not exist" Nov 25 11:48:12 crc kubenswrapper[4706]: I1125 11:48:12.714107 4706 scope.go:117] "RemoveContainer" containerID="da5cea02464a703174faaa2a8a7dc6ba3c26bca96be0219f7304d81aba5be54e" Nov 25 11:48:12 crc kubenswrapper[4706]: I1125 11:48:12.714935 4706 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"da5cea02464a703174faaa2a8a7dc6ba3c26bca96be0219f7304d81aba5be54e"} err="failed to get container status \"da5cea02464a703174faaa2a8a7dc6ba3c26bca96be0219f7304d81aba5be54e\": rpc error: code = NotFound desc = could not find container \"da5cea02464a703174faaa2a8a7dc6ba3c26bca96be0219f7304d81aba5be54e\": container with ID starting with da5cea02464a703174faaa2a8a7dc6ba3c26bca96be0219f7304d81aba5be54e not found: ID does not exist" Nov 25 11:48:12 crc kubenswrapper[4706]: I1125 11:48:12.714960 4706 scope.go:117] "RemoveContainer" containerID="f7df3bf6c507e0fd5fb0f32a8785d67c96f47255fdc5d2aafb8838260ac334d0" Nov 25 11:48:12 crc kubenswrapper[4706]: I1125 11:48:12.715174 4706 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f7df3bf6c507e0fd5fb0f32a8785d67c96f47255fdc5d2aafb8838260ac334d0"} err="failed to get container status \"f7df3bf6c507e0fd5fb0f32a8785d67c96f47255fdc5d2aafb8838260ac334d0\": rpc error: code = NotFound desc = could not find container \"f7df3bf6c507e0fd5fb0f32a8785d67c96f47255fdc5d2aafb8838260ac334d0\": container 
with ID starting with f7df3bf6c507e0fd5fb0f32a8785d67c96f47255fdc5d2aafb8838260ac334d0 not found: ID does not exist" Nov 25 11:48:12 crc kubenswrapper[4706]: I1125 11:48:12.715199 4706 scope.go:117] "RemoveContainer" containerID="96aa7fcebdc88f01d2260f95d255244e28c30d422f954da2222a5b7c17d05b96" Nov 25 11:48:12 crc kubenswrapper[4706]: I1125 11:48:12.715482 4706 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"96aa7fcebdc88f01d2260f95d255244e28c30d422f954da2222a5b7c17d05b96"} err="failed to get container status \"96aa7fcebdc88f01d2260f95d255244e28c30d422f954da2222a5b7c17d05b96\": rpc error: code = NotFound desc = could not find container \"96aa7fcebdc88f01d2260f95d255244e28c30d422f954da2222a5b7c17d05b96\": container with ID starting with 96aa7fcebdc88f01d2260f95d255244e28c30d422f954da2222a5b7c17d05b96 not found: ID does not exist" Nov 25 11:48:12 crc kubenswrapper[4706]: I1125 11:48:12.715504 4706 scope.go:117] "RemoveContainer" containerID="56474d5374e1047078d38c60dcbd00f4495bcc0d651a9e75fa70d64e34b10baa" Nov 25 11:48:12 crc kubenswrapper[4706]: I1125 11:48:12.715749 4706 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"56474d5374e1047078d38c60dcbd00f4495bcc0d651a9e75fa70d64e34b10baa"} err="failed to get container status \"56474d5374e1047078d38c60dcbd00f4495bcc0d651a9e75fa70d64e34b10baa\": rpc error: code = NotFound desc = could not find container \"56474d5374e1047078d38c60dcbd00f4495bcc0d651a9e75fa70d64e34b10baa\": container with ID starting with 56474d5374e1047078d38c60dcbd00f4495bcc0d651a9e75fa70d64e34b10baa not found: ID does not exist" Nov 25 11:48:12 crc kubenswrapper[4706]: I1125 11:48:12.715768 4706 scope.go:117] "RemoveContainer" containerID="1d86458011d93f6fe7285fb2f2cf484e62c79cf7a6171f9223b43b6413689879" Nov 25 11:48:12 crc kubenswrapper[4706]: I1125 11:48:12.715999 4706 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"1d86458011d93f6fe7285fb2f2cf484e62c79cf7a6171f9223b43b6413689879"} err="failed to get container status \"1d86458011d93f6fe7285fb2f2cf484e62c79cf7a6171f9223b43b6413689879\": rpc error: code = NotFound desc = could not find container \"1d86458011d93f6fe7285fb2f2cf484e62c79cf7a6171f9223b43b6413689879\": container with ID starting with 1d86458011d93f6fe7285fb2f2cf484e62c79cf7a6171f9223b43b6413689879 not found: ID does not exist" Nov 25 11:48:12 crc kubenswrapper[4706]: I1125 11:48:12.716047 4706 scope.go:117] "RemoveContainer" containerID="a1dfdc34e2de4aa061b93f1227bc4e3076853848aa13d8122c69d84f2a3c9bb5" Nov 25 11:48:12 crc kubenswrapper[4706]: I1125 11:48:12.716329 4706 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a1dfdc34e2de4aa061b93f1227bc4e3076853848aa13d8122c69d84f2a3c9bb5"} err="failed to get container status \"a1dfdc34e2de4aa061b93f1227bc4e3076853848aa13d8122c69d84f2a3c9bb5\": rpc error: code = NotFound desc = could not find container \"a1dfdc34e2de4aa061b93f1227bc4e3076853848aa13d8122c69d84f2a3c9bb5\": container with ID starting with a1dfdc34e2de4aa061b93f1227bc4e3076853848aa13d8122c69d84f2a3c9bb5 not found: ID does not exist" Nov 25 11:48:12 crc kubenswrapper[4706]: I1125 11:48:12.716358 4706 scope.go:117] "RemoveContainer" containerID="62c923d955013808a55d99cb73f4239900fc83a2f53e1e8cceff3e9bc5768188" Nov 25 11:48:12 crc kubenswrapper[4706]: I1125 11:48:12.716580 4706 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"62c923d955013808a55d99cb73f4239900fc83a2f53e1e8cceff3e9bc5768188"} err="failed to get container status \"62c923d955013808a55d99cb73f4239900fc83a2f53e1e8cceff3e9bc5768188\": rpc error: code = NotFound desc = could not find container \"62c923d955013808a55d99cb73f4239900fc83a2f53e1e8cceff3e9bc5768188\": container with ID starting with 62c923d955013808a55d99cb73f4239900fc83a2f53e1e8cceff3e9bc5768188 not found: ID does not 
exist" Nov 25 11:48:12 crc kubenswrapper[4706]: I1125 11:48:12.716619 4706 scope.go:117] "RemoveContainer" containerID="ca28080773ed8c026159b2309297e1c8ccd7cf79c4c19e3a62d89bc5a95851fe" Nov 25 11:48:12 crc kubenswrapper[4706]: I1125 11:48:12.716983 4706 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ca28080773ed8c026159b2309297e1c8ccd7cf79c4c19e3a62d89bc5a95851fe"} err="failed to get container status \"ca28080773ed8c026159b2309297e1c8ccd7cf79c4c19e3a62d89bc5a95851fe\": rpc error: code = NotFound desc = could not find container \"ca28080773ed8c026159b2309297e1c8ccd7cf79c4c19e3a62d89bc5a95851fe\": container with ID starting with ca28080773ed8c026159b2309297e1c8ccd7cf79c4c19e3a62d89bc5a95851fe not found: ID does not exist" Nov 25 11:48:12 crc kubenswrapper[4706]: I1125 11:48:12.717015 4706 scope.go:117] "RemoveContainer" containerID="86d79d5837993b0bfb40c7114fd69f45a9bfd2e956b5b0fe062706e920fecd48" Nov 25 11:48:12 crc kubenswrapper[4706]: I1125 11:48:12.717200 4706 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"86d79d5837993b0bfb40c7114fd69f45a9bfd2e956b5b0fe062706e920fecd48"} err="failed to get container status \"86d79d5837993b0bfb40c7114fd69f45a9bfd2e956b5b0fe062706e920fecd48\": rpc error: code = NotFound desc = could not find container \"86d79d5837993b0bfb40c7114fd69f45a9bfd2e956b5b0fe062706e920fecd48\": container with ID starting with 86d79d5837993b0bfb40c7114fd69f45a9bfd2e956b5b0fe062706e920fecd48 not found: ID does not exist" Nov 25 11:48:12 crc kubenswrapper[4706]: I1125 11:48:12.717253 4706 scope.go:117] "RemoveContainer" containerID="e92e9ade6889e5400b3c3ddff066aa544d425cf0637b75071678b8c63f8e35f7" Nov 25 11:48:12 crc kubenswrapper[4706]: I1125 11:48:12.717500 4706 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e92e9ade6889e5400b3c3ddff066aa544d425cf0637b75071678b8c63f8e35f7"} err="failed to get container status 
\"e92e9ade6889e5400b3c3ddff066aa544d425cf0637b75071678b8c63f8e35f7\": rpc error: code = NotFound desc = could not find container \"e92e9ade6889e5400b3c3ddff066aa544d425cf0637b75071678b8c63f8e35f7\": container with ID starting with e92e9ade6889e5400b3c3ddff066aa544d425cf0637b75071678b8c63f8e35f7 not found: ID does not exist" Nov 25 11:48:12 crc kubenswrapper[4706]: I1125 11:48:12.717519 4706 scope.go:117] "RemoveContainer" containerID="da5cea02464a703174faaa2a8a7dc6ba3c26bca96be0219f7304d81aba5be54e" Nov 25 11:48:12 crc kubenswrapper[4706]: I1125 11:48:12.717703 4706 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"da5cea02464a703174faaa2a8a7dc6ba3c26bca96be0219f7304d81aba5be54e"} err="failed to get container status \"da5cea02464a703174faaa2a8a7dc6ba3c26bca96be0219f7304d81aba5be54e\": rpc error: code = NotFound desc = could not find container \"da5cea02464a703174faaa2a8a7dc6ba3c26bca96be0219f7304d81aba5be54e\": container with ID starting with da5cea02464a703174faaa2a8a7dc6ba3c26bca96be0219f7304d81aba5be54e not found: ID does not exist" Nov 25 11:48:12 crc kubenswrapper[4706]: I1125 11:48:12.717737 4706 scope.go:117] "RemoveContainer" containerID="f7df3bf6c507e0fd5fb0f32a8785d67c96f47255fdc5d2aafb8838260ac334d0" Nov 25 11:48:12 crc kubenswrapper[4706]: I1125 11:48:12.717920 4706 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f7df3bf6c507e0fd5fb0f32a8785d67c96f47255fdc5d2aafb8838260ac334d0"} err="failed to get container status \"f7df3bf6c507e0fd5fb0f32a8785d67c96f47255fdc5d2aafb8838260ac334d0\": rpc error: code = NotFound desc = could not find container \"f7df3bf6c507e0fd5fb0f32a8785d67c96f47255fdc5d2aafb8838260ac334d0\": container with ID starting with f7df3bf6c507e0fd5fb0f32a8785d67c96f47255fdc5d2aafb8838260ac334d0 not found: ID does not exist" Nov 25 11:48:12 crc kubenswrapper[4706]: I1125 11:48:12.717940 4706 scope.go:117] "RemoveContainer" 
containerID="96aa7fcebdc88f01d2260f95d255244e28c30d422f954da2222a5b7c17d05b96" Nov 25 11:48:12 crc kubenswrapper[4706]: I1125 11:48:12.718101 4706 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"96aa7fcebdc88f01d2260f95d255244e28c30d422f954da2222a5b7c17d05b96"} err="failed to get container status \"96aa7fcebdc88f01d2260f95d255244e28c30d422f954da2222a5b7c17d05b96\": rpc error: code = NotFound desc = could not find container \"96aa7fcebdc88f01d2260f95d255244e28c30d422f954da2222a5b7c17d05b96\": container with ID starting with 96aa7fcebdc88f01d2260f95d255244e28c30d422f954da2222a5b7c17d05b96 not found: ID does not exist" Nov 25 11:48:12 crc kubenswrapper[4706]: I1125 11:48:12.718119 4706 scope.go:117] "RemoveContainer" containerID="56474d5374e1047078d38c60dcbd00f4495bcc0d651a9e75fa70d64e34b10baa" Nov 25 11:48:12 crc kubenswrapper[4706]: I1125 11:48:12.718272 4706 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"56474d5374e1047078d38c60dcbd00f4495bcc0d651a9e75fa70d64e34b10baa"} err="failed to get container status \"56474d5374e1047078d38c60dcbd00f4495bcc0d651a9e75fa70d64e34b10baa\": rpc error: code = NotFound desc = could not find container \"56474d5374e1047078d38c60dcbd00f4495bcc0d651a9e75fa70d64e34b10baa\": container with ID starting with 56474d5374e1047078d38c60dcbd00f4495bcc0d651a9e75fa70d64e34b10baa not found: ID does not exist" Nov 25 11:48:12 crc kubenswrapper[4706]: I1125 11:48:12.718404 4706 scope.go:117] "RemoveContainer" containerID="1d86458011d93f6fe7285fb2f2cf484e62c79cf7a6171f9223b43b6413689879" Nov 25 11:48:12 crc kubenswrapper[4706]: I1125 11:48:12.718583 4706 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1d86458011d93f6fe7285fb2f2cf484e62c79cf7a6171f9223b43b6413689879"} err="failed to get container status \"1d86458011d93f6fe7285fb2f2cf484e62c79cf7a6171f9223b43b6413689879\": rpc error: code = NotFound desc = could 
not find container \"1d86458011d93f6fe7285fb2f2cf484e62c79cf7a6171f9223b43b6413689879\": container with ID starting with 1d86458011d93f6fe7285fb2f2cf484e62c79cf7a6171f9223b43b6413689879 not found: ID does not exist" Nov 25 11:48:12 crc kubenswrapper[4706]: I1125 11:48:12.718602 4706 scope.go:117] "RemoveContainer" containerID="a1dfdc34e2de4aa061b93f1227bc4e3076853848aa13d8122c69d84f2a3c9bb5" Nov 25 11:48:12 crc kubenswrapper[4706]: I1125 11:48:12.718761 4706 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a1dfdc34e2de4aa061b93f1227bc4e3076853848aa13d8122c69d84f2a3c9bb5"} err="failed to get container status \"a1dfdc34e2de4aa061b93f1227bc4e3076853848aa13d8122c69d84f2a3c9bb5\": rpc error: code = NotFound desc = could not find container \"a1dfdc34e2de4aa061b93f1227bc4e3076853848aa13d8122c69d84f2a3c9bb5\": container with ID starting with a1dfdc34e2de4aa061b93f1227bc4e3076853848aa13d8122c69d84f2a3c9bb5 not found: ID does not exist" Nov 25 11:48:12 crc kubenswrapper[4706]: I1125 11:48:12.718778 4706 scope.go:117] "RemoveContainer" containerID="62c923d955013808a55d99cb73f4239900fc83a2f53e1e8cceff3e9bc5768188" Nov 25 11:48:12 crc kubenswrapper[4706]: I1125 11:48:12.719125 4706 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"62c923d955013808a55d99cb73f4239900fc83a2f53e1e8cceff3e9bc5768188"} err="failed to get container status \"62c923d955013808a55d99cb73f4239900fc83a2f53e1e8cceff3e9bc5768188\": rpc error: code = NotFound desc = could not find container \"62c923d955013808a55d99cb73f4239900fc83a2f53e1e8cceff3e9bc5768188\": container with ID starting with 62c923d955013808a55d99cb73f4239900fc83a2f53e1e8cceff3e9bc5768188 not found: ID does not exist" Nov 25 11:48:12 crc kubenswrapper[4706]: I1125 11:48:12.719143 4706 scope.go:117] "RemoveContainer" containerID="ca28080773ed8c026159b2309297e1c8ccd7cf79c4c19e3a62d89bc5a95851fe" Nov 25 11:48:12 crc kubenswrapper[4706]: I1125 
11:48:12.719369 4706 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ca28080773ed8c026159b2309297e1c8ccd7cf79c4c19e3a62d89bc5a95851fe"} err="failed to get container status \"ca28080773ed8c026159b2309297e1c8ccd7cf79c4c19e3a62d89bc5a95851fe\": rpc error: code = NotFound desc = could not find container \"ca28080773ed8c026159b2309297e1c8ccd7cf79c4c19e3a62d89bc5a95851fe\": container with ID starting with ca28080773ed8c026159b2309297e1c8ccd7cf79c4c19e3a62d89bc5a95851fe not found: ID does not exist" Nov 25 11:48:12 crc kubenswrapper[4706]: I1125 11:48:12.719387 4706 scope.go:117] "RemoveContainer" containerID="86d79d5837993b0bfb40c7114fd69f45a9bfd2e956b5b0fe062706e920fecd48" Nov 25 11:48:12 crc kubenswrapper[4706]: I1125 11:48:12.719622 4706 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"86d79d5837993b0bfb40c7114fd69f45a9bfd2e956b5b0fe062706e920fecd48"} err="failed to get container status \"86d79d5837993b0bfb40c7114fd69f45a9bfd2e956b5b0fe062706e920fecd48\": rpc error: code = NotFound desc = could not find container \"86d79d5837993b0bfb40c7114fd69f45a9bfd2e956b5b0fe062706e920fecd48\": container with ID starting with 86d79d5837993b0bfb40c7114fd69f45a9bfd2e956b5b0fe062706e920fecd48 not found: ID does not exist" Nov 25 11:48:12 crc kubenswrapper[4706]: I1125 11:48:12.719645 4706 scope.go:117] "RemoveContainer" containerID="e92e9ade6889e5400b3c3ddff066aa544d425cf0637b75071678b8c63f8e35f7" Nov 25 11:48:12 crc kubenswrapper[4706]: I1125 11:48:12.719896 4706 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e92e9ade6889e5400b3c3ddff066aa544d425cf0637b75071678b8c63f8e35f7"} err="failed to get container status \"e92e9ade6889e5400b3c3ddff066aa544d425cf0637b75071678b8c63f8e35f7\": rpc error: code = NotFound desc = could not find container \"e92e9ade6889e5400b3c3ddff066aa544d425cf0637b75071678b8c63f8e35f7\": container with ID starting with 
e92e9ade6889e5400b3c3ddff066aa544d425cf0637b75071678b8c63f8e35f7 not found: ID does not exist" Nov 25 11:48:12 crc kubenswrapper[4706]: I1125 11:48:12.719915 4706 scope.go:117] "RemoveContainer" containerID="da5cea02464a703174faaa2a8a7dc6ba3c26bca96be0219f7304d81aba5be54e" Nov 25 11:48:12 crc kubenswrapper[4706]: I1125 11:48:12.720103 4706 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"da5cea02464a703174faaa2a8a7dc6ba3c26bca96be0219f7304d81aba5be54e"} err="failed to get container status \"da5cea02464a703174faaa2a8a7dc6ba3c26bca96be0219f7304d81aba5be54e\": rpc error: code = NotFound desc = could not find container \"da5cea02464a703174faaa2a8a7dc6ba3c26bca96be0219f7304d81aba5be54e\": container with ID starting with da5cea02464a703174faaa2a8a7dc6ba3c26bca96be0219f7304d81aba5be54e not found: ID does not exist" Nov 25 11:48:12 crc kubenswrapper[4706]: I1125 11:48:12.720125 4706 scope.go:117] "RemoveContainer" containerID="f7df3bf6c507e0fd5fb0f32a8785d67c96f47255fdc5d2aafb8838260ac334d0" Nov 25 11:48:12 crc kubenswrapper[4706]: I1125 11:48:12.720330 4706 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f7df3bf6c507e0fd5fb0f32a8785d67c96f47255fdc5d2aafb8838260ac334d0"} err="failed to get container status \"f7df3bf6c507e0fd5fb0f32a8785d67c96f47255fdc5d2aafb8838260ac334d0\": rpc error: code = NotFound desc = could not find container \"f7df3bf6c507e0fd5fb0f32a8785d67c96f47255fdc5d2aafb8838260ac334d0\": container with ID starting with f7df3bf6c507e0fd5fb0f32a8785d67c96f47255fdc5d2aafb8838260ac334d0 not found: ID does not exist" Nov 25 11:48:12 crc kubenswrapper[4706]: I1125 11:48:12.720353 4706 scope.go:117] "RemoveContainer" containerID="96aa7fcebdc88f01d2260f95d255244e28c30d422f954da2222a5b7c17d05b96" Nov 25 11:48:12 crc kubenswrapper[4706]: I1125 11:48:12.720569 4706 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"96aa7fcebdc88f01d2260f95d255244e28c30d422f954da2222a5b7c17d05b96"} err="failed to get container status \"96aa7fcebdc88f01d2260f95d255244e28c30d422f954da2222a5b7c17d05b96\": rpc error: code = NotFound desc = could not find container \"96aa7fcebdc88f01d2260f95d255244e28c30d422f954da2222a5b7c17d05b96\": container with ID starting with 96aa7fcebdc88f01d2260f95d255244e28c30d422f954da2222a5b7c17d05b96 not found: ID does not exist" Nov 25 11:48:12 crc kubenswrapper[4706]: I1125 11:48:12.720588 4706 scope.go:117] "RemoveContainer" containerID="56474d5374e1047078d38c60dcbd00f4495bcc0d651a9e75fa70d64e34b10baa" Nov 25 11:48:12 crc kubenswrapper[4706]: I1125 11:48:12.720807 4706 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"56474d5374e1047078d38c60dcbd00f4495bcc0d651a9e75fa70d64e34b10baa"} err="failed to get container status \"56474d5374e1047078d38c60dcbd00f4495bcc0d651a9e75fa70d64e34b10baa\": rpc error: code = NotFound desc = could not find container \"56474d5374e1047078d38c60dcbd00f4495bcc0d651a9e75fa70d64e34b10baa\": container with ID starting with 56474d5374e1047078d38c60dcbd00f4495bcc0d651a9e75fa70d64e34b10baa not found: ID does not exist" Nov 25 11:48:13 crc kubenswrapper[4706]: I1125 11:48:13.458673 4706 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-s47nr_9912058e-28f5-4cec-9eeb-03e37e0dc5c1/kube-multus/2.log" Nov 25 11:48:13 crc kubenswrapper[4706]: I1125 11:48:13.462029 4706 generic.go:334] "Generic (PLEG): container finished" podID="7d7014be-b45a-4b6d-ae16-ba5f61b48a23" containerID="2ddd4ea570a6a3c803b0bb8c0426bea195d0ebf3309fa880d49c14f5a9ebf7f5" exitCode=0 Nov 25 11:48:13 crc kubenswrapper[4706]: I1125 11:48:13.462077 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jpm5s" 
event={"ID":"7d7014be-b45a-4b6d-ae16-ba5f61b48a23","Type":"ContainerDied","Data":"2ddd4ea570a6a3c803b0bb8c0426bea195d0ebf3309fa880d49c14f5a9ebf7f5"} Nov 25 11:48:13 crc kubenswrapper[4706]: I1125 11:48:13.462112 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jpm5s" event={"ID":"7d7014be-b45a-4b6d-ae16-ba5f61b48a23","Type":"ContainerStarted","Data":"d3aa97d9b68e1c3c10e61e606116892eaad014aabaf600ba1982dd2cbf1517a5"} Nov 25 11:48:13 crc kubenswrapper[4706]: I1125 11:48:13.929129 4706 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f1218bae-4153-4490-8847-ab2d07ca0ab6" path="/var/lib/kubelet/pods/f1218bae-4153-4490-8847-ab2d07ca0ab6/volumes" Nov 25 11:48:14 crc kubenswrapper[4706]: I1125 11:48:14.472844 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jpm5s" event={"ID":"7d7014be-b45a-4b6d-ae16-ba5f61b48a23","Type":"ContainerStarted","Data":"b10c7e99b8b16aef2f85420e88f4073ca173c7d3292d30b5db6b703c94762d74"} Nov 25 11:48:14 crc kubenswrapper[4706]: I1125 11:48:14.472906 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jpm5s" event={"ID":"7d7014be-b45a-4b6d-ae16-ba5f61b48a23","Type":"ContainerStarted","Data":"ca3034e61b1e834e7b2a2b750fa59b57ccaa0ce4e48c6a1b08e87eac1c88136b"} Nov 25 11:48:14 crc kubenswrapper[4706]: I1125 11:48:14.472917 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jpm5s" event={"ID":"7d7014be-b45a-4b6d-ae16-ba5f61b48a23","Type":"ContainerStarted","Data":"94650ef40ab19f520bd6fb347ba0535b77d26620d3563b1e62ffcd12a3885909"} Nov 25 11:48:14 crc kubenswrapper[4706]: I1125 11:48:14.472928 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jpm5s" event={"ID":"7d7014be-b45a-4b6d-ae16-ba5f61b48a23","Type":"ContainerStarted","Data":"0f14f331e2cb01f993413a1e72835750d9532c353363851107ec933094dc111f"} 
Nov 25 11:48:14 crc kubenswrapper[4706]: I1125 11:48:14.472939 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jpm5s" event={"ID":"7d7014be-b45a-4b6d-ae16-ba5f61b48a23","Type":"ContainerStarted","Data":"64b6d1fbb2bcbb6c741705d6c7245d76441efd2e5a08cf5cb36f1ac8da6fe5f1"} Nov 25 11:48:15 crc kubenswrapper[4706]: I1125 11:48:15.481440 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jpm5s" event={"ID":"7d7014be-b45a-4b6d-ae16-ba5f61b48a23","Type":"ContainerStarted","Data":"a0c16f6342de683ceaf51b8aeea8ca47c154e4c5380484b2f71033fd5f0b8742"} Nov 25 11:48:17 crc kubenswrapper[4706]: I1125 11:48:17.509241 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jpm5s" event={"ID":"7d7014be-b45a-4b6d-ae16-ba5f61b48a23","Type":"ContainerStarted","Data":"a4eb612273e1de1fcb0efeac68ff0879b133023e0473d274275e5ef33959b44b"} Nov 25 11:48:19 crc kubenswrapper[4706]: I1125 11:48:19.526136 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jpm5s" event={"ID":"7d7014be-b45a-4b6d-ae16-ba5f61b48a23","Type":"ContainerStarted","Data":"c2621ae7562a44bcc636a90d333c2e3a2d40c0d914abf2d02b4ce5fcabeed890"} Nov 25 11:48:19 crc kubenswrapper[4706]: I1125 11:48:19.527113 4706 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-jpm5s" Nov 25 11:48:19 crc kubenswrapper[4706]: I1125 11:48:19.527140 4706 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-jpm5s" Nov 25 11:48:19 crc kubenswrapper[4706]: I1125 11:48:19.527152 4706 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-jpm5s" Nov 25 11:48:19 crc kubenswrapper[4706]: I1125 11:48:19.556143 4706 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-ovn-kubernetes/ovnkube-node-jpm5s" Nov 25 11:48:19 crc kubenswrapper[4706]: I1125 11:48:19.557366 4706 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-jpm5s" podStartSLOduration=7.55735196 podStartE2EDuration="7.55735196s" podCreationTimestamp="2025-11-25 11:48:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 11:48:19.554828437 +0000 UTC m=+708.469385818" watchObservedRunningTime="2025-11-25 11:48:19.55735196 +0000 UTC m=+708.471909341" Nov 25 11:48:19 crc kubenswrapper[4706]: I1125 11:48:19.564188 4706 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-jpm5s" Nov 25 11:48:24 crc kubenswrapper[4706]: I1125 11:48:24.922569 4706 scope.go:117] "RemoveContainer" containerID="198cfd82640633cc783bf590d5743bed75f93473c1ccd934ea506aef32ea6201" Nov 25 11:48:24 crc kubenswrapper[4706]: E1125 11:48:24.923266 4706 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-multus pod=multus-s47nr_openshift-multus(9912058e-28f5-4cec-9eeb-03e37e0dc5c1)\"" pod="openshift-multus/multus-s47nr" podUID="9912058e-28f5-4cec-9eeb-03e37e0dc5c1" Nov 25 11:48:31 crc kubenswrapper[4706]: I1125 11:48:31.125750 4706 patch_prober.go:28] interesting pod/machine-config-daemon-dhfpm container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 25 11:48:31 crc kubenswrapper[4706]: I1125 11:48:31.127211 4706 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dhfpm" podUID="0930887a-320c-4506-8c9c-f94d6d64516a" 
containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 25 11:48:36 crc kubenswrapper[4706]: I1125 11:48:36.922793 4706 scope.go:117] "RemoveContainer" containerID="198cfd82640633cc783bf590d5743bed75f93473c1ccd934ea506aef32ea6201" Nov 25 11:48:37 crc kubenswrapper[4706]: I1125 11:48:37.638584 4706 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-s47nr_9912058e-28f5-4cec-9eeb-03e37e0dc5c1/kube-multus/2.log" Nov 25 11:48:37 crc kubenswrapper[4706]: I1125 11:48:37.639003 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-s47nr" event={"ID":"9912058e-28f5-4cec-9eeb-03e37e0dc5c1","Type":"ContainerStarted","Data":"0e1ada62de470ffaa1d13a32dc145e916c9aecaa20cbce89e567d1afa68ac6fe"} Nov 25 11:48:42 crc kubenswrapper[4706]: I1125 11:48:42.595837 4706 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-jpm5s" Nov 25 11:48:57 crc kubenswrapper[4706]: I1125 11:48:57.310131 4706 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-zf4pd"] Nov 25 11:48:57 crc kubenswrapper[4706]: I1125 11:48:57.311073 4706 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-879f6c89f-zf4pd" podUID="c31bc178-49e3-4bb8-a6d0-ca9e27662b9a" containerName="controller-manager" containerID="cri-o://ca43a5ab551800e1a7600a9c40946c9b8821c5bd86df830dc16ccfede1c21037" gracePeriod=30 Nov 25 11:48:57 crc kubenswrapper[4706]: I1125 11:48:57.448898 4706 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-j7x2j"] Nov 25 11:48:57 crc kubenswrapper[4706]: I1125 11:48:57.449186 4706 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-j7x2j" podUID="8cd4c256-91b7-4b76-a9d3-6927ea77e61e" containerName="route-controller-manager" containerID="cri-o://ab384ce4e7c7b861b8b5646b14e994534e5e8213032d88f360cc56c5341f714f" gracePeriod=30 Nov 25 11:48:57 crc kubenswrapper[4706]: I1125 11:48:57.757077 4706 generic.go:334] "Generic (PLEG): container finished" podID="c31bc178-49e3-4bb8-a6d0-ca9e27662b9a" containerID="ca43a5ab551800e1a7600a9c40946c9b8821c5bd86df830dc16ccfede1c21037" exitCode=0 Nov 25 11:48:57 crc kubenswrapper[4706]: I1125 11:48:57.757214 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-zf4pd" event={"ID":"c31bc178-49e3-4bb8-a6d0-ca9e27662b9a","Type":"ContainerDied","Data":"ca43a5ab551800e1a7600a9c40946c9b8821c5bd86df830dc16ccfede1c21037"} Nov 25 11:48:57 crc kubenswrapper[4706]: I1125 11:48:57.759950 4706 generic.go:334] "Generic (PLEG): container finished" podID="8cd4c256-91b7-4b76-a9d3-6927ea77e61e" containerID="ab384ce4e7c7b861b8b5646b14e994534e5e8213032d88f360cc56c5341f714f" exitCode=0 Nov 25 11:48:57 crc kubenswrapper[4706]: I1125 11:48:57.760007 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-j7x2j" event={"ID":"8cd4c256-91b7-4b76-a9d3-6927ea77e61e","Type":"ContainerDied","Data":"ab384ce4e7c7b861b8b5646b14e994534e5e8213032d88f360cc56c5341f714f"} Nov 25 11:48:57 crc kubenswrapper[4706]: I1125 11:48:57.949016 4706 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-j7x2j" Nov 25 11:48:58 crc kubenswrapper[4706]: I1125 11:48:58.004809 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cd4c256-91b7-4b76-a9d3-6927ea77e61e-config\") pod \"8cd4c256-91b7-4b76-a9d3-6927ea77e61e\" (UID: \"8cd4c256-91b7-4b76-a9d3-6927ea77e61e\") " Nov 25 11:48:58 crc kubenswrapper[4706]: I1125 11:48:58.005032 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x5qrs\" (UniqueName: \"kubernetes.io/projected/8cd4c256-91b7-4b76-a9d3-6927ea77e61e-kube-api-access-x5qrs\") pod \"8cd4c256-91b7-4b76-a9d3-6927ea77e61e\" (UID: \"8cd4c256-91b7-4b76-a9d3-6927ea77e61e\") " Nov 25 11:48:58 crc kubenswrapper[4706]: I1125 11:48:58.005051 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8cd4c256-91b7-4b76-a9d3-6927ea77e61e-client-ca\") pod \"8cd4c256-91b7-4b76-a9d3-6927ea77e61e\" (UID: \"8cd4c256-91b7-4b76-a9d3-6927ea77e61e\") " Nov 25 11:48:58 crc kubenswrapper[4706]: I1125 11:48:58.005066 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cd4c256-91b7-4b76-a9d3-6927ea77e61e-serving-cert\") pod \"8cd4c256-91b7-4b76-a9d3-6927ea77e61e\" (UID: \"8cd4c256-91b7-4b76-a9d3-6927ea77e61e\") " Nov 25 11:48:58 crc kubenswrapper[4706]: I1125 11:48:58.007845 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8cd4c256-91b7-4b76-a9d3-6927ea77e61e-config" (OuterVolumeSpecName: "config") pod "8cd4c256-91b7-4b76-a9d3-6927ea77e61e" (UID: "8cd4c256-91b7-4b76-a9d3-6927ea77e61e"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 11:48:58 crc kubenswrapper[4706]: I1125 11:48:58.008578 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8cd4c256-91b7-4b76-a9d3-6927ea77e61e-client-ca" (OuterVolumeSpecName: "client-ca") pod "8cd4c256-91b7-4b76-a9d3-6927ea77e61e" (UID: "8cd4c256-91b7-4b76-a9d3-6927ea77e61e"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 11:48:58 crc kubenswrapper[4706]: I1125 11:48:58.022602 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8cd4c256-91b7-4b76-a9d3-6927ea77e61e-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "8cd4c256-91b7-4b76-a9d3-6927ea77e61e" (UID: "8cd4c256-91b7-4b76-a9d3-6927ea77e61e"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 11:48:58 crc kubenswrapper[4706]: I1125 11:48:58.033892 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8cd4c256-91b7-4b76-a9d3-6927ea77e61e-kube-api-access-x5qrs" (OuterVolumeSpecName: "kube-api-access-x5qrs") pod "8cd4c256-91b7-4b76-a9d3-6927ea77e61e" (UID: "8cd4c256-91b7-4b76-a9d3-6927ea77e61e"). InnerVolumeSpecName "kube-api-access-x5qrs". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 11:48:58 crc kubenswrapper[4706]: I1125 11:48:58.106477 4706 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8cd4c256-91b7-4b76-a9d3-6927ea77e61e-client-ca\") on node \"crc\" DevicePath \"\"" Nov 25 11:48:58 crc kubenswrapper[4706]: I1125 11:48:58.106528 4706 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cd4c256-91b7-4b76-a9d3-6927ea77e61e-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 25 11:48:58 crc kubenswrapper[4706]: I1125 11:48:58.106538 4706 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x5qrs\" (UniqueName: \"kubernetes.io/projected/8cd4c256-91b7-4b76-a9d3-6927ea77e61e-kube-api-access-x5qrs\") on node \"crc\" DevicePath \"\"" Nov 25 11:48:58 crc kubenswrapper[4706]: I1125 11:48:58.106552 4706 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cd4c256-91b7-4b76-a9d3-6927ea77e61e-config\") on node \"crc\" DevicePath \"\"" Nov 25 11:48:58 crc kubenswrapper[4706]: I1125 11:48:58.243474 4706 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-zf4pd" Nov 25 11:48:58 crc kubenswrapper[4706]: I1125 11:48:58.309716 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sg74s\" (UniqueName: \"kubernetes.io/projected/c31bc178-49e3-4bb8-a6d0-ca9e27662b9a-kube-api-access-sg74s\") pod \"c31bc178-49e3-4bb8-a6d0-ca9e27662b9a\" (UID: \"c31bc178-49e3-4bb8-a6d0-ca9e27662b9a\") " Nov 25 11:48:58 crc kubenswrapper[4706]: I1125 11:48:58.309786 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c31bc178-49e3-4bb8-a6d0-ca9e27662b9a-client-ca\") pod \"c31bc178-49e3-4bb8-a6d0-ca9e27662b9a\" (UID: \"c31bc178-49e3-4bb8-a6d0-ca9e27662b9a\") " Nov 25 11:48:58 crc kubenswrapper[4706]: I1125 11:48:58.309820 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c31bc178-49e3-4bb8-a6d0-ca9e27662b9a-config\") pod \"c31bc178-49e3-4bb8-a6d0-ca9e27662b9a\" (UID: \"c31bc178-49e3-4bb8-a6d0-ca9e27662b9a\") " Nov 25 11:48:58 crc kubenswrapper[4706]: I1125 11:48:58.309890 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c31bc178-49e3-4bb8-a6d0-ca9e27662b9a-serving-cert\") pod \"c31bc178-49e3-4bb8-a6d0-ca9e27662b9a\" (UID: \"c31bc178-49e3-4bb8-a6d0-ca9e27662b9a\") " Nov 25 11:48:58 crc kubenswrapper[4706]: I1125 11:48:58.309978 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/c31bc178-49e3-4bb8-a6d0-ca9e27662b9a-proxy-ca-bundles\") pod \"c31bc178-49e3-4bb8-a6d0-ca9e27662b9a\" (UID: \"c31bc178-49e3-4bb8-a6d0-ca9e27662b9a\") " Nov 25 11:48:58 crc kubenswrapper[4706]: I1125 11:48:58.310730 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/c31bc178-49e3-4bb8-a6d0-ca9e27662b9a-client-ca" (OuterVolumeSpecName: "client-ca") pod "c31bc178-49e3-4bb8-a6d0-ca9e27662b9a" (UID: "c31bc178-49e3-4bb8-a6d0-ca9e27662b9a"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 11:48:58 crc kubenswrapper[4706]: I1125 11:48:58.310849 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c31bc178-49e3-4bb8-a6d0-ca9e27662b9a-config" (OuterVolumeSpecName: "config") pod "c31bc178-49e3-4bb8-a6d0-ca9e27662b9a" (UID: "c31bc178-49e3-4bb8-a6d0-ca9e27662b9a"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 11:48:58 crc kubenswrapper[4706]: I1125 11:48:58.310889 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c31bc178-49e3-4bb8-a6d0-ca9e27662b9a-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "c31bc178-49e3-4bb8-a6d0-ca9e27662b9a" (UID: "c31bc178-49e3-4bb8-a6d0-ca9e27662b9a"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 11:48:58 crc kubenswrapper[4706]: I1125 11:48:58.314678 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c31bc178-49e3-4bb8-a6d0-ca9e27662b9a-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "c31bc178-49e3-4bb8-a6d0-ca9e27662b9a" (UID: "c31bc178-49e3-4bb8-a6d0-ca9e27662b9a"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 11:48:58 crc kubenswrapper[4706]: I1125 11:48:58.317962 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c31bc178-49e3-4bb8-a6d0-ca9e27662b9a-kube-api-access-sg74s" (OuterVolumeSpecName: "kube-api-access-sg74s") pod "c31bc178-49e3-4bb8-a6d0-ca9e27662b9a" (UID: "c31bc178-49e3-4bb8-a6d0-ca9e27662b9a"). InnerVolumeSpecName "kube-api-access-sg74s". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 11:48:58 crc kubenswrapper[4706]: I1125 11:48:58.411331 4706 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/c31bc178-49e3-4bb8-a6d0-ca9e27662b9a-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Nov 25 11:48:58 crc kubenswrapper[4706]: I1125 11:48:58.411383 4706 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sg74s\" (UniqueName: \"kubernetes.io/projected/c31bc178-49e3-4bb8-a6d0-ca9e27662b9a-kube-api-access-sg74s\") on node \"crc\" DevicePath \"\"" Nov 25 11:48:58 crc kubenswrapper[4706]: I1125 11:48:58.411402 4706 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c31bc178-49e3-4bb8-a6d0-ca9e27662b9a-client-ca\") on node \"crc\" DevicePath \"\"" Nov 25 11:48:58 crc kubenswrapper[4706]: I1125 11:48:58.411418 4706 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c31bc178-49e3-4bb8-a6d0-ca9e27662b9a-config\") on node \"crc\" DevicePath \"\"" Nov 25 11:48:58 crc kubenswrapper[4706]: I1125 11:48:58.411432 4706 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c31bc178-49e3-4bb8-a6d0-ca9e27662b9a-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 25 11:48:58 crc kubenswrapper[4706]: I1125 11:48:58.767592 4706 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-j7x2j" Nov 25 11:48:58 crc kubenswrapper[4706]: I1125 11:48:58.767590 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-j7x2j" event={"ID":"8cd4c256-91b7-4b76-a9d3-6927ea77e61e","Type":"ContainerDied","Data":"ce3c60198e11b985d403328021a23d9ba4f0f30ea762a0582de78380240dc2eb"} Nov 25 11:48:58 crc kubenswrapper[4706]: I1125 11:48:58.767840 4706 scope.go:117] "RemoveContainer" containerID="ab384ce4e7c7b861b8b5646b14e994534e5e8213032d88f360cc56c5341f714f" Nov 25 11:48:58 crc kubenswrapper[4706]: I1125 11:48:58.769865 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-zf4pd" event={"ID":"c31bc178-49e3-4bb8-a6d0-ca9e27662b9a","Type":"ContainerDied","Data":"2889822a2c9c2c44c23ec80ec811bdc010023ca3ec00ab853e494408c01e510f"} Nov 25 11:48:58 crc kubenswrapper[4706]: I1125 11:48:58.769925 4706 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-zf4pd" Nov 25 11:48:58 crc kubenswrapper[4706]: I1125 11:48:58.787960 4706 scope.go:117] "RemoveContainer" containerID="ca43a5ab551800e1a7600a9c40946c9b8821c5bd86df830dc16ccfede1c21037" Nov 25 11:48:58 crc kubenswrapper[4706]: I1125 11:48:58.805143 4706 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-j7x2j"] Nov 25 11:48:58 crc kubenswrapper[4706]: I1125 11:48:58.809219 4706 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-j7x2j"] Nov 25 11:48:58 crc kubenswrapper[4706]: I1125 11:48:58.838558 4706 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-zf4pd"] Nov 25 11:48:58 crc kubenswrapper[4706]: I1125 11:48:58.844710 4706 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-zf4pd"] Nov 25 11:48:59 crc kubenswrapper[4706]: I1125 11:48:59.003916 4706 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772ewtpqc"] Nov 25 11:48:59 crc kubenswrapper[4706]: E1125 11:48:59.004395 4706 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8cd4c256-91b7-4b76-a9d3-6927ea77e61e" containerName="route-controller-manager" Nov 25 11:48:59 crc kubenswrapper[4706]: I1125 11:48:59.004473 4706 state_mem.go:107] "Deleted CPUSet assignment" podUID="8cd4c256-91b7-4b76-a9d3-6927ea77e61e" containerName="route-controller-manager" Nov 25 11:48:59 crc kubenswrapper[4706]: E1125 11:48:59.004543 4706 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c31bc178-49e3-4bb8-a6d0-ca9e27662b9a" containerName="controller-manager" Nov 25 11:48:59 crc kubenswrapper[4706]: I1125 11:48:59.004601 4706 state_mem.go:107] "Deleted CPUSet 
assignment" podUID="c31bc178-49e3-4bb8-a6d0-ca9e27662b9a" containerName="controller-manager" Nov 25 11:48:59 crc kubenswrapper[4706]: I1125 11:48:59.004763 4706 memory_manager.go:354] "RemoveStaleState removing state" podUID="8cd4c256-91b7-4b76-a9d3-6927ea77e61e" containerName="route-controller-manager" Nov 25 11:48:59 crc kubenswrapper[4706]: I1125 11:48:59.004836 4706 memory_manager.go:354] "RemoveStaleState removing state" podUID="c31bc178-49e3-4bb8-a6d0-ca9e27662b9a" containerName="controller-manager" Nov 25 11:48:59 crc kubenswrapper[4706]: I1125 11:48:59.005744 4706 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772ewtpqc" Nov 25 11:48:59 crc kubenswrapper[4706]: I1125 11:48:59.008512 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Nov 25 11:48:59 crc kubenswrapper[4706]: I1125 11:48:59.021401 4706 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772ewtpqc"] Nov 25 11:48:59 crc kubenswrapper[4706]: I1125 11:48:59.124651 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t5xl2\" (UniqueName: \"kubernetes.io/projected/05fa0078-a8e0-4b75-a7a8-d5ec5f21e034-kube-api-access-t5xl2\") pod \"5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772ewtpqc\" (UID: \"05fa0078-a8e0-4b75-a7a8-d5ec5f21e034\") " pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772ewtpqc" Nov 25 11:48:59 crc kubenswrapper[4706]: I1125 11:48:59.124798 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/05fa0078-a8e0-4b75-a7a8-d5ec5f21e034-bundle\") pod \"5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772ewtpqc\" (UID: 
\"05fa0078-a8e0-4b75-a7a8-d5ec5f21e034\") " pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772ewtpqc" Nov 25 11:48:59 crc kubenswrapper[4706]: I1125 11:48:59.124898 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/05fa0078-a8e0-4b75-a7a8-d5ec5f21e034-util\") pod \"5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772ewtpqc\" (UID: \"05fa0078-a8e0-4b75-a7a8-d5ec5f21e034\") " pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772ewtpqc" Nov 25 11:48:59 crc kubenswrapper[4706]: I1125 11:48:59.195337 4706 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-8fffdf67b-zjkc9"] Nov 25 11:48:59 crc kubenswrapper[4706]: I1125 11:48:59.196210 4706 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-8fffdf67b-zjkc9" Nov 25 11:48:59 crc kubenswrapper[4706]: I1125 11:48:59.199527 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Nov 25 11:48:59 crc kubenswrapper[4706]: I1125 11:48:59.199628 4706 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Nov 25 11:48:59 crc kubenswrapper[4706]: I1125 11:48:59.199633 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Nov 25 11:48:59 crc kubenswrapper[4706]: I1125 11:48:59.199769 4706 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Nov 25 11:48:59 crc kubenswrapper[4706]: I1125 11:48:59.200867 4706 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Nov 25 11:48:59 crc kubenswrapper[4706]: I1125 11:48:59.201080 
4706 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Nov 25 11:48:59 crc kubenswrapper[4706]: I1125 11:48:59.204754 4706 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-869675b6d5-6gcgl"] Nov 25 11:48:59 crc kubenswrapper[4706]: I1125 11:48:59.205882 4706 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-869675b6d5-6gcgl" Nov 25 11:48:59 crc kubenswrapper[4706]: I1125 11:48:59.208197 4706 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Nov 25 11:48:59 crc kubenswrapper[4706]: I1125 11:48:59.208248 4706 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Nov 25 11:48:59 crc kubenswrapper[4706]: I1125 11:48:59.208536 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Nov 25 11:48:59 crc kubenswrapper[4706]: I1125 11:48:59.208841 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Nov 25 11:48:59 crc kubenswrapper[4706]: I1125 11:48:59.208869 4706 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Nov 25 11:48:59 crc kubenswrapper[4706]: I1125 11:48:59.208917 4706 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Nov 25 11:48:59 crc kubenswrapper[4706]: I1125 11:48:59.209899 4706 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Nov 25 11:48:59 crc kubenswrapper[4706]: I1125 11:48:59.211162 4706 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-route-controller-manager/route-controller-manager-869675b6d5-6gcgl"] Nov 25 11:48:59 crc kubenswrapper[4706]: I1125 11:48:59.216331 4706 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-8fffdf67b-zjkc9"] Nov 25 11:48:59 crc kubenswrapper[4706]: I1125 11:48:59.226011 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d08b18f9-4fbd-4e86-99d4-7958d02246fb-client-ca\") pod \"controller-manager-8fffdf67b-zjkc9\" (UID: \"d08b18f9-4fbd-4e86-99d4-7958d02246fb\") " pod="openshift-controller-manager/controller-manager-8fffdf67b-zjkc9" Nov 25 11:48:59 crc kubenswrapper[4706]: I1125 11:48:59.226056 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d08b18f9-4fbd-4e86-99d4-7958d02246fb-serving-cert\") pod \"controller-manager-8fffdf67b-zjkc9\" (UID: \"d08b18f9-4fbd-4e86-99d4-7958d02246fb\") " pod="openshift-controller-manager/controller-manager-8fffdf67b-zjkc9" Nov 25 11:48:59 crc kubenswrapper[4706]: I1125 11:48:59.226085 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wl4pf\" (UniqueName: \"kubernetes.io/projected/d08b18f9-4fbd-4e86-99d4-7958d02246fb-kube-api-access-wl4pf\") pod \"controller-manager-8fffdf67b-zjkc9\" (UID: \"d08b18f9-4fbd-4e86-99d4-7958d02246fb\") " pod="openshift-controller-manager/controller-manager-8fffdf67b-zjkc9" Nov 25 11:48:59 crc kubenswrapper[4706]: I1125 11:48:59.226133 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d08b18f9-4fbd-4e86-99d4-7958d02246fb-proxy-ca-bundles\") pod \"controller-manager-8fffdf67b-zjkc9\" (UID: \"d08b18f9-4fbd-4e86-99d4-7958d02246fb\") " 
pod="openshift-controller-manager/controller-manager-8fffdf67b-zjkc9" Nov 25 11:48:59 crc kubenswrapper[4706]: I1125 11:48:59.226262 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/05fa0078-a8e0-4b75-a7a8-d5ec5f21e034-bundle\") pod \"5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772ewtpqc\" (UID: \"05fa0078-a8e0-4b75-a7a8-d5ec5f21e034\") " pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772ewtpqc" Nov 25 11:48:59 crc kubenswrapper[4706]: I1125 11:48:59.226332 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/05fa0078-a8e0-4b75-a7a8-d5ec5f21e034-util\") pod \"5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772ewtpqc\" (UID: \"05fa0078-a8e0-4b75-a7a8-d5ec5f21e034\") " pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772ewtpqc" Nov 25 11:48:59 crc kubenswrapper[4706]: I1125 11:48:59.226405 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t5xl2\" (UniqueName: \"kubernetes.io/projected/05fa0078-a8e0-4b75-a7a8-d5ec5f21e034-kube-api-access-t5xl2\") pod \"5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772ewtpqc\" (UID: \"05fa0078-a8e0-4b75-a7a8-d5ec5f21e034\") " pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772ewtpqc" Nov 25 11:48:59 crc kubenswrapper[4706]: I1125 11:48:59.226630 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d08b18f9-4fbd-4e86-99d4-7958d02246fb-config\") pod \"controller-manager-8fffdf67b-zjkc9\" (UID: \"d08b18f9-4fbd-4e86-99d4-7958d02246fb\") " pod="openshift-controller-manager/controller-manager-8fffdf67b-zjkc9" Nov 25 11:48:59 crc kubenswrapper[4706]: I1125 11:48:59.226933 4706 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/05fa0078-a8e0-4b75-a7a8-d5ec5f21e034-util\") pod \"5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772ewtpqc\" (UID: \"05fa0078-a8e0-4b75-a7a8-d5ec5f21e034\") " pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772ewtpqc" Nov 25 11:48:59 crc kubenswrapper[4706]: I1125 11:48:59.227098 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/05fa0078-a8e0-4b75-a7a8-d5ec5f21e034-bundle\") pod \"5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772ewtpqc\" (UID: \"05fa0078-a8e0-4b75-a7a8-d5ec5f21e034\") " pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772ewtpqc" Nov 25 11:48:59 crc kubenswrapper[4706]: I1125 11:48:59.246972 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t5xl2\" (UniqueName: \"kubernetes.io/projected/05fa0078-a8e0-4b75-a7a8-d5ec5f21e034-kube-api-access-t5xl2\") pod \"5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772ewtpqc\" (UID: \"05fa0078-a8e0-4b75-a7a8-d5ec5f21e034\") " pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772ewtpqc" Nov 25 11:48:59 crc kubenswrapper[4706]: I1125 11:48:59.319893 4706 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772ewtpqc" Nov 25 11:48:59 crc kubenswrapper[4706]: I1125 11:48:59.327766 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d08b18f9-4fbd-4e86-99d4-7958d02246fb-client-ca\") pod \"controller-manager-8fffdf67b-zjkc9\" (UID: \"d08b18f9-4fbd-4e86-99d4-7958d02246fb\") " pod="openshift-controller-manager/controller-manager-8fffdf67b-zjkc9" Nov 25 11:48:59 crc kubenswrapper[4706]: I1125 11:48:59.327808 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ddad5d5a-20e1-4a01-872a-ec9a60b03ad9-serving-cert\") pod \"route-controller-manager-869675b6d5-6gcgl\" (UID: \"ddad5d5a-20e1-4a01-872a-ec9a60b03ad9\") " pod="openshift-route-controller-manager/route-controller-manager-869675b6d5-6gcgl" Nov 25 11:48:59 crc kubenswrapper[4706]: I1125 11:48:59.327837 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d08b18f9-4fbd-4e86-99d4-7958d02246fb-serving-cert\") pod \"controller-manager-8fffdf67b-zjkc9\" (UID: \"d08b18f9-4fbd-4e86-99d4-7958d02246fb\") " pod="openshift-controller-manager/controller-manager-8fffdf67b-zjkc9" Nov 25 11:48:59 crc kubenswrapper[4706]: I1125 11:48:59.327855 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wl4pf\" (UniqueName: \"kubernetes.io/projected/d08b18f9-4fbd-4e86-99d4-7958d02246fb-kube-api-access-wl4pf\") pod \"controller-manager-8fffdf67b-zjkc9\" (UID: \"d08b18f9-4fbd-4e86-99d4-7958d02246fb\") " pod="openshift-controller-manager/controller-manager-8fffdf67b-zjkc9" Nov 25 11:48:59 crc kubenswrapper[4706]: I1125 11:48:59.327880 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" 
(UniqueName: \"kubernetes.io/configmap/d08b18f9-4fbd-4e86-99d4-7958d02246fb-proxy-ca-bundles\") pod \"controller-manager-8fffdf67b-zjkc9\" (UID: \"d08b18f9-4fbd-4e86-99d4-7958d02246fb\") " pod="openshift-controller-manager/controller-manager-8fffdf67b-zjkc9" Nov 25 11:48:59 crc kubenswrapper[4706]: I1125 11:48:59.327916 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vqslq\" (UniqueName: \"kubernetes.io/projected/ddad5d5a-20e1-4a01-872a-ec9a60b03ad9-kube-api-access-vqslq\") pod \"route-controller-manager-869675b6d5-6gcgl\" (UID: \"ddad5d5a-20e1-4a01-872a-ec9a60b03ad9\") " pod="openshift-route-controller-manager/route-controller-manager-869675b6d5-6gcgl" Nov 25 11:48:59 crc kubenswrapper[4706]: I1125 11:48:59.327942 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ddad5d5a-20e1-4a01-872a-ec9a60b03ad9-client-ca\") pod \"route-controller-manager-869675b6d5-6gcgl\" (UID: \"ddad5d5a-20e1-4a01-872a-ec9a60b03ad9\") " pod="openshift-route-controller-manager/route-controller-manager-869675b6d5-6gcgl" Nov 25 11:48:59 crc kubenswrapper[4706]: I1125 11:48:59.327970 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d08b18f9-4fbd-4e86-99d4-7958d02246fb-config\") pod \"controller-manager-8fffdf67b-zjkc9\" (UID: \"d08b18f9-4fbd-4e86-99d4-7958d02246fb\") " pod="openshift-controller-manager/controller-manager-8fffdf67b-zjkc9" Nov 25 11:48:59 crc kubenswrapper[4706]: I1125 11:48:59.327989 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ddad5d5a-20e1-4a01-872a-ec9a60b03ad9-config\") pod \"route-controller-manager-869675b6d5-6gcgl\" (UID: \"ddad5d5a-20e1-4a01-872a-ec9a60b03ad9\") " 
pod="openshift-route-controller-manager/route-controller-manager-869675b6d5-6gcgl" Nov 25 11:48:59 crc kubenswrapper[4706]: I1125 11:48:59.329018 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d08b18f9-4fbd-4e86-99d4-7958d02246fb-client-ca\") pod \"controller-manager-8fffdf67b-zjkc9\" (UID: \"d08b18f9-4fbd-4e86-99d4-7958d02246fb\") " pod="openshift-controller-manager/controller-manager-8fffdf67b-zjkc9" Nov 25 11:48:59 crc kubenswrapper[4706]: I1125 11:48:59.329562 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d08b18f9-4fbd-4e86-99d4-7958d02246fb-proxy-ca-bundles\") pod \"controller-manager-8fffdf67b-zjkc9\" (UID: \"d08b18f9-4fbd-4e86-99d4-7958d02246fb\") " pod="openshift-controller-manager/controller-manager-8fffdf67b-zjkc9" Nov 25 11:48:59 crc kubenswrapper[4706]: I1125 11:48:59.329629 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d08b18f9-4fbd-4e86-99d4-7958d02246fb-config\") pod \"controller-manager-8fffdf67b-zjkc9\" (UID: \"d08b18f9-4fbd-4e86-99d4-7958d02246fb\") " pod="openshift-controller-manager/controller-manager-8fffdf67b-zjkc9" Nov 25 11:48:59 crc kubenswrapper[4706]: I1125 11:48:59.331737 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d08b18f9-4fbd-4e86-99d4-7958d02246fb-serving-cert\") pod \"controller-manager-8fffdf67b-zjkc9\" (UID: \"d08b18f9-4fbd-4e86-99d4-7958d02246fb\") " pod="openshift-controller-manager/controller-manager-8fffdf67b-zjkc9" Nov 25 11:48:59 crc kubenswrapper[4706]: I1125 11:48:59.350142 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wl4pf\" (UniqueName: \"kubernetes.io/projected/d08b18f9-4fbd-4e86-99d4-7958d02246fb-kube-api-access-wl4pf\") pod 
\"controller-manager-8fffdf67b-zjkc9\" (UID: \"d08b18f9-4fbd-4e86-99d4-7958d02246fb\") " pod="openshift-controller-manager/controller-manager-8fffdf67b-zjkc9" Nov 25 11:48:59 crc kubenswrapper[4706]: I1125 11:48:59.429481 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ddad5d5a-20e1-4a01-872a-ec9a60b03ad9-serving-cert\") pod \"route-controller-manager-869675b6d5-6gcgl\" (UID: \"ddad5d5a-20e1-4a01-872a-ec9a60b03ad9\") " pod="openshift-route-controller-manager/route-controller-manager-869675b6d5-6gcgl" Nov 25 11:48:59 crc kubenswrapper[4706]: I1125 11:48:59.429974 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vqslq\" (UniqueName: \"kubernetes.io/projected/ddad5d5a-20e1-4a01-872a-ec9a60b03ad9-kube-api-access-vqslq\") pod \"route-controller-manager-869675b6d5-6gcgl\" (UID: \"ddad5d5a-20e1-4a01-872a-ec9a60b03ad9\") " pod="openshift-route-controller-manager/route-controller-manager-869675b6d5-6gcgl" Nov 25 11:48:59 crc kubenswrapper[4706]: I1125 11:48:59.430071 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ddad5d5a-20e1-4a01-872a-ec9a60b03ad9-client-ca\") pod \"route-controller-manager-869675b6d5-6gcgl\" (UID: \"ddad5d5a-20e1-4a01-872a-ec9a60b03ad9\") " pod="openshift-route-controller-manager/route-controller-manager-869675b6d5-6gcgl" Nov 25 11:48:59 crc kubenswrapper[4706]: I1125 11:48:59.430136 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ddad5d5a-20e1-4a01-872a-ec9a60b03ad9-config\") pod \"route-controller-manager-869675b6d5-6gcgl\" (UID: \"ddad5d5a-20e1-4a01-872a-ec9a60b03ad9\") " pod="openshift-route-controller-manager/route-controller-manager-869675b6d5-6gcgl" Nov 25 11:48:59 crc kubenswrapper[4706]: I1125 11:48:59.431043 4706 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ddad5d5a-20e1-4a01-872a-ec9a60b03ad9-client-ca\") pod \"route-controller-manager-869675b6d5-6gcgl\" (UID: \"ddad5d5a-20e1-4a01-872a-ec9a60b03ad9\") " pod="openshift-route-controller-manager/route-controller-manager-869675b6d5-6gcgl" Nov 25 11:48:59 crc kubenswrapper[4706]: I1125 11:48:59.431644 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ddad5d5a-20e1-4a01-872a-ec9a60b03ad9-config\") pod \"route-controller-manager-869675b6d5-6gcgl\" (UID: \"ddad5d5a-20e1-4a01-872a-ec9a60b03ad9\") " pod="openshift-route-controller-manager/route-controller-manager-869675b6d5-6gcgl" Nov 25 11:48:59 crc kubenswrapper[4706]: I1125 11:48:59.445887 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ddad5d5a-20e1-4a01-872a-ec9a60b03ad9-serving-cert\") pod \"route-controller-manager-869675b6d5-6gcgl\" (UID: \"ddad5d5a-20e1-4a01-872a-ec9a60b03ad9\") " pod="openshift-route-controller-manager/route-controller-manager-869675b6d5-6gcgl" Nov 25 11:48:59 crc kubenswrapper[4706]: I1125 11:48:59.449892 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vqslq\" (UniqueName: \"kubernetes.io/projected/ddad5d5a-20e1-4a01-872a-ec9a60b03ad9-kube-api-access-vqslq\") pod \"route-controller-manager-869675b6d5-6gcgl\" (UID: \"ddad5d5a-20e1-4a01-872a-ec9a60b03ad9\") " pod="openshift-route-controller-manager/route-controller-manager-869675b6d5-6gcgl" Nov 25 11:48:59 crc kubenswrapper[4706]: I1125 11:48:59.517233 4706 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-8fffdf67b-zjkc9" Nov 25 11:48:59 crc kubenswrapper[4706]: I1125 11:48:59.526581 4706 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-869675b6d5-6gcgl" Nov 25 11:48:59 crc kubenswrapper[4706]: I1125 11:48:59.740793 4706 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772ewtpqc"] Nov 25 11:48:59 crc kubenswrapper[4706]: I1125 11:48:59.778963 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772ewtpqc" event={"ID":"05fa0078-a8e0-4b75-a7a8-d5ec5f21e034","Type":"ContainerStarted","Data":"2fc574de6288a711e64ac7885d55e0012d44d195f078ce44f6a7d644156e3373"} Nov 25 11:48:59 crc kubenswrapper[4706]: I1125 11:48:59.798389 4706 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-8fffdf67b-zjkc9"] Nov 25 11:48:59 crc kubenswrapper[4706]: W1125 11:48:59.807976 4706 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd08b18f9_4fbd_4e86_99d4_7958d02246fb.slice/crio-9730e60dc6df512a1dda2b55cba44e3e43ac22312eab7acee1dc3c019190d20f WatchSource:0}: Error finding container 9730e60dc6df512a1dda2b55cba44e3e43ac22312eab7acee1dc3c019190d20f: Status 404 returned error can't find the container with id 9730e60dc6df512a1dda2b55cba44e3e43ac22312eab7acee1dc3c019190d20f Nov 25 11:48:59 crc kubenswrapper[4706]: I1125 11:48:59.930889 4706 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8cd4c256-91b7-4b76-a9d3-6927ea77e61e" path="/var/lib/kubelet/pods/8cd4c256-91b7-4b76-a9d3-6927ea77e61e/volumes" Nov 25 11:48:59 crc kubenswrapper[4706]: I1125 11:48:59.931762 4706 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c31bc178-49e3-4bb8-a6d0-ca9e27662b9a" path="/var/lib/kubelet/pods/c31bc178-49e3-4bb8-a6d0-ca9e27662b9a/volumes" Nov 25 11:48:59 crc kubenswrapper[4706]: I1125 11:48:59.959249 4706 kubelet.go:2428] 
"SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-869675b6d5-6gcgl"] Nov 25 11:48:59 crc kubenswrapper[4706]: W1125 11:48:59.965596 4706 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podddad5d5a_20e1_4a01_872a_ec9a60b03ad9.slice/crio-22cc11a09bdf5fb86e292628e3d39fa5df50c4b74e09010dbc7a691d649274bd WatchSource:0}: Error finding container 22cc11a09bdf5fb86e292628e3d39fa5df50c4b74e09010dbc7a691d649274bd: Status 404 returned error can't find the container with id 22cc11a09bdf5fb86e292628e3d39fa5df50c4b74e09010dbc7a691d649274bd Nov 25 11:49:00 crc kubenswrapper[4706]: I1125 11:49:00.788225 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-869675b6d5-6gcgl" event={"ID":"ddad5d5a-20e1-4a01-872a-ec9a60b03ad9","Type":"ContainerStarted","Data":"b9415a26e39b98d335bd71be9680361a5c7f602da3798a7f963bf0efdfe40a3b"} Nov 25 11:49:00 crc kubenswrapper[4706]: I1125 11:49:00.789156 4706 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-869675b6d5-6gcgl" Nov 25 11:49:00 crc kubenswrapper[4706]: I1125 11:49:00.789172 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-869675b6d5-6gcgl" event={"ID":"ddad5d5a-20e1-4a01-872a-ec9a60b03ad9","Type":"ContainerStarted","Data":"22cc11a09bdf5fb86e292628e3d39fa5df50c4b74e09010dbc7a691d649274bd"} Nov 25 11:49:00 crc kubenswrapper[4706]: I1125 11:49:00.790050 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772ewtpqc" event={"ID":"05fa0078-a8e0-4b75-a7a8-d5ec5f21e034","Type":"ContainerStarted","Data":"96a5c9643cd12dbff225946b5ee58c221f038f3dcb74d79adc0857b1f59b131e"} Nov 25 11:49:00 crc kubenswrapper[4706]: I1125 
11:49:00.792072 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-8fffdf67b-zjkc9" event={"ID":"d08b18f9-4fbd-4e86-99d4-7958d02246fb","Type":"ContainerStarted","Data":"7010a17a21eac2025142f96fe4143f6fbe40cafd1f6260de7769bd4dc95b20ce"} Nov 25 11:49:00 crc kubenswrapper[4706]: I1125 11:49:00.792095 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-8fffdf67b-zjkc9" event={"ID":"d08b18f9-4fbd-4e86-99d4-7958d02246fb","Type":"ContainerStarted","Data":"9730e60dc6df512a1dda2b55cba44e3e43ac22312eab7acee1dc3c019190d20f"} Nov 25 11:49:00 crc kubenswrapper[4706]: I1125 11:49:00.792572 4706 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-8fffdf67b-zjkc9" Nov 25 11:49:00 crc kubenswrapper[4706]: I1125 11:49:00.794347 4706 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-869675b6d5-6gcgl" Nov 25 11:49:00 crc kubenswrapper[4706]: I1125 11:49:00.796885 4706 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-8fffdf67b-zjkc9" Nov 25 11:49:00 crc kubenswrapper[4706]: I1125 11:49:00.808698 4706 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-869675b6d5-6gcgl" podStartSLOduration=3.808675353 podStartE2EDuration="3.808675353s" podCreationTimestamp="2025-11-25 11:48:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 11:49:00.807723493 +0000 UTC m=+749.722280874" watchObservedRunningTime="2025-11-25 11:49:00.808675353 +0000 UTC m=+749.723232734" Nov 25 11:49:00 crc kubenswrapper[4706]: I1125 11:49:00.830396 4706 pod_startup_latency_tracker.go:104] "Observed pod startup 
duration" pod="openshift-controller-manager/controller-manager-8fffdf67b-zjkc9" podStartSLOduration=3.830373679 podStartE2EDuration="3.830373679s" podCreationTimestamp="2025-11-25 11:48:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 11:49:00.828076271 +0000 UTC m=+749.742633652" watchObservedRunningTime="2025-11-25 11:49:00.830373679 +0000 UTC m=+749.744931060" Nov 25 11:49:01 crc kubenswrapper[4706]: I1125 11:49:01.124931 4706 patch_prober.go:28] interesting pod/machine-config-daemon-dhfpm container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 25 11:49:01 crc kubenswrapper[4706]: I1125 11:49:01.125022 4706 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dhfpm" podUID="0930887a-320c-4506-8c9c-f94d6d64516a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 25 11:49:01 crc kubenswrapper[4706]: I1125 11:49:01.800008 4706 generic.go:334] "Generic (PLEG): container finished" podID="05fa0078-a8e0-4b75-a7a8-d5ec5f21e034" containerID="96a5c9643cd12dbff225946b5ee58c221f038f3dcb74d79adc0857b1f59b131e" exitCode=0 Nov 25 11:49:01 crc kubenswrapper[4706]: I1125 11:49:01.800104 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772ewtpqc" event={"ID":"05fa0078-a8e0-4b75-a7a8-d5ec5f21e034","Type":"ContainerDied","Data":"96a5c9643cd12dbff225946b5ee58c221f038f3dcb74d79adc0857b1f59b131e"} Nov 25 11:49:04 crc kubenswrapper[4706]: I1125 11:49:04.820742 4706 generic.go:334] "Generic (PLEG): container finished" podID="05fa0078-a8e0-4b75-a7a8-d5ec5f21e034" 
containerID="ea142136a3381727c65add38010dbb146516d6b640a39da6a5e83e14101b7283" exitCode=0 Nov 25 11:49:04 crc kubenswrapper[4706]: I1125 11:49:04.820830 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772ewtpqc" event={"ID":"05fa0078-a8e0-4b75-a7a8-d5ec5f21e034","Type":"ContainerDied","Data":"ea142136a3381727c65add38010dbb146516d6b640a39da6a5e83e14101b7283"} Nov 25 11:49:05 crc kubenswrapper[4706]: I1125 11:49:05.841422 4706 generic.go:334] "Generic (PLEG): container finished" podID="05fa0078-a8e0-4b75-a7a8-d5ec5f21e034" containerID="77d2497dd44bfdd0f469780023e413c768a8dace64ca5a6743012556c99d3290" exitCode=0 Nov 25 11:49:05 crc kubenswrapper[4706]: I1125 11:49:05.841958 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772ewtpqc" event={"ID":"05fa0078-a8e0-4b75-a7a8-d5ec5f21e034","Type":"ContainerDied","Data":"77d2497dd44bfdd0f469780023e413c768a8dace64ca5a6743012556c99d3290"} Nov 25 11:49:06 crc kubenswrapper[4706]: I1125 11:49:06.758237 4706 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-hcv5z"] Nov 25 11:49:06 crc kubenswrapper[4706]: I1125 11:49:06.759775 4706 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-hcv5z" Nov 25 11:49:06 crc kubenswrapper[4706]: I1125 11:49:06.773522 4706 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-hcv5z"] Nov 25 11:49:06 crc kubenswrapper[4706]: I1125 11:49:06.864757 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3e0ba231-93b2-4bf1-9d67-66b3f2ee62b9-catalog-content\") pod \"redhat-operators-hcv5z\" (UID: \"3e0ba231-93b2-4bf1-9d67-66b3f2ee62b9\") " pod="openshift-marketplace/redhat-operators-hcv5z" Nov 25 11:49:06 crc kubenswrapper[4706]: I1125 11:49:06.864843 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4zld5\" (UniqueName: \"kubernetes.io/projected/3e0ba231-93b2-4bf1-9d67-66b3f2ee62b9-kube-api-access-4zld5\") pod \"redhat-operators-hcv5z\" (UID: \"3e0ba231-93b2-4bf1-9d67-66b3f2ee62b9\") " pod="openshift-marketplace/redhat-operators-hcv5z" Nov 25 11:49:06 crc kubenswrapper[4706]: I1125 11:49:06.864874 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3e0ba231-93b2-4bf1-9d67-66b3f2ee62b9-utilities\") pod \"redhat-operators-hcv5z\" (UID: \"3e0ba231-93b2-4bf1-9d67-66b3f2ee62b9\") " pod="openshift-marketplace/redhat-operators-hcv5z" Nov 25 11:49:06 crc kubenswrapper[4706]: I1125 11:49:06.966525 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3e0ba231-93b2-4bf1-9d67-66b3f2ee62b9-catalog-content\") pod \"redhat-operators-hcv5z\" (UID: \"3e0ba231-93b2-4bf1-9d67-66b3f2ee62b9\") " pod="openshift-marketplace/redhat-operators-hcv5z" Nov 25 11:49:06 crc kubenswrapper[4706]: I1125 11:49:06.966626 4706 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"kube-api-access-4zld5\" (UniqueName: \"kubernetes.io/projected/3e0ba231-93b2-4bf1-9d67-66b3f2ee62b9-kube-api-access-4zld5\") pod \"redhat-operators-hcv5z\" (UID: \"3e0ba231-93b2-4bf1-9d67-66b3f2ee62b9\") " pod="openshift-marketplace/redhat-operators-hcv5z" Nov 25 11:49:06 crc kubenswrapper[4706]: I1125 11:49:06.966663 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3e0ba231-93b2-4bf1-9d67-66b3f2ee62b9-utilities\") pod \"redhat-operators-hcv5z\" (UID: \"3e0ba231-93b2-4bf1-9d67-66b3f2ee62b9\") " pod="openshift-marketplace/redhat-operators-hcv5z" Nov 25 11:49:06 crc kubenswrapper[4706]: I1125 11:49:06.967392 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3e0ba231-93b2-4bf1-9d67-66b3f2ee62b9-utilities\") pod \"redhat-operators-hcv5z\" (UID: \"3e0ba231-93b2-4bf1-9d67-66b3f2ee62b9\") " pod="openshift-marketplace/redhat-operators-hcv5z" Nov 25 11:49:06 crc kubenswrapper[4706]: I1125 11:49:06.967514 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3e0ba231-93b2-4bf1-9d67-66b3f2ee62b9-catalog-content\") pod \"redhat-operators-hcv5z\" (UID: \"3e0ba231-93b2-4bf1-9d67-66b3f2ee62b9\") " pod="openshift-marketplace/redhat-operators-hcv5z" Nov 25 11:49:06 crc kubenswrapper[4706]: I1125 11:49:06.998952 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4zld5\" (UniqueName: \"kubernetes.io/projected/3e0ba231-93b2-4bf1-9d67-66b3f2ee62b9-kube-api-access-4zld5\") pod \"redhat-operators-hcv5z\" (UID: \"3e0ba231-93b2-4bf1-9d67-66b3f2ee62b9\") " pod="openshift-marketplace/redhat-operators-hcv5z" Nov 25 11:49:07 crc kubenswrapper[4706]: I1125 11:49:07.077648 4706 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-hcv5z" Nov 25 11:49:07 crc kubenswrapper[4706]: I1125 11:49:07.246127 4706 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772ewtpqc" Nov 25 11:49:07 crc kubenswrapper[4706]: I1125 11:49:07.379250 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/05fa0078-a8e0-4b75-a7a8-d5ec5f21e034-bundle\") pod \"05fa0078-a8e0-4b75-a7a8-d5ec5f21e034\" (UID: \"05fa0078-a8e0-4b75-a7a8-d5ec5f21e034\") " Nov 25 11:49:07 crc kubenswrapper[4706]: I1125 11:49:07.379346 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/05fa0078-a8e0-4b75-a7a8-d5ec5f21e034-util\") pod \"05fa0078-a8e0-4b75-a7a8-d5ec5f21e034\" (UID: \"05fa0078-a8e0-4b75-a7a8-d5ec5f21e034\") " Nov 25 11:49:07 crc kubenswrapper[4706]: I1125 11:49:07.379377 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t5xl2\" (UniqueName: \"kubernetes.io/projected/05fa0078-a8e0-4b75-a7a8-d5ec5f21e034-kube-api-access-t5xl2\") pod \"05fa0078-a8e0-4b75-a7a8-d5ec5f21e034\" (UID: \"05fa0078-a8e0-4b75-a7a8-d5ec5f21e034\") " Nov 25 11:49:07 crc kubenswrapper[4706]: I1125 11:49:07.380905 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/05fa0078-a8e0-4b75-a7a8-d5ec5f21e034-bundle" (OuterVolumeSpecName: "bundle") pod "05fa0078-a8e0-4b75-a7a8-d5ec5f21e034" (UID: "05fa0078-a8e0-4b75-a7a8-d5ec5f21e034"). InnerVolumeSpecName "bundle". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 11:49:07 crc kubenswrapper[4706]: I1125 11:49:07.384960 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/05fa0078-a8e0-4b75-a7a8-d5ec5f21e034-kube-api-access-t5xl2" (OuterVolumeSpecName: "kube-api-access-t5xl2") pod "05fa0078-a8e0-4b75-a7a8-d5ec5f21e034" (UID: "05fa0078-a8e0-4b75-a7a8-d5ec5f21e034"). InnerVolumeSpecName "kube-api-access-t5xl2". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 11:49:07 crc kubenswrapper[4706]: I1125 11:49:07.392257 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/05fa0078-a8e0-4b75-a7a8-d5ec5f21e034-util" (OuterVolumeSpecName: "util") pod "05fa0078-a8e0-4b75-a7a8-d5ec5f21e034" (UID: "05fa0078-a8e0-4b75-a7a8-d5ec5f21e034"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 11:49:07 crc kubenswrapper[4706]: I1125 11:49:07.481716 4706 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/05fa0078-a8e0-4b75-a7a8-d5ec5f21e034-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 11:49:07 crc kubenswrapper[4706]: I1125 11:49:07.481784 4706 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/05fa0078-a8e0-4b75-a7a8-d5ec5f21e034-util\") on node \"crc\" DevicePath \"\"" Nov 25 11:49:07 crc kubenswrapper[4706]: I1125 11:49:07.481799 4706 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t5xl2\" (UniqueName: \"kubernetes.io/projected/05fa0078-a8e0-4b75-a7a8-d5ec5f21e034-kube-api-access-t5xl2\") on node \"crc\" DevicePath \"\"" Nov 25 11:49:07 crc kubenswrapper[4706]: I1125 11:49:07.559260 4706 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-hcv5z"] Nov 25 11:49:07 crc kubenswrapper[4706]: W1125 11:49:07.568692 4706 manager.go:1169] Failed to process watch event 
{EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3e0ba231_93b2_4bf1_9d67_66b3f2ee62b9.slice/crio-22dd8598a2bf9ae5f2bc4da24bb5ae0867384e731bc8741dde98c299869b903c WatchSource:0}: Error finding container 22dd8598a2bf9ae5f2bc4da24bb5ae0867384e731bc8741dde98c299869b903c: Status 404 returned error can't find the container with id 22dd8598a2bf9ae5f2bc4da24bb5ae0867384e731bc8741dde98c299869b903c Nov 25 11:49:07 crc kubenswrapper[4706]: I1125 11:49:07.683956 4706 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Nov 25 11:49:07 crc kubenswrapper[4706]: I1125 11:49:07.854077 4706 generic.go:334] "Generic (PLEG): container finished" podID="3e0ba231-93b2-4bf1-9d67-66b3f2ee62b9" containerID="42708b073166d392c4cb599d0e1fc8797d5cb08425893d4617aa76d553b16c7f" exitCode=0 Nov 25 11:49:07 crc kubenswrapper[4706]: I1125 11:49:07.854166 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hcv5z" event={"ID":"3e0ba231-93b2-4bf1-9d67-66b3f2ee62b9","Type":"ContainerDied","Data":"42708b073166d392c4cb599d0e1fc8797d5cb08425893d4617aa76d553b16c7f"} Nov 25 11:49:07 crc kubenswrapper[4706]: I1125 11:49:07.854199 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hcv5z" event={"ID":"3e0ba231-93b2-4bf1-9d67-66b3f2ee62b9","Type":"ContainerStarted","Data":"22dd8598a2bf9ae5f2bc4da24bb5ae0867384e731bc8741dde98c299869b903c"} Nov 25 11:49:07 crc kubenswrapper[4706]: I1125 11:49:07.859110 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772ewtpqc" event={"ID":"05fa0078-a8e0-4b75-a7a8-d5ec5f21e034","Type":"ContainerDied","Data":"2fc574de6288a711e64ac7885d55e0012d44d195f078ce44f6a7d644156e3373"} Nov 25 11:49:07 crc kubenswrapper[4706]: I1125 11:49:07.859159 4706 pod_container_deletor.go:80] "Container not found in pod's 
containers" containerID="2fc574de6288a711e64ac7885d55e0012d44d195f078ce44f6a7d644156e3373" Nov 25 11:49:07 crc kubenswrapper[4706]: I1125 11:49:07.859218 4706 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772ewtpqc" Nov 25 11:49:09 crc kubenswrapper[4706]: I1125 11:49:09.831277 4706 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-operator-557fdffb88-4wx96"] Nov 25 11:49:09 crc kubenswrapper[4706]: E1125 11:49:09.831939 4706 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="05fa0078-a8e0-4b75-a7a8-d5ec5f21e034" containerName="extract" Nov 25 11:49:09 crc kubenswrapper[4706]: I1125 11:49:09.831959 4706 state_mem.go:107] "Deleted CPUSet assignment" podUID="05fa0078-a8e0-4b75-a7a8-d5ec5f21e034" containerName="extract" Nov 25 11:49:09 crc kubenswrapper[4706]: E1125 11:49:09.831977 4706 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="05fa0078-a8e0-4b75-a7a8-d5ec5f21e034" containerName="util" Nov 25 11:49:09 crc kubenswrapper[4706]: I1125 11:49:09.831983 4706 state_mem.go:107] "Deleted CPUSet assignment" podUID="05fa0078-a8e0-4b75-a7a8-d5ec5f21e034" containerName="util" Nov 25 11:49:09 crc kubenswrapper[4706]: E1125 11:49:09.831991 4706 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="05fa0078-a8e0-4b75-a7a8-d5ec5f21e034" containerName="pull" Nov 25 11:49:09 crc kubenswrapper[4706]: I1125 11:49:09.831996 4706 state_mem.go:107] "Deleted CPUSet assignment" podUID="05fa0078-a8e0-4b75-a7a8-d5ec5f21e034" containerName="pull" Nov 25 11:49:09 crc kubenswrapper[4706]: I1125 11:49:09.832086 4706 memory_manager.go:354] "RemoveStaleState removing state" podUID="05fa0078-a8e0-4b75-a7a8-d5ec5f21e034" containerName="extract" Nov 25 11:49:09 crc kubenswrapper[4706]: I1125 11:49:09.832490 4706 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-operator-557fdffb88-4wx96" Nov 25 11:49:09 crc kubenswrapper[4706]: I1125 11:49:09.839476 4706 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"kube-root-ca.crt" Nov 25 11:49:09 crc kubenswrapper[4706]: I1125 11:49:09.839724 4706 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"openshift-service-ca.crt" Nov 25 11:49:09 crc kubenswrapper[4706]: I1125 11:49:09.839779 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-operator-dockercfg-675cp" Nov 25 11:49:09 crc kubenswrapper[4706]: I1125 11:49:09.848484 4706 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-557fdffb88-4wx96"] Nov 25 11:49:09 crc kubenswrapper[4706]: I1125 11:49:09.917674 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gjqb8\" (UniqueName: \"kubernetes.io/projected/e4a0ddea-a6b5-456d-9243-3a7576fcdac8-kube-api-access-gjqb8\") pod \"nmstate-operator-557fdffb88-4wx96\" (UID: \"e4a0ddea-a6b5-456d-9243-3a7576fcdac8\") " pod="openshift-nmstate/nmstate-operator-557fdffb88-4wx96" Nov 25 11:49:10 crc kubenswrapper[4706]: I1125 11:49:10.019735 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gjqb8\" (UniqueName: \"kubernetes.io/projected/e4a0ddea-a6b5-456d-9243-3a7576fcdac8-kube-api-access-gjqb8\") pod \"nmstate-operator-557fdffb88-4wx96\" (UID: \"e4a0ddea-a6b5-456d-9243-3a7576fcdac8\") " pod="openshift-nmstate/nmstate-operator-557fdffb88-4wx96" Nov 25 11:49:10 crc kubenswrapper[4706]: I1125 11:49:10.042104 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gjqb8\" (UniqueName: \"kubernetes.io/projected/e4a0ddea-a6b5-456d-9243-3a7576fcdac8-kube-api-access-gjqb8\") pod \"nmstate-operator-557fdffb88-4wx96\" (UID: 
\"e4a0ddea-a6b5-456d-9243-3a7576fcdac8\") " pod="openshift-nmstate/nmstate-operator-557fdffb88-4wx96" Nov 25 11:49:10 crc kubenswrapper[4706]: I1125 11:49:10.180423 4706 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-operator-557fdffb88-4wx96" Nov 25 11:49:10 crc kubenswrapper[4706]: I1125 11:49:10.676606 4706 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-557fdffb88-4wx96"] Nov 25 11:49:10 crc kubenswrapper[4706]: W1125 11:49:10.692785 4706 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode4a0ddea_a6b5_456d_9243_3a7576fcdac8.slice/crio-945294526b1a2956496d6934db2c3909dbaf6c163160b03f58f29d0a27bd93c4 WatchSource:0}: Error finding container 945294526b1a2956496d6934db2c3909dbaf6c163160b03f58f29d0a27bd93c4: Status 404 returned error can't find the container with id 945294526b1a2956496d6934db2c3909dbaf6c163160b03f58f29d0a27bd93c4 Nov 25 11:49:10 crc kubenswrapper[4706]: I1125 11:49:10.900020 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-557fdffb88-4wx96" event={"ID":"e4a0ddea-a6b5-456d-9243-3a7576fcdac8","Type":"ContainerStarted","Data":"945294526b1a2956496d6934db2c3909dbaf6c163160b03f58f29d0a27bd93c4"} Nov 25 11:49:17 crc kubenswrapper[4706]: I1125 11:49:17.947119 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hcv5z" event={"ID":"3e0ba231-93b2-4bf1-9d67-66b3f2ee62b9","Type":"ContainerStarted","Data":"32c25e6c0726b3f6982d4d0c6e9052d2f73369c2c0aaddeb1b3fdbdb658d57d4"} Nov 25 11:49:17 crc kubenswrapper[4706]: I1125 11:49:17.949859 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-557fdffb88-4wx96" event={"ID":"e4a0ddea-a6b5-456d-9243-3a7576fcdac8","Type":"ContainerStarted","Data":"071f527869eb67738717bae91d645ffb72ca34d24b98687e474b676cceb1bfdd"} Nov 25 
11:49:18 crc kubenswrapper[4706]: I1125 11:49:18.957155 4706 generic.go:334] "Generic (PLEG): container finished" podID="3e0ba231-93b2-4bf1-9d67-66b3f2ee62b9" containerID="32c25e6c0726b3f6982d4d0c6e9052d2f73369c2c0aaddeb1b3fdbdb658d57d4" exitCode=0 Nov 25 11:49:18 crc kubenswrapper[4706]: I1125 11:49:18.957234 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hcv5z" event={"ID":"3e0ba231-93b2-4bf1-9d67-66b3f2ee62b9","Type":"ContainerDied","Data":"32c25e6c0726b3f6982d4d0c6e9052d2f73369c2c0aaddeb1b3fdbdb658d57d4"} Nov 25 11:49:18 crc kubenswrapper[4706]: I1125 11:49:18.984198 4706 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-operator-557fdffb88-4wx96" podStartSLOduration=3.343140857 podStartE2EDuration="9.984172826s" podCreationTimestamp="2025-11-25 11:49:09 +0000 UTC" firstStartedPulling="2025-11-25 11:49:10.697608106 +0000 UTC m=+759.612165487" lastFinishedPulling="2025-11-25 11:49:17.338640075 +0000 UTC m=+766.253197456" observedRunningTime="2025-11-25 11:49:17.997594361 +0000 UTC m=+766.912151772" watchObservedRunningTime="2025-11-25 11:49:18.984172826 +0000 UTC m=+767.898730207" Nov 25 11:49:19 crc kubenswrapper[4706]: I1125 11:49:19.038495 4706 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-metrics-5dcf9c57c5-rd4nq"] Nov 25 11:49:19 crc kubenswrapper[4706]: I1125 11:49:19.042572 4706 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-metrics-5dcf9c57c5-rd4nq" Nov 25 11:49:19 crc kubenswrapper[4706]: I1125 11:49:19.044726 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-handler-dockercfg-gsml7" Nov 25 11:49:19 crc kubenswrapper[4706]: I1125 11:49:19.050526 4706 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-webhook-6b89b748d8-k7vl7"] Nov 25 11:49:19 crc kubenswrapper[4706]: I1125 11:49:19.052981 4706 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-webhook-6b89b748d8-k7vl7" Nov 25 11:49:19 crc kubenswrapper[4706]: I1125 11:49:19.054031 4706 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-5dcf9c57c5-rd4nq"] Nov 25 11:49:19 crc kubenswrapper[4706]: I1125 11:49:19.055587 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"openshift-nmstate-webhook" Nov 25 11:49:19 crc kubenswrapper[4706]: I1125 11:49:19.089813 4706 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-6b89b748d8-k7vl7"] Nov 25 11:49:19 crc kubenswrapper[4706]: I1125 11:49:19.113386 4706 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-handler-qkksf"] Nov 25 11:49:19 crc kubenswrapper[4706]: I1125 11:49:19.114920 4706 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-handler-qkksf" Nov 25 11:49:19 crc kubenswrapper[4706]: I1125 11:49:19.158422 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rgs92\" (UniqueName: \"kubernetes.io/projected/9220b323-ff51-4a2d-95fc-dc3274e8fbeb-kube-api-access-rgs92\") pod \"nmstate-webhook-6b89b748d8-k7vl7\" (UID: \"9220b323-ff51-4a2d-95fc-dc3274e8fbeb\") " pod="openshift-nmstate/nmstate-webhook-6b89b748d8-k7vl7" Nov 25 11:49:19 crc kubenswrapper[4706]: I1125 11:49:19.158483 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5l4zq\" (UniqueName: \"kubernetes.io/projected/2454859f-90ab-4942-a300-36e465597289-kube-api-access-5l4zq\") pod \"nmstate-handler-qkksf\" (UID: \"2454859f-90ab-4942-a300-36e465597289\") " pod="openshift-nmstate/nmstate-handler-qkksf" Nov 25 11:49:19 crc kubenswrapper[4706]: I1125 11:49:19.158603 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/2454859f-90ab-4942-a300-36e465597289-dbus-socket\") pod \"nmstate-handler-qkksf\" (UID: \"2454859f-90ab-4942-a300-36e465597289\") " pod="openshift-nmstate/nmstate-handler-qkksf" Nov 25 11:49:19 crc kubenswrapper[4706]: I1125 11:49:19.158867 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5drgz\" (UniqueName: \"kubernetes.io/projected/a206555f-6ea8-4dbc-83db-801c57226c13-kube-api-access-5drgz\") pod \"nmstate-metrics-5dcf9c57c5-rd4nq\" (UID: \"a206555f-6ea8-4dbc-83db-801c57226c13\") " pod="openshift-nmstate/nmstate-metrics-5dcf9c57c5-rd4nq" Nov 25 11:49:19 crc kubenswrapper[4706]: I1125 11:49:19.159143 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-socket\" (UniqueName: 
\"kubernetes.io/host-path/2454859f-90ab-4942-a300-36e465597289-ovs-socket\") pod \"nmstate-handler-qkksf\" (UID: \"2454859f-90ab-4942-a300-36e465597289\") " pod="openshift-nmstate/nmstate-handler-qkksf" Nov 25 11:49:19 crc kubenswrapper[4706]: I1125 11:49:19.159272 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/9220b323-ff51-4a2d-95fc-dc3274e8fbeb-tls-key-pair\") pod \"nmstate-webhook-6b89b748d8-k7vl7\" (UID: \"9220b323-ff51-4a2d-95fc-dc3274e8fbeb\") " pod="openshift-nmstate/nmstate-webhook-6b89b748d8-k7vl7" Nov 25 11:49:19 crc kubenswrapper[4706]: I1125 11:49:19.159349 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/2454859f-90ab-4942-a300-36e465597289-nmstate-lock\") pod \"nmstate-handler-qkksf\" (UID: \"2454859f-90ab-4942-a300-36e465597289\") " pod="openshift-nmstate/nmstate-handler-qkksf" Nov 25 11:49:19 crc kubenswrapper[4706]: I1125 11:49:19.219539 4706 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-console-plugin-5874bd7bc5-4k4ff"] Nov 25 11:49:19 crc kubenswrapper[4706]: I1125 11:49:19.220526 4706 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-5874bd7bc5-4k4ff" Nov 25 11:49:19 crc kubenswrapper[4706]: I1125 11:49:19.223077 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"plugin-serving-cert" Nov 25 11:49:19 crc kubenswrapper[4706]: I1125 11:49:19.223434 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"default-dockercfg-z7mtb" Nov 25 11:49:19 crc kubenswrapper[4706]: I1125 11:49:19.224503 4706 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"nginx-conf" Nov 25 11:49:19 crc kubenswrapper[4706]: I1125 11:49:19.237360 4706 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-5874bd7bc5-4k4ff"] Nov 25 11:49:19 crc kubenswrapper[4706]: I1125 11:49:19.261400 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5drgz\" (UniqueName: \"kubernetes.io/projected/a206555f-6ea8-4dbc-83db-801c57226c13-kube-api-access-5drgz\") pod \"nmstate-metrics-5dcf9c57c5-rd4nq\" (UID: \"a206555f-6ea8-4dbc-83db-801c57226c13\") " pod="openshift-nmstate/nmstate-metrics-5dcf9c57c5-rd4nq" Nov 25 11:49:19 crc kubenswrapper[4706]: I1125 11:49:19.261498 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/2454859f-90ab-4942-a300-36e465597289-ovs-socket\") pod \"nmstate-handler-qkksf\" (UID: \"2454859f-90ab-4942-a300-36e465597289\") " pod="openshift-nmstate/nmstate-handler-qkksf" Nov 25 11:49:19 crc kubenswrapper[4706]: I1125 11:49:19.261538 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/9220b323-ff51-4a2d-95fc-dc3274e8fbeb-tls-key-pair\") pod \"nmstate-webhook-6b89b748d8-k7vl7\" (UID: \"9220b323-ff51-4a2d-95fc-dc3274e8fbeb\") " pod="openshift-nmstate/nmstate-webhook-6b89b748d8-k7vl7" Nov 25 11:49:19 crc 
kubenswrapper[4706]: I1125 11:49:19.261564 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/2454859f-90ab-4942-a300-36e465597289-nmstate-lock\") pod \"nmstate-handler-qkksf\" (UID: \"2454859f-90ab-4942-a300-36e465597289\") " pod="openshift-nmstate/nmstate-handler-qkksf" Nov 25 11:49:19 crc kubenswrapper[4706]: I1125 11:49:19.261589 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rgs92\" (UniqueName: \"kubernetes.io/projected/9220b323-ff51-4a2d-95fc-dc3274e8fbeb-kube-api-access-rgs92\") pod \"nmstate-webhook-6b89b748d8-k7vl7\" (UID: \"9220b323-ff51-4a2d-95fc-dc3274e8fbeb\") " pod="openshift-nmstate/nmstate-webhook-6b89b748d8-k7vl7" Nov 25 11:49:19 crc kubenswrapper[4706]: I1125 11:49:19.261615 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5l4zq\" (UniqueName: \"kubernetes.io/projected/2454859f-90ab-4942-a300-36e465597289-kube-api-access-5l4zq\") pod \"nmstate-handler-qkksf\" (UID: \"2454859f-90ab-4942-a300-36e465597289\") " pod="openshift-nmstate/nmstate-handler-qkksf" Nov 25 11:49:19 crc kubenswrapper[4706]: I1125 11:49:19.261640 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/2454859f-90ab-4942-a300-36e465597289-dbus-socket\") pod \"nmstate-handler-qkksf\" (UID: \"2454859f-90ab-4942-a300-36e465597289\") " pod="openshift-nmstate/nmstate-handler-qkksf" Nov 25 11:49:19 crc kubenswrapper[4706]: I1125 11:49:19.262039 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/2454859f-90ab-4942-a300-36e465597289-dbus-socket\") pod \"nmstate-handler-qkksf\" (UID: \"2454859f-90ab-4942-a300-36e465597289\") " pod="openshift-nmstate/nmstate-handler-qkksf" Nov 25 11:49:19 crc kubenswrapper[4706]: I1125 11:49:19.262094 4706 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/2454859f-90ab-4942-a300-36e465597289-nmstate-lock\") pod \"nmstate-handler-qkksf\" (UID: \"2454859f-90ab-4942-a300-36e465597289\") " pod="openshift-nmstate/nmstate-handler-qkksf" Nov 25 11:49:19 crc kubenswrapper[4706]: I1125 11:49:19.262116 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/2454859f-90ab-4942-a300-36e465597289-ovs-socket\") pod \"nmstate-handler-qkksf\" (UID: \"2454859f-90ab-4942-a300-36e465597289\") " pod="openshift-nmstate/nmstate-handler-qkksf" Nov 25 11:49:19 crc kubenswrapper[4706]: I1125 11:49:19.271554 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/9220b323-ff51-4a2d-95fc-dc3274e8fbeb-tls-key-pair\") pod \"nmstate-webhook-6b89b748d8-k7vl7\" (UID: \"9220b323-ff51-4a2d-95fc-dc3274e8fbeb\") " pod="openshift-nmstate/nmstate-webhook-6b89b748d8-k7vl7" Nov 25 11:49:19 crc kubenswrapper[4706]: I1125 11:49:19.281227 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5drgz\" (UniqueName: \"kubernetes.io/projected/a206555f-6ea8-4dbc-83db-801c57226c13-kube-api-access-5drgz\") pod \"nmstate-metrics-5dcf9c57c5-rd4nq\" (UID: \"a206555f-6ea8-4dbc-83db-801c57226c13\") " pod="openshift-nmstate/nmstate-metrics-5dcf9c57c5-rd4nq" Nov 25 11:49:19 crc kubenswrapper[4706]: I1125 11:49:19.283099 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5l4zq\" (UniqueName: \"kubernetes.io/projected/2454859f-90ab-4942-a300-36e465597289-kube-api-access-5l4zq\") pod \"nmstate-handler-qkksf\" (UID: \"2454859f-90ab-4942-a300-36e465597289\") " pod="openshift-nmstate/nmstate-handler-qkksf" Nov 25 11:49:19 crc kubenswrapper[4706]: I1125 11:49:19.284111 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-rgs92\" (UniqueName: \"kubernetes.io/projected/9220b323-ff51-4a2d-95fc-dc3274e8fbeb-kube-api-access-rgs92\") pod \"nmstate-webhook-6b89b748d8-k7vl7\" (UID: \"9220b323-ff51-4a2d-95fc-dc3274e8fbeb\") " pod="openshift-nmstate/nmstate-webhook-6b89b748d8-k7vl7" Nov 25 11:49:19 crc kubenswrapper[4706]: I1125 11:49:19.361496 4706 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-5dcf9c57c5-rd4nq" Nov 25 11:49:19 crc kubenswrapper[4706]: I1125 11:49:19.363062 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zfhtz\" (UniqueName: \"kubernetes.io/projected/502cb16b-4f8d-47ba-96a0-41e42768fe63-kube-api-access-zfhtz\") pod \"nmstate-console-plugin-5874bd7bc5-4k4ff\" (UID: \"502cb16b-4f8d-47ba-96a0-41e42768fe63\") " pod="openshift-nmstate/nmstate-console-plugin-5874bd7bc5-4k4ff" Nov 25 11:49:19 crc kubenswrapper[4706]: I1125 11:49:19.363153 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/502cb16b-4f8d-47ba-96a0-41e42768fe63-plugin-serving-cert\") pod \"nmstate-console-plugin-5874bd7bc5-4k4ff\" (UID: \"502cb16b-4f8d-47ba-96a0-41e42768fe63\") " pod="openshift-nmstate/nmstate-console-plugin-5874bd7bc5-4k4ff" Nov 25 11:49:19 crc kubenswrapper[4706]: I1125 11:49:19.363253 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/502cb16b-4f8d-47ba-96a0-41e42768fe63-nginx-conf\") pod \"nmstate-console-plugin-5874bd7bc5-4k4ff\" (UID: \"502cb16b-4f8d-47ba-96a0-41e42768fe63\") " pod="openshift-nmstate/nmstate-console-plugin-5874bd7bc5-4k4ff" Nov 25 11:49:19 crc kubenswrapper[4706]: I1125 11:49:19.378898 4706 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-webhook-6b89b748d8-k7vl7" Nov 25 11:49:19 crc kubenswrapper[4706]: I1125 11:49:19.427895 4706 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-6b648576cb-skcws"] Nov 25 11:49:19 crc kubenswrapper[4706]: I1125 11:49:19.428703 4706 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-6b648576cb-skcws" Nov 25 11:49:19 crc kubenswrapper[4706]: I1125 11:49:19.437101 4706 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-handler-qkksf" Nov 25 11:49:19 crc kubenswrapper[4706]: I1125 11:49:19.442571 4706 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-6b648576cb-skcws"] Nov 25 11:49:19 crc kubenswrapper[4706]: I1125 11:49:19.466273 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/502cb16b-4f8d-47ba-96a0-41e42768fe63-nginx-conf\") pod \"nmstate-console-plugin-5874bd7bc5-4k4ff\" (UID: \"502cb16b-4f8d-47ba-96a0-41e42768fe63\") " pod="openshift-nmstate/nmstate-console-plugin-5874bd7bc5-4k4ff" Nov 25 11:49:19 crc kubenswrapper[4706]: I1125 11:49:19.466387 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zfhtz\" (UniqueName: \"kubernetes.io/projected/502cb16b-4f8d-47ba-96a0-41e42768fe63-kube-api-access-zfhtz\") pod \"nmstate-console-plugin-5874bd7bc5-4k4ff\" (UID: \"502cb16b-4f8d-47ba-96a0-41e42768fe63\") " pod="openshift-nmstate/nmstate-console-plugin-5874bd7bc5-4k4ff" Nov 25 11:49:19 crc kubenswrapper[4706]: I1125 11:49:19.466415 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/502cb16b-4f8d-47ba-96a0-41e42768fe63-plugin-serving-cert\") pod \"nmstate-console-plugin-5874bd7bc5-4k4ff\" (UID: \"502cb16b-4f8d-47ba-96a0-41e42768fe63\") " 
pod="openshift-nmstate/nmstate-console-plugin-5874bd7bc5-4k4ff" Nov 25 11:49:19 crc kubenswrapper[4706]: I1125 11:49:19.469073 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/502cb16b-4f8d-47ba-96a0-41e42768fe63-nginx-conf\") pod \"nmstate-console-plugin-5874bd7bc5-4k4ff\" (UID: \"502cb16b-4f8d-47ba-96a0-41e42768fe63\") " pod="openshift-nmstate/nmstate-console-plugin-5874bd7bc5-4k4ff" Nov 25 11:49:19 crc kubenswrapper[4706]: I1125 11:49:19.469988 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/502cb16b-4f8d-47ba-96a0-41e42768fe63-plugin-serving-cert\") pod \"nmstate-console-plugin-5874bd7bc5-4k4ff\" (UID: \"502cb16b-4f8d-47ba-96a0-41e42768fe63\") " pod="openshift-nmstate/nmstate-console-plugin-5874bd7bc5-4k4ff" Nov 25 11:49:19 crc kubenswrapper[4706]: I1125 11:49:19.494251 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zfhtz\" (UniqueName: \"kubernetes.io/projected/502cb16b-4f8d-47ba-96a0-41e42768fe63-kube-api-access-zfhtz\") pod \"nmstate-console-plugin-5874bd7bc5-4k4ff\" (UID: \"502cb16b-4f8d-47ba-96a0-41e42768fe63\") " pod="openshift-nmstate/nmstate-console-plugin-5874bd7bc5-4k4ff" Nov 25 11:49:19 crc kubenswrapper[4706]: I1125 11:49:19.545984 4706 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-5874bd7bc5-4k4ff" Nov 25 11:49:19 crc kubenswrapper[4706]: W1125 11:49:19.548098 4706 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2454859f_90ab_4942_a300_36e465597289.slice/crio-eb3a5d92ec837ef86b21274ddebdedaa6328c866a1a1b27c7fb712d0bbd3c017 WatchSource:0}: Error finding container eb3a5d92ec837ef86b21274ddebdedaa6328c866a1a1b27c7fb712d0bbd3c017: Status 404 returned error can't find the container with id eb3a5d92ec837ef86b21274ddebdedaa6328c866a1a1b27c7fb712d0bbd3c017 Nov 25 11:49:19 crc kubenswrapper[4706]: I1125 11:49:19.567059 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/f035406e-8c17-4737-a8ac-439434e244e5-console-oauth-config\") pod \"console-6b648576cb-skcws\" (UID: \"f035406e-8c17-4737-a8ac-439434e244e5\") " pod="openshift-console/console-6b648576cb-skcws" Nov 25 11:49:19 crc kubenswrapper[4706]: I1125 11:49:19.567149 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/f035406e-8c17-4737-a8ac-439434e244e5-oauth-serving-cert\") pod \"console-6b648576cb-skcws\" (UID: \"f035406e-8c17-4737-a8ac-439434e244e5\") " pod="openshift-console/console-6b648576cb-skcws" Nov 25 11:49:19 crc kubenswrapper[4706]: I1125 11:49:19.567207 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f035406e-8c17-4737-a8ac-439434e244e5-trusted-ca-bundle\") pod \"console-6b648576cb-skcws\" (UID: \"f035406e-8c17-4737-a8ac-439434e244e5\") " pod="openshift-console/console-6b648576cb-skcws" Nov 25 11:49:19 crc kubenswrapper[4706]: I1125 11:49:19.567237 4706 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/f035406e-8c17-4737-a8ac-439434e244e5-console-serving-cert\") pod \"console-6b648576cb-skcws\" (UID: \"f035406e-8c17-4737-a8ac-439434e244e5\") " pod="openshift-console/console-6b648576cb-skcws" Nov 25 11:49:19 crc kubenswrapper[4706]: I1125 11:49:19.567256 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/f035406e-8c17-4737-a8ac-439434e244e5-service-ca\") pod \"console-6b648576cb-skcws\" (UID: \"f035406e-8c17-4737-a8ac-439434e244e5\") " pod="openshift-console/console-6b648576cb-skcws" Nov 25 11:49:19 crc kubenswrapper[4706]: I1125 11:49:19.567337 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s68xb\" (UniqueName: \"kubernetes.io/projected/f035406e-8c17-4737-a8ac-439434e244e5-kube-api-access-s68xb\") pod \"console-6b648576cb-skcws\" (UID: \"f035406e-8c17-4737-a8ac-439434e244e5\") " pod="openshift-console/console-6b648576cb-skcws" Nov 25 11:49:19 crc kubenswrapper[4706]: I1125 11:49:19.567363 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/f035406e-8c17-4737-a8ac-439434e244e5-console-config\") pod \"console-6b648576cb-skcws\" (UID: \"f035406e-8c17-4737-a8ac-439434e244e5\") " pod="openshift-console/console-6b648576cb-skcws" Nov 25 11:49:19 crc kubenswrapper[4706]: I1125 11:49:19.668473 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/f035406e-8c17-4737-a8ac-439434e244e5-oauth-serving-cert\") pod \"console-6b648576cb-skcws\" (UID: \"f035406e-8c17-4737-a8ac-439434e244e5\") " pod="openshift-console/console-6b648576cb-skcws" Nov 25 11:49:19 crc kubenswrapper[4706]: I1125 
11:49:19.668943 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f035406e-8c17-4737-a8ac-439434e244e5-trusted-ca-bundle\") pod \"console-6b648576cb-skcws\" (UID: \"f035406e-8c17-4737-a8ac-439434e244e5\") " pod="openshift-console/console-6b648576cb-skcws" Nov 25 11:49:19 crc kubenswrapper[4706]: I1125 11:49:19.668986 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/f035406e-8c17-4737-a8ac-439434e244e5-console-serving-cert\") pod \"console-6b648576cb-skcws\" (UID: \"f035406e-8c17-4737-a8ac-439434e244e5\") " pod="openshift-console/console-6b648576cb-skcws" Nov 25 11:49:19 crc kubenswrapper[4706]: I1125 11:49:19.669009 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/f035406e-8c17-4737-a8ac-439434e244e5-service-ca\") pod \"console-6b648576cb-skcws\" (UID: \"f035406e-8c17-4737-a8ac-439434e244e5\") " pod="openshift-console/console-6b648576cb-skcws" Nov 25 11:49:19 crc kubenswrapper[4706]: I1125 11:49:19.669046 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s68xb\" (UniqueName: \"kubernetes.io/projected/f035406e-8c17-4737-a8ac-439434e244e5-kube-api-access-s68xb\") pod \"console-6b648576cb-skcws\" (UID: \"f035406e-8c17-4737-a8ac-439434e244e5\") " pod="openshift-console/console-6b648576cb-skcws" Nov 25 11:49:19 crc kubenswrapper[4706]: I1125 11:49:19.669084 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/f035406e-8c17-4737-a8ac-439434e244e5-console-config\") pod \"console-6b648576cb-skcws\" (UID: \"f035406e-8c17-4737-a8ac-439434e244e5\") " pod="openshift-console/console-6b648576cb-skcws" Nov 25 11:49:19 crc kubenswrapper[4706]: I1125 11:49:19.669136 4706 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/f035406e-8c17-4737-a8ac-439434e244e5-console-oauth-config\") pod \"console-6b648576cb-skcws\" (UID: \"f035406e-8c17-4737-a8ac-439434e244e5\") " pod="openshift-console/console-6b648576cb-skcws" Nov 25 11:49:19 crc kubenswrapper[4706]: I1125 11:49:19.671120 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/f035406e-8c17-4737-a8ac-439434e244e5-oauth-serving-cert\") pod \"console-6b648576cb-skcws\" (UID: \"f035406e-8c17-4737-a8ac-439434e244e5\") " pod="openshift-console/console-6b648576cb-skcws" Nov 25 11:49:19 crc kubenswrapper[4706]: I1125 11:49:19.671173 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/f035406e-8c17-4737-a8ac-439434e244e5-service-ca\") pod \"console-6b648576cb-skcws\" (UID: \"f035406e-8c17-4737-a8ac-439434e244e5\") " pod="openshift-console/console-6b648576cb-skcws" Nov 25 11:49:19 crc kubenswrapper[4706]: I1125 11:49:19.671718 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/f035406e-8c17-4737-a8ac-439434e244e5-console-config\") pod \"console-6b648576cb-skcws\" (UID: \"f035406e-8c17-4737-a8ac-439434e244e5\") " pod="openshift-console/console-6b648576cb-skcws" Nov 25 11:49:19 crc kubenswrapper[4706]: I1125 11:49:19.672933 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f035406e-8c17-4737-a8ac-439434e244e5-trusted-ca-bundle\") pod \"console-6b648576cb-skcws\" (UID: \"f035406e-8c17-4737-a8ac-439434e244e5\") " pod="openshift-console/console-6b648576cb-skcws" Nov 25 11:49:19 crc kubenswrapper[4706]: I1125 11:49:19.682196 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/f035406e-8c17-4737-a8ac-439434e244e5-console-serving-cert\") pod \"console-6b648576cb-skcws\" (UID: \"f035406e-8c17-4737-a8ac-439434e244e5\") " pod="openshift-console/console-6b648576cb-skcws" Nov 25 11:49:19 crc kubenswrapper[4706]: I1125 11:49:19.686954 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/f035406e-8c17-4737-a8ac-439434e244e5-console-oauth-config\") pod \"console-6b648576cb-skcws\" (UID: \"f035406e-8c17-4737-a8ac-439434e244e5\") " pod="openshift-console/console-6b648576cb-skcws" Nov 25 11:49:19 crc kubenswrapper[4706]: I1125 11:49:19.697880 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s68xb\" (UniqueName: \"kubernetes.io/projected/f035406e-8c17-4737-a8ac-439434e244e5-kube-api-access-s68xb\") pod \"console-6b648576cb-skcws\" (UID: \"f035406e-8c17-4737-a8ac-439434e244e5\") " pod="openshift-console/console-6b648576cb-skcws" Nov 25 11:49:19 crc kubenswrapper[4706]: I1125 11:49:19.782532 4706 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-6b648576cb-skcws" Nov 25 11:49:19 crc kubenswrapper[4706]: I1125 11:49:19.898602 4706 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-5dcf9c57c5-rd4nq"] Nov 25 11:49:19 crc kubenswrapper[4706]: I1125 11:49:19.966955 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-5dcf9c57c5-rd4nq" event={"ID":"a206555f-6ea8-4dbc-83db-801c57226c13","Type":"ContainerStarted","Data":"bd94c1692b2e3c565caae5e40f1d6dc2830bbc2efdca34ddfdb965a92929fffb"} Nov 25 11:49:19 crc kubenswrapper[4706]: I1125 11:49:19.968505 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-qkksf" event={"ID":"2454859f-90ab-4942-a300-36e465597289","Type":"ContainerStarted","Data":"eb3a5d92ec837ef86b21274ddebdedaa6328c866a1a1b27c7fb712d0bbd3c017"} Nov 25 11:49:19 crc kubenswrapper[4706]: I1125 11:49:19.978211 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hcv5z" event={"ID":"3e0ba231-93b2-4bf1-9d67-66b3f2ee62b9","Type":"ContainerStarted","Data":"3ed95bd02aea09904ee219613db5e41b37015c7881227419d3e11d439aa06b9b"} Nov 25 11:49:20 crc kubenswrapper[4706]: I1125 11:49:20.005409 4706 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-6b89b748d8-k7vl7"] Nov 25 11:49:20 crc kubenswrapper[4706]: I1125 11:49:20.008663 4706 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-hcv5z" podStartSLOduration=2.42163011 podStartE2EDuration="14.008632597s" podCreationTimestamp="2025-11-25 11:49:06 +0000 UTC" firstStartedPulling="2025-11-25 11:49:07.855819902 +0000 UTC m=+756.770377283" lastFinishedPulling="2025-11-25 11:49:19.442822389 +0000 UTC m=+768.357379770" observedRunningTime="2025-11-25 11:49:20.003179272 +0000 UTC m=+768.917736653" watchObservedRunningTime="2025-11-25 11:49:20.008632597 +0000 UTC 
m=+768.923189978" Nov 25 11:49:20 crc kubenswrapper[4706]: I1125 11:49:20.149720 4706 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-5874bd7bc5-4k4ff"] Nov 25 11:49:20 crc kubenswrapper[4706]: W1125 11:49:20.154236 4706 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod502cb16b_4f8d_47ba_96a0_41e42768fe63.slice/crio-f72a15f2ba278ab9e261792ea4acdd0a0fe94142efcf7bb32ee87b45626bf8ab WatchSource:0}: Error finding container f72a15f2ba278ab9e261792ea4acdd0a0fe94142efcf7bb32ee87b45626bf8ab: Status 404 returned error can't find the container with id f72a15f2ba278ab9e261792ea4acdd0a0fe94142efcf7bb32ee87b45626bf8ab Nov 25 11:49:20 crc kubenswrapper[4706]: I1125 11:49:20.272720 4706 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-6b648576cb-skcws"] Nov 25 11:49:20 crc kubenswrapper[4706]: W1125 11:49:20.287631 4706 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf035406e_8c17_4737_a8ac_439434e244e5.slice/crio-02501c6f241151f42eb896e18d188c74de256b144d6152d56fcfc2b6c028550d WatchSource:0}: Error finding container 02501c6f241151f42eb896e18d188c74de256b144d6152d56fcfc2b6c028550d: Status 404 returned error can't find the container with id 02501c6f241151f42eb896e18d188c74de256b144d6152d56fcfc2b6c028550d Nov 25 11:49:20 crc kubenswrapper[4706]: I1125 11:49:20.990406 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-6b648576cb-skcws" event={"ID":"f035406e-8c17-4737-a8ac-439434e244e5","Type":"ContainerStarted","Data":"77418dfce7a3c258ea135917f7713b73ed631e8288af8af973d5fe78397d6f25"} Nov 25 11:49:20 crc kubenswrapper[4706]: I1125 11:49:20.991057 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-6b648576cb-skcws" 
event={"ID":"f035406e-8c17-4737-a8ac-439434e244e5","Type":"ContainerStarted","Data":"02501c6f241151f42eb896e18d188c74de256b144d6152d56fcfc2b6c028550d"} Nov 25 11:49:20 crc kubenswrapper[4706]: I1125 11:49:20.996197 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-5874bd7bc5-4k4ff" event={"ID":"502cb16b-4f8d-47ba-96a0-41e42768fe63","Type":"ContainerStarted","Data":"f72a15f2ba278ab9e261792ea4acdd0a0fe94142efcf7bb32ee87b45626bf8ab"} Nov 25 11:49:20 crc kubenswrapper[4706]: I1125 11:49:20.999094 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-6b89b748d8-k7vl7" event={"ID":"9220b323-ff51-4a2d-95fc-dc3274e8fbeb","Type":"ContainerStarted","Data":"19cac32e5c3af280ce114a74a29d79fee3f3a4990a38e114de7c33868ec9b903"} Nov 25 11:49:21 crc kubenswrapper[4706]: I1125 11:49:21.016066 4706 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-6b648576cb-skcws" podStartSLOduration=2.01604318 podStartE2EDuration="2.01604318s" podCreationTimestamp="2025-11-25 11:49:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 11:49:21.010965713 +0000 UTC m=+769.925523104" watchObservedRunningTime="2025-11-25 11:49:21.01604318 +0000 UTC m=+769.930600561" Nov 25 11:49:27 crc kubenswrapper[4706]: I1125 11:49:27.077881 4706 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-hcv5z" Nov 25 11:49:27 crc kubenswrapper[4706]: I1125 11:49:27.078630 4706 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-hcv5z" Nov 25 11:49:27 crc kubenswrapper[4706]: I1125 11:49:27.128521 4706 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-hcv5z" Nov 25 11:49:28 crc kubenswrapper[4706]: I1125 
11:49:28.048811 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-5874bd7bc5-4k4ff" event={"ID":"502cb16b-4f8d-47ba-96a0-41e42768fe63","Type":"ContainerStarted","Data":"06f5c5171209ee53bbb579215dc11babcd64242700fdf1dc5fc6b3c4a27733f3"} Nov 25 11:49:28 crc kubenswrapper[4706]: I1125 11:49:28.052379 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-6b89b748d8-k7vl7" event={"ID":"9220b323-ff51-4a2d-95fc-dc3274e8fbeb","Type":"ContainerStarted","Data":"0bc339e93c2691290e61c64570b75bd17d4365c3476177ceba43a7064c00a5f9"} Nov 25 11:49:28 crc kubenswrapper[4706]: I1125 11:49:28.052617 4706 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-webhook-6b89b748d8-k7vl7" Nov 25 11:49:28 crc kubenswrapper[4706]: I1125 11:49:28.054617 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-5dcf9c57c5-rd4nq" event={"ID":"a206555f-6ea8-4dbc-83db-801c57226c13","Type":"ContainerStarted","Data":"7ddfd6928d2415aaa9aab72be4e24a917e3bff6ef9e53eaedb52bb0c84d6ec50"} Nov 25 11:49:28 crc kubenswrapper[4706]: I1125 11:49:28.056485 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-qkksf" event={"ID":"2454859f-90ab-4942-a300-36e465597289","Type":"ContainerStarted","Data":"bd033af6ca03fd540d7516da8872d423b17fd084f2af4c6bc87751c925925d47"} Nov 25 11:49:28 crc kubenswrapper[4706]: I1125 11:49:28.071441 4706 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-console-plugin-5874bd7bc5-4k4ff" podStartSLOduration=2.434199852 podStartE2EDuration="9.071409212s" podCreationTimestamp="2025-11-25 11:49:19 +0000 UTC" firstStartedPulling="2025-11-25 11:49:20.158542769 +0000 UTC m=+769.073100150" lastFinishedPulling="2025-11-25 11:49:26.795752129 +0000 UTC m=+775.710309510" observedRunningTime="2025-11-25 11:49:28.065772274 +0000 UTC 
m=+776.980329735" watchObservedRunningTime="2025-11-25 11:49:28.071409212 +0000 UTC m=+776.985966603" Nov 25 11:49:28 crc kubenswrapper[4706]: I1125 11:49:28.092605 4706 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-handler-qkksf" podStartSLOduration=1.8696601309999998 podStartE2EDuration="9.092580767s" podCreationTimestamp="2025-11-25 11:49:19 +0000 UTC" firstStartedPulling="2025-11-25 11:49:19.572848483 +0000 UTC m=+768.487405874" lastFinishedPulling="2025-11-25 11:49:26.795769119 +0000 UTC m=+775.710326510" observedRunningTime="2025-11-25 11:49:28.089101794 +0000 UTC m=+777.003659175" watchObservedRunningTime="2025-11-25 11:49:28.092580767 +0000 UTC m=+777.007138148" Nov 25 11:49:28 crc kubenswrapper[4706]: I1125 11:49:28.110540 4706 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-webhook-6b89b748d8-k7vl7" podStartSLOduration=2.313598946 podStartE2EDuration="9.110516604s" podCreationTimestamp="2025-11-25 11:49:19 +0000 UTC" firstStartedPulling="2025-11-25 11:49:20.010510976 +0000 UTC m=+768.925068357" lastFinishedPulling="2025-11-25 11:49:26.807428634 +0000 UTC m=+775.721986015" observedRunningTime="2025-11-25 11:49:28.107514191 +0000 UTC m=+777.022071572" watchObservedRunningTime="2025-11-25 11:49:28.110516604 +0000 UTC m=+777.025073985" Nov 25 11:49:28 crc kubenswrapper[4706]: I1125 11:49:28.118887 4706 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-hcv5z" Nov 25 11:49:28 crc kubenswrapper[4706]: I1125 11:49:28.202932 4706 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-hcv5z"] Nov 25 11:49:28 crc kubenswrapper[4706]: I1125 11:49:28.251065 4706 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-942d2"] Nov 25 11:49:28 crc kubenswrapper[4706]: I1125 11:49:28.251448 4706 kuberuntime_container.go:808] 
"Killing container with a grace period" pod="openshift-marketplace/redhat-operators-942d2" podUID="35b0ea9c-5ad8-4d74-a2ce-8d59e3a60f49" containerName="registry-server" containerID="cri-o://e133ff4c9a278dd34918625a1aca782c284818404f5841b1037dca0777466304" gracePeriod=2 Nov 25 11:49:28 crc kubenswrapper[4706]: I1125 11:49:28.847510 4706 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-942d2" Nov 25 11:49:28 crc kubenswrapper[4706]: I1125 11:49:28.938725 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/35b0ea9c-5ad8-4d74-a2ce-8d59e3a60f49-catalog-content\") pod \"35b0ea9c-5ad8-4d74-a2ce-8d59e3a60f49\" (UID: \"35b0ea9c-5ad8-4d74-a2ce-8d59e3a60f49\") " Nov 25 11:49:28 crc kubenswrapper[4706]: I1125 11:49:28.938796 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/35b0ea9c-5ad8-4d74-a2ce-8d59e3a60f49-utilities\") pod \"35b0ea9c-5ad8-4d74-a2ce-8d59e3a60f49\" (UID: \"35b0ea9c-5ad8-4d74-a2ce-8d59e3a60f49\") " Nov 25 11:49:28 crc kubenswrapper[4706]: I1125 11:49:28.938841 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hrr4s\" (UniqueName: \"kubernetes.io/projected/35b0ea9c-5ad8-4d74-a2ce-8d59e3a60f49-kube-api-access-hrr4s\") pod \"35b0ea9c-5ad8-4d74-a2ce-8d59e3a60f49\" (UID: \"35b0ea9c-5ad8-4d74-a2ce-8d59e3a60f49\") " Nov 25 11:49:28 crc kubenswrapper[4706]: I1125 11:49:28.939970 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/35b0ea9c-5ad8-4d74-a2ce-8d59e3a60f49-utilities" (OuterVolumeSpecName: "utilities") pod "35b0ea9c-5ad8-4d74-a2ce-8d59e3a60f49" (UID: "35b0ea9c-5ad8-4d74-a2ce-8d59e3a60f49"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 11:49:28 crc kubenswrapper[4706]: I1125 11:49:28.966592 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/35b0ea9c-5ad8-4d74-a2ce-8d59e3a60f49-kube-api-access-hrr4s" (OuterVolumeSpecName: "kube-api-access-hrr4s") pod "35b0ea9c-5ad8-4d74-a2ce-8d59e3a60f49" (UID: "35b0ea9c-5ad8-4d74-a2ce-8d59e3a60f49"). InnerVolumeSpecName "kube-api-access-hrr4s". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 11:49:29 crc kubenswrapper[4706]: I1125 11:49:29.037077 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/35b0ea9c-5ad8-4d74-a2ce-8d59e3a60f49-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "35b0ea9c-5ad8-4d74-a2ce-8d59e3a60f49" (UID: "35b0ea9c-5ad8-4d74-a2ce-8d59e3a60f49"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 11:49:29 crc kubenswrapper[4706]: I1125 11:49:29.041484 4706 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/35b0ea9c-5ad8-4d74-a2ce-8d59e3a60f49-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 25 11:49:29 crc kubenswrapper[4706]: I1125 11:49:29.041524 4706 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/35b0ea9c-5ad8-4d74-a2ce-8d59e3a60f49-utilities\") on node \"crc\" DevicePath \"\"" Nov 25 11:49:29 crc kubenswrapper[4706]: I1125 11:49:29.041537 4706 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hrr4s\" (UniqueName: \"kubernetes.io/projected/35b0ea9c-5ad8-4d74-a2ce-8d59e3a60f49-kube-api-access-hrr4s\") on node \"crc\" DevicePath \"\"" Nov 25 11:49:29 crc kubenswrapper[4706]: I1125 11:49:29.069576 4706 generic.go:334] "Generic (PLEG): container finished" podID="35b0ea9c-5ad8-4d74-a2ce-8d59e3a60f49" 
containerID="e133ff4c9a278dd34918625a1aca782c284818404f5841b1037dca0777466304" exitCode=0 Nov 25 11:49:29 crc kubenswrapper[4706]: I1125 11:49:29.069724 4706 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-942d2" Nov 25 11:49:29 crc kubenswrapper[4706]: I1125 11:49:29.069634 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-942d2" event={"ID":"35b0ea9c-5ad8-4d74-a2ce-8d59e3a60f49","Type":"ContainerDied","Data":"e133ff4c9a278dd34918625a1aca782c284818404f5841b1037dca0777466304"} Nov 25 11:49:29 crc kubenswrapper[4706]: I1125 11:49:29.069818 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-942d2" event={"ID":"35b0ea9c-5ad8-4d74-a2ce-8d59e3a60f49","Type":"ContainerDied","Data":"ca269415ac5d0dda76bd0c7102e4a0f44004d4516854177ee0f77c4b04006b1b"} Nov 25 11:49:29 crc kubenswrapper[4706]: I1125 11:49:29.069855 4706 scope.go:117] "RemoveContainer" containerID="e133ff4c9a278dd34918625a1aca782c284818404f5841b1037dca0777466304" Nov 25 11:49:29 crc kubenswrapper[4706]: I1125 11:49:29.071763 4706 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-handler-qkksf" Nov 25 11:49:29 crc kubenswrapper[4706]: I1125 11:49:29.098449 4706 scope.go:117] "RemoveContainer" containerID="d2276bdce9a2332424fbe4c644b9174b3576145aa2defe52212632625b5cf6d3" Nov 25 11:49:29 crc kubenswrapper[4706]: I1125 11:49:29.110573 4706 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-942d2"] Nov 25 11:49:29 crc kubenswrapper[4706]: I1125 11:49:29.116101 4706 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-942d2"] Nov 25 11:49:29 crc kubenswrapper[4706]: I1125 11:49:29.153770 4706 scope.go:117] "RemoveContainer" containerID="942e0b26ce986512a943c232ef66f8b6af87f039ae5d3111ce7113ed03a8afcc" Nov 25 11:49:29 crc 
kubenswrapper[4706]: I1125 11:49:29.178878 4706 scope.go:117] "RemoveContainer" containerID="e133ff4c9a278dd34918625a1aca782c284818404f5841b1037dca0777466304" Nov 25 11:49:29 crc kubenswrapper[4706]: E1125 11:49:29.181367 4706 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e133ff4c9a278dd34918625a1aca782c284818404f5841b1037dca0777466304\": container with ID starting with e133ff4c9a278dd34918625a1aca782c284818404f5841b1037dca0777466304 not found: ID does not exist" containerID="e133ff4c9a278dd34918625a1aca782c284818404f5841b1037dca0777466304" Nov 25 11:49:29 crc kubenswrapper[4706]: I1125 11:49:29.181510 4706 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e133ff4c9a278dd34918625a1aca782c284818404f5841b1037dca0777466304"} err="failed to get container status \"e133ff4c9a278dd34918625a1aca782c284818404f5841b1037dca0777466304\": rpc error: code = NotFound desc = could not find container \"e133ff4c9a278dd34918625a1aca782c284818404f5841b1037dca0777466304\": container with ID starting with e133ff4c9a278dd34918625a1aca782c284818404f5841b1037dca0777466304 not found: ID does not exist" Nov 25 11:49:29 crc kubenswrapper[4706]: I1125 11:49:29.181626 4706 scope.go:117] "RemoveContainer" containerID="d2276bdce9a2332424fbe4c644b9174b3576145aa2defe52212632625b5cf6d3" Nov 25 11:49:29 crc kubenswrapper[4706]: E1125 11:49:29.181973 4706 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d2276bdce9a2332424fbe4c644b9174b3576145aa2defe52212632625b5cf6d3\": container with ID starting with d2276bdce9a2332424fbe4c644b9174b3576145aa2defe52212632625b5cf6d3 not found: ID does not exist" containerID="d2276bdce9a2332424fbe4c644b9174b3576145aa2defe52212632625b5cf6d3" Nov 25 11:49:29 crc kubenswrapper[4706]: I1125 11:49:29.182010 4706 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"d2276bdce9a2332424fbe4c644b9174b3576145aa2defe52212632625b5cf6d3"} err="failed to get container status \"d2276bdce9a2332424fbe4c644b9174b3576145aa2defe52212632625b5cf6d3\": rpc error: code = NotFound desc = could not find container \"d2276bdce9a2332424fbe4c644b9174b3576145aa2defe52212632625b5cf6d3\": container with ID starting with d2276bdce9a2332424fbe4c644b9174b3576145aa2defe52212632625b5cf6d3 not found: ID does not exist" Nov 25 11:49:29 crc kubenswrapper[4706]: I1125 11:49:29.182028 4706 scope.go:117] "RemoveContainer" containerID="942e0b26ce986512a943c232ef66f8b6af87f039ae5d3111ce7113ed03a8afcc" Nov 25 11:49:29 crc kubenswrapper[4706]: E1125 11:49:29.182274 4706 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"942e0b26ce986512a943c232ef66f8b6af87f039ae5d3111ce7113ed03a8afcc\": container with ID starting with 942e0b26ce986512a943c232ef66f8b6af87f039ae5d3111ce7113ed03a8afcc not found: ID does not exist" containerID="942e0b26ce986512a943c232ef66f8b6af87f039ae5d3111ce7113ed03a8afcc" Nov 25 11:49:29 crc kubenswrapper[4706]: I1125 11:49:29.182314 4706 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"942e0b26ce986512a943c232ef66f8b6af87f039ae5d3111ce7113ed03a8afcc"} err="failed to get container status \"942e0b26ce986512a943c232ef66f8b6af87f039ae5d3111ce7113ed03a8afcc\": rpc error: code = NotFound desc = could not find container \"942e0b26ce986512a943c232ef66f8b6af87f039ae5d3111ce7113ed03a8afcc\": container with ID starting with 942e0b26ce986512a943c232ef66f8b6af87f039ae5d3111ce7113ed03a8afcc not found: ID does not exist" Nov 25 11:49:29 crc kubenswrapper[4706]: I1125 11:49:29.783203 4706 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-6b648576cb-skcws" Nov 25 11:49:29 crc kubenswrapper[4706]: I1125 11:49:29.784032 4706 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="openshift-console/console-6b648576cb-skcws" Nov 25 11:49:29 crc kubenswrapper[4706]: I1125 11:49:29.791216 4706 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-6b648576cb-skcws" Nov 25 11:49:29 crc kubenswrapper[4706]: I1125 11:49:29.944429 4706 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="35b0ea9c-5ad8-4d74-a2ce-8d59e3a60f49" path="/var/lib/kubelet/pods/35b0ea9c-5ad8-4d74-a2ce-8d59e3a60f49/volumes" Nov 25 11:49:30 crc kubenswrapper[4706]: I1125 11:49:30.082083 4706 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-6b648576cb-skcws" Nov 25 11:49:30 crc kubenswrapper[4706]: I1125 11:49:30.145577 4706 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-8f48m"] Nov 25 11:49:31 crc kubenswrapper[4706]: I1125 11:49:31.124778 4706 patch_prober.go:28] interesting pod/machine-config-daemon-dhfpm container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 25 11:49:31 crc kubenswrapper[4706]: I1125 11:49:31.125164 4706 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dhfpm" podUID="0930887a-320c-4506-8c9c-f94d6d64516a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 25 11:49:31 crc kubenswrapper[4706]: I1125 11:49:31.125230 4706 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-dhfpm" Nov 25 11:49:31 crc kubenswrapper[4706]: I1125 11:49:31.125918 4706 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" 
containerStatusID={"Type":"cri-o","ID":"683756e714349294998bf9e4fc9b79c9b932ba51c675e9492a76d30885edc873"} pod="openshift-machine-config-operator/machine-config-daemon-dhfpm" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 25 11:49:31 crc kubenswrapper[4706]: I1125 11:49:31.125988 4706 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-dhfpm" podUID="0930887a-320c-4506-8c9c-f94d6d64516a" containerName="machine-config-daemon" containerID="cri-o://683756e714349294998bf9e4fc9b79c9b932ba51c675e9492a76d30885edc873" gracePeriod=600 Nov 25 11:49:32 crc kubenswrapper[4706]: I1125 11:49:32.091939 4706 generic.go:334] "Generic (PLEG): container finished" podID="0930887a-320c-4506-8c9c-f94d6d64516a" containerID="683756e714349294998bf9e4fc9b79c9b932ba51c675e9492a76d30885edc873" exitCode=0 Nov 25 11:49:32 crc kubenswrapper[4706]: I1125 11:49:32.092006 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-dhfpm" event={"ID":"0930887a-320c-4506-8c9c-f94d6d64516a","Type":"ContainerDied","Data":"683756e714349294998bf9e4fc9b79c9b932ba51c675e9492a76d30885edc873"} Nov 25 11:49:32 crc kubenswrapper[4706]: I1125 11:49:32.092083 4706 scope.go:117] "RemoveContainer" containerID="0dd63e85870564c9c1e19ba8f686c8d7b197f9c962efb9def7912bf046e425dd" Nov 25 11:49:34 crc kubenswrapper[4706]: I1125 11:49:34.460528 4706 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-handler-qkksf" Nov 25 11:49:37 crc kubenswrapper[4706]: I1125 11:49:37.133390 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-dhfpm" event={"ID":"0930887a-320c-4506-8c9c-f94d6d64516a","Type":"ContainerStarted","Data":"fdd2404bf73191f443033ee21a4507eceb1c00713641b2459642f00fc3611d21"} Nov 25 11:49:37 crc kubenswrapper[4706]: I1125 
11:49:37.137463 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-5dcf9c57c5-rd4nq" event={"ID":"a206555f-6ea8-4dbc-83db-801c57226c13","Type":"ContainerStarted","Data":"7accf100fba574fd44bbc4d1a6a7d38de84844cdd79b16d9e0cc2fcf4652d552"} Nov 25 11:49:39 crc kubenswrapper[4706]: I1125 11:49:39.388072 4706 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-webhook-6b89b748d8-k7vl7" Nov 25 11:49:39 crc kubenswrapper[4706]: I1125 11:49:39.411950 4706 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-metrics-5dcf9c57c5-rd4nq" podStartSLOduration=3.59450711 podStartE2EDuration="20.411907067s" podCreationTimestamp="2025-11-25 11:49:19 +0000 UTC" firstStartedPulling="2025-11-25 11:49:19.925217473 +0000 UTC m=+768.839774854" lastFinishedPulling="2025-11-25 11:49:36.74261743 +0000 UTC m=+785.657174811" observedRunningTime="2025-11-25 11:49:37.181438217 +0000 UTC m=+786.095995598" watchObservedRunningTime="2025-11-25 11:49:39.411907067 +0000 UTC m=+788.326464448" Nov 25 11:49:51 crc kubenswrapper[4706]: I1125 11:49:51.554914 4706 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6dm4cn"] Nov 25 11:49:51 crc kubenswrapper[4706]: E1125 11:49:51.555960 4706 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="35b0ea9c-5ad8-4d74-a2ce-8d59e3a60f49" containerName="registry-server" Nov 25 11:49:51 crc kubenswrapper[4706]: I1125 11:49:51.555976 4706 state_mem.go:107] "Deleted CPUSet assignment" podUID="35b0ea9c-5ad8-4d74-a2ce-8d59e3a60f49" containerName="registry-server" Nov 25 11:49:51 crc kubenswrapper[4706]: E1125 11:49:51.555987 4706 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="35b0ea9c-5ad8-4d74-a2ce-8d59e3a60f49" containerName="extract-utilities" Nov 25 11:49:51 crc kubenswrapper[4706]: I1125 11:49:51.555994 4706 
state_mem.go:107] "Deleted CPUSet assignment" podUID="35b0ea9c-5ad8-4d74-a2ce-8d59e3a60f49" containerName="extract-utilities" Nov 25 11:49:51 crc kubenswrapper[4706]: E1125 11:49:51.556009 4706 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="35b0ea9c-5ad8-4d74-a2ce-8d59e3a60f49" containerName="extract-content" Nov 25 11:49:51 crc kubenswrapper[4706]: I1125 11:49:51.556015 4706 state_mem.go:107] "Deleted CPUSet assignment" podUID="35b0ea9c-5ad8-4d74-a2ce-8d59e3a60f49" containerName="extract-content" Nov 25 11:49:51 crc kubenswrapper[4706]: I1125 11:49:51.556130 4706 memory_manager.go:354] "RemoveStaleState removing state" podUID="35b0ea9c-5ad8-4d74-a2ce-8d59e3a60f49" containerName="registry-server" Nov 25 11:49:51 crc kubenswrapper[4706]: I1125 11:49:51.557081 4706 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6dm4cn" Nov 25 11:49:51 crc kubenswrapper[4706]: I1125 11:49:51.559891 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Nov 25 11:49:51 crc kubenswrapper[4706]: I1125 11:49:51.570623 4706 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6dm4cn"] Nov 25 11:49:51 crc kubenswrapper[4706]: I1125 11:49:51.707017 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/8c6ba0d0-db1d-4b2b-8c48-f3d9432a2532-bundle\") pod \"e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6dm4cn\" (UID: \"8c6ba0d0-db1d-4b2b-8c48-f3d9432a2532\") " pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6dm4cn" Nov 25 11:49:51 crc kubenswrapper[4706]: I1125 11:49:51.707091 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" 
(UniqueName: \"kubernetes.io/empty-dir/8c6ba0d0-db1d-4b2b-8c48-f3d9432a2532-util\") pod \"e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6dm4cn\" (UID: \"8c6ba0d0-db1d-4b2b-8c48-f3d9432a2532\") " pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6dm4cn" Nov 25 11:49:51 crc kubenswrapper[4706]: I1125 11:49:51.707157 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sz9h6\" (UniqueName: \"kubernetes.io/projected/8c6ba0d0-db1d-4b2b-8c48-f3d9432a2532-kube-api-access-sz9h6\") pod \"e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6dm4cn\" (UID: \"8c6ba0d0-db1d-4b2b-8c48-f3d9432a2532\") " pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6dm4cn" Nov 25 11:49:51 crc kubenswrapper[4706]: I1125 11:49:51.808789 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sz9h6\" (UniqueName: \"kubernetes.io/projected/8c6ba0d0-db1d-4b2b-8c48-f3d9432a2532-kube-api-access-sz9h6\") pod \"e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6dm4cn\" (UID: \"8c6ba0d0-db1d-4b2b-8c48-f3d9432a2532\") " pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6dm4cn" Nov 25 11:49:51 crc kubenswrapper[4706]: I1125 11:49:51.808881 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/8c6ba0d0-db1d-4b2b-8c48-f3d9432a2532-bundle\") pod \"e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6dm4cn\" (UID: \"8c6ba0d0-db1d-4b2b-8c48-f3d9432a2532\") " pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6dm4cn" Nov 25 11:49:51 crc kubenswrapper[4706]: I1125 11:49:51.808907 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/8c6ba0d0-db1d-4b2b-8c48-f3d9432a2532-util\") pod 
\"e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6dm4cn\" (UID: \"8c6ba0d0-db1d-4b2b-8c48-f3d9432a2532\") " pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6dm4cn" Nov 25 11:49:51 crc kubenswrapper[4706]: I1125 11:49:51.809581 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/8c6ba0d0-db1d-4b2b-8c48-f3d9432a2532-util\") pod \"e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6dm4cn\" (UID: \"8c6ba0d0-db1d-4b2b-8c48-f3d9432a2532\") " pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6dm4cn" Nov 25 11:49:51 crc kubenswrapper[4706]: I1125 11:49:51.809626 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/8c6ba0d0-db1d-4b2b-8c48-f3d9432a2532-bundle\") pod \"e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6dm4cn\" (UID: \"8c6ba0d0-db1d-4b2b-8c48-f3d9432a2532\") " pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6dm4cn" Nov 25 11:49:51 crc kubenswrapper[4706]: I1125 11:49:51.831527 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sz9h6\" (UniqueName: \"kubernetes.io/projected/8c6ba0d0-db1d-4b2b-8c48-f3d9432a2532-kube-api-access-sz9h6\") pod \"e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6dm4cn\" (UID: \"8c6ba0d0-db1d-4b2b-8c48-f3d9432a2532\") " pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6dm4cn" Nov 25 11:49:51 crc kubenswrapper[4706]: I1125 11:49:51.875076 4706 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6dm4cn" Nov 25 11:49:52 crc kubenswrapper[4706]: I1125 11:49:52.312539 4706 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6dm4cn"] Nov 25 11:49:53 crc kubenswrapper[4706]: I1125 11:49:53.249901 4706 generic.go:334] "Generic (PLEG): container finished" podID="8c6ba0d0-db1d-4b2b-8c48-f3d9432a2532" containerID="3568499fcfe793433378e4f8ae94b5af3152956b94eac6ac53bb9317c4b2abe7" exitCode=0 Nov 25 11:49:53 crc kubenswrapper[4706]: I1125 11:49:53.250262 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6dm4cn" event={"ID":"8c6ba0d0-db1d-4b2b-8c48-f3d9432a2532","Type":"ContainerDied","Data":"3568499fcfe793433378e4f8ae94b5af3152956b94eac6ac53bb9317c4b2abe7"} Nov 25 11:49:53 crc kubenswrapper[4706]: I1125 11:49:53.250373 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6dm4cn" event={"ID":"8c6ba0d0-db1d-4b2b-8c48-f3d9432a2532","Type":"ContainerStarted","Data":"c0fb4be807b81c4cd728d38a1f5c1bc9895d5223361cc0d705cf72c509f79135"} Nov 25 11:49:55 crc kubenswrapper[4706]: I1125 11:49:55.193366 4706 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-f9d7485db-8f48m" podUID="028d4ff3-870d-4002-843f-5381587e28fc" containerName="console" containerID="cri-o://8775e9a8f2126da2322f21e9e41b07221c4efa4814080ba886ee52fd5307941f" gracePeriod=15 Nov 25 11:49:55 crc kubenswrapper[4706]: I1125 11:49:55.267361 4706 generic.go:334] "Generic (PLEG): container finished" podID="8c6ba0d0-db1d-4b2b-8c48-f3d9432a2532" containerID="a35772aa5210d821bf4b2465c22bcbf5100052fb60e231c57400a8ddbedb85e1" exitCode=0 Nov 25 11:49:55 crc kubenswrapper[4706]: I1125 11:49:55.267432 4706 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6dm4cn" event={"ID":"8c6ba0d0-db1d-4b2b-8c48-f3d9432a2532","Type":"ContainerDied","Data":"a35772aa5210d821bf4b2465c22bcbf5100052fb60e231c57400a8ddbedb85e1"} Nov 25 11:49:55 crc kubenswrapper[4706]: I1125 11:49:55.653208 4706 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-8f48m_028d4ff3-870d-4002-843f-5381587e28fc/console/0.log" Nov 25 11:49:55 crc kubenswrapper[4706]: I1125 11:49:55.653687 4706 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-8f48m" Nov 25 11:49:55 crc kubenswrapper[4706]: I1125 11:49:55.702468 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/028d4ff3-870d-4002-843f-5381587e28fc-console-serving-cert\") pod \"028d4ff3-870d-4002-843f-5381587e28fc\" (UID: \"028d4ff3-870d-4002-843f-5381587e28fc\") " Nov 25 11:49:55 crc kubenswrapper[4706]: I1125 11:49:55.702531 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/028d4ff3-870d-4002-843f-5381587e28fc-console-config\") pod \"028d4ff3-870d-4002-843f-5381587e28fc\" (UID: \"028d4ff3-870d-4002-843f-5381587e28fc\") " Nov 25 11:49:55 crc kubenswrapper[4706]: I1125 11:49:55.702589 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h8j2l\" (UniqueName: \"kubernetes.io/projected/028d4ff3-870d-4002-843f-5381587e28fc-kube-api-access-h8j2l\") pod \"028d4ff3-870d-4002-843f-5381587e28fc\" (UID: \"028d4ff3-870d-4002-843f-5381587e28fc\") " Nov 25 11:49:55 crc kubenswrapper[4706]: I1125 11:49:55.702663 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: 
\"kubernetes.io/configmap/028d4ff3-870d-4002-843f-5381587e28fc-oauth-serving-cert\") pod \"028d4ff3-870d-4002-843f-5381587e28fc\" (UID: \"028d4ff3-870d-4002-843f-5381587e28fc\") " Nov 25 11:49:55 crc kubenswrapper[4706]: I1125 11:49:55.702720 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/028d4ff3-870d-4002-843f-5381587e28fc-console-oauth-config\") pod \"028d4ff3-870d-4002-843f-5381587e28fc\" (UID: \"028d4ff3-870d-4002-843f-5381587e28fc\") " Nov 25 11:49:55 crc kubenswrapper[4706]: I1125 11:49:55.703526 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/028d4ff3-870d-4002-843f-5381587e28fc-service-ca\") pod \"028d4ff3-870d-4002-843f-5381587e28fc\" (UID: \"028d4ff3-870d-4002-843f-5381587e28fc\") " Nov 25 11:49:55 crc kubenswrapper[4706]: I1125 11:49:55.703689 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/028d4ff3-870d-4002-843f-5381587e28fc-trusted-ca-bundle\") pod \"028d4ff3-870d-4002-843f-5381587e28fc\" (UID: \"028d4ff3-870d-4002-843f-5381587e28fc\") " Nov 25 11:49:55 crc kubenswrapper[4706]: I1125 11:49:55.704446 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/028d4ff3-870d-4002-843f-5381587e28fc-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "028d4ff3-870d-4002-843f-5381587e28fc" (UID: "028d4ff3-870d-4002-843f-5381587e28fc"). InnerVolumeSpecName "trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 11:49:55 crc kubenswrapper[4706]: I1125 11:49:55.704473 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/028d4ff3-870d-4002-843f-5381587e28fc-console-config" (OuterVolumeSpecName: "console-config") pod "028d4ff3-870d-4002-843f-5381587e28fc" (UID: "028d4ff3-870d-4002-843f-5381587e28fc"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 11:49:55 crc kubenswrapper[4706]: I1125 11:49:55.704594 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/028d4ff3-870d-4002-843f-5381587e28fc-service-ca" (OuterVolumeSpecName: "service-ca") pod "028d4ff3-870d-4002-843f-5381587e28fc" (UID: "028d4ff3-870d-4002-843f-5381587e28fc"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 11:49:55 crc kubenswrapper[4706]: I1125 11:49:55.704886 4706 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/028d4ff3-870d-4002-843f-5381587e28fc-console-config\") on node \"crc\" DevicePath \"\"" Nov 25 11:49:55 crc kubenswrapper[4706]: I1125 11:49:55.704913 4706 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/028d4ff3-870d-4002-843f-5381587e28fc-service-ca\") on node \"crc\" DevicePath \"\"" Nov 25 11:49:55 crc kubenswrapper[4706]: I1125 11:49:55.704926 4706 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/028d4ff3-870d-4002-843f-5381587e28fc-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 11:49:55 crc kubenswrapper[4706]: I1125 11:49:55.705054 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/028d4ff3-870d-4002-843f-5381587e28fc-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod 
"028d4ff3-870d-4002-843f-5381587e28fc" (UID: "028d4ff3-870d-4002-843f-5381587e28fc"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 11:49:55 crc kubenswrapper[4706]: I1125 11:49:55.709711 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/028d4ff3-870d-4002-843f-5381587e28fc-kube-api-access-h8j2l" (OuterVolumeSpecName: "kube-api-access-h8j2l") pod "028d4ff3-870d-4002-843f-5381587e28fc" (UID: "028d4ff3-870d-4002-843f-5381587e28fc"). InnerVolumeSpecName "kube-api-access-h8j2l". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 11:49:55 crc kubenswrapper[4706]: I1125 11:49:55.709786 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/028d4ff3-870d-4002-843f-5381587e28fc-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "028d4ff3-870d-4002-843f-5381587e28fc" (UID: "028d4ff3-870d-4002-843f-5381587e28fc"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 11:49:55 crc kubenswrapper[4706]: I1125 11:49:55.709957 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/028d4ff3-870d-4002-843f-5381587e28fc-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "028d4ff3-870d-4002-843f-5381587e28fc" (UID: "028d4ff3-870d-4002-843f-5381587e28fc"). InnerVolumeSpecName "console-oauth-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 11:49:55 crc kubenswrapper[4706]: I1125 11:49:55.806411 4706 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h8j2l\" (UniqueName: \"kubernetes.io/projected/028d4ff3-870d-4002-843f-5381587e28fc-kube-api-access-h8j2l\") on node \"crc\" DevicePath \"\"" Nov 25 11:49:55 crc kubenswrapper[4706]: I1125 11:49:55.806499 4706 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/028d4ff3-870d-4002-843f-5381587e28fc-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 25 11:49:55 crc kubenswrapper[4706]: I1125 11:49:55.806512 4706 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/028d4ff3-870d-4002-843f-5381587e28fc-console-oauth-config\") on node \"crc\" DevicePath \"\"" Nov 25 11:49:55 crc kubenswrapper[4706]: I1125 11:49:55.806528 4706 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/028d4ff3-870d-4002-843f-5381587e28fc-console-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 25 11:49:56 crc kubenswrapper[4706]: I1125 11:49:56.277122 4706 generic.go:334] "Generic (PLEG): container finished" podID="8c6ba0d0-db1d-4b2b-8c48-f3d9432a2532" containerID="8337daaeeb82728912e6cbcfe648c6053ecb3883a153d03460b657e7509bbc2a" exitCode=0 Nov 25 11:49:56 crc kubenswrapper[4706]: I1125 11:49:56.277256 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6dm4cn" event={"ID":"8c6ba0d0-db1d-4b2b-8c48-f3d9432a2532","Type":"ContainerDied","Data":"8337daaeeb82728912e6cbcfe648c6053ecb3883a153d03460b657e7509bbc2a"} Nov 25 11:49:56 crc kubenswrapper[4706]: I1125 11:49:56.279130 4706 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-console_console-f9d7485db-8f48m_028d4ff3-870d-4002-843f-5381587e28fc/console/0.log" Nov 25 11:49:56 crc kubenswrapper[4706]: I1125 11:49:56.279186 4706 generic.go:334] "Generic (PLEG): container finished" podID="028d4ff3-870d-4002-843f-5381587e28fc" containerID="8775e9a8f2126da2322f21e9e41b07221c4efa4814080ba886ee52fd5307941f" exitCode=2 Nov 25 11:49:56 crc kubenswrapper[4706]: I1125 11:49:56.279215 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-8f48m" event={"ID":"028d4ff3-870d-4002-843f-5381587e28fc","Type":"ContainerDied","Data":"8775e9a8f2126da2322f21e9e41b07221c4efa4814080ba886ee52fd5307941f"} Nov 25 11:49:56 crc kubenswrapper[4706]: I1125 11:49:56.279264 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-8f48m" event={"ID":"028d4ff3-870d-4002-843f-5381587e28fc","Type":"ContainerDied","Data":"c93f402d83d190e7bda96e6580d611d46d04715cb47032ce3fbc7cf8603b61e8"} Nov 25 11:49:56 crc kubenswrapper[4706]: I1125 11:49:56.279283 4706 scope.go:117] "RemoveContainer" containerID="8775e9a8f2126da2322f21e9e41b07221c4efa4814080ba886ee52fd5307941f" Nov 25 11:49:56 crc kubenswrapper[4706]: I1125 11:49:56.279379 4706 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-8f48m" Nov 25 11:49:56 crc kubenswrapper[4706]: I1125 11:49:56.308463 4706 scope.go:117] "RemoveContainer" containerID="8775e9a8f2126da2322f21e9e41b07221c4efa4814080ba886ee52fd5307941f" Nov 25 11:49:56 crc kubenswrapper[4706]: E1125 11:49:56.309252 4706 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8775e9a8f2126da2322f21e9e41b07221c4efa4814080ba886ee52fd5307941f\": container with ID starting with 8775e9a8f2126da2322f21e9e41b07221c4efa4814080ba886ee52fd5307941f not found: ID does not exist" containerID="8775e9a8f2126da2322f21e9e41b07221c4efa4814080ba886ee52fd5307941f" Nov 25 11:49:56 crc kubenswrapper[4706]: I1125 11:49:56.309313 4706 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8775e9a8f2126da2322f21e9e41b07221c4efa4814080ba886ee52fd5307941f"} err="failed to get container status \"8775e9a8f2126da2322f21e9e41b07221c4efa4814080ba886ee52fd5307941f\": rpc error: code = NotFound desc = could not find container \"8775e9a8f2126da2322f21e9e41b07221c4efa4814080ba886ee52fd5307941f\": container with ID starting with 8775e9a8f2126da2322f21e9e41b07221c4efa4814080ba886ee52fd5307941f not found: ID does not exist" Nov 25 11:49:56 crc kubenswrapper[4706]: I1125 11:49:56.330214 4706 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-8f48m"] Nov 25 11:49:56 crc kubenswrapper[4706]: I1125 11:49:56.336071 4706 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-f9d7485db-8f48m"] Nov 25 11:49:57 crc kubenswrapper[4706]: I1125 11:49:57.547570 4706 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6dm4cn" Nov 25 11:49:57 crc kubenswrapper[4706]: I1125 11:49:57.636724 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sz9h6\" (UniqueName: \"kubernetes.io/projected/8c6ba0d0-db1d-4b2b-8c48-f3d9432a2532-kube-api-access-sz9h6\") pod \"8c6ba0d0-db1d-4b2b-8c48-f3d9432a2532\" (UID: \"8c6ba0d0-db1d-4b2b-8c48-f3d9432a2532\") " Nov 25 11:49:57 crc kubenswrapper[4706]: I1125 11:49:57.636880 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/8c6ba0d0-db1d-4b2b-8c48-f3d9432a2532-util\") pod \"8c6ba0d0-db1d-4b2b-8c48-f3d9432a2532\" (UID: \"8c6ba0d0-db1d-4b2b-8c48-f3d9432a2532\") " Nov 25 11:49:57 crc kubenswrapper[4706]: I1125 11:49:57.636927 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/8c6ba0d0-db1d-4b2b-8c48-f3d9432a2532-bundle\") pod \"8c6ba0d0-db1d-4b2b-8c48-f3d9432a2532\" (UID: \"8c6ba0d0-db1d-4b2b-8c48-f3d9432a2532\") " Nov 25 11:49:57 crc kubenswrapper[4706]: I1125 11:49:57.638521 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8c6ba0d0-db1d-4b2b-8c48-f3d9432a2532-bundle" (OuterVolumeSpecName: "bundle") pod "8c6ba0d0-db1d-4b2b-8c48-f3d9432a2532" (UID: "8c6ba0d0-db1d-4b2b-8c48-f3d9432a2532"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 11:49:57 crc kubenswrapper[4706]: I1125 11:49:57.644131 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8c6ba0d0-db1d-4b2b-8c48-f3d9432a2532-kube-api-access-sz9h6" (OuterVolumeSpecName: "kube-api-access-sz9h6") pod "8c6ba0d0-db1d-4b2b-8c48-f3d9432a2532" (UID: "8c6ba0d0-db1d-4b2b-8c48-f3d9432a2532"). InnerVolumeSpecName "kube-api-access-sz9h6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 11:49:57 crc kubenswrapper[4706]: I1125 11:49:57.656970 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8c6ba0d0-db1d-4b2b-8c48-f3d9432a2532-util" (OuterVolumeSpecName: "util") pod "8c6ba0d0-db1d-4b2b-8c48-f3d9432a2532" (UID: "8c6ba0d0-db1d-4b2b-8c48-f3d9432a2532"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 11:49:57 crc kubenswrapper[4706]: I1125 11:49:57.739076 4706 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sz9h6\" (UniqueName: \"kubernetes.io/projected/8c6ba0d0-db1d-4b2b-8c48-f3d9432a2532-kube-api-access-sz9h6\") on node \"crc\" DevicePath \"\"" Nov 25 11:49:57 crc kubenswrapper[4706]: I1125 11:49:57.739131 4706 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/8c6ba0d0-db1d-4b2b-8c48-f3d9432a2532-util\") on node \"crc\" DevicePath \"\"" Nov 25 11:49:57 crc kubenswrapper[4706]: I1125 11:49:57.739147 4706 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/8c6ba0d0-db1d-4b2b-8c48-f3d9432a2532-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 11:49:57 crc kubenswrapper[4706]: I1125 11:49:57.931863 4706 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="028d4ff3-870d-4002-843f-5381587e28fc" path="/var/lib/kubelet/pods/028d4ff3-870d-4002-843f-5381587e28fc/volumes" Nov 25 11:49:58 crc kubenswrapper[4706]: I1125 11:49:58.296107 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6dm4cn" event={"ID":"8c6ba0d0-db1d-4b2b-8c48-f3d9432a2532","Type":"ContainerDied","Data":"c0fb4be807b81c4cd728d38a1f5c1bc9895d5223361cc0d705cf72c509f79135"} Nov 25 11:49:58 crc kubenswrapper[4706]: I1125 11:49:58.296174 4706 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="c0fb4be807b81c4cd728d38a1f5c1bc9895d5223361cc0d705cf72c509f79135" Nov 25 11:49:58 crc kubenswrapper[4706]: I1125 11:49:58.296187 4706 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6dm4cn" Nov 25 11:50:06 crc kubenswrapper[4706]: I1125 11:50:06.962433 4706 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-controller-manager-7d76b4f6c7-xxkgj"] Nov 25 11:50:06 crc kubenswrapper[4706]: E1125 11:50:06.963619 4706 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8c6ba0d0-db1d-4b2b-8c48-f3d9432a2532" containerName="util" Nov 25 11:50:06 crc kubenswrapper[4706]: I1125 11:50:06.963635 4706 state_mem.go:107] "Deleted CPUSet assignment" podUID="8c6ba0d0-db1d-4b2b-8c48-f3d9432a2532" containerName="util" Nov 25 11:50:06 crc kubenswrapper[4706]: E1125 11:50:06.963651 4706 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="028d4ff3-870d-4002-843f-5381587e28fc" containerName="console" Nov 25 11:50:06 crc kubenswrapper[4706]: I1125 11:50:06.963658 4706 state_mem.go:107] "Deleted CPUSet assignment" podUID="028d4ff3-870d-4002-843f-5381587e28fc" containerName="console" Nov 25 11:50:06 crc kubenswrapper[4706]: E1125 11:50:06.963668 4706 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8c6ba0d0-db1d-4b2b-8c48-f3d9432a2532" containerName="pull" Nov 25 11:50:06 crc kubenswrapper[4706]: I1125 11:50:06.963674 4706 state_mem.go:107] "Deleted CPUSet assignment" podUID="8c6ba0d0-db1d-4b2b-8c48-f3d9432a2532" containerName="pull" Nov 25 11:50:06 crc kubenswrapper[4706]: E1125 11:50:06.963682 4706 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8c6ba0d0-db1d-4b2b-8c48-f3d9432a2532" containerName="extract" Nov 25 11:50:06 crc kubenswrapper[4706]: I1125 11:50:06.963689 4706 state_mem.go:107] "Deleted CPUSet assignment" podUID="8c6ba0d0-db1d-4b2b-8c48-f3d9432a2532" 
containerName="extract" Nov 25 11:50:06 crc kubenswrapper[4706]: I1125 11:50:06.963797 4706 memory_manager.go:354] "RemoveStaleState removing state" podUID="8c6ba0d0-db1d-4b2b-8c48-f3d9432a2532" containerName="extract" Nov 25 11:50:06 crc kubenswrapper[4706]: I1125 11:50:06.963810 4706 memory_manager.go:354] "RemoveStaleState removing state" podUID="028d4ff3-870d-4002-843f-5381587e28fc" containerName="console" Nov 25 11:50:06 crc kubenswrapper[4706]: I1125 11:50:06.964570 4706 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-7d76b4f6c7-xxkgj" Nov 25 11:50:06 crc kubenswrapper[4706]: I1125 11:50:06.971790 4706 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-cert" Nov 25 11:50:06 crc kubenswrapper[4706]: I1125 11:50:06.972542 4706 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"kube-root-ca.crt" Nov 25 11:50:06 crc kubenswrapper[4706]: I1125 11:50:06.972657 4706 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"openshift-service-ca.crt" Nov 25 11:50:06 crc kubenswrapper[4706]: I1125 11:50:06.972736 4706 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-controller-manager-service-cert" Nov 25 11:50:06 crc kubenswrapper[4706]: I1125 11:50:06.976380 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/cdb2d830-fbc9-4336-83b7-0392051670cb-apiservice-cert\") pod \"metallb-operator-controller-manager-7d76b4f6c7-xxkgj\" (UID: \"cdb2d830-fbc9-4336-83b7-0392051670cb\") " pod="metallb-system/metallb-operator-controller-manager-7d76b4f6c7-xxkgj" Nov 25 11:50:06 crc kubenswrapper[4706]: I1125 11:50:06.976450 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-5rkjc\" (UniqueName: \"kubernetes.io/projected/cdb2d830-fbc9-4336-83b7-0392051670cb-kube-api-access-5rkjc\") pod \"metallb-operator-controller-manager-7d76b4f6c7-xxkgj\" (UID: \"cdb2d830-fbc9-4336-83b7-0392051670cb\") " pod="metallb-system/metallb-operator-controller-manager-7d76b4f6c7-xxkgj" Nov 25 11:50:06 crc kubenswrapper[4706]: I1125 11:50:06.976508 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/cdb2d830-fbc9-4336-83b7-0392051670cb-webhook-cert\") pod \"metallb-operator-controller-manager-7d76b4f6c7-xxkgj\" (UID: \"cdb2d830-fbc9-4336-83b7-0392051670cb\") " pod="metallb-system/metallb-operator-controller-manager-7d76b4f6c7-xxkgj" Nov 25 11:50:06 crc kubenswrapper[4706]: I1125 11:50:06.978689 4706 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"manager-account-dockercfg-4whb8" Nov 25 11:50:07 crc kubenswrapper[4706]: I1125 11:50:07.074237 4706 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-7d76b4f6c7-xxkgj"] Nov 25 11:50:07 crc kubenswrapper[4706]: I1125 11:50:07.078218 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/cdb2d830-fbc9-4336-83b7-0392051670cb-apiservice-cert\") pod \"metallb-operator-controller-manager-7d76b4f6c7-xxkgj\" (UID: \"cdb2d830-fbc9-4336-83b7-0392051670cb\") " pod="metallb-system/metallb-operator-controller-manager-7d76b4f6c7-xxkgj" Nov 25 11:50:07 crc kubenswrapper[4706]: I1125 11:50:07.078285 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5rkjc\" (UniqueName: \"kubernetes.io/projected/cdb2d830-fbc9-4336-83b7-0392051670cb-kube-api-access-5rkjc\") pod \"metallb-operator-controller-manager-7d76b4f6c7-xxkgj\" (UID: \"cdb2d830-fbc9-4336-83b7-0392051670cb\") " 
pod="metallb-system/metallb-operator-controller-manager-7d76b4f6c7-xxkgj" Nov 25 11:50:07 crc kubenswrapper[4706]: I1125 11:50:07.078354 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/cdb2d830-fbc9-4336-83b7-0392051670cb-webhook-cert\") pod \"metallb-operator-controller-manager-7d76b4f6c7-xxkgj\" (UID: \"cdb2d830-fbc9-4336-83b7-0392051670cb\") " pod="metallb-system/metallb-operator-controller-manager-7d76b4f6c7-xxkgj" Nov 25 11:50:07 crc kubenswrapper[4706]: I1125 11:50:07.113202 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/cdb2d830-fbc9-4336-83b7-0392051670cb-apiservice-cert\") pod \"metallb-operator-controller-manager-7d76b4f6c7-xxkgj\" (UID: \"cdb2d830-fbc9-4336-83b7-0392051670cb\") " pod="metallb-system/metallb-operator-controller-manager-7d76b4f6c7-xxkgj" Nov 25 11:50:07 crc kubenswrapper[4706]: I1125 11:50:07.115122 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/cdb2d830-fbc9-4336-83b7-0392051670cb-webhook-cert\") pod \"metallb-operator-controller-manager-7d76b4f6c7-xxkgj\" (UID: \"cdb2d830-fbc9-4336-83b7-0392051670cb\") " pod="metallb-system/metallb-operator-controller-manager-7d76b4f6c7-xxkgj" Nov 25 11:50:07 crc kubenswrapper[4706]: I1125 11:50:07.124142 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5rkjc\" (UniqueName: \"kubernetes.io/projected/cdb2d830-fbc9-4336-83b7-0392051670cb-kube-api-access-5rkjc\") pod \"metallb-operator-controller-manager-7d76b4f6c7-xxkgj\" (UID: \"cdb2d830-fbc9-4336-83b7-0392051670cb\") " pod="metallb-system/metallb-operator-controller-manager-7d76b4f6c7-xxkgj" Nov 25 11:50:07 crc kubenswrapper[4706]: I1125 11:50:07.293013 4706 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-7d76b4f6c7-xxkgj" Nov 25 11:50:07 crc kubenswrapper[4706]: I1125 11:50:07.384951 4706 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-webhook-server-7c9ff6b49c-x86mq"] Nov 25 11:50:07 crc kubenswrapper[4706]: I1125 11:50:07.386161 4706 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-7c9ff6b49c-x86mq" Nov 25 11:50:07 crc kubenswrapper[4706]: I1125 11:50:07.390920 4706 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Nov 25 11:50:07 crc kubenswrapper[4706]: I1125 11:50:07.391107 4706 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-service-cert" Nov 25 11:50:07 crc kubenswrapper[4706]: I1125 11:50:07.391169 4706 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-dockercfg-7cdjf" Nov 25 11:50:07 crc kubenswrapper[4706]: I1125 11:50:07.408992 4706 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-7c9ff6b49c-x86mq"] Nov 25 11:50:07 crc kubenswrapper[4706]: I1125 11:50:07.585943 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/2cb3fa9d-f614-42af-80c5-deb2e1fdb90d-apiservice-cert\") pod \"metallb-operator-webhook-server-7c9ff6b49c-x86mq\" (UID: \"2cb3fa9d-f614-42af-80c5-deb2e1fdb90d\") " pod="metallb-system/metallb-operator-webhook-server-7c9ff6b49c-x86mq" Nov 25 11:50:07 crc kubenswrapper[4706]: I1125 11:50:07.586482 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sjzr9\" (UniqueName: \"kubernetes.io/projected/2cb3fa9d-f614-42af-80c5-deb2e1fdb90d-kube-api-access-sjzr9\") pod 
\"metallb-operator-webhook-server-7c9ff6b49c-x86mq\" (UID: \"2cb3fa9d-f614-42af-80c5-deb2e1fdb90d\") " pod="metallb-system/metallb-operator-webhook-server-7c9ff6b49c-x86mq" Nov 25 11:50:07 crc kubenswrapper[4706]: I1125 11:50:07.586526 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/2cb3fa9d-f614-42af-80c5-deb2e1fdb90d-webhook-cert\") pod \"metallb-operator-webhook-server-7c9ff6b49c-x86mq\" (UID: \"2cb3fa9d-f614-42af-80c5-deb2e1fdb90d\") " pod="metallb-system/metallb-operator-webhook-server-7c9ff6b49c-x86mq" Nov 25 11:50:07 crc kubenswrapper[4706]: I1125 11:50:07.688160 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sjzr9\" (UniqueName: \"kubernetes.io/projected/2cb3fa9d-f614-42af-80c5-deb2e1fdb90d-kube-api-access-sjzr9\") pod \"metallb-operator-webhook-server-7c9ff6b49c-x86mq\" (UID: \"2cb3fa9d-f614-42af-80c5-deb2e1fdb90d\") " pod="metallb-system/metallb-operator-webhook-server-7c9ff6b49c-x86mq" Nov 25 11:50:07 crc kubenswrapper[4706]: I1125 11:50:07.688622 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/2cb3fa9d-f614-42af-80c5-deb2e1fdb90d-webhook-cert\") pod \"metallb-operator-webhook-server-7c9ff6b49c-x86mq\" (UID: \"2cb3fa9d-f614-42af-80c5-deb2e1fdb90d\") " pod="metallb-system/metallb-operator-webhook-server-7c9ff6b49c-x86mq" Nov 25 11:50:07 crc kubenswrapper[4706]: I1125 11:50:07.688769 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/2cb3fa9d-f614-42af-80c5-deb2e1fdb90d-apiservice-cert\") pod \"metallb-operator-webhook-server-7c9ff6b49c-x86mq\" (UID: \"2cb3fa9d-f614-42af-80c5-deb2e1fdb90d\") " pod="metallb-system/metallb-operator-webhook-server-7c9ff6b49c-x86mq" Nov 25 11:50:07 crc kubenswrapper[4706]: I1125 11:50:07.695315 4706 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/2cb3fa9d-f614-42af-80c5-deb2e1fdb90d-apiservice-cert\") pod \"metallb-operator-webhook-server-7c9ff6b49c-x86mq\" (UID: \"2cb3fa9d-f614-42af-80c5-deb2e1fdb90d\") " pod="metallb-system/metallb-operator-webhook-server-7c9ff6b49c-x86mq" Nov 25 11:50:07 crc kubenswrapper[4706]: I1125 11:50:07.695321 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/2cb3fa9d-f614-42af-80c5-deb2e1fdb90d-webhook-cert\") pod \"metallb-operator-webhook-server-7c9ff6b49c-x86mq\" (UID: \"2cb3fa9d-f614-42af-80c5-deb2e1fdb90d\") " pod="metallb-system/metallb-operator-webhook-server-7c9ff6b49c-x86mq" Nov 25 11:50:07 crc kubenswrapper[4706]: I1125 11:50:07.711536 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sjzr9\" (UniqueName: \"kubernetes.io/projected/2cb3fa9d-f614-42af-80c5-deb2e1fdb90d-kube-api-access-sjzr9\") pod \"metallb-operator-webhook-server-7c9ff6b49c-x86mq\" (UID: \"2cb3fa9d-f614-42af-80c5-deb2e1fdb90d\") " pod="metallb-system/metallb-operator-webhook-server-7c9ff6b49c-x86mq" Nov 25 11:50:07 crc kubenswrapper[4706]: I1125 11:50:07.742578 4706 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-7c9ff6b49c-x86mq" Nov 25 11:50:07 crc kubenswrapper[4706]: I1125 11:50:07.820146 4706 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-7d76b4f6c7-xxkgj"] Nov 25 11:50:08 crc kubenswrapper[4706]: I1125 11:50:08.221396 4706 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-7c9ff6b49c-x86mq"] Nov 25 11:50:08 crc kubenswrapper[4706]: I1125 11:50:08.368678 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-7c9ff6b49c-x86mq" event={"ID":"2cb3fa9d-f614-42af-80c5-deb2e1fdb90d","Type":"ContainerStarted","Data":"c8bebc55c3e53feb9f2359bc6228fe298b60ea7ce509e0dc707cbd1657de3cee"} Nov 25 11:50:08 crc kubenswrapper[4706]: I1125 11:50:08.369783 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-7d76b4f6c7-xxkgj" event={"ID":"cdb2d830-fbc9-4336-83b7-0392051670cb","Type":"ContainerStarted","Data":"ec485c7dd7c1436967c6e1b12730ad578c61d7d3c191cb5e8c7acef61266edaa"} Nov 25 11:50:17 crc kubenswrapper[4706]: I1125 11:50:17.464319 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-7d76b4f6c7-xxkgj" event={"ID":"cdb2d830-fbc9-4336-83b7-0392051670cb","Type":"ContainerStarted","Data":"caeb4d66adfe0318a9d715726ff566dfee8083fce21ac6c0307644f0f428b707"} Nov 25 11:50:17 crc kubenswrapper[4706]: I1125 11:50:17.465071 4706 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-controller-manager-7d76b4f6c7-xxkgj" Nov 25 11:50:17 crc kubenswrapper[4706]: I1125 11:50:17.489460 4706 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-controller-manager-7d76b4f6c7-xxkgj" podStartSLOduration=2.719107088 podStartE2EDuration="11.489429789s" 
podCreationTimestamp="2025-11-25 11:50:06 +0000 UTC" firstStartedPulling="2025-11-25 11:50:07.832281511 +0000 UTC m=+816.746838892" lastFinishedPulling="2025-11-25 11:50:16.602604212 +0000 UTC m=+825.517161593" observedRunningTime="2025-11-25 11:50:17.484444295 +0000 UTC m=+826.399001676" watchObservedRunningTime="2025-11-25 11:50:17.489429789 +0000 UTC m=+826.403987180" Nov 25 11:50:19 crc kubenswrapper[4706]: I1125 11:50:19.480091 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-7c9ff6b49c-x86mq" event={"ID":"2cb3fa9d-f614-42af-80c5-deb2e1fdb90d","Type":"ContainerStarted","Data":"aa043fe718b2e5061afe83934ce9730571f3fff3844571674e65a8b7b2b9755e"} Nov 25 11:50:19 crc kubenswrapper[4706]: I1125 11:50:19.480571 4706 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-webhook-server-7c9ff6b49c-x86mq" Nov 25 11:50:19 crc kubenswrapper[4706]: I1125 11:50:19.516594 4706 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-webhook-server-7c9ff6b49c-x86mq" podStartSLOduration=1.8731404170000001 podStartE2EDuration="12.516562614s" podCreationTimestamp="2025-11-25 11:50:07 +0000 UTC" firstStartedPulling="2025-11-25 11:50:08.236138613 +0000 UTC m=+817.150695994" lastFinishedPulling="2025-11-25 11:50:18.87956081 +0000 UTC m=+827.794118191" observedRunningTime="2025-11-25 11:50:19.502893497 +0000 UTC m=+828.417450898" watchObservedRunningTime="2025-11-25 11:50:19.516562614 +0000 UTC m=+828.431120005" Nov 25 11:50:25 crc kubenswrapper[4706]: I1125 11:50:25.690739 4706 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-gtm4k"] Nov 25 11:50:25 crc kubenswrapper[4706]: I1125 11:50:25.694571 4706 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-gtm4k" Nov 25 11:50:25 crc kubenswrapper[4706]: I1125 11:50:25.715344 4706 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-gtm4k"] Nov 25 11:50:25 crc kubenswrapper[4706]: I1125 11:50:25.882785 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1ddcee80-0df0-4b20-bfac-603fe90a4f95-catalog-content\") pod \"certified-operators-gtm4k\" (UID: \"1ddcee80-0df0-4b20-bfac-603fe90a4f95\") " pod="openshift-marketplace/certified-operators-gtm4k" Nov 25 11:50:25 crc kubenswrapper[4706]: I1125 11:50:25.882886 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l9x2f\" (UniqueName: \"kubernetes.io/projected/1ddcee80-0df0-4b20-bfac-603fe90a4f95-kube-api-access-l9x2f\") pod \"certified-operators-gtm4k\" (UID: \"1ddcee80-0df0-4b20-bfac-603fe90a4f95\") " pod="openshift-marketplace/certified-operators-gtm4k" Nov 25 11:50:25 crc kubenswrapper[4706]: I1125 11:50:25.882942 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1ddcee80-0df0-4b20-bfac-603fe90a4f95-utilities\") pod \"certified-operators-gtm4k\" (UID: \"1ddcee80-0df0-4b20-bfac-603fe90a4f95\") " pod="openshift-marketplace/certified-operators-gtm4k" Nov 25 11:50:25 crc kubenswrapper[4706]: I1125 11:50:25.984362 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1ddcee80-0df0-4b20-bfac-603fe90a4f95-catalog-content\") pod \"certified-operators-gtm4k\" (UID: \"1ddcee80-0df0-4b20-bfac-603fe90a4f95\") " pod="openshift-marketplace/certified-operators-gtm4k" Nov 25 11:50:25 crc kubenswrapper[4706]: I1125 11:50:25.984478 4706 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-l9x2f\" (UniqueName: \"kubernetes.io/projected/1ddcee80-0df0-4b20-bfac-603fe90a4f95-kube-api-access-l9x2f\") pod \"certified-operators-gtm4k\" (UID: \"1ddcee80-0df0-4b20-bfac-603fe90a4f95\") " pod="openshift-marketplace/certified-operators-gtm4k" Nov 25 11:50:25 crc kubenswrapper[4706]: I1125 11:50:25.984539 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1ddcee80-0df0-4b20-bfac-603fe90a4f95-utilities\") pod \"certified-operators-gtm4k\" (UID: \"1ddcee80-0df0-4b20-bfac-603fe90a4f95\") " pod="openshift-marketplace/certified-operators-gtm4k" Nov 25 11:50:25 crc kubenswrapper[4706]: I1125 11:50:25.984982 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1ddcee80-0df0-4b20-bfac-603fe90a4f95-catalog-content\") pod \"certified-operators-gtm4k\" (UID: \"1ddcee80-0df0-4b20-bfac-603fe90a4f95\") " pod="openshift-marketplace/certified-operators-gtm4k" Nov 25 11:50:25 crc kubenswrapper[4706]: I1125 11:50:25.985004 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1ddcee80-0df0-4b20-bfac-603fe90a4f95-utilities\") pod \"certified-operators-gtm4k\" (UID: \"1ddcee80-0df0-4b20-bfac-603fe90a4f95\") " pod="openshift-marketplace/certified-operators-gtm4k" Nov 25 11:50:26 crc kubenswrapper[4706]: I1125 11:50:26.008078 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l9x2f\" (UniqueName: \"kubernetes.io/projected/1ddcee80-0df0-4b20-bfac-603fe90a4f95-kube-api-access-l9x2f\") pod \"certified-operators-gtm4k\" (UID: \"1ddcee80-0df0-4b20-bfac-603fe90a4f95\") " pod="openshift-marketplace/certified-operators-gtm4k" Nov 25 11:50:26 crc kubenswrapper[4706]: I1125 11:50:26.019126 4706 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-gtm4k" Nov 25 11:50:26 crc kubenswrapper[4706]: I1125 11:50:26.506728 4706 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-gtm4k"] Nov 25 11:50:26 crc kubenswrapper[4706]: W1125 11:50:26.513183 4706 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1ddcee80_0df0_4b20_bfac_603fe90a4f95.slice/crio-b6de0847f338f079ffc4443ee8e4714d0c268d27143d4752a7f98572df6f79fb WatchSource:0}: Error finding container b6de0847f338f079ffc4443ee8e4714d0c268d27143d4752a7f98572df6f79fb: Status 404 returned error can't find the container with id b6de0847f338f079ffc4443ee8e4714d0c268d27143d4752a7f98572df6f79fb Nov 25 11:50:26 crc kubenswrapper[4706]: I1125 11:50:26.545411 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gtm4k" event={"ID":"1ddcee80-0df0-4b20-bfac-603fe90a4f95","Type":"ContainerStarted","Data":"b6de0847f338f079ffc4443ee8e4714d0c268d27143d4752a7f98572df6f79fb"} Nov 25 11:50:27 crc kubenswrapper[4706]: I1125 11:50:27.557392 4706 generic.go:334] "Generic (PLEG): container finished" podID="1ddcee80-0df0-4b20-bfac-603fe90a4f95" containerID="81dad17a0d67e58d58c0dd19379b5271510f88b2f74eb3b327ff80e3f7a3216c" exitCode=0 Nov 25 11:50:27 crc kubenswrapper[4706]: I1125 11:50:27.557527 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gtm4k" event={"ID":"1ddcee80-0df0-4b20-bfac-603fe90a4f95","Type":"ContainerDied","Data":"81dad17a0d67e58d58c0dd19379b5271510f88b2f74eb3b327ff80e3f7a3216c"} Nov 25 11:50:28 crc kubenswrapper[4706]: I1125 11:50:28.566328 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gtm4k" 
event={"ID":"1ddcee80-0df0-4b20-bfac-603fe90a4f95","Type":"ContainerStarted","Data":"2fcf62f66bc2f1e16ae5c10c3a320631f9d215e4791394cdcdbf41b5f67ea206"} Nov 25 11:50:29 crc kubenswrapper[4706]: I1125 11:50:29.575537 4706 generic.go:334] "Generic (PLEG): container finished" podID="1ddcee80-0df0-4b20-bfac-603fe90a4f95" containerID="2fcf62f66bc2f1e16ae5c10c3a320631f9d215e4791394cdcdbf41b5f67ea206" exitCode=0 Nov 25 11:50:29 crc kubenswrapper[4706]: I1125 11:50:29.575594 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gtm4k" event={"ID":"1ddcee80-0df0-4b20-bfac-603fe90a4f95","Type":"ContainerDied","Data":"2fcf62f66bc2f1e16ae5c10c3a320631f9d215e4791394cdcdbf41b5f67ea206"} Nov 25 11:50:30 crc kubenswrapper[4706]: I1125 11:50:30.585361 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gtm4k" event={"ID":"1ddcee80-0df0-4b20-bfac-603fe90a4f95","Type":"ContainerStarted","Data":"2dd0b0fc406f0a68910c468093b1fc7446a17d130e1dff275be884822c6aef24"} Nov 25 11:50:30 crc kubenswrapper[4706]: I1125 11:50:30.605972 4706 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-gtm4k" podStartSLOduration=3.179427606 podStartE2EDuration="5.605951348s" podCreationTimestamp="2025-11-25 11:50:25 +0000 UTC" firstStartedPulling="2025-11-25 11:50:27.559091692 +0000 UTC m=+836.473649073" lastFinishedPulling="2025-11-25 11:50:29.985615434 +0000 UTC m=+838.900172815" observedRunningTime="2025-11-25 11:50:30.603727841 +0000 UTC m=+839.518285232" watchObservedRunningTime="2025-11-25 11:50:30.605951348 +0000 UTC m=+839.520508729" Nov 25 11:50:33 crc kubenswrapper[4706]: I1125 11:50:33.084786 4706 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-kdnxf"] Nov 25 11:50:33 crc kubenswrapper[4706]: I1125 11:50:33.086645 4706 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-kdnxf" Nov 25 11:50:33 crc kubenswrapper[4706]: I1125 11:50:33.090366 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dld28\" (UniqueName: \"kubernetes.io/projected/1b88b3a8-9948-44ff-980e-3775fe2b490a-kube-api-access-dld28\") pod \"community-operators-kdnxf\" (UID: \"1b88b3a8-9948-44ff-980e-3775fe2b490a\") " pod="openshift-marketplace/community-operators-kdnxf" Nov 25 11:50:33 crc kubenswrapper[4706]: I1125 11:50:33.090421 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1b88b3a8-9948-44ff-980e-3775fe2b490a-catalog-content\") pod \"community-operators-kdnxf\" (UID: \"1b88b3a8-9948-44ff-980e-3775fe2b490a\") " pod="openshift-marketplace/community-operators-kdnxf" Nov 25 11:50:33 crc kubenswrapper[4706]: I1125 11:50:33.090481 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1b88b3a8-9948-44ff-980e-3775fe2b490a-utilities\") pod \"community-operators-kdnxf\" (UID: \"1b88b3a8-9948-44ff-980e-3775fe2b490a\") " pod="openshift-marketplace/community-operators-kdnxf" Nov 25 11:50:33 crc kubenswrapper[4706]: I1125 11:50:33.106658 4706 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-kdnxf"] Nov 25 11:50:33 crc kubenswrapper[4706]: I1125 11:50:33.191655 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dld28\" (UniqueName: \"kubernetes.io/projected/1b88b3a8-9948-44ff-980e-3775fe2b490a-kube-api-access-dld28\") pod \"community-operators-kdnxf\" (UID: \"1b88b3a8-9948-44ff-980e-3775fe2b490a\") " pod="openshift-marketplace/community-operators-kdnxf" Nov 25 11:50:33 crc kubenswrapper[4706]: I1125 11:50:33.191730 4706 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1b88b3a8-9948-44ff-980e-3775fe2b490a-catalog-content\") pod \"community-operators-kdnxf\" (UID: \"1b88b3a8-9948-44ff-980e-3775fe2b490a\") " pod="openshift-marketplace/community-operators-kdnxf" Nov 25 11:50:33 crc kubenswrapper[4706]: I1125 11:50:33.191900 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1b88b3a8-9948-44ff-980e-3775fe2b490a-utilities\") pod \"community-operators-kdnxf\" (UID: \"1b88b3a8-9948-44ff-980e-3775fe2b490a\") " pod="openshift-marketplace/community-operators-kdnxf" Nov 25 11:50:33 crc kubenswrapper[4706]: I1125 11:50:33.192215 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1b88b3a8-9948-44ff-980e-3775fe2b490a-catalog-content\") pod \"community-operators-kdnxf\" (UID: \"1b88b3a8-9948-44ff-980e-3775fe2b490a\") " pod="openshift-marketplace/community-operators-kdnxf" Nov 25 11:50:33 crc kubenswrapper[4706]: I1125 11:50:33.192243 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1b88b3a8-9948-44ff-980e-3775fe2b490a-utilities\") pod \"community-operators-kdnxf\" (UID: \"1b88b3a8-9948-44ff-980e-3775fe2b490a\") " pod="openshift-marketplace/community-operators-kdnxf" Nov 25 11:50:33 crc kubenswrapper[4706]: I1125 11:50:33.221001 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dld28\" (UniqueName: \"kubernetes.io/projected/1b88b3a8-9948-44ff-980e-3775fe2b490a-kube-api-access-dld28\") pod \"community-operators-kdnxf\" (UID: \"1b88b3a8-9948-44ff-980e-3775fe2b490a\") " pod="openshift-marketplace/community-operators-kdnxf" Nov 25 11:50:33 crc kubenswrapper[4706]: I1125 11:50:33.401750 4706 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-kdnxf" Nov 25 11:50:34 crc kubenswrapper[4706]: I1125 11:50:34.001788 4706 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-kdnxf"] Nov 25 11:50:34 crc kubenswrapper[4706]: I1125 11:50:34.657041 4706 generic.go:334] "Generic (PLEG): container finished" podID="1b88b3a8-9948-44ff-980e-3775fe2b490a" containerID="d4b4807cca29526a385028df4035f412160df25403ddc137beb0f57b7727e73f" exitCode=0 Nov 25 11:50:34 crc kubenswrapper[4706]: I1125 11:50:34.657161 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-kdnxf" event={"ID":"1b88b3a8-9948-44ff-980e-3775fe2b490a","Type":"ContainerDied","Data":"d4b4807cca29526a385028df4035f412160df25403ddc137beb0f57b7727e73f"} Nov 25 11:50:34 crc kubenswrapper[4706]: I1125 11:50:34.657611 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-kdnxf" event={"ID":"1b88b3a8-9948-44ff-980e-3775fe2b490a","Type":"ContainerStarted","Data":"aa62695421f037a09fb67461481076fb358086250349e6953b24fba01a77e153"} Nov 25 11:50:35 crc kubenswrapper[4706]: I1125 11:50:35.667185 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-kdnxf" event={"ID":"1b88b3a8-9948-44ff-980e-3775fe2b490a","Type":"ContainerStarted","Data":"7e4b99c3ad5fb6d96918d71d0174b3fbcbe90ba216d9008a50d7da7ceedb2464"} Nov 25 11:50:36 crc kubenswrapper[4706]: I1125 11:50:36.019443 4706 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-gtm4k" Nov 25 11:50:36 crc kubenswrapper[4706]: I1125 11:50:36.019516 4706 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-gtm4k" Nov 25 11:50:36 crc kubenswrapper[4706]: I1125 11:50:36.062830 4706 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openshift-marketplace/certified-operators-gtm4k" Nov 25 11:50:36 crc kubenswrapper[4706]: I1125 11:50:36.676252 4706 generic.go:334] "Generic (PLEG): container finished" podID="1b88b3a8-9948-44ff-980e-3775fe2b490a" containerID="7e4b99c3ad5fb6d96918d71d0174b3fbcbe90ba216d9008a50d7da7ceedb2464" exitCode=0 Nov 25 11:50:36 crc kubenswrapper[4706]: I1125 11:50:36.676391 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-kdnxf" event={"ID":"1b88b3a8-9948-44ff-980e-3775fe2b490a","Type":"ContainerDied","Data":"7e4b99c3ad5fb6d96918d71d0174b3fbcbe90ba216d9008a50d7da7ceedb2464"} Nov 25 11:50:36 crc kubenswrapper[4706]: I1125 11:50:36.739092 4706 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-gtm4k" Nov 25 11:50:37 crc kubenswrapper[4706]: I1125 11:50:37.683886 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-kdnxf" event={"ID":"1b88b3a8-9948-44ff-980e-3775fe2b490a","Type":"ContainerStarted","Data":"51c5d239977319687f3e244c57061ce79ea0e5965a1df4fbd2b09b6d4a9ee36d"} Nov 25 11:50:37 crc kubenswrapper[4706]: I1125 11:50:37.705748 4706 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-kdnxf" podStartSLOduration=2.007979551 podStartE2EDuration="4.705717687s" podCreationTimestamp="2025-11-25 11:50:33 +0000 UTC" firstStartedPulling="2025-11-25 11:50:34.659066262 +0000 UTC m=+843.573623643" lastFinishedPulling="2025-11-25 11:50:37.356804398 +0000 UTC m=+846.271361779" observedRunningTime="2025-11-25 11:50:37.702221989 +0000 UTC m=+846.616779380" watchObservedRunningTime="2025-11-25 11:50:37.705717687 +0000 UTC m=+846.620275068" Nov 25 11:50:37 crc kubenswrapper[4706]: I1125 11:50:37.747937 4706 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-webhook-server-7c9ff6b49c-x86mq" Nov 25 11:50:39 
crc kubenswrapper[4706]: I1125 11:50:39.677058 4706 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-gtm4k"] Nov 25 11:50:39 crc kubenswrapper[4706]: I1125 11:50:39.677423 4706 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-gtm4k" podUID="1ddcee80-0df0-4b20-bfac-603fe90a4f95" containerName="registry-server" containerID="cri-o://2dd0b0fc406f0a68910c468093b1fc7446a17d130e1dff275be884822c6aef24" gracePeriod=2 Nov 25 11:50:40 crc kubenswrapper[4706]: I1125 11:50:40.042475 4706 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-gtm4k" Nov 25 11:50:40 crc kubenswrapper[4706]: I1125 11:50:40.203535 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1ddcee80-0df0-4b20-bfac-603fe90a4f95-catalog-content\") pod \"1ddcee80-0df0-4b20-bfac-603fe90a4f95\" (UID: \"1ddcee80-0df0-4b20-bfac-603fe90a4f95\") " Nov 25 11:50:40 crc kubenswrapper[4706]: I1125 11:50:40.203596 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l9x2f\" (UniqueName: \"kubernetes.io/projected/1ddcee80-0df0-4b20-bfac-603fe90a4f95-kube-api-access-l9x2f\") pod \"1ddcee80-0df0-4b20-bfac-603fe90a4f95\" (UID: \"1ddcee80-0df0-4b20-bfac-603fe90a4f95\") " Nov 25 11:50:40 crc kubenswrapper[4706]: I1125 11:50:40.203622 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1ddcee80-0df0-4b20-bfac-603fe90a4f95-utilities\") pod \"1ddcee80-0df0-4b20-bfac-603fe90a4f95\" (UID: \"1ddcee80-0df0-4b20-bfac-603fe90a4f95\") " Nov 25 11:50:40 crc kubenswrapper[4706]: I1125 11:50:40.207384 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/1ddcee80-0df0-4b20-bfac-603fe90a4f95-utilities" (OuterVolumeSpecName: "utilities") pod "1ddcee80-0df0-4b20-bfac-603fe90a4f95" (UID: "1ddcee80-0df0-4b20-bfac-603fe90a4f95"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 11:50:40 crc kubenswrapper[4706]: I1125 11:50:40.212587 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1ddcee80-0df0-4b20-bfac-603fe90a4f95-kube-api-access-l9x2f" (OuterVolumeSpecName: "kube-api-access-l9x2f") pod "1ddcee80-0df0-4b20-bfac-603fe90a4f95" (UID: "1ddcee80-0df0-4b20-bfac-603fe90a4f95"). InnerVolumeSpecName "kube-api-access-l9x2f". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 11:50:40 crc kubenswrapper[4706]: I1125 11:50:40.263212 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1ddcee80-0df0-4b20-bfac-603fe90a4f95-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1ddcee80-0df0-4b20-bfac-603fe90a4f95" (UID: "1ddcee80-0df0-4b20-bfac-603fe90a4f95"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 11:50:40 crc kubenswrapper[4706]: I1125 11:50:40.305460 4706 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1ddcee80-0df0-4b20-bfac-603fe90a4f95-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 25 11:50:40 crc kubenswrapper[4706]: I1125 11:50:40.305507 4706 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l9x2f\" (UniqueName: \"kubernetes.io/projected/1ddcee80-0df0-4b20-bfac-603fe90a4f95-kube-api-access-l9x2f\") on node \"crc\" DevicePath \"\"" Nov 25 11:50:40 crc kubenswrapper[4706]: I1125 11:50:40.305523 4706 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1ddcee80-0df0-4b20-bfac-603fe90a4f95-utilities\") on node \"crc\" DevicePath \"\"" Nov 25 11:50:40 crc kubenswrapper[4706]: I1125 11:50:40.710074 4706 generic.go:334] "Generic (PLEG): container finished" podID="1ddcee80-0df0-4b20-bfac-603fe90a4f95" containerID="2dd0b0fc406f0a68910c468093b1fc7446a17d130e1dff275be884822c6aef24" exitCode=0 Nov 25 11:50:40 crc kubenswrapper[4706]: I1125 11:50:40.710140 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gtm4k" event={"ID":"1ddcee80-0df0-4b20-bfac-603fe90a4f95","Type":"ContainerDied","Data":"2dd0b0fc406f0a68910c468093b1fc7446a17d130e1dff275be884822c6aef24"} Nov 25 11:50:40 crc kubenswrapper[4706]: I1125 11:50:40.710155 4706 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-gtm4k" Nov 25 11:50:40 crc kubenswrapper[4706]: I1125 11:50:40.710184 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gtm4k" event={"ID":"1ddcee80-0df0-4b20-bfac-603fe90a4f95","Type":"ContainerDied","Data":"b6de0847f338f079ffc4443ee8e4714d0c268d27143d4752a7f98572df6f79fb"} Nov 25 11:50:40 crc kubenswrapper[4706]: I1125 11:50:40.710216 4706 scope.go:117] "RemoveContainer" containerID="2dd0b0fc406f0a68910c468093b1fc7446a17d130e1dff275be884822c6aef24" Nov 25 11:50:40 crc kubenswrapper[4706]: I1125 11:50:40.731374 4706 scope.go:117] "RemoveContainer" containerID="2fcf62f66bc2f1e16ae5c10c3a320631f9d215e4791394cdcdbf41b5f67ea206" Nov 25 11:50:40 crc kubenswrapper[4706]: I1125 11:50:40.753053 4706 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-gtm4k"] Nov 25 11:50:40 crc kubenswrapper[4706]: I1125 11:50:40.758552 4706 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-gtm4k"] Nov 25 11:50:40 crc kubenswrapper[4706]: I1125 11:50:40.778315 4706 scope.go:117] "RemoveContainer" containerID="81dad17a0d67e58d58c0dd19379b5271510f88b2f74eb3b327ff80e3f7a3216c" Nov 25 11:50:40 crc kubenswrapper[4706]: I1125 11:50:40.799355 4706 scope.go:117] "RemoveContainer" containerID="2dd0b0fc406f0a68910c468093b1fc7446a17d130e1dff275be884822c6aef24" Nov 25 11:50:40 crc kubenswrapper[4706]: E1125 11:50:40.800064 4706 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2dd0b0fc406f0a68910c468093b1fc7446a17d130e1dff275be884822c6aef24\": container with ID starting with 2dd0b0fc406f0a68910c468093b1fc7446a17d130e1dff275be884822c6aef24 not found: ID does not exist" containerID="2dd0b0fc406f0a68910c468093b1fc7446a17d130e1dff275be884822c6aef24" Nov 25 11:50:40 crc kubenswrapper[4706]: I1125 11:50:40.800105 4706 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2dd0b0fc406f0a68910c468093b1fc7446a17d130e1dff275be884822c6aef24"} err="failed to get container status \"2dd0b0fc406f0a68910c468093b1fc7446a17d130e1dff275be884822c6aef24\": rpc error: code = NotFound desc = could not find container \"2dd0b0fc406f0a68910c468093b1fc7446a17d130e1dff275be884822c6aef24\": container with ID starting with 2dd0b0fc406f0a68910c468093b1fc7446a17d130e1dff275be884822c6aef24 not found: ID does not exist" Nov 25 11:50:40 crc kubenswrapper[4706]: I1125 11:50:40.800138 4706 scope.go:117] "RemoveContainer" containerID="2fcf62f66bc2f1e16ae5c10c3a320631f9d215e4791394cdcdbf41b5f67ea206" Nov 25 11:50:40 crc kubenswrapper[4706]: E1125 11:50:40.800964 4706 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2fcf62f66bc2f1e16ae5c10c3a320631f9d215e4791394cdcdbf41b5f67ea206\": container with ID starting with 2fcf62f66bc2f1e16ae5c10c3a320631f9d215e4791394cdcdbf41b5f67ea206 not found: ID does not exist" containerID="2fcf62f66bc2f1e16ae5c10c3a320631f9d215e4791394cdcdbf41b5f67ea206" Nov 25 11:50:40 crc kubenswrapper[4706]: I1125 11:50:40.801031 4706 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2fcf62f66bc2f1e16ae5c10c3a320631f9d215e4791394cdcdbf41b5f67ea206"} err="failed to get container status \"2fcf62f66bc2f1e16ae5c10c3a320631f9d215e4791394cdcdbf41b5f67ea206\": rpc error: code = NotFound desc = could not find container \"2fcf62f66bc2f1e16ae5c10c3a320631f9d215e4791394cdcdbf41b5f67ea206\": container with ID starting with 2fcf62f66bc2f1e16ae5c10c3a320631f9d215e4791394cdcdbf41b5f67ea206 not found: ID does not exist" Nov 25 11:50:40 crc kubenswrapper[4706]: I1125 11:50:40.801085 4706 scope.go:117] "RemoveContainer" containerID="81dad17a0d67e58d58c0dd19379b5271510f88b2f74eb3b327ff80e3f7a3216c" Nov 25 11:50:40 crc kubenswrapper[4706]: E1125 
11:50:40.801747 4706 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"81dad17a0d67e58d58c0dd19379b5271510f88b2f74eb3b327ff80e3f7a3216c\": container with ID starting with 81dad17a0d67e58d58c0dd19379b5271510f88b2f74eb3b327ff80e3f7a3216c not found: ID does not exist" containerID="81dad17a0d67e58d58c0dd19379b5271510f88b2f74eb3b327ff80e3f7a3216c" Nov 25 11:50:40 crc kubenswrapper[4706]: I1125 11:50:40.801779 4706 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"81dad17a0d67e58d58c0dd19379b5271510f88b2f74eb3b327ff80e3f7a3216c"} err="failed to get container status \"81dad17a0d67e58d58c0dd19379b5271510f88b2f74eb3b327ff80e3f7a3216c\": rpc error: code = NotFound desc = could not find container \"81dad17a0d67e58d58c0dd19379b5271510f88b2f74eb3b327ff80e3f7a3216c\": container with ID starting with 81dad17a0d67e58d58c0dd19379b5271510f88b2f74eb3b327ff80e3f7a3216c not found: ID does not exist" Nov 25 11:50:41 crc kubenswrapper[4706]: I1125 11:50:41.931835 4706 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1ddcee80-0df0-4b20-bfac-603fe90a4f95" path="/var/lib/kubelet/pods/1ddcee80-0df0-4b20-bfac-603fe90a4f95/volumes" Nov 25 11:50:43 crc kubenswrapper[4706]: I1125 11:50:43.402632 4706 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-kdnxf" Nov 25 11:50:43 crc kubenswrapper[4706]: I1125 11:50:43.402713 4706 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-kdnxf" Nov 25 11:50:43 crc kubenswrapper[4706]: I1125 11:50:43.438623 4706 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-kdnxf" Nov 25 11:50:43 crc kubenswrapper[4706]: I1125 11:50:43.776938 4706 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-marketplace/community-operators-kdnxf" Nov 25 11:50:44 crc kubenswrapper[4706]: I1125 11:50:44.294378 4706 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-vxcn8"] Nov 25 11:50:44 crc kubenswrapper[4706]: E1125 11:50:44.295095 4706 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1ddcee80-0df0-4b20-bfac-603fe90a4f95" containerName="extract-content" Nov 25 11:50:44 crc kubenswrapper[4706]: I1125 11:50:44.295112 4706 state_mem.go:107] "Deleted CPUSet assignment" podUID="1ddcee80-0df0-4b20-bfac-603fe90a4f95" containerName="extract-content" Nov 25 11:50:44 crc kubenswrapper[4706]: E1125 11:50:44.295160 4706 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1ddcee80-0df0-4b20-bfac-603fe90a4f95" containerName="extract-utilities" Nov 25 11:50:44 crc kubenswrapper[4706]: I1125 11:50:44.295169 4706 state_mem.go:107] "Deleted CPUSet assignment" podUID="1ddcee80-0df0-4b20-bfac-603fe90a4f95" containerName="extract-utilities" Nov 25 11:50:44 crc kubenswrapper[4706]: E1125 11:50:44.295183 4706 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1ddcee80-0df0-4b20-bfac-603fe90a4f95" containerName="registry-server" Nov 25 11:50:44 crc kubenswrapper[4706]: I1125 11:50:44.295191 4706 state_mem.go:107] "Deleted CPUSet assignment" podUID="1ddcee80-0df0-4b20-bfac-603fe90a4f95" containerName="registry-server" Nov 25 11:50:44 crc kubenswrapper[4706]: I1125 11:50:44.295347 4706 memory_manager.go:354] "RemoveStaleState removing state" podUID="1ddcee80-0df0-4b20-bfac-603fe90a4f95" containerName="registry-server" Nov 25 11:50:44 crc kubenswrapper[4706]: I1125 11:50:44.296281 4706 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-vxcn8" Nov 25 11:50:44 crc kubenswrapper[4706]: I1125 11:50:44.314657 4706 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-vxcn8"] Nov 25 11:50:44 crc kubenswrapper[4706]: I1125 11:50:44.362200 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f050e9f9-24f9-4833-a272-b246b5ceccce-catalog-content\") pod \"redhat-marketplace-vxcn8\" (UID: \"f050e9f9-24f9-4833-a272-b246b5ceccce\") " pod="openshift-marketplace/redhat-marketplace-vxcn8" Nov 25 11:50:44 crc kubenswrapper[4706]: I1125 11:50:44.362274 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dk48q\" (UniqueName: \"kubernetes.io/projected/f050e9f9-24f9-4833-a272-b246b5ceccce-kube-api-access-dk48q\") pod \"redhat-marketplace-vxcn8\" (UID: \"f050e9f9-24f9-4833-a272-b246b5ceccce\") " pod="openshift-marketplace/redhat-marketplace-vxcn8" Nov 25 11:50:44 crc kubenswrapper[4706]: I1125 11:50:44.362362 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f050e9f9-24f9-4833-a272-b246b5ceccce-utilities\") pod \"redhat-marketplace-vxcn8\" (UID: \"f050e9f9-24f9-4833-a272-b246b5ceccce\") " pod="openshift-marketplace/redhat-marketplace-vxcn8" Nov 25 11:50:44 crc kubenswrapper[4706]: I1125 11:50:44.463345 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f050e9f9-24f9-4833-a272-b246b5ceccce-catalog-content\") pod \"redhat-marketplace-vxcn8\" (UID: \"f050e9f9-24f9-4833-a272-b246b5ceccce\") " pod="openshift-marketplace/redhat-marketplace-vxcn8" Nov 25 11:50:44 crc kubenswrapper[4706]: I1125 11:50:44.463413 4706 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-dk48q\" (UniqueName: \"kubernetes.io/projected/f050e9f9-24f9-4833-a272-b246b5ceccce-kube-api-access-dk48q\") pod \"redhat-marketplace-vxcn8\" (UID: \"f050e9f9-24f9-4833-a272-b246b5ceccce\") " pod="openshift-marketplace/redhat-marketplace-vxcn8" Nov 25 11:50:44 crc kubenswrapper[4706]: I1125 11:50:44.463450 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f050e9f9-24f9-4833-a272-b246b5ceccce-utilities\") pod \"redhat-marketplace-vxcn8\" (UID: \"f050e9f9-24f9-4833-a272-b246b5ceccce\") " pod="openshift-marketplace/redhat-marketplace-vxcn8" Nov 25 11:50:44 crc kubenswrapper[4706]: I1125 11:50:44.464054 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f050e9f9-24f9-4833-a272-b246b5ceccce-utilities\") pod \"redhat-marketplace-vxcn8\" (UID: \"f050e9f9-24f9-4833-a272-b246b5ceccce\") " pod="openshift-marketplace/redhat-marketplace-vxcn8" Nov 25 11:50:44 crc kubenswrapper[4706]: I1125 11:50:44.464127 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f050e9f9-24f9-4833-a272-b246b5ceccce-catalog-content\") pod \"redhat-marketplace-vxcn8\" (UID: \"f050e9f9-24f9-4833-a272-b246b5ceccce\") " pod="openshift-marketplace/redhat-marketplace-vxcn8" Nov 25 11:50:44 crc kubenswrapper[4706]: I1125 11:50:44.498066 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dk48q\" (UniqueName: \"kubernetes.io/projected/f050e9f9-24f9-4833-a272-b246b5ceccce-kube-api-access-dk48q\") pod \"redhat-marketplace-vxcn8\" (UID: \"f050e9f9-24f9-4833-a272-b246b5ceccce\") " pod="openshift-marketplace/redhat-marketplace-vxcn8" Nov 25 11:50:44 crc kubenswrapper[4706]: I1125 11:50:44.617366 4706 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-vxcn8" Nov 25 11:50:44 crc kubenswrapper[4706]: I1125 11:50:44.863001 4706 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-vxcn8"] Nov 25 11:50:45 crc kubenswrapper[4706]: I1125 11:50:45.747752 4706 generic.go:334] "Generic (PLEG): container finished" podID="f050e9f9-24f9-4833-a272-b246b5ceccce" containerID="61d04dca5bb321a4990ada2a218a97d89db7986ce0c6b142bb390f6cf1c12d8f" exitCode=0 Nov 25 11:50:45 crc kubenswrapper[4706]: I1125 11:50:45.747871 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vxcn8" event={"ID":"f050e9f9-24f9-4833-a272-b246b5ceccce","Type":"ContainerDied","Data":"61d04dca5bb321a4990ada2a218a97d89db7986ce0c6b142bb390f6cf1c12d8f"} Nov 25 11:50:45 crc kubenswrapper[4706]: I1125 11:50:45.748216 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vxcn8" event={"ID":"f050e9f9-24f9-4833-a272-b246b5ceccce","Type":"ContainerStarted","Data":"f6da3f3d1f321107247c22d93eb9b10da5d7347b55f44f0cf9e62fa62eebce24"} Nov 25 11:50:46 crc kubenswrapper[4706]: I1125 11:50:46.675735 4706 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-kdnxf"] Nov 25 11:50:46 crc kubenswrapper[4706]: I1125 11:50:46.676045 4706 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-kdnxf" podUID="1b88b3a8-9948-44ff-980e-3775fe2b490a" containerName="registry-server" containerID="cri-o://51c5d239977319687f3e244c57061ce79ea0e5965a1df4fbd2b09b6d4a9ee36d" gracePeriod=2 Nov 25 11:50:46 crc kubenswrapper[4706]: I1125 11:50:46.755446 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vxcn8" 
event={"ID":"f050e9f9-24f9-4833-a272-b246b5ceccce","Type":"ContainerStarted","Data":"e3da9ee60ed57adc3fe72b5104617159da854b7975d5f502a1467892abd2ba44"} Nov 25 11:50:47 crc kubenswrapper[4706]: I1125 11:50:47.296409 4706 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-controller-manager-7d76b4f6c7-xxkgj" Nov 25 11:50:47 crc kubenswrapper[4706]: I1125 11:50:47.768412 4706 generic.go:334] "Generic (PLEG): container finished" podID="1b88b3a8-9948-44ff-980e-3775fe2b490a" containerID="51c5d239977319687f3e244c57061ce79ea0e5965a1df4fbd2b09b6d4a9ee36d" exitCode=0 Nov 25 11:50:47 crc kubenswrapper[4706]: I1125 11:50:47.768581 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-kdnxf" event={"ID":"1b88b3a8-9948-44ff-980e-3775fe2b490a","Type":"ContainerDied","Data":"51c5d239977319687f3e244c57061ce79ea0e5965a1df4fbd2b09b6d4a9ee36d"} Nov 25 11:50:47 crc kubenswrapper[4706]: I1125 11:50:47.771865 4706 generic.go:334] "Generic (PLEG): container finished" podID="f050e9f9-24f9-4833-a272-b246b5ceccce" containerID="e3da9ee60ed57adc3fe72b5104617159da854b7975d5f502a1467892abd2ba44" exitCode=0 Nov 25 11:50:47 crc kubenswrapper[4706]: I1125 11:50:47.771901 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vxcn8" event={"ID":"f050e9f9-24f9-4833-a272-b246b5ceccce","Type":"ContainerDied","Data":"e3da9ee60ed57adc3fe72b5104617159da854b7975d5f502a1467892abd2ba44"} Nov 25 11:50:48 crc kubenswrapper[4706]: I1125 11:50:48.027115 4706 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-gfpwp"] Nov 25 11:50:48 crc kubenswrapper[4706]: I1125 11:50:48.035974 4706 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/frr-k8s-gfpwp" Nov 25 11:50:48 crc kubenswrapper[4706]: I1125 11:50:48.038641 4706 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"frr-startup" Nov 25 11:50:48 crc kubenswrapper[4706]: I1125 11:50:48.038824 4706 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-certs-secret" Nov 25 11:50:48 crc kubenswrapper[4706]: I1125 11:50:48.040277 4706 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-webhook-server-6998585d5-9gk5w"] Nov 25 11:50:48 crc kubenswrapper[4706]: I1125 11:50:48.042291 4706 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-6998585d5-9gk5w" Nov 25 11:50:48 crc kubenswrapper[4706]: I1125 11:50:48.045396 4706 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-daemon-dockercfg-vdkzz" Nov 25 11:50:48 crc kubenswrapper[4706]: I1125 11:50:48.050457 4706 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-webhook-server-cert" Nov 25 11:50:48 crc kubenswrapper[4706]: I1125 11:50:48.057989 4706 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-6998585d5-9gk5w"] Nov 25 11:50:48 crc kubenswrapper[4706]: I1125 11:50:48.133465 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/4fe1be78-8453-460d-abc1-7c4b89923fe5-reloader\") pod \"frr-k8s-gfpwp\" (UID: \"4fe1be78-8453-460d-abc1-7c4b89923fe5\") " pod="metallb-system/frr-k8s-gfpwp" Nov 25 11:50:48 crc kubenswrapper[4706]: I1125 11:50:48.133531 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hcxc5\" (UniqueName: \"kubernetes.io/projected/4fe1be78-8453-460d-abc1-7c4b89923fe5-kube-api-access-hcxc5\") pod \"frr-k8s-gfpwp\" (UID: 
\"4fe1be78-8453-460d-abc1-7c4b89923fe5\") " pod="metallb-system/frr-k8s-gfpwp" Nov 25 11:50:48 crc kubenswrapper[4706]: I1125 11:50:48.133558 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/4fe1be78-8453-460d-abc1-7c4b89923fe5-frr-conf\") pod \"frr-k8s-gfpwp\" (UID: \"4fe1be78-8453-460d-abc1-7c4b89923fe5\") " pod="metallb-system/frr-k8s-gfpwp" Nov 25 11:50:48 crc kubenswrapper[4706]: I1125 11:50:48.133622 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/4fe1be78-8453-460d-abc1-7c4b89923fe5-frr-sockets\") pod \"frr-k8s-gfpwp\" (UID: \"4fe1be78-8453-460d-abc1-7c4b89923fe5\") " pod="metallb-system/frr-k8s-gfpwp" Nov 25 11:50:48 crc kubenswrapper[4706]: I1125 11:50:48.133644 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/4fe1be78-8453-460d-abc1-7c4b89923fe5-frr-startup\") pod \"frr-k8s-gfpwp\" (UID: \"4fe1be78-8453-460d-abc1-7c4b89923fe5\") " pod="metallb-system/frr-k8s-gfpwp" Nov 25 11:50:48 crc kubenswrapper[4706]: I1125 11:50:48.133669 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/4fe1be78-8453-460d-abc1-7c4b89923fe5-metrics-certs\") pod \"frr-k8s-gfpwp\" (UID: \"4fe1be78-8453-460d-abc1-7c4b89923fe5\") " pod="metallb-system/frr-k8s-gfpwp" Nov 25 11:50:48 crc kubenswrapper[4706]: I1125 11:50:48.133707 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/4fe1be78-8453-460d-abc1-7c4b89923fe5-metrics\") pod \"frr-k8s-gfpwp\" (UID: \"4fe1be78-8453-460d-abc1-7c4b89923fe5\") " pod="metallb-system/frr-k8s-gfpwp" Nov 25 11:50:48 crc kubenswrapper[4706]: I1125 
11:50:48.150269 4706 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/speaker-2w52p"] Nov 25 11:50:48 crc kubenswrapper[4706]: I1125 11:50:48.152154 4706 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/speaker-2w52p" Nov 25 11:50:48 crc kubenswrapper[4706]: I1125 11:50:48.158837 4706 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-certs-secret" Nov 25 11:50:48 crc kubenswrapper[4706]: I1125 11:50:48.158886 4706 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-memberlist" Nov 25 11:50:48 crc kubenswrapper[4706]: I1125 11:50:48.158890 4706 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"metallb-excludel2" Nov 25 11:50:48 crc kubenswrapper[4706]: I1125 11:50:48.159437 4706 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-dockercfg-nhh4t" Nov 25 11:50:48 crc kubenswrapper[4706]: I1125 11:50:48.186682 4706 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/controller-6c7b4b5f48-5gnwd"] Nov 25 11:50:48 crc kubenswrapper[4706]: I1125 11:50:48.188101 4706 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/controller-6c7b4b5f48-5gnwd" Nov 25 11:50:48 crc kubenswrapper[4706]: I1125 11:50:48.196802 4706 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-certs-secret" Nov 25 11:50:48 crc kubenswrapper[4706]: I1125 11:50:48.212061 4706 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-6c7b4b5f48-5gnwd"] Nov 25 11:50:48 crc kubenswrapper[4706]: I1125 11:50:48.235210 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hcxc5\" (UniqueName: \"kubernetes.io/projected/4fe1be78-8453-460d-abc1-7c4b89923fe5-kube-api-access-hcxc5\") pod \"frr-k8s-gfpwp\" (UID: \"4fe1be78-8453-460d-abc1-7c4b89923fe5\") " pod="metallb-system/frr-k8s-gfpwp" Nov 25 11:50:48 crc kubenswrapper[4706]: I1125 11:50:48.235279 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/4fe1be78-8453-460d-abc1-7c4b89923fe5-frr-conf\") pod \"frr-k8s-gfpwp\" (UID: \"4fe1be78-8453-460d-abc1-7c4b89923fe5\") " pod="metallb-system/frr-k8s-gfpwp" Nov 25 11:50:48 crc kubenswrapper[4706]: I1125 11:50:48.235357 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lm7n9\" (UniqueName: \"kubernetes.io/projected/d6a1f7a2-b220-49a7-b12a-8cc3cf093dbc-kube-api-access-lm7n9\") pod \"frr-k8s-webhook-server-6998585d5-9gk5w\" (UID: \"d6a1f7a2-b220-49a7-b12a-8cc3cf093dbc\") " pod="metallb-system/frr-k8s-webhook-server-6998585d5-9gk5w" Nov 25 11:50:48 crc kubenswrapper[4706]: I1125 11:50:48.235414 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/4fe1be78-8453-460d-abc1-7c4b89923fe5-frr-sockets\") pod \"frr-k8s-gfpwp\" (UID: \"4fe1be78-8453-460d-abc1-7c4b89923fe5\") " pod="metallb-system/frr-k8s-gfpwp" Nov 25 11:50:48 crc kubenswrapper[4706]: I1125 
11:50:48.235442 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/d6a1f7a2-b220-49a7-b12a-8cc3cf093dbc-cert\") pod \"frr-k8s-webhook-server-6998585d5-9gk5w\" (UID: \"d6a1f7a2-b220-49a7-b12a-8cc3cf093dbc\") " pod="metallb-system/frr-k8s-webhook-server-6998585d5-9gk5w" Nov 25 11:50:48 crc kubenswrapper[4706]: I1125 11:50:48.235463 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/4fe1be78-8453-460d-abc1-7c4b89923fe5-frr-startup\") pod \"frr-k8s-gfpwp\" (UID: \"4fe1be78-8453-460d-abc1-7c4b89923fe5\") " pod="metallb-system/frr-k8s-gfpwp" Nov 25 11:50:48 crc kubenswrapper[4706]: I1125 11:50:48.235493 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/4fe1be78-8453-460d-abc1-7c4b89923fe5-metrics-certs\") pod \"frr-k8s-gfpwp\" (UID: \"4fe1be78-8453-460d-abc1-7c4b89923fe5\") " pod="metallb-system/frr-k8s-gfpwp" Nov 25 11:50:48 crc kubenswrapper[4706]: I1125 11:50:48.235531 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/4fe1be78-8453-460d-abc1-7c4b89923fe5-metrics\") pod \"frr-k8s-gfpwp\" (UID: \"4fe1be78-8453-460d-abc1-7c4b89923fe5\") " pod="metallb-system/frr-k8s-gfpwp" Nov 25 11:50:48 crc kubenswrapper[4706]: I1125 11:50:48.235580 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/4fe1be78-8453-460d-abc1-7c4b89923fe5-reloader\") pod \"frr-k8s-gfpwp\" (UID: \"4fe1be78-8453-460d-abc1-7c4b89923fe5\") " pod="metallb-system/frr-k8s-gfpwp" Nov 25 11:50:48 crc kubenswrapper[4706]: I1125 11:50:48.236383 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics\" (UniqueName: 
\"kubernetes.io/empty-dir/4fe1be78-8453-460d-abc1-7c4b89923fe5-metrics\") pod \"frr-k8s-gfpwp\" (UID: \"4fe1be78-8453-460d-abc1-7c4b89923fe5\") " pod="metallb-system/frr-k8s-gfpwp" Nov 25 11:50:48 crc kubenswrapper[4706]: I1125 11:50:48.236390 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/4fe1be78-8453-460d-abc1-7c4b89923fe5-frr-sockets\") pod \"frr-k8s-gfpwp\" (UID: \"4fe1be78-8453-460d-abc1-7c4b89923fe5\") " pod="metallb-system/frr-k8s-gfpwp" Nov 25 11:50:48 crc kubenswrapper[4706]: I1125 11:50:48.237299 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/4fe1be78-8453-460d-abc1-7c4b89923fe5-frr-conf\") pod \"frr-k8s-gfpwp\" (UID: \"4fe1be78-8453-460d-abc1-7c4b89923fe5\") " pod="metallb-system/frr-k8s-gfpwp" Nov 25 11:50:48 crc kubenswrapper[4706]: I1125 11:50:48.238775 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/4fe1be78-8453-460d-abc1-7c4b89923fe5-frr-startup\") pod \"frr-k8s-gfpwp\" (UID: \"4fe1be78-8453-460d-abc1-7c4b89923fe5\") " pod="metallb-system/frr-k8s-gfpwp" Nov 25 11:50:48 crc kubenswrapper[4706]: I1125 11:50:48.240677 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/4fe1be78-8453-460d-abc1-7c4b89923fe5-reloader\") pod \"frr-k8s-gfpwp\" (UID: \"4fe1be78-8453-460d-abc1-7c4b89923fe5\") " pod="metallb-system/frr-k8s-gfpwp" Nov 25 11:50:48 crc kubenswrapper[4706]: I1125 11:50:48.246773 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/4fe1be78-8453-460d-abc1-7c4b89923fe5-metrics-certs\") pod \"frr-k8s-gfpwp\" (UID: \"4fe1be78-8453-460d-abc1-7c4b89923fe5\") " pod="metallb-system/frr-k8s-gfpwp" Nov 25 11:50:48 crc kubenswrapper[4706]: I1125 11:50:48.271606 4706 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hcxc5\" (UniqueName: \"kubernetes.io/projected/4fe1be78-8453-460d-abc1-7c4b89923fe5-kube-api-access-hcxc5\") pod \"frr-k8s-gfpwp\" (UID: \"4fe1be78-8453-460d-abc1-7c4b89923fe5\") " pod="metallb-system/frr-k8s-gfpwp" Nov 25 11:50:48 crc kubenswrapper[4706]: I1125 11:50:48.337761 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5570c11b-30c6-4ba6-adb5-3fc12ca26ae9-metrics-certs\") pod \"speaker-2w52p\" (UID: \"5570c11b-30c6-4ba6-adb5-3fc12ca26ae9\") " pod="metallb-system/speaker-2w52p" Nov 25 11:50:48 crc kubenswrapper[4706]: I1125 11:50:48.337834 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/5570c11b-30c6-4ba6-adb5-3fc12ca26ae9-memberlist\") pod \"speaker-2w52p\" (UID: \"5570c11b-30c6-4ba6-adb5-3fc12ca26ae9\") " pod="metallb-system/speaker-2w52p" Nov 25 11:50:48 crc kubenswrapper[4706]: I1125 11:50:48.337886 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/5570c11b-30c6-4ba6-adb5-3fc12ca26ae9-metallb-excludel2\") pod \"speaker-2w52p\" (UID: \"5570c11b-30c6-4ba6-adb5-3fc12ca26ae9\") " pod="metallb-system/speaker-2w52p" Nov 25 11:50:48 crc kubenswrapper[4706]: I1125 11:50:48.337919 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kvlbs\" (UniqueName: \"kubernetes.io/projected/67dd43bc-7fe1-4585-8fc3-2d2a52b8c974-kube-api-access-kvlbs\") pod \"controller-6c7b4b5f48-5gnwd\" (UID: \"67dd43bc-7fe1-4585-8fc3-2d2a52b8c974\") " pod="metallb-system/controller-6c7b4b5f48-5gnwd" Nov 25 11:50:48 crc kubenswrapper[4706]: I1125 11:50:48.337981 4706 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"kube-api-access-lm7n9\" (UniqueName: \"kubernetes.io/projected/d6a1f7a2-b220-49a7-b12a-8cc3cf093dbc-kube-api-access-lm7n9\") pod \"frr-k8s-webhook-server-6998585d5-9gk5w\" (UID: \"d6a1f7a2-b220-49a7-b12a-8cc3cf093dbc\") " pod="metallb-system/frr-k8s-webhook-server-6998585d5-9gk5w" Nov 25 11:50:48 crc kubenswrapper[4706]: I1125 11:50:48.338271 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/d6a1f7a2-b220-49a7-b12a-8cc3cf093dbc-cert\") pod \"frr-k8s-webhook-server-6998585d5-9gk5w\" (UID: \"d6a1f7a2-b220-49a7-b12a-8cc3cf093dbc\") " pod="metallb-system/frr-k8s-webhook-server-6998585d5-9gk5w" Nov 25 11:50:48 crc kubenswrapper[4706]: I1125 11:50:48.338446 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/67dd43bc-7fe1-4585-8fc3-2d2a52b8c974-metrics-certs\") pod \"controller-6c7b4b5f48-5gnwd\" (UID: \"67dd43bc-7fe1-4585-8fc3-2d2a52b8c974\") " pod="metallb-system/controller-6c7b4b5f48-5gnwd" Nov 25 11:50:48 crc kubenswrapper[4706]: I1125 11:50:48.338500 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/67dd43bc-7fe1-4585-8fc3-2d2a52b8c974-cert\") pod \"controller-6c7b4b5f48-5gnwd\" (UID: \"67dd43bc-7fe1-4585-8fc3-2d2a52b8c974\") " pod="metallb-system/controller-6c7b4b5f48-5gnwd" Nov 25 11:50:48 crc kubenswrapper[4706]: I1125 11:50:48.338557 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ft5xd\" (UniqueName: \"kubernetes.io/projected/5570c11b-30c6-4ba6-adb5-3fc12ca26ae9-kube-api-access-ft5xd\") pod \"speaker-2w52p\" (UID: \"5570c11b-30c6-4ba6-adb5-3fc12ca26ae9\") " pod="metallb-system/speaker-2w52p" Nov 25 11:50:48 crc kubenswrapper[4706]: I1125 11:50:48.343342 4706 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/d6a1f7a2-b220-49a7-b12a-8cc3cf093dbc-cert\") pod \"frr-k8s-webhook-server-6998585d5-9gk5w\" (UID: \"d6a1f7a2-b220-49a7-b12a-8cc3cf093dbc\") " pod="metallb-system/frr-k8s-webhook-server-6998585d5-9gk5w" Nov 25 11:50:48 crc kubenswrapper[4706]: I1125 11:50:48.345282 4706 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-kdnxf" Nov 25 11:50:48 crc kubenswrapper[4706]: I1125 11:50:48.359227 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lm7n9\" (UniqueName: \"kubernetes.io/projected/d6a1f7a2-b220-49a7-b12a-8cc3cf093dbc-kube-api-access-lm7n9\") pod \"frr-k8s-webhook-server-6998585d5-9gk5w\" (UID: \"d6a1f7a2-b220-49a7-b12a-8cc3cf093dbc\") " pod="metallb-system/frr-k8s-webhook-server-6998585d5-9gk5w" Nov 25 11:50:48 crc kubenswrapper[4706]: I1125 11:50:48.371732 4706 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-gfpwp" Nov 25 11:50:48 crc kubenswrapper[4706]: I1125 11:50:48.386461 4706 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-6998585d5-9gk5w" Nov 25 11:50:48 crc kubenswrapper[4706]: I1125 11:50:48.440163 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/67dd43bc-7fe1-4585-8fc3-2d2a52b8c974-metrics-certs\") pod \"controller-6c7b4b5f48-5gnwd\" (UID: \"67dd43bc-7fe1-4585-8fc3-2d2a52b8c974\") " pod="metallb-system/controller-6c7b4b5f48-5gnwd" Nov 25 11:50:48 crc kubenswrapper[4706]: I1125 11:50:48.440211 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/67dd43bc-7fe1-4585-8fc3-2d2a52b8c974-cert\") pod \"controller-6c7b4b5f48-5gnwd\" (UID: \"67dd43bc-7fe1-4585-8fc3-2d2a52b8c974\") " pod="metallb-system/controller-6c7b4b5f48-5gnwd" Nov 25 11:50:48 crc kubenswrapper[4706]: I1125 11:50:48.440235 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ft5xd\" (UniqueName: \"kubernetes.io/projected/5570c11b-30c6-4ba6-adb5-3fc12ca26ae9-kube-api-access-ft5xd\") pod \"speaker-2w52p\" (UID: \"5570c11b-30c6-4ba6-adb5-3fc12ca26ae9\") " pod="metallb-system/speaker-2w52p" Nov 25 11:50:48 crc kubenswrapper[4706]: I1125 11:50:48.440280 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5570c11b-30c6-4ba6-adb5-3fc12ca26ae9-metrics-certs\") pod \"speaker-2w52p\" (UID: \"5570c11b-30c6-4ba6-adb5-3fc12ca26ae9\") " pod="metallb-system/speaker-2w52p" Nov 25 11:50:48 crc kubenswrapper[4706]: I1125 11:50:48.440302 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/5570c11b-30c6-4ba6-adb5-3fc12ca26ae9-memberlist\") pod \"speaker-2w52p\" (UID: \"5570c11b-30c6-4ba6-adb5-3fc12ca26ae9\") " pod="metallb-system/speaker-2w52p" Nov 25 11:50:48 crc kubenswrapper[4706]: I1125 11:50:48.440346 4706 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/5570c11b-30c6-4ba6-adb5-3fc12ca26ae9-metallb-excludel2\") pod \"speaker-2w52p\" (UID: \"5570c11b-30c6-4ba6-adb5-3fc12ca26ae9\") " pod="metallb-system/speaker-2w52p" Nov 25 11:50:48 crc kubenswrapper[4706]: I1125 11:50:48.440368 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kvlbs\" (UniqueName: \"kubernetes.io/projected/67dd43bc-7fe1-4585-8fc3-2d2a52b8c974-kube-api-access-kvlbs\") pod \"controller-6c7b4b5f48-5gnwd\" (UID: \"67dd43bc-7fe1-4585-8fc3-2d2a52b8c974\") " pod="metallb-system/controller-6c7b4b5f48-5gnwd" Nov 25 11:50:48 crc kubenswrapper[4706]: E1125 11:50:48.440768 4706 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Nov 25 11:50:48 crc kubenswrapper[4706]: E1125 11:50:48.440829 4706 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5570c11b-30c6-4ba6-adb5-3fc12ca26ae9-memberlist podName:5570c11b-30c6-4ba6-adb5-3fc12ca26ae9 nodeName:}" failed. No retries permitted until 2025-11-25 11:50:48.940807348 +0000 UTC m=+857.855364729 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/5570c11b-30c6-4ba6-adb5-3fc12ca26ae9-memberlist") pod "speaker-2w52p" (UID: "5570c11b-30c6-4ba6-adb5-3fc12ca26ae9") : secret "metallb-memberlist" not found Nov 25 11:50:48 crc kubenswrapper[4706]: E1125 11:50:48.441046 4706 secret.go:188] Couldn't get secret metallb-system/speaker-certs-secret: secret "speaker-certs-secret" not found Nov 25 11:50:48 crc kubenswrapper[4706]: E1125 11:50:48.441253 4706 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5570c11b-30c6-4ba6-adb5-3fc12ca26ae9-metrics-certs podName:5570c11b-30c6-4ba6-adb5-3fc12ca26ae9 nodeName:}" failed. 
No retries permitted until 2025-11-25 11:50:48.941216278 +0000 UTC m=+857.855773799 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/5570c11b-30c6-4ba6-adb5-3fc12ca26ae9-metrics-certs") pod "speaker-2w52p" (UID: "5570c11b-30c6-4ba6-adb5-3fc12ca26ae9") : secret "speaker-certs-secret" not found Nov 25 11:50:48 crc kubenswrapper[4706]: I1125 11:50:48.441728 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/5570c11b-30c6-4ba6-adb5-3fc12ca26ae9-metallb-excludel2\") pod \"speaker-2w52p\" (UID: \"5570c11b-30c6-4ba6-adb5-3fc12ca26ae9\") " pod="metallb-system/speaker-2w52p" Nov 25 11:50:48 crc kubenswrapper[4706]: I1125 11:50:48.444825 4706 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Nov 25 11:50:48 crc kubenswrapper[4706]: I1125 11:50:48.450105 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/67dd43bc-7fe1-4585-8fc3-2d2a52b8c974-metrics-certs\") pod \"controller-6c7b4b5f48-5gnwd\" (UID: \"67dd43bc-7fe1-4585-8fc3-2d2a52b8c974\") " pod="metallb-system/controller-6c7b4b5f48-5gnwd" Nov 25 11:50:48 crc kubenswrapper[4706]: I1125 11:50:48.455831 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/67dd43bc-7fe1-4585-8fc3-2d2a52b8c974-cert\") pod \"controller-6c7b4b5f48-5gnwd\" (UID: \"67dd43bc-7fe1-4585-8fc3-2d2a52b8c974\") " pod="metallb-system/controller-6c7b4b5f48-5gnwd" Nov 25 11:50:48 crc kubenswrapper[4706]: I1125 11:50:48.467459 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ft5xd\" (UniqueName: \"kubernetes.io/projected/5570c11b-30c6-4ba6-adb5-3fc12ca26ae9-kube-api-access-ft5xd\") pod \"speaker-2w52p\" (UID: \"5570c11b-30c6-4ba6-adb5-3fc12ca26ae9\") " 
pod="metallb-system/speaker-2w52p" Nov 25 11:50:48 crc kubenswrapper[4706]: I1125 11:50:48.479134 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kvlbs\" (UniqueName: \"kubernetes.io/projected/67dd43bc-7fe1-4585-8fc3-2d2a52b8c974-kube-api-access-kvlbs\") pod \"controller-6c7b4b5f48-5gnwd\" (UID: \"67dd43bc-7fe1-4585-8fc3-2d2a52b8c974\") " pod="metallb-system/controller-6c7b4b5f48-5gnwd" Nov 25 11:50:48 crc kubenswrapper[4706]: I1125 11:50:48.517896 4706 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/controller-6c7b4b5f48-5gnwd" Nov 25 11:50:48 crc kubenswrapper[4706]: I1125 11:50:48.544201 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1b88b3a8-9948-44ff-980e-3775fe2b490a-utilities\") pod \"1b88b3a8-9948-44ff-980e-3775fe2b490a\" (UID: \"1b88b3a8-9948-44ff-980e-3775fe2b490a\") " Nov 25 11:50:48 crc kubenswrapper[4706]: I1125 11:50:48.544306 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dld28\" (UniqueName: \"kubernetes.io/projected/1b88b3a8-9948-44ff-980e-3775fe2b490a-kube-api-access-dld28\") pod \"1b88b3a8-9948-44ff-980e-3775fe2b490a\" (UID: \"1b88b3a8-9948-44ff-980e-3775fe2b490a\") " Nov 25 11:50:48 crc kubenswrapper[4706]: I1125 11:50:48.544433 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1b88b3a8-9948-44ff-980e-3775fe2b490a-catalog-content\") pod \"1b88b3a8-9948-44ff-980e-3775fe2b490a\" (UID: \"1b88b3a8-9948-44ff-980e-3775fe2b490a\") " Nov 25 11:50:48 crc kubenswrapper[4706]: I1125 11:50:48.546472 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1b88b3a8-9948-44ff-980e-3775fe2b490a-utilities" (OuterVolumeSpecName: "utilities") pod "1b88b3a8-9948-44ff-980e-3775fe2b490a" (UID: 
"1b88b3a8-9948-44ff-980e-3775fe2b490a"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 11:50:48 crc kubenswrapper[4706]: I1125 11:50:48.560332 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1b88b3a8-9948-44ff-980e-3775fe2b490a-kube-api-access-dld28" (OuterVolumeSpecName: "kube-api-access-dld28") pod "1b88b3a8-9948-44ff-980e-3775fe2b490a" (UID: "1b88b3a8-9948-44ff-980e-3775fe2b490a"). InnerVolumeSpecName "kube-api-access-dld28". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 11:50:48 crc kubenswrapper[4706]: I1125 11:50:48.614288 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1b88b3a8-9948-44ff-980e-3775fe2b490a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1b88b3a8-9948-44ff-980e-3775fe2b490a" (UID: "1b88b3a8-9948-44ff-980e-3775fe2b490a"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 11:50:48 crc kubenswrapper[4706]: I1125 11:50:48.647056 4706 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1b88b3a8-9948-44ff-980e-3775fe2b490a-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 25 11:50:48 crc kubenswrapper[4706]: I1125 11:50:48.647092 4706 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1b88b3a8-9948-44ff-980e-3775fe2b490a-utilities\") on node \"crc\" DevicePath \"\"" Nov 25 11:50:48 crc kubenswrapper[4706]: I1125 11:50:48.647103 4706 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dld28\" (UniqueName: \"kubernetes.io/projected/1b88b3a8-9948-44ff-980e-3775fe2b490a-kube-api-access-dld28\") on node \"crc\" DevicePath \"\"" Nov 25 11:50:48 crc kubenswrapper[4706]: I1125 11:50:48.791392 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-marketplace-vxcn8" event={"ID":"f050e9f9-24f9-4833-a272-b246b5ceccce","Type":"ContainerStarted","Data":"0b8794f06a932013c3a38945f9182abef519de5642200f1080f1e8a5359a03b2"} Nov 25 11:50:48 crc kubenswrapper[4706]: I1125 11:50:48.801156 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-gfpwp" event={"ID":"4fe1be78-8453-460d-abc1-7c4b89923fe5","Type":"ContainerStarted","Data":"1d5ed721e58f452f7d5f6dfbdda55bc95d2d02e3f964b443f69fcecd50c9be30"} Nov 25 11:50:48 crc kubenswrapper[4706]: I1125 11:50:48.803567 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-kdnxf" event={"ID":"1b88b3a8-9948-44ff-980e-3775fe2b490a","Type":"ContainerDied","Data":"aa62695421f037a09fb67461481076fb358086250349e6953b24fba01a77e153"} Nov 25 11:50:48 crc kubenswrapper[4706]: I1125 11:50:48.803619 4706 scope.go:117] "RemoveContainer" containerID="51c5d239977319687f3e244c57061ce79ea0e5965a1df4fbd2b09b6d4a9ee36d" Nov 25 11:50:48 crc kubenswrapper[4706]: I1125 11:50:48.803666 4706 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-kdnxf" Nov 25 11:50:48 crc kubenswrapper[4706]: I1125 11:50:48.813261 4706 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-vxcn8" podStartSLOduration=2.3483691540000002 podStartE2EDuration="4.813208105s" podCreationTimestamp="2025-11-25 11:50:44 +0000 UTC" firstStartedPulling="2025-11-25 11:50:45.750114011 +0000 UTC m=+854.664671392" lastFinishedPulling="2025-11-25 11:50:48.214952962 +0000 UTC m=+857.129510343" observedRunningTime="2025-11-25 11:50:48.810747973 +0000 UTC m=+857.725305354" watchObservedRunningTime="2025-11-25 11:50:48.813208105 +0000 UTC m=+857.727765486" Nov 25 11:50:48 crc kubenswrapper[4706]: I1125 11:50:48.860513 4706 scope.go:117] "RemoveContainer" containerID="7e4b99c3ad5fb6d96918d71d0174b3fbcbe90ba216d9008a50d7da7ceedb2464" Nov 25 11:50:48 crc kubenswrapper[4706]: I1125 11:50:48.864502 4706 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-kdnxf"] Nov 25 11:50:48 crc kubenswrapper[4706]: I1125 11:50:48.869931 4706 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-kdnxf"] Nov 25 11:50:48 crc kubenswrapper[4706]: I1125 11:50:48.880834 4706 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-6998585d5-9gk5w"] Nov 25 11:50:48 crc kubenswrapper[4706]: I1125 11:50:48.885488 4706 scope.go:117] "RemoveContainer" containerID="d4b4807cca29526a385028df4035f412160df25403ddc137beb0f57b7727e73f" Nov 25 11:50:48 crc kubenswrapper[4706]: W1125 11:50:48.891989 4706 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd6a1f7a2_b220_49a7_b12a_8cc3cf093dbc.slice/crio-cadeda5c95c95e1c84aaf2b82c8d244ee7c464c80625802d656c5cea13cbd879 WatchSource:0}: Error finding container 
cadeda5c95c95e1c84aaf2b82c8d244ee7c464c80625802d656c5cea13cbd879: Status 404 returned error can't find the container with id cadeda5c95c95e1c84aaf2b82c8d244ee7c464c80625802d656c5cea13cbd879 Nov 25 11:50:48 crc kubenswrapper[4706]: I1125 11:50:48.952201 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5570c11b-30c6-4ba6-adb5-3fc12ca26ae9-metrics-certs\") pod \"speaker-2w52p\" (UID: \"5570c11b-30c6-4ba6-adb5-3fc12ca26ae9\") " pod="metallb-system/speaker-2w52p" Nov 25 11:50:48 crc kubenswrapper[4706]: I1125 11:50:48.952803 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/5570c11b-30c6-4ba6-adb5-3fc12ca26ae9-memberlist\") pod \"speaker-2w52p\" (UID: \"5570c11b-30c6-4ba6-adb5-3fc12ca26ae9\") " pod="metallb-system/speaker-2w52p" Nov 25 11:50:48 crc kubenswrapper[4706]: E1125 11:50:48.953512 4706 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Nov 25 11:50:48 crc kubenswrapper[4706]: E1125 11:50:48.953704 4706 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5570c11b-30c6-4ba6-adb5-3fc12ca26ae9-memberlist podName:5570c11b-30c6-4ba6-adb5-3fc12ca26ae9 nodeName:}" failed. No retries permitted until 2025-11-25 11:50:49.953673222 +0000 UTC m=+858.868230613 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/5570c11b-30c6-4ba6-adb5-3fc12ca26ae9-memberlist") pod "speaker-2w52p" (UID: "5570c11b-30c6-4ba6-adb5-3fc12ca26ae9") : secret "metallb-memberlist" not found Nov 25 11:50:48 crc kubenswrapper[4706]: I1125 11:50:48.962677 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5570c11b-30c6-4ba6-adb5-3fc12ca26ae9-metrics-certs\") pod \"speaker-2w52p\" (UID: \"5570c11b-30c6-4ba6-adb5-3fc12ca26ae9\") " pod="metallb-system/speaker-2w52p" Nov 25 11:50:49 crc kubenswrapper[4706]: I1125 11:50:49.035963 4706 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-6c7b4b5f48-5gnwd"] Nov 25 11:50:49 crc kubenswrapper[4706]: I1125 11:50:49.811447 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-6998585d5-9gk5w" event={"ID":"d6a1f7a2-b220-49a7-b12a-8cc3cf093dbc","Type":"ContainerStarted","Data":"cadeda5c95c95e1c84aaf2b82c8d244ee7c464c80625802d656c5cea13cbd879"} Nov 25 11:50:49 crc kubenswrapper[4706]: I1125 11:50:49.814858 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6c7b4b5f48-5gnwd" event={"ID":"67dd43bc-7fe1-4585-8fc3-2d2a52b8c974","Type":"ContainerStarted","Data":"591e255c89abf9e44150bbc9d8c40caf706a8945da2f8d694556a97c0751e50e"} Nov 25 11:50:49 crc kubenswrapper[4706]: I1125 11:50:49.814897 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6c7b4b5f48-5gnwd" event={"ID":"67dd43bc-7fe1-4585-8fc3-2d2a52b8c974","Type":"ContainerStarted","Data":"486a70679c8ce8b7ec2116ff989909c70e1f76e428a793b69e8a35185b21fbe2"} Nov 25 11:50:49 crc kubenswrapper[4706]: I1125 11:50:49.814914 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6c7b4b5f48-5gnwd" 
event={"ID":"67dd43bc-7fe1-4585-8fc3-2d2a52b8c974","Type":"ContainerStarted","Data":"4bf2d19030a74f50c46b9a7e992957d8821862bcf1167c3185f9b372ac53d3ec"} Nov 25 11:50:49 crc kubenswrapper[4706]: I1125 11:50:49.815218 4706 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/controller-6c7b4b5f48-5gnwd" Nov 25 11:50:49 crc kubenswrapper[4706]: I1125 11:50:49.838419 4706 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/controller-6c7b4b5f48-5gnwd" podStartSLOduration=1.838391599 podStartE2EDuration="1.838391599s" podCreationTimestamp="2025-11-25 11:50:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 11:50:49.832922752 +0000 UTC m=+858.747480123" watchObservedRunningTime="2025-11-25 11:50:49.838391599 +0000 UTC m=+858.752948980" Nov 25 11:50:49 crc kubenswrapper[4706]: I1125 11:50:49.934425 4706 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1b88b3a8-9948-44ff-980e-3775fe2b490a" path="/var/lib/kubelet/pods/1b88b3a8-9948-44ff-980e-3775fe2b490a/volumes" Nov 25 11:50:49 crc kubenswrapper[4706]: I1125 11:50:49.968324 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/5570c11b-30c6-4ba6-adb5-3fc12ca26ae9-memberlist\") pod \"speaker-2w52p\" (UID: \"5570c11b-30c6-4ba6-adb5-3fc12ca26ae9\") " pod="metallb-system/speaker-2w52p" Nov 25 11:50:49 crc kubenswrapper[4706]: I1125 11:50:49.973658 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/5570c11b-30c6-4ba6-adb5-3fc12ca26ae9-memberlist\") pod \"speaker-2w52p\" (UID: \"5570c11b-30c6-4ba6-adb5-3fc12ca26ae9\") " pod="metallb-system/speaker-2w52p" Nov 25 11:50:49 crc kubenswrapper[4706]: I1125 11:50:49.980851 4706 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/speaker-2w52p" Nov 25 11:50:50 crc kubenswrapper[4706]: I1125 11:50:50.834948 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-2w52p" event={"ID":"5570c11b-30c6-4ba6-adb5-3fc12ca26ae9","Type":"ContainerStarted","Data":"834bfe1fd17fcf492a40ecdaf9d6f883c52190f7858853a6a2930fdee5211e0c"} Nov 25 11:50:50 crc kubenswrapper[4706]: I1125 11:50:50.835294 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-2w52p" event={"ID":"5570c11b-30c6-4ba6-adb5-3fc12ca26ae9","Type":"ContainerStarted","Data":"f818610a707be6aa8b69f57315a32265d3ffedf87d025f2e12e47e8afecbcd69"} Nov 25 11:50:50 crc kubenswrapper[4706]: I1125 11:50:50.835333 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-2w52p" event={"ID":"5570c11b-30c6-4ba6-adb5-3fc12ca26ae9","Type":"ContainerStarted","Data":"dcea697e7d779b174cc5028e7489e2adb998e613c202b23b74040ed6173e3af9"} Nov 25 11:50:50 crc kubenswrapper[4706]: I1125 11:50:50.836085 4706 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/speaker-2w52p" Nov 25 11:50:50 crc kubenswrapper[4706]: I1125 11:50:50.890342 4706 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/speaker-2w52p" podStartSLOduration=2.890287733 podStartE2EDuration="2.890287733s" podCreationTimestamp="2025-11-25 11:50:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 11:50:50.888378135 +0000 UTC m=+859.802935526" watchObservedRunningTime="2025-11-25 11:50:50.890287733 +0000 UTC m=+859.804845114" Nov 25 11:50:54 crc kubenswrapper[4706]: I1125 11:50:54.617896 4706 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-vxcn8" Nov 25 11:50:54 crc kubenswrapper[4706]: I1125 11:50:54.618813 4706 kubelet.go:2542] "SyncLoop (probe)" 
probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-vxcn8" Nov 25 11:50:54 crc kubenswrapper[4706]: I1125 11:50:54.692561 4706 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-vxcn8" Nov 25 11:50:54 crc kubenswrapper[4706]: I1125 11:50:54.912216 4706 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-vxcn8" Nov 25 11:50:54 crc kubenswrapper[4706]: I1125 11:50:54.963869 4706 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-vxcn8"] Nov 25 11:50:56 crc kubenswrapper[4706]: I1125 11:50:56.883433 4706 generic.go:334] "Generic (PLEG): container finished" podID="4fe1be78-8453-460d-abc1-7c4b89923fe5" containerID="be24810b0f27eefc73e1a97fee42a8dacef5e65904cccb8f2bc906b2dbc3fb8c" exitCode=0 Nov 25 11:50:56 crc kubenswrapper[4706]: I1125 11:50:56.884294 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-gfpwp" event={"ID":"4fe1be78-8453-460d-abc1-7c4b89923fe5","Type":"ContainerDied","Data":"be24810b0f27eefc73e1a97fee42a8dacef5e65904cccb8f2bc906b2dbc3fb8c"} Nov 25 11:50:56 crc kubenswrapper[4706]: I1125 11:50:56.887145 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-6998585d5-9gk5w" event={"ID":"d6a1f7a2-b220-49a7-b12a-8cc3cf093dbc","Type":"ContainerStarted","Data":"5efbc3b579e9509edb217da6aebcefabf04a34fa06e2468ffed2b32ca338fa6d"} Nov 25 11:50:56 crc kubenswrapper[4706]: I1125 11:50:56.887144 4706 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-vxcn8" podUID="f050e9f9-24f9-4833-a272-b246b5ceccce" containerName="registry-server" containerID="cri-o://0b8794f06a932013c3a38945f9182abef519de5642200f1080f1e8a5359a03b2" gracePeriod=2 Nov 25 11:50:56 crc kubenswrapper[4706]: I1125 11:50:56.887376 4706 kubelet.go:2542] "SyncLoop 
(probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-webhook-server-6998585d5-9gk5w" Nov 25 11:50:57 crc kubenswrapper[4706]: I1125 11:50:57.562494 4706 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-webhook-server-6998585d5-9gk5w" podStartSLOduration=2.351688274 podStartE2EDuration="9.562473862s" podCreationTimestamp="2025-11-25 11:50:48 +0000 UTC" firstStartedPulling="2025-11-25 11:50:48.894989082 +0000 UTC m=+857.809546463" lastFinishedPulling="2025-11-25 11:50:56.10577467 +0000 UTC m=+865.020332051" observedRunningTime="2025-11-25 11:50:57.559525328 +0000 UTC m=+866.474082709" watchObservedRunningTime="2025-11-25 11:50:57.562473862 +0000 UTC m=+866.477031243" Nov 25 11:50:57 crc kubenswrapper[4706]: I1125 11:50:57.897595 4706 generic.go:334] "Generic (PLEG): container finished" podID="4fe1be78-8453-460d-abc1-7c4b89923fe5" containerID="ff694ebea0c81a7a44e59e50002da9f6ffad8c591b0501771edc85797fb1f14e" exitCode=0 Nov 25 11:50:57 crc kubenswrapper[4706]: I1125 11:50:57.897685 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-gfpwp" event={"ID":"4fe1be78-8453-460d-abc1-7c4b89923fe5","Type":"ContainerDied","Data":"ff694ebea0c81a7a44e59e50002da9f6ffad8c591b0501771edc85797fb1f14e"} Nov 25 11:50:57 crc kubenswrapper[4706]: I1125 11:50:57.937971 4706 generic.go:334] "Generic (PLEG): container finished" podID="f050e9f9-24f9-4833-a272-b246b5ceccce" containerID="0b8794f06a932013c3a38945f9182abef519de5642200f1080f1e8a5359a03b2" exitCode=0 Nov 25 11:50:57 crc kubenswrapper[4706]: I1125 11:50:57.940101 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vxcn8" event={"ID":"f050e9f9-24f9-4833-a272-b246b5ceccce","Type":"ContainerDied","Data":"0b8794f06a932013c3a38945f9182abef519de5642200f1080f1e8a5359a03b2"} Nov 25 11:50:58 crc kubenswrapper[4706]: I1125 11:50:58.281990 4706 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-vxcn8" Nov 25 11:50:58 crc kubenswrapper[4706]: I1125 11:50:58.328583 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f050e9f9-24f9-4833-a272-b246b5ceccce-catalog-content\") pod \"f050e9f9-24f9-4833-a272-b246b5ceccce\" (UID: \"f050e9f9-24f9-4833-a272-b246b5ceccce\") " Nov 25 11:50:58 crc kubenswrapper[4706]: I1125 11:50:58.328704 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f050e9f9-24f9-4833-a272-b246b5ceccce-utilities\") pod \"f050e9f9-24f9-4833-a272-b246b5ceccce\" (UID: \"f050e9f9-24f9-4833-a272-b246b5ceccce\") " Nov 25 11:50:58 crc kubenswrapper[4706]: I1125 11:50:58.328744 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dk48q\" (UniqueName: \"kubernetes.io/projected/f050e9f9-24f9-4833-a272-b246b5ceccce-kube-api-access-dk48q\") pod \"f050e9f9-24f9-4833-a272-b246b5ceccce\" (UID: \"f050e9f9-24f9-4833-a272-b246b5ceccce\") " Nov 25 11:50:58 crc kubenswrapper[4706]: I1125 11:50:58.329863 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f050e9f9-24f9-4833-a272-b246b5ceccce-utilities" (OuterVolumeSpecName: "utilities") pod "f050e9f9-24f9-4833-a272-b246b5ceccce" (UID: "f050e9f9-24f9-4833-a272-b246b5ceccce"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 11:50:58 crc kubenswrapper[4706]: I1125 11:50:58.335795 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f050e9f9-24f9-4833-a272-b246b5ceccce-kube-api-access-dk48q" (OuterVolumeSpecName: "kube-api-access-dk48q") pod "f050e9f9-24f9-4833-a272-b246b5ceccce" (UID: "f050e9f9-24f9-4833-a272-b246b5ceccce"). InnerVolumeSpecName "kube-api-access-dk48q". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 11:50:58 crc kubenswrapper[4706]: I1125 11:50:58.347231 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f050e9f9-24f9-4833-a272-b246b5ceccce-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f050e9f9-24f9-4833-a272-b246b5ceccce" (UID: "f050e9f9-24f9-4833-a272-b246b5ceccce"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 11:50:58 crc kubenswrapper[4706]: I1125 11:50:58.430695 4706 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dk48q\" (UniqueName: \"kubernetes.io/projected/f050e9f9-24f9-4833-a272-b246b5ceccce-kube-api-access-dk48q\") on node \"crc\" DevicePath \"\"" Nov 25 11:50:58 crc kubenswrapper[4706]: I1125 11:50:58.430742 4706 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f050e9f9-24f9-4833-a272-b246b5ceccce-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 25 11:50:58 crc kubenswrapper[4706]: I1125 11:50:58.430754 4706 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f050e9f9-24f9-4833-a272-b246b5ceccce-utilities\") on node \"crc\" DevicePath \"\"" Nov 25 11:50:58 crc kubenswrapper[4706]: I1125 11:50:58.947937 4706 generic.go:334] "Generic (PLEG): container finished" podID="4fe1be78-8453-460d-abc1-7c4b89923fe5" containerID="683ddada2c3d9c74736e476533727dec03f1308d443065b633d83a32829838f2" exitCode=0 Nov 25 11:50:58 crc kubenswrapper[4706]: I1125 11:50:58.948032 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-gfpwp" event={"ID":"4fe1be78-8453-460d-abc1-7c4b89923fe5","Type":"ContainerDied","Data":"683ddada2c3d9c74736e476533727dec03f1308d443065b633d83a32829838f2"} Nov 25 11:50:58 crc kubenswrapper[4706]: I1125 11:50:58.950353 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-marketplace-vxcn8" event={"ID":"f050e9f9-24f9-4833-a272-b246b5ceccce","Type":"ContainerDied","Data":"f6da3f3d1f321107247c22d93eb9b10da5d7347b55f44f0cf9e62fa62eebce24"} Nov 25 11:50:58 crc kubenswrapper[4706]: I1125 11:50:58.950395 4706 scope.go:117] "RemoveContainer" containerID="0b8794f06a932013c3a38945f9182abef519de5642200f1080f1e8a5359a03b2" Nov 25 11:50:58 crc kubenswrapper[4706]: I1125 11:50:58.950480 4706 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-vxcn8" Nov 25 11:50:58 crc kubenswrapper[4706]: I1125 11:50:58.986213 4706 scope.go:117] "RemoveContainer" containerID="e3da9ee60ed57adc3fe72b5104617159da854b7975d5f502a1467892abd2ba44" Nov 25 11:50:58 crc kubenswrapper[4706]: I1125 11:50:58.988294 4706 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-vxcn8"] Nov 25 11:50:58 crc kubenswrapper[4706]: I1125 11:50:58.995166 4706 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-vxcn8"] Nov 25 11:50:59 crc kubenswrapper[4706]: I1125 11:50:59.035404 4706 scope.go:117] "RemoveContainer" containerID="61d04dca5bb321a4990ada2a218a97d89db7986ce0c6b142bb390f6cf1c12d8f" Nov 25 11:50:59 crc kubenswrapper[4706]: I1125 11:50:59.931172 4706 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f050e9f9-24f9-4833-a272-b246b5ceccce" path="/var/lib/kubelet/pods/f050e9f9-24f9-4833-a272-b246b5ceccce/volumes" Nov 25 11:50:59 crc kubenswrapper[4706]: I1125 11:50:59.968468 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-gfpwp" event={"ID":"4fe1be78-8453-460d-abc1-7c4b89923fe5","Type":"ContainerStarted","Data":"c825f89b4b50e025d85487ae8bbfe3c6fc6df80ff8e840507da96fab1e35a803"} Nov 25 11:50:59 crc kubenswrapper[4706]: I1125 11:50:59.968521 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-gfpwp" 
event={"ID":"4fe1be78-8453-460d-abc1-7c4b89923fe5","Type":"ContainerStarted","Data":"633a731154e0573bdab9ac307fb63f4d846afb7112e9ad83e528dfd37e939873"} Nov 25 11:50:59 crc kubenswrapper[4706]: I1125 11:50:59.968535 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-gfpwp" event={"ID":"4fe1be78-8453-460d-abc1-7c4b89923fe5","Type":"ContainerStarted","Data":"5b1e36d810a7c9f6b23ed8e4c95d9ddcf4b13403323702fae1dd8deb695947ff"} Nov 25 11:50:59 crc kubenswrapper[4706]: I1125 11:50:59.968545 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-gfpwp" event={"ID":"4fe1be78-8453-460d-abc1-7c4b89923fe5","Type":"ContainerStarted","Data":"2a28ca693de3543c2473adc2a20e7d686a36101b47658972841e0e9473206960"} Nov 25 11:50:59 crc kubenswrapper[4706]: I1125 11:50:59.968554 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-gfpwp" event={"ID":"4fe1be78-8453-460d-abc1-7c4b89923fe5","Type":"ContainerStarted","Data":"65cc674bd9300f42c5479c02e8e52c20f64eff95d45b8a4480dc980f2dda90c7"} Nov 25 11:51:00 crc kubenswrapper[4706]: I1125 11:51:00.981198 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-gfpwp" event={"ID":"4fe1be78-8453-460d-abc1-7c4b89923fe5","Type":"ContainerStarted","Data":"2de3efe356038847c03cada155b8d1ddb29e1017d85ee50cba3a3b16cfc8bdd9"} Nov 25 11:51:00 crc kubenswrapper[4706]: I1125 11:51:00.981635 4706 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-gfpwp" Nov 25 11:51:01 crc kubenswrapper[4706]: I1125 11:51:01.007199 4706 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-gfpwp" podStartSLOduration=6.493035387 podStartE2EDuration="14.007174071s" podCreationTimestamp="2025-11-25 11:50:47 +0000 UTC" firstStartedPulling="2025-11-25 11:50:48.561903751 +0000 UTC m=+857.476461132" lastFinishedPulling="2025-11-25 11:50:56.076042435 +0000 UTC m=+864.990599816" 
observedRunningTime="2025-11-25 11:51:01.004397562 +0000 UTC m=+869.918954953" watchObservedRunningTime="2025-11-25 11:51:01.007174071 +0000 UTC m=+869.921731452" Nov 25 11:51:03 crc kubenswrapper[4706]: I1125 11:51:03.373000 4706 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="metallb-system/frr-k8s-gfpwp" Nov 25 11:51:03 crc kubenswrapper[4706]: I1125 11:51:03.415202 4706 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="metallb-system/frr-k8s-gfpwp" Nov 25 11:51:08 crc kubenswrapper[4706]: I1125 11:51:08.393588 4706 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-webhook-server-6998585d5-9gk5w" Nov 25 11:51:08 crc kubenswrapper[4706]: I1125 11:51:08.525407 4706 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/controller-6c7b4b5f48-5gnwd" Nov 25 11:51:09 crc kubenswrapper[4706]: I1125 11:51:09.985673 4706 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/speaker-2w52p" Nov 25 11:51:13 crc kubenswrapper[4706]: I1125 11:51:13.061644 4706 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-kmq72"] Nov 25 11:51:13 crc kubenswrapper[4706]: E1125 11:51:13.062759 4706 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1b88b3a8-9948-44ff-980e-3775fe2b490a" containerName="registry-server" Nov 25 11:51:13 crc kubenswrapper[4706]: I1125 11:51:13.062776 4706 state_mem.go:107] "Deleted CPUSet assignment" podUID="1b88b3a8-9948-44ff-980e-3775fe2b490a" containerName="registry-server" Nov 25 11:51:13 crc kubenswrapper[4706]: E1125 11:51:13.062788 4706 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f050e9f9-24f9-4833-a272-b246b5ceccce" containerName="registry-server" Nov 25 11:51:13 crc kubenswrapper[4706]: I1125 11:51:13.062794 4706 state_mem.go:107] "Deleted CPUSet assignment" podUID="f050e9f9-24f9-4833-a272-b246b5ceccce" 
containerName="registry-server" Nov 25 11:51:13 crc kubenswrapper[4706]: E1125 11:51:13.062810 4706 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1b88b3a8-9948-44ff-980e-3775fe2b490a" containerName="extract-content" Nov 25 11:51:13 crc kubenswrapper[4706]: I1125 11:51:13.062816 4706 state_mem.go:107] "Deleted CPUSet assignment" podUID="1b88b3a8-9948-44ff-980e-3775fe2b490a" containerName="extract-content" Nov 25 11:51:13 crc kubenswrapper[4706]: E1125 11:51:13.062826 4706 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f050e9f9-24f9-4833-a272-b246b5ceccce" containerName="extract-content" Nov 25 11:51:13 crc kubenswrapper[4706]: I1125 11:51:13.062833 4706 state_mem.go:107] "Deleted CPUSet assignment" podUID="f050e9f9-24f9-4833-a272-b246b5ceccce" containerName="extract-content" Nov 25 11:51:13 crc kubenswrapper[4706]: E1125 11:51:13.062842 4706 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1b88b3a8-9948-44ff-980e-3775fe2b490a" containerName="extract-utilities" Nov 25 11:51:13 crc kubenswrapper[4706]: I1125 11:51:13.062847 4706 state_mem.go:107] "Deleted CPUSet assignment" podUID="1b88b3a8-9948-44ff-980e-3775fe2b490a" containerName="extract-utilities" Nov 25 11:51:13 crc kubenswrapper[4706]: E1125 11:51:13.062866 4706 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f050e9f9-24f9-4833-a272-b246b5ceccce" containerName="extract-utilities" Nov 25 11:51:13 crc kubenswrapper[4706]: I1125 11:51:13.062872 4706 state_mem.go:107] "Deleted CPUSet assignment" podUID="f050e9f9-24f9-4833-a272-b246b5ceccce" containerName="extract-utilities" Nov 25 11:51:13 crc kubenswrapper[4706]: I1125 11:51:13.062989 4706 memory_manager.go:354] "RemoveStaleState removing state" podUID="f050e9f9-24f9-4833-a272-b246b5ceccce" containerName="registry-server" Nov 25 11:51:13 crc kubenswrapper[4706]: I1125 11:51:13.063008 4706 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="1b88b3a8-9948-44ff-980e-3775fe2b490a" containerName="registry-server" Nov 25 11:51:13 crc kubenswrapper[4706]: I1125 11:51:13.063515 4706 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-kmq72" Nov 25 11:51:13 crc kubenswrapper[4706]: I1125 11:51:13.066291 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-index-dockercfg-lk58c" Nov 25 11:51:13 crc kubenswrapper[4706]: I1125 11:51:13.066753 4706 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"kube-root-ca.crt" Nov 25 11:51:13 crc kubenswrapper[4706]: I1125 11:51:13.067586 4706 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"openshift-service-ca.crt" Nov 25 11:51:13 crc kubenswrapper[4706]: I1125 11:51:13.132169 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ll8lr\" (UniqueName: \"kubernetes.io/projected/023577f5-dc07-435b-866f-4d30c7e955a3-kube-api-access-ll8lr\") pod \"openstack-operator-index-kmq72\" (UID: \"023577f5-dc07-435b-866f-4d30c7e955a3\") " pod="openstack-operators/openstack-operator-index-kmq72" Nov 25 11:51:13 crc kubenswrapper[4706]: I1125 11:51:13.135405 4706 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-kmq72"] Nov 25 11:51:13 crc kubenswrapper[4706]: I1125 11:51:13.232858 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ll8lr\" (UniqueName: \"kubernetes.io/projected/023577f5-dc07-435b-866f-4d30c7e955a3-kube-api-access-ll8lr\") pod \"openstack-operator-index-kmq72\" (UID: \"023577f5-dc07-435b-866f-4d30c7e955a3\") " pod="openstack-operators/openstack-operator-index-kmq72" Nov 25 11:51:13 crc kubenswrapper[4706]: I1125 11:51:13.255171 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-ll8lr\" (UniqueName: \"kubernetes.io/projected/023577f5-dc07-435b-866f-4d30c7e955a3-kube-api-access-ll8lr\") pod \"openstack-operator-index-kmq72\" (UID: \"023577f5-dc07-435b-866f-4d30c7e955a3\") " pod="openstack-operators/openstack-operator-index-kmq72" Nov 25 11:51:13 crc kubenswrapper[4706]: I1125 11:51:13.384065 4706 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-kmq72" Nov 25 11:51:13 crc kubenswrapper[4706]: I1125 11:51:13.582419 4706 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-kmq72"] Nov 25 11:51:13 crc kubenswrapper[4706]: W1125 11:51:13.588405 4706 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod023577f5_dc07_435b_866f_4d30c7e955a3.slice/crio-1c94825bba31a22dfb4b2d84878700979c7a4c8d0f746ff1ebfec779ebe393e0 WatchSource:0}: Error finding container 1c94825bba31a22dfb4b2d84878700979c7a4c8d0f746ff1ebfec779ebe393e0: Status 404 returned error can't find the container with id 1c94825bba31a22dfb4b2d84878700979c7a4c8d0f746ff1ebfec779ebe393e0 Nov 25 11:51:14 crc kubenswrapper[4706]: I1125 11:51:14.111316 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-kmq72" event={"ID":"023577f5-dc07-435b-866f-4d30c7e955a3","Type":"ContainerStarted","Data":"1c94825bba31a22dfb4b2d84878700979c7a4c8d0f746ff1ebfec779ebe393e0"} Nov 25 11:51:15 crc kubenswrapper[4706]: I1125 11:51:15.827819 4706 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-kmq72"] Nov 25 11:51:16 crc kubenswrapper[4706]: I1125 11:51:16.435896 4706 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-g64cw"] Nov 25 11:51:16 crc kubenswrapper[4706]: I1125 11:51:16.436992 4706 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-g64cw" Nov 25 11:51:16 crc kubenswrapper[4706]: I1125 11:51:16.443530 4706 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-g64cw"] Nov 25 11:51:16 crc kubenswrapper[4706]: I1125 11:51:16.573107 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gsjrv\" (UniqueName: \"kubernetes.io/projected/fa3da9d1-2214-4436-951b-2f2ec4c05104-kube-api-access-gsjrv\") pod \"openstack-operator-index-g64cw\" (UID: \"fa3da9d1-2214-4436-951b-2f2ec4c05104\") " pod="openstack-operators/openstack-operator-index-g64cw" Nov 25 11:51:16 crc kubenswrapper[4706]: I1125 11:51:16.674449 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gsjrv\" (UniqueName: \"kubernetes.io/projected/fa3da9d1-2214-4436-951b-2f2ec4c05104-kube-api-access-gsjrv\") pod \"openstack-operator-index-g64cw\" (UID: \"fa3da9d1-2214-4436-951b-2f2ec4c05104\") " pod="openstack-operators/openstack-operator-index-g64cw" Nov 25 11:51:16 crc kubenswrapper[4706]: I1125 11:51:16.693771 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gsjrv\" (UniqueName: \"kubernetes.io/projected/fa3da9d1-2214-4436-951b-2f2ec4c05104-kube-api-access-gsjrv\") pod \"openstack-operator-index-g64cw\" (UID: \"fa3da9d1-2214-4436-951b-2f2ec4c05104\") " pod="openstack-operators/openstack-operator-index-g64cw" Nov 25 11:51:16 crc kubenswrapper[4706]: I1125 11:51:16.764066 4706 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-g64cw" Nov 25 11:51:17 crc kubenswrapper[4706]: I1125 11:51:17.233443 4706 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-g64cw"] Nov 25 11:51:17 crc kubenswrapper[4706]: W1125 11:51:17.239892 4706 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfa3da9d1_2214_4436_951b_2f2ec4c05104.slice/crio-09f9a8e34239af8e932f0016628abb5ed4e2ddc0ce0a294e2f1f9f64cb45cbe9 WatchSource:0}: Error finding container 09f9a8e34239af8e932f0016628abb5ed4e2ddc0ce0a294e2f1f9f64cb45cbe9: Status 404 returned error can't find the container with id 09f9a8e34239af8e932f0016628abb5ed4e2ddc0ce0a294e2f1f9f64cb45cbe9 Nov 25 11:51:18 crc kubenswrapper[4706]: I1125 11:51:18.142265 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-g64cw" event={"ID":"fa3da9d1-2214-4436-951b-2f2ec4c05104","Type":"ContainerStarted","Data":"09f9a8e34239af8e932f0016628abb5ed4e2ddc0ce0a294e2f1f9f64cb45cbe9"} Nov 25 11:51:18 crc kubenswrapper[4706]: I1125 11:51:18.378017 4706 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-gfpwp" Nov 25 11:51:25 crc kubenswrapper[4706]: I1125 11:51:25.188755 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-kmq72" event={"ID":"023577f5-dc07-435b-866f-4d30c7e955a3","Type":"ContainerStarted","Data":"8726171ad69c1594fa0c08d8762b9492dc42e9e0c05cc76d8f256dc5d156a264"} Nov 25 11:51:25 crc kubenswrapper[4706]: I1125 11:51:25.188871 4706 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/openstack-operator-index-kmq72" podUID="023577f5-dc07-435b-866f-4d30c7e955a3" containerName="registry-server" containerID="cri-o://8726171ad69c1594fa0c08d8762b9492dc42e9e0c05cc76d8f256dc5d156a264" gracePeriod=2 
Nov 25 11:51:25 crc kubenswrapper[4706]: I1125 11:51:25.190335 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-g64cw" event={"ID":"fa3da9d1-2214-4436-951b-2f2ec4c05104","Type":"ContainerStarted","Data":"18dc2f63190ee18ed65d47a7003feed53499f19ece0fd704f85dbde1f86a5b6b"} Nov 25 11:51:25 crc kubenswrapper[4706]: I1125 11:51:25.209615 4706 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-kmq72" podStartSLOduration=0.977318038 podStartE2EDuration="12.209591521s" podCreationTimestamp="2025-11-25 11:51:13 +0000 UTC" firstStartedPulling="2025-11-25 11:51:13.593550317 +0000 UTC m=+882.508107698" lastFinishedPulling="2025-11-25 11:51:24.8258238 +0000 UTC m=+893.740381181" observedRunningTime="2025-11-25 11:51:25.205622632 +0000 UTC m=+894.120180003" watchObservedRunningTime="2025-11-25 11:51:25.209591521 +0000 UTC m=+894.124148902" Nov 25 11:51:25 crc kubenswrapper[4706]: I1125 11:51:25.239262 4706 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-g64cw" podStartSLOduration=1.637175038 podStartE2EDuration="9.239242844s" podCreationTimestamp="2025-11-25 11:51:16 +0000 UTC" firstStartedPulling="2025-11-25 11:51:17.242536454 +0000 UTC m=+886.157093835" lastFinishedPulling="2025-11-25 11:51:24.84460424 +0000 UTC m=+893.759161641" observedRunningTime="2025-11-25 11:51:25.221880539 +0000 UTC m=+894.136437920" watchObservedRunningTime="2025-11-25 11:51:25.239242844 +0000 UTC m=+894.153800245" Nov 25 11:51:25 crc kubenswrapper[4706]: I1125 11:51:25.641534 4706 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-kmq72" Nov 25 11:51:25 crc kubenswrapper[4706]: I1125 11:51:25.771787 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ll8lr\" (UniqueName: \"kubernetes.io/projected/023577f5-dc07-435b-866f-4d30c7e955a3-kube-api-access-ll8lr\") pod \"023577f5-dc07-435b-866f-4d30c7e955a3\" (UID: \"023577f5-dc07-435b-866f-4d30c7e955a3\") " Nov 25 11:51:25 crc kubenswrapper[4706]: I1125 11:51:25.777322 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/023577f5-dc07-435b-866f-4d30c7e955a3-kube-api-access-ll8lr" (OuterVolumeSpecName: "kube-api-access-ll8lr") pod "023577f5-dc07-435b-866f-4d30c7e955a3" (UID: "023577f5-dc07-435b-866f-4d30c7e955a3"). InnerVolumeSpecName "kube-api-access-ll8lr". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 11:51:25 crc kubenswrapper[4706]: I1125 11:51:25.873357 4706 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ll8lr\" (UniqueName: \"kubernetes.io/projected/023577f5-dc07-435b-866f-4d30c7e955a3-kube-api-access-ll8lr\") on node \"crc\" DevicePath \"\"" Nov 25 11:51:26 crc kubenswrapper[4706]: I1125 11:51:26.199886 4706 generic.go:334] "Generic (PLEG): container finished" podID="023577f5-dc07-435b-866f-4d30c7e955a3" containerID="8726171ad69c1594fa0c08d8762b9492dc42e9e0c05cc76d8f256dc5d156a264" exitCode=0 Nov 25 11:51:26 crc kubenswrapper[4706]: I1125 11:51:26.199974 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-kmq72" event={"ID":"023577f5-dc07-435b-866f-4d30c7e955a3","Type":"ContainerDied","Data":"8726171ad69c1594fa0c08d8762b9492dc42e9e0c05cc76d8f256dc5d156a264"} Nov 25 11:51:26 crc kubenswrapper[4706]: I1125 11:51:26.200030 4706 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-kmq72" Nov 25 11:51:26 crc kubenswrapper[4706]: I1125 11:51:26.200060 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-kmq72" event={"ID":"023577f5-dc07-435b-866f-4d30c7e955a3","Type":"ContainerDied","Data":"1c94825bba31a22dfb4b2d84878700979c7a4c8d0f746ff1ebfec779ebe393e0"} Nov 25 11:51:26 crc kubenswrapper[4706]: I1125 11:51:26.200083 4706 scope.go:117] "RemoveContainer" containerID="8726171ad69c1594fa0c08d8762b9492dc42e9e0c05cc76d8f256dc5d156a264" Nov 25 11:51:26 crc kubenswrapper[4706]: I1125 11:51:26.222053 4706 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-kmq72"] Nov 25 11:51:26 crc kubenswrapper[4706]: I1125 11:51:26.226795 4706 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/openstack-operator-index-kmq72"] Nov 25 11:51:26 crc kubenswrapper[4706]: I1125 11:51:26.227991 4706 scope.go:117] "RemoveContainer" containerID="8726171ad69c1594fa0c08d8762b9492dc42e9e0c05cc76d8f256dc5d156a264" Nov 25 11:51:26 crc kubenswrapper[4706]: E1125 11:51:26.228613 4706 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8726171ad69c1594fa0c08d8762b9492dc42e9e0c05cc76d8f256dc5d156a264\": container with ID starting with 8726171ad69c1594fa0c08d8762b9492dc42e9e0c05cc76d8f256dc5d156a264 not found: ID does not exist" containerID="8726171ad69c1594fa0c08d8762b9492dc42e9e0c05cc76d8f256dc5d156a264" Nov 25 11:51:26 crc kubenswrapper[4706]: I1125 11:51:26.228660 4706 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8726171ad69c1594fa0c08d8762b9492dc42e9e0c05cc76d8f256dc5d156a264"} err="failed to get container status \"8726171ad69c1594fa0c08d8762b9492dc42e9e0c05cc76d8f256dc5d156a264\": rpc error: code = NotFound desc = could not find container 
\"8726171ad69c1594fa0c08d8762b9492dc42e9e0c05cc76d8f256dc5d156a264\": container with ID starting with 8726171ad69c1594fa0c08d8762b9492dc42e9e0c05cc76d8f256dc5d156a264 not found: ID does not exist" Nov 25 11:51:26 crc kubenswrapper[4706]: I1125 11:51:26.765289 4706 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-index-g64cw" Nov 25 11:51:26 crc kubenswrapper[4706]: I1125 11:51:26.765409 4706 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack-operators/openstack-operator-index-g64cw" Nov 25 11:51:26 crc kubenswrapper[4706]: I1125 11:51:26.805493 4706 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack-operators/openstack-operator-index-g64cw" Nov 25 11:51:27 crc kubenswrapper[4706]: I1125 11:51:27.942420 4706 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="023577f5-dc07-435b-866f-4d30c7e955a3" path="/var/lib/kubelet/pods/023577f5-dc07-435b-866f-4d30c7e955a3/volumes" Nov 25 11:51:36 crc kubenswrapper[4706]: I1125 11:51:36.796292 4706 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-index-g64cw" Nov 25 11:51:41 crc kubenswrapper[4706]: I1125 11:51:41.476191 4706 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/6cf372469a5f9156fbb7e5b80b05d9810593b0772b02df8e6f722f5cd17d8fv"] Nov 25 11:51:41 crc kubenswrapper[4706]: E1125 11:51:41.477219 4706 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="023577f5-dc07-435b-866f-4d30c7e955a3" containerName="registry-server" Nov 25 11:51:41 crc kubenswrapper[4706]: I1125 11:51:41.477239 4706 state_mem.go:107] "Deleted CPUSet assignment" podUID="023577f5-dc07-435b-866f-4d30c7e955a3" containerName="registry-server" Nov 25 11:51:41 crc kubenswrapper[4706]: I1125 11:51:41.477412 4706 memory_manager.go:354] "RemoveStaleState removing state" podUID="023577f5-dc07-435b-866f-4d30c7e955a3" 
containerName="registry-server" Nov 25 11:51:41 crc kubenswrapper[4706]: I1125 11:51:41.478504 4706 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/6cf372469a5f9156fbb7e5b80b05d9810593b0772b02df8e6f722f5cd17d8fv" Nov 25 11:51:41 crc kubenswrapper[4706]: I1125 11:51:41.480151 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"default-dockercfg-v95s5" Nov 25 11:51:41 crc kubenswrapper[4706]: I1125 11:51:41.484725 4706 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/6cf372469a5f9156fbb7e5b80b05d9810593b0772b02df8e6f722f5cd17d8fv"] Nov 25 11:51:41 crc kubenswrapper[4706]: I1125 11:51:41.601013 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/787337fb-0b33-488b-a1b5-c680273f2c5b-bundle\") pod \"6cf372469a5f9156fbb7e5b80b05d9810593b0772b02df8e6f722f5cd17d8fv\" (UID: \"787337fb-0b33-488b-a1b5-c680273f2c5b\") " pod="openstack-operators/6cf372469a5f9156fbb7e5b80b05d9810593b0772b02df8e6f722f5cd17d8fv" Nov 25 11:51:41 crc kubenswrapper[4706]: I1125 11:51:41.601128 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/787337fb-0b33-488b-a1b5-c680273f2c5b-util\") pod \"6cf372469a5f9156fbb7e5b80b05d9810593b0772b02df8e6f722f5cd17d8fv\" (UID: \"787337fb-0b33-488b-a1b5-c680273f2c5b\") " pod="openstack-operators/6cf372469a5f9156fbb7e5b80b05d9810593b0772b02df8e6f722f5cd17d8fv" Nov 25 11:51:41 crc kubenswrapper[4706]: I1125 11:51:41.601158 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bjzmr\" (UniqueName: \"kubernetes.io/projected/787337fb-0b33-488b-a1b5-c680273f2c5b-kube-api-access-bjzmr\") pod \"6cf372469a5f9156fbb7e5b80b05d9810593b0772b02df8e6f722f5cd17d8fv\" (UID: \"787337fb-0b33-488b-a1b5-c680273f2c5b\") " 
pod="openstack-operators/6cf372469a5f9156fbb7e5b80b05d9810593b0772b02df8e6f722f5cd17d8fv" Nov 25 11:51:41 crc kubenswrapper[4706]: I1125 11:51:41.702739 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/787337fb-0b33-488b-a1b5-c680273f2c5b-bundle\") pod \"6cf372469a5f9156fbb7e5b80b05d9810593b0772b02df8e6f722f5cd17d8fv\" (UID: \"787337fb-0b33-488b-a1b5-c680273f2c5b\") " pod="openstack-operators/6cf372469a5f9156fbb7e5b80b05d9810593b0772b02df8e6f722f5cd17d8fv" Nov 25 11:51:41 crc kubenswrapper[4706]: I1125 11:51:41.702854 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/787337fb-0b33-488b-a1b5-c680273f2c5b-util\") pod \"6cf372469a5f9156fbb7e5b80b05d9810593b0772b02df8e6f722f5cd17d8fv\" (UID: \"787337fb-0b33-488b-a1b5-c680273f2c5b\") " pod="openstack-operators/6cf372469a5f9156fbb7e5b80b05d9810593b0772b02df8e6f722f5cd17d8fv" Nov 25 11:51:41 crc kubenswrapper[4706]: I1125 11:51:41.702876 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bjzmr\" (UniqueName: \"kubernetes.io/projected/787337fb-0b33-488b-a1b5-c680273f2c5b-kube-api-access-bjzmr\") pod \"6cf372469a5f9156fbb7e5b80b05d9810593b0772b02df8e6f722f5cd17d8fv\" (UID: \"787337fb-0b33-488b-a1b5-c680273f2c5b\") " pod="openstack-operators/6cf372469a5f9156fbb7e5b80b05d9810593b0772b02df8e6f722f5cd17d8fv" Nov 25 11:51:41 crc kubenswrapper[4706]: I1125 11:51:41.703374 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/787337fb-0b33-488b-a1b5-c680273f2c5b-bundle\") pod \"6cf372469a5f9156fbb7e5b80b05d9810593b0772b02df8e6f722f5cd17d8fv\" (UID: \"787337fb-0b33-488b-a1b5-c680273f2c5b\") " pod="openstack-operators/6cf372469a5f9156fbb7e5b80b05d9810593b0772b02df8e6f722f5cd17d8fv" Nov 25 11:51:41 crc kubenswrapper[4706]: I1125 11:51:41.703729 4706 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/787337fb-0b33-488b-a1b5-c680273f2c5b-util\") pod \"6cf372469a5f9156fbb7e5b80b05d9810593b0772b02df8e6f722f5cd17d8fv\" (UID: \"787337fb-0b33-488b-a1b5-c680273f2c5b\") " pod="openstack-operators/6cf372469a5f9156fbb7e5b80b05d9810593b0772b02df8e6f722f5cd17d8fv" Nov 25 11:51:41 crc kubenswrapper[4706]: I1125 11:51:41.726248 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bjzmr\" (UniqueName: \"kubernetes.io/projected/787337fb-0b33-488b-a1b5-c680273f2c5b-kube-api-access-bjzmr\") pod \"6cf372469a5f9156fbb7e5b80b05d9810593b0772b02df8e6f722f5cd17d8fv\" (UID: \"787337fb-0b33-488b-a1b5-c680273f2c5b\") " pod="openstack-operators/6cf372469a5f9156fbb7e5b80b05d9810593b0772b02df8e6f722f5cd17d8fv" Nov 25 11:51:41 crc kubenswrapper[4706]: I1125 11:51:41.799271 4706 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/6cf372469a5f9156fbb7e5b80b05d9810593b0772b02df8e6f722f5cd17d8fv" Nov 25 11:51:42 crc kubenswrapper[4706]: I1125 11:51:42.018698 4706 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/6cf372469a5f9156fbb7e5b80b05d9810593b0772b02df8e6f722f5cd17d8fv"] Nov 25 11:51:42 crc kubenswrapper[4706]: W1125 11:51:42.025126 4706 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod787337fb_0b33_488b_a1b5_c680273f2c5b.slice/crio-08b7640c5756fcc98125ab166ba4debb9bf63379dcaab3526a18f2af8fc31064 WatchSource:0}: Error finding container 08b7640c5756fcc98125ab166ba4debb9bf63379dcaab3526a18f2af8fc31064: Status 404 returned error can't find the container with id 08b7640c5756fcc98125ab166ba4debb9bf63379dcaab3526a18f2af8fc31064 Nov 25 11:51:42 crc kubenswrapper[4706]: I1125 11:51:42.310973 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/6cf372469a5f9156fbb7e5b80b05d9810593b0772b02df8e6f722f5cd17d8fv" event={"ID":"787337fb-0b33-488b-a1b5-c680273f2c5b","Type":"ContainerStarted","Data":"08b7640c5756fcc98125ab166ba4debb9bf63379dcaab3526a18f2af8fc31064"} Nov 25 11:51:43 crc kubenswrapper[4706]: I1125 11:51:43.319024 4706 generic.go:334] "Generic (PLEG): container finished" podID="787337fb-0b33-488b-a1b5-c680273f2c5b" containerID="8ff0de69321e8f6e59642bd44e5a31774494bc1fc337f376deac2606011fe734" exitCode=0 Nov 25 11:51:43 crc kubenswrapper[4706]: I1125 11:51:43.319091 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/6cf372469a5f9156fbb7e5b80b05d9810593b0772b02df8e6f722f5cd17d8fv" event={"ID":"787337fb-0b33-488b-a1b5-c680273f2c5b","Type":"ContainerDied","Data":"8ff0de69321e8f6e59642bd44e5a31774494bc1fc337f376deac2606011fe734"} Nov 25 11:51:44 crc kubenswrapper[4706]: I1125 11:51:44.329043 4706 generic.go:334] "Generic (PLEG): container finished" podID="787337fb-0b33-488b-a1b5-c680273f2c5b" containerID="c1a125e9194bb09e6f421edb9deef2d0a8dbc9cd41f8618884d57902585f8da9" exitCode=0 Nov 25 11:51:44 crc kubenswrapper[4706]: I1125 11:51:44.329151 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/6cf372469a5f9156fbb7e5b80b05d9810593b0772b02df8e6f722f5cd17d8fv" event={"ID":"787337fb-0b33-488b-a1b5-c680273f2c5b","Type":"ContainerDied","Data":"c1a125e9194bb09e6f421edb9deef2d0a8dbc9cd41f8618884d57902585f8da9"} Nov 25 11:51:45 crc kubenswrapper[4706]: I1125 11:51:45.344984 4706 generic.go:334] "Generic (PLEG): container finished" podID="787337fb-0b33-488b-a1b5-c680273f2c5b" containerID="15e725dd77f652e4d2aff8267b60a3c753be81d64e21c92999bc09947cd8db5e" exitCode=0 Nov 25 11:51:45 crc kubenswrapper[4706]: I1125 11:51:45.345190 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/6cf372469a5f9156fbb7e5b80b05d9810593b0772b02df8e6f722f5cd17d8fv" 
event={"ID":"787337fb-0b33-488b-a1b5-c680273f2c5b","Type":"ContainerDied","Data":"15e725dd77f652e4d2aff8267b60a3c753be81d64e21c92999bc09947cd8db5e"} Nov 25 11:51:46 crc kubenswrapper[4706]: I1125 11:51:46.618960 4706 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/6cf372469a5f9156fbb7e5b80b05d9810593b0772b02df8e6f722f5cd17d8fv" Nov 25 11:51:46 crc kubenswrapper[4706]: I1125 11:51:46.802186 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bjzmr\" (UniqueName: \"kubernetes.io/projected/787337fb-0b33-488b-a1b5-c680273f2c5b-kube-api-access-bjzmr\") pod \"787337fb-0b33-488b-a1b5-c680273f2c5b\" (UID: \"787337fb-0b33-488b-a1b5-c680273f2c5b\") " Nov 25 11:51:46 crc kubenswrapper[4706]: I1125 11:51:46.802366 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/787337fb-0b33-488b-a1b5-c680273f2c5b-bundle\") pod \"787337fb-0b33-488b-a1b5-c680273f2c5b\" (UID: \"787337fb-0b33-488b-a1b5-c680273f2c5b\") " Nov 25 11:51:46 crc kubenswrapper[4706]: I1125 11:51:46.802439 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/787337fb-0b33-488b-a1b5-c680273f2c5b-util\") pod \"787337fb-0b33-488b-a1b5-c680273f2c5b\" (UID: \"787337fb-0b33-488b-a1b5-c680273f2c5b\") " Nov 25 11:51:46 crc kubenswrapper[4706]: I1125 11:51:46.803032 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/787337fb-0b33-488b-a1b5-c680273f2c5b-bundle" (OuterVolumeSpecName: "bundle") pod "787337fb-0b33-488b-a1b5-c680273f2c5b" (UID: "787337fb-0b33-488b-a1b5-c680273f2c5b"). InnerVolumeSpecName "bundle". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 11:51:46 crc kubenswrapper[4706]: I1125 11:51:46.811547 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/787337fb-0b33-488b-a1b5-c680273f2c5b-kube-api-access-bjzmr" (OuterVolumeSpecName: "kube-api-access-bjzmr") pod "787337fb-0b33-488b-a1b5-c680273f2c5b" (UID: "787337fb-0b33-488b-a1b5-c680273f2c5b"). InnerVolumeSpecName "kube-api-access-bjzmr". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 11:51:46 crc kubenswrapper[4706]: I1125 11:51:46.815695 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/787337fb-0b33-488b-a1b5-c680273f2c5b-util" (OuterVolumeSpecName: "util") pod "787337fb-0b33-488b-a1b5-c680273f2c5b" (UID: "787337fb-0b33-488b-a1b5-c680273f2c5b"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 11:51:46 crc kubenswrapper[4706]: I1125 11:51:46.904164 4706 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bjzmr\" (UniqueName: \"kubernetes.io/projected/787337fb-0b33-488b-a1b5-c680273f2c5b-kube-api-access-bjzmr\") on node \"crc\" DevicePath \"\"" Nov 25 11:51:46 crc kubenswrapper[4706]: I1125 11:51:46.904237 4706 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/787337fb-0b33-488b-a1b5-c680273f2c5b-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 11:51:46 crc kubenswrapper[4706]: I1125 11:51:46.904250 4706 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/787337fb-0b33-488b-a1b5-c680273f2c5b-util\") on node \"crc\" DevicePath \"\"" Nov 25 11:51:47 crc kubenswrapper[4706]: I1125 11:51:47.360245 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/6cf372469a5f9156fbb7e5b80b05d9810593b0772b02df8e6f722f5cd17d8fv" 
event={"ID":"787337fb-0b33-488b-a1b5-c680273f2c5b","Type":"ContainerDied","Data":"08b7640c5756fcc98125ab166ba4debb9bf63379dcaab3526a18f2af8fc31064"} Nov 25 11:51:47 crc kubenswrapper[4706]: I1125 11:51:47.360312 4706 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="08b7640c5756fcc98125ab166ba4debb9bf63379dcaab3526a18f2af8fc31064" Nov 25 11:51:47 crc kubenswrapper[4706]: I1125 11:51:47.360434 4706 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/6cf372469a5f9156fbb7e5b80b05d9810593b0772b02df8e6f722f5cd17d8fv" Nov 25 11:51:52 crc kubenswrapper[4706]: I1125 11:51:52.765842 4706 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-operator-5789f9b844-cfvkd"] Nov 25 11:51:52 crc kubenswrapper[4706]: E1125 11:51:52.768171 4706 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="787337fb-0b33-488b-a1b5-c680273f2c5b" containerName="extract" Nov 25 11:51:52 crc kubenswrapper[4706]: I1125 11:51:52.768262 4706 state_mem.go:107] "Deleted CPUSet assignment" podUID="787337fb-0b33-488b-a1b5-c680273f2c5b" containerName="extract" Nov 25 11:51:52 crc kubenswrapper[4706]: E1125 11:51:52.768353 4706 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="787337fb-0b33-488b-a1b5-c680273f2c5b" containerName="util" Nov 25 11:51:52 crc kubenswrapper[4706]: I1125 11:51:52.768440 4706 state_mem.go:107] "Deleted CPUSet assignment" podUID="787337fb-0b33-488b-a1b5-c680273f2c5b" containerName="util" Nov 25 11:51:52 crc kubenswrapper[4706]: E1125 11:51:52.768512 4706 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="787337fb-0b33-488b-a1b5-c680273f2c5b" containerName="pull" Nov 25 11:51:52 crc kubenswrapper[4706]: I1125 11:51:52.768571 4706 state_mem.go:107] "Deleted CPUSet assignment" podUID="787337fb-0b33-488b-a1b5-c680273f2c5b" containerName="pull" Nov 25 11:51:52 crc kubenswrapper[4706]: I1125 11:51:52.768792 4706 
memory_manager.go:354] "RemoveStaleState removing state" podUID="787337fb-0b33-488b-a1b5-c680273f2c5b" containerName="extract" Nov 25 11:51:52 crc kubenswrapper[4706]: I1125 11:51:52.769467 4706 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-operator-5789f9b844-cfvkd" Nov 25 11:51:52 crc kubenswrapper[4706]: I1125 11:51:52.776837 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-operator-dockercfg-v79vn" Nov 25 11:51:52 crc kubenswrapper[4706]: I1125 11:51:52.840922 4706 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-operator-5789f9b844-cfvkd"] Nov 25 11:51:52 crc kubenswrapper[4706]: I1125 11:51:52.894098 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mx8j5\" (UniqueName: \"kubernetes.io/projected/2df5f121-0564-4647-acf6-d09283ff5a94-kube-api-access-mx8j5\") pod \"openstack-operator-controller-operator-5789f9b844-cfvkd\" (UID: \"2df5f121-0564-4647-acf6-d09283ff5a94\") " pod="openstack-operators/openstack-operator-controller-operator-5789f9b844-cfvkd" Nov 25 11:51:52 crc kubenswrapper[4706]: I1125 11:51:52.994984 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mx8j5\" (UniqueName: \"kubernetes.io/projected/2df5f121-0564-4647-acf6-d09283ff5a94-kube-api-access-mx8j5\") pod \"openstack-operator-controller-operator-5789f9b844-cfvkd\" (UID: \"2df5f121-0564-4647-acf6-d09283ff5a94\") " pod="openstack-operators/openstack-operator-controller-operator-5789f9b844-cfvkd" Nov 25 11:51:53 crc kubenswrapper[4706]: I1125 11:51:53.033523 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mx8j5\" (UniqueName: \"kubernetes.io/projected/2df5f121-0564-4647-acf6-d09283ff5a94-kube-api-access-mx8j5\") pod 
\"openstack-operator-controller-operator-5789f9b844-cfvkd\" (UID: \"2df5f121-0564-4647-acf6-d09283ff5a94\") " pod="openstack-operators/openstack-operator-controller-operator-5789f9b844-cfvkd" Nov 25 11:51:53 crc kubenswrapper[4706]: I1125 11:51:53.088547 4706 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-operator-5789f9b844-cfvkd" Nov 25 11:51:53 crc kubenswrapper[4706]: I1125 11:51:53.434644 4706 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-operator-5789f9b844-cfvkd"] Nov 25 11:51:54 crc kubenswrapper[4706]: I1125 11:51:54.411211 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-operator-5789f9b844-cfvkd" event={"ID":"2df5f121-0564-4647-acf6-d09283ff5a94","Type":"ContainerStarted","Data":"ff67ed88dca9619289aa8e935ebab5ff3a1fd87fc520dc3de67765cb1b3f8a4c"} Nov 25 11:52:01 crc kubenswrapper[4706]: I1125 11:52:01.125397 4706 patch_prober.go:28] interesting pod/machine-config-daemon-dhfpm container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 25 11:52:01 crc kubenswrapper[4706]: I1125 11:52:01.126168 4706 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dhfpm" podUID="0930887a-320c-4506-8c9c-f94d6d64516a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 25 11:52:03 crc kubenswrapper[4706]: I1125 11:52:03.473520 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-operator-5789f9b844-cfvkd" 
event={"ID":"2df5f121-0564-4647-acf6-d09283ff5a94","Type":"ContainerStarted","Data":"e44190ab5cbcff354325f815ddbbd371958307bff570571987c74582e97363b1"} Nov 25 11:52:03 crc kubenswrapper[4706]: I1125 11:52:03.474854 4706 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-operator-5789f9b844-cfvkd" Nov 25 11:52:03 crc kubenswrapper[4706]: I1125 11:52:03.510436 4706 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-operator-5789f9b844-cfvkd" podStartSLOduration=2.703086511 podStartE2EDuration="11.510412574s" podCreationTimestamp="2025-11-25 11:51:52 +0000 UTC" firstStartedPulling="2025-11-25 11:51:53.45008638 +0000 UTC m=+922.364643761" lastFinishedPulling="2025-11-25 11:52:02.257412443 +0000 UTC m=+931.171969824" observedRunningTime="2025-11-25 11:52:03.506058725 +0000 UTC m=+932.420616106" watchObservedRunningTime="2025-11-25 11:52:03.510412574 +0000 UTC m=+932.424969955" Nov 25 11:52:13 crc kubenswrapper[4706]: I1125 11:52:13.093791 4706 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-operator-5789f9b844-cfvkd" Nov 25 11:52:31 crc kubenswrapper[4706]: I1125 11:52:31.133955 4706 patch_prober.go:28] interesting pod/machine-config-daemon-dhfpm container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 25 11:52:31 crc kubenswrapper[4706]: I1125 11:52:31.134660 4706 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dhfpm" podUID="0930887a-320c-4506-8c9c-f94d6d64516a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 25 
11:52:34 crc kubenswrapper[4706]: I1125 11:52:34.332379 4706 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/barbican-operator-controller-manager-86dc4d89c8-jh5hc"] Nov 25 11:52:34 crc kubenswrapper[4706]: I1125 11:52:34.334780 4706 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-86dc4d89c8-jh5hc" Nov 25 11:52:34 crc kubenswrapper[4706]: I1125 11:52:34.337559 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"barbican-operator-controller-manager-dockercfg-lfvgq" Nov 25 11:52:34 crc kubenswrapper[4706]: I1125 11:52:34.338293 4706 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/cinder-operator-controller-manager-79856dc55c-4bsmv"] Nov 25 11:52:34 crc kubenswrapper[4706]: I1125 11:52:34.339105 4706 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-79856dc55c-4bsmv" Nov 25 11:52:34 crc kubenswrapper[4706]: I1125 11:52:34.340192 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"cinder-operator-controller-manager-dockercfg-vdzbk" Nov 25 11:52:34 crc kubenswrapper[4706]: I1125 11:52:34.352705 4706 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/designate-operator-controller-manager-7d695c9b56-hqsp5"] Nov 25 11:52:34 crc kubenswrapper[4706]: I1125 11:52:34.353777 4706 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-7d695c9b56-hqsp5" Nov 25 11:52:34 crc kubenswrapper[4706]: I1125 11:52:34.355401 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"designate-operator-controller-manager-dockercfg-8v89s" Nov 25 11:52:34 crc kubenswrapper[4706]: I1125 11:52:34.362568 4706 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-86dc4d89c8-jh5hc"] Nov 25 11:52:34 crc kubenswrapper[4706]: I1125 11:52:34.367962 4706 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-79856dc55c-4bsmv"] Nov 25 11:52:34 crc kubenswrapper[4706]: I1125 11:52:34.386917 4706 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-7d695c9b56-hqsp5"] Nov 25 11:52:34 crc kubenswrapper[4706]: I1125 11:52:34.392223 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6jzpv\" (UniqueName: \"kubernetes.io/projected/ee655c82-6748-4bba-9da4-dcf73e0cff37-kube-api-access-6jzpv\") pod \"cinder-operator-controller-manager-79856dc55c-4bsmv\" (UID: \"ee655c82-6748-4bba-9da4-dcf73e0cff37\") " pod="openstack-operators/cinder-operator-controller-manager-79856dc55c-4bsmv" Nov 25 11:52:34 crc kubenswrapper[4706]: I1125 11:52:34.392320 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5gqwh\" (UniqueName: \"kubernetes.io/projected/23155e14-a775-48c5-adf9-55dcfd008040-kube-api-access-5gqwh\") pod \"barbican-operator-controller-manager-86dc4d89c8-jh5hc\" (UID: \"23155e14-a775-48c5-adf9-55dcfd008040\") " pod="openstack-operators/barbican-operator-controller-manager-86dc4d89c8-jh5hc" Nov 25 11:52:34 crc kubenswrapper[4706]: I1125 11:52:34.392362 4706 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r4xgt\" (UniqueName: \"kubernetes.io/projected/9fa65252-7bf5-4e83-beb7-dfcfa63db10d-kube-api-access-r4xgt\") pod \"designate-operator-controller-manager-7d695c9b56-hqsp5\" (UID: \"9fa65252-7bf5-4e83-beb7-dfcfa63db10d\") " pod="openstack-operators/designate-operator-controller-manager-7d695c9b56-hqsp5" Nov 25 11:52:34 crc kubenswrapper[4706]: I1125 11:52:34.400239 4706 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/glance-operator-controller-manager-68b95954c9-t6c78"] Nov 25 11:52:34 crc kubenswrapper[4706]: I1125 11:52:34.401360 4706 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-68b95954c9-t6c78" Nov 25 11:52:34 crc kubenswrapper[4706]: I1125 11:52:34.410522 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"glance-operator-controller-manager-dockercfg-lmg22" Nov 25 11:52:34 crc kubenswrapper[4706]: I1125 11:52:34.417751 4706 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/heat-operator-controller-manager-774b86978c-9bz4f"] Nov 25 11:52:34 crc kubenswrapper[4706]: I1125 11:52:34.419278 4706 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-774b86978c-9bz4f" Nov 25 11:52:34 crc kubenswrapper[4706]: I1125 11:52:34.425819 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"heat-operator-controller-manager-dockercfg-wdhpk" Nov 25 11:52:34 crc kubenswrapper[4706]: I1125 11:52:34.463490 4706 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-68b95954c9-t6c78"] Nov 25 11:52:34 crc kubenswrapper[4706]: I1125 11:52:34.484810 4706 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/horizon-operator-controller-manager-68c9694994-zx4v6"] Nov 25 11:52:34 crc kubenswrapper[4706]: I1125 11:52:34.487354 4706 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-68c9694994-zx4v6" Nov 25 11:52:34 crc kubenswrapper[4706]: I1125 11:52:34.495677 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5gqwh\" (UniqueName: \"kubernetes.io/projected/23155e14-a775-48c5-adf9-55dcfd008040-kube-api-access-5gqwh\") pod \"barbican-operator-controller-manager-86dc4d89c8-jh5hc\" (UID: \"23155e14-a775-48c5-adf9-55dcfd008040\") " pod="openstack-operators/barbican-operator-controller-manager-86dc4d89c8-jh5hc" Nov 25 11:52:34 crc kubenswrapper[4706]: I1125 11:52:34.495776 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r4xgt\" (UniqueName: \"kubernetes.io/projected/9fa65252-7bf5-4e83-beb7-dfcfa63db10d-kube-api-access-r4xgt\") pod \"designate-operator-controller-manager-7d695c9b56-hqsp5\" (UID: \"9fa65252-7bf5-4e83-beb7-dfcfa63db10d\") " pod="openstack-operators/designate-operator-controller-manager-7d695c9b56-hqsp5" Nov 25 11:52:34 crc kubenswrapper[4706]: I1125 11:52:34.495890 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-6jzpv\" (UniqueName: \"kubernetes.io/projected/ee655c82-6748-4bba-9da4-dcf73e0cff37-kube-api-access-6jzpv\") pod \"cinder-operator-controller-manager-79856dc55c-4bsmv\" (UID: \"ee655c82-6748-4bba-9da4-dcf73e0cff37\") " pod="openstack-operators/cinder-operator-controller-manager-79856dc55c-4bsmv" Nov 25 11:52:34 crc kubenswrapper[4706]: I1125 11:52:34.506814 4706 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-774b86978c-9bz4f"] Nov 25 11:52:34 crc kubenswrapper[4706]: I1125 11:52:34.511483 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"horizon-operator-controller-manager-dockercfg-qvhvr" Nov 25 11:52:34 crc kubenswrapper[4706]: I1125 11:52:34.567180 4706 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-68c9694994-zx4v6"] Nov 25 11:52:34 crc kubenswrapper[4706]: I1125 11:52:34.575115 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5gqwh\" (UniqueName: \"kubernetes.io/projected/23155e14-a775-48c5-adf9-55dcfd008040-kube-api-access-5gqwh\") pod \"barbican-operator-controller-manager-86dc4d89c8-jh5hc\" (UID: \"23155e14-a775-48c5-adf9-55dcfd008040\") " pod="openstack-operators/barbican-operator-controller-manager-86dc4d89c8-jh5hc" Nov 25 11:52:34 crc kubenswrapper[4706]: I1125 11:52:34.577459 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r4xgt\" (UniqueName: \"kubernetes.io/projected/9fa65252-7bf5-4e83-beb7-dfcfa63db10d-kube-api-access-r4xgt\") pod \"designate-operator-controller-manager-7d695c9b56-hqsp5\" (UID: \"9fa65252-7bf5-4e83-beb7-dfcfa63db10d\") " pod="openstack-operators/designate-operator-controller-manager-7d695c9b56-hqsp5" Nov 25 11:52:34 crc kubenswrapper[4706]: I1125 11:52:34.580110 4706 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack-operators/infra-operator-controller-manager-d5cc86f4b-rfz7f"] Nov 25 11:52:34 crc kubenswrapper[4706]: I1125 11:52:34.580258 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6jzpv\" (UniqueName: \"kubernetes.io/projected/ee655c82-6748-4bba-9da4-dcf73e0cff37-kube-api-access-6jzpv\") pod \"cinder-operator-controller-manager-79856dc55c-4bsmv\" (UID: \"ee655c82-6748-4bba-9da4-dcf73e0cff37\") " pod="openstack-operators/cinder-operator-controller-manager-79856dc55c-4bsmv" Nov 25 11:52:34 crc kubenswrapper[4706]: I1125 11:52:34.581519 4706 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-d5cc86f4b-rfz7f" Nov 25 11:52:34 crc kubenswrapper[4706]: I1125 11:52:34.589811 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-webhook-server-cert" Nov 25 11:52:34 crc kubenswrapper[4706]: I1125 11:52:34.589965 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-controller-manager-dockercfg-wf72p" Nov 25 11:52:34 crc kubenswrapper[4706]: I1125 11:52:34.599014 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-krp7q\" (UniqueName: \"kubernetes.io/projected/4857e509-acac-422c-87e8-2662708da599-kube-api-access-krp7q\") pod \"glance-operator-controller-manager-68b95954c9-t6c78\" (UID: \"4857e509-acac-422c-87e8-2662708da599\") " pod="openstack-operators/glance-operator-controller-manager-68b95954c9-t6c78" Nov 25 11:52:34 crc kubenswrapper[4706]: I1125 11:52:34.599106 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kmzb5\" (UniqueName: \"kubernetes.io/projected/c6de3b19-c207-4c00-8350-de810fb1f555-kube-api-access-kmzb5\") pod \"heat-operator-controller-manager-774b86978c-9bz4f\" (UID: 
\"c6de3b19-c207-4c00-8350-de810fb1f555\") " pod="openstack-operators/heat-operator-controller-manager-774b86978c-9bz4f" Nov 25 11:52:34 crc kubenswrapper[4706]: I1125 11:52:34.599166 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2v27n\" (UniqueName: \"kubernetes.io/projected/72bbe536-121d-47c0-b473-2974b238f271-kube-api-access-2v27n\") pod \"horizon-operator-controller-manager-68c9694994-zx4v6\" (UID: \"72bbe536-121d-47c0-b473-2974b238f271\") " pod="openstack-operators/horizon-operator-controller-manager-68c9694994-zx4v6" Nov 25 11:52:34 crc kubenswrapper[4706]: I1125 11:52:34.599262 4706 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/keystone-operator-controller-manager-748dc6576f-nf6gr"] Nov 25 11:52:34 crc kubenswrapper[4706]: I1125 11:52:34.600553 4706 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-748dc6576f-nf6gr" Nov 25 11:52:34 crc kubenswrapper[4706]: I1125 11:52:34.611440 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"keystone-operator-controller-manager-dockercfg-sklr8" Nov 25 11:52:34 crc kubenswrapper[4706]: I1125 11:52:34.620437 4706 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ironic-operator-controller-manager-5bfcdc958c-l4m6r"] Nov 25 11:52:34 crc kubenswrapper[4706]: I1125 11:52:34.621667 4706 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-5bfcdc958c-l4m6r" Nov 25 11:52:34 crc kubenswrapper[4706]: I1125 11:52:34.623687 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ironic-operator-controller-manager-dockercfg-7bdcv" Nov 25 11:52:34 crc kubenswrapper[4706]: I1125 11:52:34.635824 4706 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-d5cc86f4b-rfz7f"] Nov 25 11:52:34 crc kubenswrapper[4706]: I1125 11:52:34.657806 4706 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-86dc4d89c8-jh5hc" Nov 25 11:52:34 crc kubenswrapper[4706]: I1125 11:52:34.669763 4706 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/manila-operator-controller-manager-58bb8d67cc-fslzs"] Nov 25 11:52:34 crc kubenswrapper[4706]: I1125 11:52:34.671152 4706 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-58bb8d67cc-fslzs" Nov 25 11:52:34 crc kubenswrapper[4706]: I1125 11:52:34.679867 4706 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-cb6c4fdb7-bpcjw"] Nov 25 11:52:34 crc kubenswrapper[4706]: I1125 11:52:34.680891 4706 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-79856dc55c-4bsmv" Nov 25 11:52:34 crc kubenswrapper[4706]: I1125 11:52:34.681216 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"manila-operator-controller-manager-dockercfg-gnddp" Nov 25 11:52:34 crc kubenswrapper[4706]: I1125 11:52:34.682712 4706 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-cb6c4fdb7-bpcjw" Nov 25 11:52:34 crc kubenswrapper[4706]: I1125 11:52:34.689856 4706 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-7d695c9b56-hqsp5" Nov 25 11:52:34 crc kubenswrapper[4706]: I1125 11:52:34.691239 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"mariadb-operator-controller-manager-dockercfg-ztnhk" Nov 25 11:52:34 crc kubenswrapper[4706]: I1125 11:52:34.695811 4706 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-748dc6576f-nf6gr"] Nov 25 11:52:34 crc kubenswrapper[4706]: I1125 11:52:34.700829 4706 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-5bfcdc958c-l4m6r"] Nov 25 11:52:34 crc kubenswrapper[4706]: I1125 11:52:34.701544 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2v27n\" (UniqueName: \"kubernetes.io/projected/72bbe536-121d-47c0-b473-2974b238f271-kube-api-access-2v27n\") pod \"horizon-operator-controller-manager-68c9694994-zx4v6\" (UID: \"72bbe536-121d-47c0-b473-2974b238f271\") " pod="openstack-operators/horizon-operator-controller-manager-68c9694994-zx4v6" Nov 25 11:52:34 crc kubenswrapper[4706]: I1125 11:52:34.701642 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-krp7q\" (UniqueName: \"kubernetes.io/projected/4857e509-acac-422c-87e8-2662708da599-kube-api-access-krp7q\") pod \"glance-operator-controller-manager-68b95954c9-t6c78\" (UID: \"4857e509-acac-422c-87e8-2662708da599\") " pod="openstack-operators/glance-operator-controller-manager-68b95954c9-t6c78" Nov 25 11:52:34 crc kubenswrapper[4706]: I1125 11:52:34.701723 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"kube-api-access-mhl5d\" (UniqueName: \"kubernetes.io/projected/6c41fff9-feeb-4311-a7ce-7da3a71b3e9c-kube-api-access-mhl5d\") pod \"keystone-operator-controller-manager-748dc6576f-nf6gr\" (UID: \"6c41fff9-feeb-4311-a7ce-7da3a71b3e9c\") " pod="openstack-operators/keystone-operator-controller-manager-748dc6576f-nf6gr" Nov 25 11:52:34 crc kubenswrapper[4706]: I1125 11:52:34.701770 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kmzb5\" (UniqueName: \"kubernetes.io/projected/c6de3b19-c207-4c00-8350-de810fb1f555-kube-api-access-kmzb5\") pod \"heat-operator-controller-manager-774b86978c-9bz4f\" (UID: \"c6de3b19-c207-4c00-8350-de810fb1f555\") " pod="openstack-operators/heat-operator-controller-manager-774b86978c-9bz4f" Nov 25 11:52:34 crc kubenswrapper[4706]: I1125 11:52:34.701805 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ctdhj\" (UniqueName: \"kubernetes.io/projected/e204aa88-c108-491e-9a73-2fca5c2ef15c-kube-api-access-ctdhj\") pod \"infra-operator-controller-manager-d5cc86f4b-rfz7f\" (UID: \"e204aa88-c108-491e-9a73-2fca5c2ef15c\") " pod="openstack-operators/infra-operator-controller-manager-d5cc86f4b-rfz7f" Nov 25 11:52:34 crc kubenswrapper[4706]: I1125 11:52:34.703172 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/e204aa88-c108-491e-9a73-2fca5c2ef15c-cert\") pod \"infra-operator-controller-manager-d5cc86f4b-rfz7f\" (UID: \"e204aa88-c108-491e-9a73-2fca5c2ef15c\") " pod="openstack-operators/infra-operator-controller-manager-d5cc86f4b-rfz7f" Nov 25 11:52:34 crc kubenswrapper[4706]: I1125 11:52:34.710892 4706 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-cb6c4fdb7-bpcjw"] Nov 25 11:52:34 crc kubenswrapper[4706]: I1125 11:52:34.718698 4706 kubelet.go:2428] "SyncLoop UPDATE" 
source="api" pods=["openstack-operators/manila-operator-controller-manager-58bb8d67cc-fslzs"] Nov 25 11:52:34 crc kubenswrapper[4706]: I1125 11:52:34.732623 4706 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/neutron-operator-controller-manager-7c57c8bbc4-tfn29"] Nov 25 11:52:34 crc kubenswrapper[4706]: I1125 11:52:34.733817 4706 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-7c57c8bbc4-tfn29" Nov 25 11:52:34 crc kubenswrapper[4706]: I1125 11:52:34.740529 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"neutron-operator-controller-manager-dockercfg-mbbvh" Nov 25 11:52:34 crc kubenswrapper[4706]: I1125 11:52:34.745194 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2v27n\" (UniqueName: \"kubernetes.io/projected/72bbe536-121d-47c0-b473-2974b238f271-kube-api-access-2v27n\") pod \"horizon-operator-controller-manager-68c9694994-zx4v6\" (UID: \"72bbe536-121d-47c0-b473-2974b238f271\") " pod="openstack-operators/horizon-operator-controller-manager-68c9694994-zx4v6" Nov 25 11:52:34 crc kubenswrapper[4706]: I1125 11:52:34.753492 4706 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-7c57c8bbc4-tfn29"] Nov 25 11:52:34 crc kubenswrapper[4706]: I1125 11:52:34.755376 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-krp7q\" (UniqueName: \"kubernetes.io/projected/4857e509-acac-422c-87e8-2662708da599-kube-api-access-krp7q\") pod \"glance-operator-controller-manager-68b95954c9-t6c78\" (UID: \"4857e509-acac-422c-87e8-2662708da599\") " pod="openstack-operators/glance-operator-controller-manager-68b95954c9-t6c78" Nov 25 11:52:34 crc kubenswrapper[4706]: I1125 11:52:34.761847 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kmzb5\" (UniqueName: 
\"kubernetes.io/projected/c6de3b19-c207-4c00-8350-de810fb1f555-kube-api-access-kmzb5\") pod \"heat-operator-controller-manager-774b86978c-9bz4f\" (UID: \"c6de3b19-c207-4c00-8350-de810fb1f555\") " pod="openstack-operators/heat-operator-controller-manager-774b86978c-9bz4f" Nov 25 11:52:34 crc kubenswrapper[4706]: I1125 11:52:34.764460 4706 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-774b86978c-9bz4f" Nov 25 11:52:34 crc kubenswrapper[4706]: I1125 11:52:34.805649 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ctdhj\" (UniqueName: \"kubernetes.io/projected/e204aa88-c108-491e-9a73-2fca5c2ef15c-kube-api-access-ctdhj\") pod \"infra-operator-controller-manager-d5cc86f4b-rfz7f\" (UID: \"e204aa88-c108-491e-9a73-2fca5c2ef15c\") " pod="openstack-operators/infra-operator-controller-manager-d5cc86f4b-rfz7f" Nov 25 11:52:34 crc kubenswrapper[4706]: I1125 11:52:34.806144 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tfn5q\" (UniqueName: \"kubernetes.io/projected/62e72e86-38e3-4acc-8aa1-664684f27760-kube-api-access-tfn5q\") pod \"mariadb-operator-controller-manager-cb6c4fdb7-bpcjw\" (UID: \"62e72e86-38e3-4acc-8aa1-664684f27760\") " pod="openstack-operators/mariadb-operator-controller-manager-cb6c4fdb7-bpcjw" Nov 25 11:52:34 crc kubenswrapper[4706]: I1125 11:52:34.806165 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qdwj6\" (UniqueName: \"kubernetes.io/projected/3c582966-ab32-499d-8f1c-95c942dd6bb4-kube-api-access-qdwj6\") pod \"neutron-operator-controller-manager-7c57c8bbc4-tfn29\" (UID: \"3c582966-ab32-499d-8f1c-95c942dd6bb4\") " pod="openstack-operators/neutron-operator-controller-manager-7c57c8bbc4-tfn29" Nov 25 11:52:34 crc kubenswrapper[4706]: I1125 11:52:34.806183 4706 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wvdhw\" (UniqueName: \"kubernetes.io/projected/9e5a3424-dd89-4411-872f-70447506cf73-kube-api-access-wvdhw\") pod \"ironic-operator-controller-manager-5bfcdc958c-l4m6r\" (UID: \"9e5a3424-dd89-4411-872f-70447506cf73\") " pod="openstack-operators/ironic-operator-controller-manager-5bfcdc958c-l4m6r" Nov 25 11:52:34 crc kubenswrapper[4706]: I1125 11:52:34.806205 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/e204aa88-c108-491e-9a73-2fca5c2ef15c-cert\") pod \"infra-operator-controller-manager-d5cc86f4b-rfz7f\" (UID: \"e204aa88-c108-491e-9a73-2fca5c2ef15c\") " pod="openstack-operators/infra-operator-controller-manager-d5cc86f4b-rfz7f" Nov 25 11:52:34 crc kubenswrapper[4706]: I1125 11:52:34.806268 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r9zq6\" (UniqueName: \"kubernetes.io/projected/70fa0d16-065a-463f-8198-06a03414a128-kube-api-access-r9zq6\") pod \"manila-operator-controller-manager-58bb8d67cc-fslzs\" (UID: \"70fa0d16-065a-463f-8198-06a03414a128\") " pod="openstack-operators/manila-operator-controller-manager-58bb8d67cc-fslzs" Nov 25 11:52:34 crc kubenswrapper[4706]: I1125 11:52:34.806311 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mhl5d\" (UniqueName: \"kubernetes.io/projected/6c41fff9-feeb-4311-a7ce-7da3a71b3e9c-kube-api-access-mhl5d\") pod \"keystone-operator-controller-manager-748dc6576f-nf6gr\" (UID: \"6c41fff9-feeb-4311-a7ce-7da3a71b3e9c\") " pod="openstack-operators/keystone-operator-controller-manager-748dc6576f-nf6gr" Nov 25 11:52:34 crc kubenswrapper[4706]: I1125 11:52:34.813197 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/e204aa88-c108-491e-9a73-2fca5c2ef15c-cert\") pod 
\"infra-operator-controller-manager-d5cc86f4b-rfz7f\" (UID: \"e204aa88-c108-491e-9a73-2fca5c2ef15c\") " pod="openstack-operators/infra-operator-controller-manager-d5cc86f4b-rfz7f" Nov 25 11:52:34 crc kubenswrapper[4706]: I1125 11:52:34.829270 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ctdhj\" (UniqueName: \"kubernetes.io/projected/e204aa88-c108-491e-9a73-2fca5c2ef15c-kube-api-access-ctdhj\") pod \"infra-operator-controller-manager-d5cc86f4b-rfz7f\" (UID: \"e204aa88-c108-491e-9a73-2fca5c2ef15c\") " pod="openstack-operators/infra-operator-controller-manager-d5cc86f4b-rfz7f" Nov 25 11:52:34 crc kubenswrapper[4706]: I1125 11:52:34.830073 4706 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/nova-operator-controller-manager-79556f57fc-f47gl"] Nov 25 11:52:34 crc kubenswrapper[4706]: I1125 11:52:34.831234 4706 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-79556f57fc-f47gl" Nov 25 11:52:34 crc kubenswrapper[4706]: I1125 11:52:34.836018 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"nova-operator-controller-manager-dockercfg-hzz89" Nov 25 11:52:34 crc kubenswrapper[4706]: I1125 11:52:34.842380 4706 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/octavia-operator-controller-manager-fd75fd47d-2tmzq"] Nov 25 11:52:34 crc kubenswrapper[4706]: I1125 11:52:34.843595 4706 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-fd75fd47d-2tmzq" Nov 25 11:52:34 crc kubenswrapper[4706]: I1125 11:52:34.852735 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"octavia-operator-controller-manager-dockercfg-kpx5g" Nov 25 11:52:34 crc kubenswrapper[4706]: I1125 11:52:34.855483 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mhl5d\" (UniqueName: \"kubernetes.io/projected/6c41fff9-feeb-4311-a7ce-7da3a71b3e9c-kube-api-access-mhl5d\") pod \"keystone-operator-controller-manager-748dc6576f-nf6gr\" (UID: \"6c41fff9-feeb-4311-a7ce-7da3a71b3e9c\") " pod="openstack-operators/keystone-operator-controller-manager-748dc6576f-nf6gr" Nov 25 11:52:34 crc kubenswrapper[4706]: I1125 11:52:34.861192 4706 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-79556f57fc-f47gl"] Nov 25 11:52:34 crc kubenswrapper[4706]: I1125 11:52:34.861636 4706 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-68c9694994-zx4v6" Nov 25 11:52:34 crc kubenswrapper[4706]: I1125 11:52:34.899388 4706 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-fd75fd47d-2tmzq"] Nov 25 11:52:34 crc kubenswrapper[4706]: I1125 11:52:34.907969 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r9zq6\" (UniqueName: \"kubernetes.io/projected/70fa0d16-065a-463f-8198-06a03414a128-kube-api-access-r9zq6\") pod \"manila-operator-controller-manager-58bb8d67cc-fslzs\" (UID: \"70fa0d16-065a-463f-8198-06a03414a128\") " pod="openstack-operators/manila-operator-controller-manager-58bb8d67cc-fslzs" Nov 25 11:52:34 crc kubenswrapper[4706]: I1125 11:52:34.908047 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tfn5q\" (UniqueName: \"kubernetes.io/projected/62e72e86-38e3-4acc-8aa1-664684f27760-kube-api-access-tfn5q\") pod \"mariadb-operator-controller-manager-cb6c4fdb7-bpcjw\" (UID: \"62e72e86-38e3-4acc-8aa1-664684f27760\") " pod="openstack-operators/mariadb-operator-controller-manager-cb6c4fdb7-bpcjw" Nov 25 11:52:34 crc kubenswrapper[4706]: I1125 11:52:34.908094 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wvdhw\" (UniqueName: \"kubernetes.io/projected/9e5a3424-dd89-4411-872f-70447506cf73-kube-api-access-wvdhw\") pod \"ironic-operator-controller-manager-5bfcdc958c-l4m6r\" (UID: \"9e5a3424-dd89-4411-872f-70447506cf73\") " pod="openstack-operators/ironic-operator-controller-manager-5bfcdc958c-l4m6r" Nov 25 11:52:34 crc kubenswrapper[4706]: I1125 11:52:34.908123 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qdwj6\" (UniqueName: \"kubernetes.io/projected/3c582966-ab32-499d-8f1c-95c942dd6bb4-kube-api-access-qdwj6\") pod 
\"neutron-operator-controller-manager-7c57c8bbc4-tfn29\" (UID: \"3c582966-ab32-499d-8f1c-95c942dd6bb4\") " pod="openstack-operators/neutron-operator-controller-manager-7c57c8bbc4-tfn29" Nov 25 11:52:34 crc kubenswrapper[4706]: I1125 11:52:34.914479 4706 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ovn-operator-controller-manager-66cf5c67ff-nc6f7"] Nov 25 11:52:34 crc kubenswrapper[4706]: I1125 11:52:34.915797 4706 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-66cf5c67ff-nc6f7" Nov 25 11:52:34 crc kubenswrapper[4706]: I1125 11:52:34.919458 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ovn-operator-controller-manager-dockercfg-q2ntn" Nov 25 11:52:34 crc kubenswrapper[4706]: I1125 11:52:34.957958 4706 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-544b9bb9-qg7kk"] Nov 25 11:52:34 crc kubenswrapper[4706]: I1125 11:52:34.960762 4706 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-544b9bb9-qg7kk" Nov 25 11:52:34 crc kubenswrapper[4706]: I1125 11:52:34.963658 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-controller-manager-dockercfg-zwggv" Nov 25 11:52:34 crc kubenswrapper[4706]: I1125 11:52:34.963954 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-webhook-server-cert" Nov 25 11:52:34 crc kubenswrapper[4706]: I1125 11:52:34.964514 4706 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-d5cc86f4b-rfz7f" Nov 25 11:52:34 crc kubenswrapper[4706]: I1125 11:52:34.974463 4706 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-66cf5c67ff-nc6f7"] Nov 25 11:52:34 crc kubenswrapper[4706]: I1125 11:52:34.976774 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wvdhw\" (UniqueName: \"kubernetes.io/projected/9e5a3424-dd89-4411-872f-70447506cf73-kube-api-access-wvdhw\") pod \"ironic-operator-controller-manager-5bfcdc958c-l4m6r\" (UID: \"9e5a3424-dd89-4411-872f-70447506cf73\") " pod="openstack-operators/ironic-operator-controller-manager-5bfcdc958c-l4m6r" Nov 25 11:52:34 crc kubenswrapper[4706]: I1125 11:52:34.979271 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qdwj6\" (UniqueName: \"kubernetes.io/projected/3c582966-ab32-499d-8f1c-95c942dd6bb4-kube-api-access-qdwj6\") pod \"neutron-operator-controller-manager-7c57c8bbc4-tfn29\" (UID: \"3c582966-ab32-499d-8f1c-95c942dd6bb4\") " pod="openstack-operators/neutron-operator-controller-manager-7c57c8bbc4-tfn29" Nov 25 11:52:34 crc kubenswrapper[4706]: I1125 11:52:34.981098 4706 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-544b9bb9-qg7kk"] Nov 25 11:52:34 crc kubenswrapper[4706]: I1125 11:52:34.982383 4706 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-748dc6576f-nf6gr" Nov 25 11:52:34 crc kubenswrapper[4706]: I1125 11:52:34.987439 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tfn5q\" (UniqueName: \"kubernetes.io/projected/62e72e86-38e3-4acc-8aa1-664684f27760-kube-api-access-tfn5q\") pod \"mariadb-operator-controller-manager-cb6c4fdb7-bpcjw\" (UID: \"62e72e86-38e3-4acc-8aa1-664684f27760\") " pod="openstack-operators/mariadb-operator-controller-manager-cb6c4fdb7-bpcjw" Nov 25 11:52:34 crc kubenswrapper[4706]: I1125 11:52:34.996238 4706 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/placement-operator-controller-manager-5db546f9d9-k7crl"] Nov 25 11:52:35 crc kubenswrapper[4706]: I1125 11:52:34.999285 4706 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-5db546f9d9-k7crl" Nov 25 11:52:35 crc kubenswrapper[4706]: I1125 11:52:35.004192 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"placement-operator-controller-manager-dockercfg-sswwc" Nov 25 11:52:35 crc kubenswrapper[4706]: I1125 11:52:35.004424 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r9zq6\" (UniqueName: \"kubernetes.io/projected/70fa0d16-065a-463f-8198-06a03414a128-kube-api-access-r9zq6\") pod \"manila-operator-controller-manager-58bb8d67cc-fslzs\" (UID: \"70fa0d16-065a-463f-8198-06a03414a128\") " pod="openstack-operators/manila-operator-controller-manager-58bb8d67cc-fslzs" Nov 25 11:52:35 crc kubenswrapper[4706]: I1125 11:52:35.010029 4706 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-5db546f9d9-k7crl"] Nov 25 11:52:35 crc kubenswrapper[4706]: I1125 11:52:35.011407 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-8h856\" (UniqueName: \"kubernetes.io/projected/063b2f44-faa1-4a58-b77b-f2140f569b01-kube-api-access-8h856\") pod \"octavia-operator-controller-manager-fd75fd47d-2tmzq\" (UID: \"063b2f44-faa1-4a58-b77b-f2140f569b01\") " pod="openstack-operators/octavia-operator-controller-manager-fd75fd47d-2tmzq" Nov 25 11:52:35 crc kubenswrapper[4706]: I1125 11:52:35.011450 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8qf6k\" (UniqueName: \"kubernetes.io/projected/e318ee27-6b61-4c03-b697-782b25461b09-kube-api-access-8qf6k\") pod \"openstack-baremetal-operator-controller-manager-544b9bb9-qg7kk\" (UID: \"e318ee27-6b61-4c03-b697-782b25461b09\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-544b9bb9-qg7kk" Nov 25 11:52:35 crc kubenswrapper[4706]: I1125 11:52:35.011481 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qcjxh\" (UniqueName: \"kubernetes.io/projected/61b1ec50-3228-43bc-bb09-d74a7f02be52-kube-api-access-qcjxh\") pod \"ovn-operator-controller-manager-66cf5c67ff-nc6f7\" (UID: \"61b1ec50-3228-43bc-bb09-d74a7f02be52\") " pod="openstack-operators/ovn-operator-controller-manager-66cf5c67ff-nc6f7" Nov 25 11:52:35 crc kubenswrapper[4706]: I1125 11:52:35.011525 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p7gkw\" (UniqueName: \"kubernetes.io/projected/1c035858-a349-4415-8a5d-f3f2edb7c84e-kube-api-access-p7gkw\") pod \"nova-operator-controller-manager-79556f57fc-f47gl\" (UID: \"1c035858-a349-4415-8a5d-f3f2edb7c84e\") " pod="openstack-operators/nova-operator-controller-manager-79556f57fc-f47gl" Nov 25 11:52:35 crc kubenswrapper[4706]: I1125 11:52:35.011578 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-82cq6\" (UniqueName: 
\"kubernetes.io/projected/eab1279c-c99a-450e-887b-d246a2ff01aa-kube-api-access-82cq6\") pod \"placement-operator-controller-manager-5db546f9d9-k7crl\" (UID: \"eab1279c-c99a-450e-887b-d246a2ff01aa\") " pod="openstack-operators/placement-operator-controller-manager-5db546f9d9-k7crl" Nov 25 11:52:35 crc kubenswrapper[4706]: I1125 11:52:35.011631 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/e318ee27-6b61-4c03-b697-782b25461b09-cert\") pod \"openstack-baremetal-operator-controller-manager-544b9bb9-qg7kk\" (UID: \"e318ee27-6b61-4c03-b697-782b25461b09\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-544b9bb9-qg7kk" Nov 25 11:52:35 crc kubenswrapper[4706]: I1125 11:52:35.041616 4706 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-68b95954c9-t6c78" Nov 25 11:52:35 crc kubenswrapper[4706]: I1125 11:52:35.052746 4706 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-5bfcdc958c-l4m6r" Nov 25 11:52:35 crc kubenswrapper[4706]: I1125 11:52:35.088724 4706 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/swift-operator-controller-manager-6fdc4fcf86-rwbvj"] Nov 25 11:52:35 crc kubenswrapper[4706]: I1125 11:52:35.105211 4706 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-6fdc4fcf86-rwbvj" Nov 25 11:52:35 crc kubenswrapper[4706]: I1125 11:52:35.113525 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/e318ee27-6b61-4c03-b697-782b25461b09-cert\") pod \"openstack-baremetal-operator-controller-manager-544b9bb9-qg7kk\" (UID: \"e318ee27-6b61-4c03-b697-782b25461b09\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-544b9bb9-qg7kk" Nov 25 11:52:35 crc kubenswrapper[4706]: I1125 11:52:35.113601 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8h856\" (UniqueName: \"kubernetes.io/projected/063b2f44-faa1-4a58-b77b-f2140f569b01-kube-api-access-8h856\") pod \"octavia-operator-controller-manager-fd75fd47d-2tmzq\" (UID: \"063b2f44-faa1-4a58-b77b-f2140f569b01\") " pod="openstack-operators/octavia-operator-controller-manager-fd75fd47d-2tmzq" Nov 25 11:52:35 crc kubenswrapper[4706]: I1125 11:52:35.113669 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8qf6k\" (UniqueName: \"kubernetes.io/projected/e318ee27-6b61-4c03-b697-782b25461b09-kube-api-access-8qf6k\") pod \"openstack-baremetal-operator-controller-manager-544b9bb9-qg7kk\" (UID: \"e318ee27-6b61-4c03-b697-782b25461b09\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-544b9bb9-qg7kk" Nov 25 11:52:35 crc kubenswrapper[4706]: I1125 11:52:35.113708 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qcjxh\" (UniqueName: \"kubernetes.io/projected/61b1ec50-3228-43bc-bb09-d74a7f02be52-kube-api-access-qcjxh\") pod \"ovn-operator-controller-manager-66cf5c67ff-nc6f7\" (UID: \"61b1ec50-3228-43bc-bb09-d74a7f02be52\") " pod="openstack-operators/ovn-operator-controller-manager-66cf5c67ff-nc6f7" Nov 25 11:52:35 crc kubenswrapper[4706]: I1125 
11:52:35.113760 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p7gkw\" (UniqueName: \"kubernetes.io/projected/1c035858-a349-4415-8a5d-f3f2edb7c84e-kube-api-access-p7gkw\") pod \"nova-operator-controller-manager-79556f57fc-f47gl\" (UID: \"1c035858-a349-4415-8a5d-f3f2edb7c84e\") " pod="openstack-operators/nova-operator-controller-manager-79556f57fc-f47gl" Nov 25 11:52:35 crc kubenswrapper[4706]: I1125 11:52:35.113816 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-82cq6\" (UniqueName: \"kubernetes.io/projected/eab1279c-c99a-450e-887b-d246a2ff01aa-kube-api-access-82cq6\") pod \"placement-operator-controller-manager-5db546f9d9-k7crl\" (UID: \"eab1279c-c99a-450e-887b-d246a2ff01aa\") " pod="openstack-operators/placement-operator-controller-manager-5db546f9d9-k7crl" Nov 25 11:52:35 crc kubenswrapper[4706]: E1125 11:52:35.117130 4706 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Nov 25 11:52:35 crc kubenswrapper[4706]: E1125 11:52:35.117213 4706 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e318ee27-6b61-4c03-b697-782b25461b09-cert podName:e318ee27-6b61-4c03-b697-782b25461b09 nodeName:}" failed. No retries permitted until 2025-11-25 11:52:35.617194307 +0000 UTC m=+964.531751688 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/e318ee27-6b61-4c03-b697-782b25461b09-cert") pod "openstack-baremetal-operator-controller-manager-544b9bb9-qg7kk" (UID: "e318ee27-6b61-4c03-b697-782b25461b09") : secret "openstack-baremetal-operator-webhook-server-cert" not found Nov 25 11:52:35 crc kubenswrapper[4706]: I1125 11:52:35.120797 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"swift-operator-controller-manager-dockercfg-zg5d7" Nov 25 11:52:35 crc kubenswrapper[4706]: I1125 11:52:35.133840 4706 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-58bb8d67cc-fslzs" Nov 25 11:52:35 crc kubenswrapper[4706]: I1125 11:52:35.152677 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qcjxh\" (UniqueName: \"kubernetes.io/projected/61b1ec50-3228-43bc-bb09-d74a7f02be52-kube-api-access-qcjxh\") pod \"ovn-operator-controller-manager-66cf5c67ff-nc6f7\" (UID: \"61b1ec50-3228-43bc-bb09-d74a7f02be52\") " pod="openstack-operators/ovn-operator-controller-manager-66cf5c67ff-nc6f7" Nov 25 11:52:35 crc kubenswrapper[4706]: I1125 11:52:35.155849 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8qf6k\" (UniqueName: \"kubernetes.io/projected/e318ee27-6b61-4c03-b697-782b25461b09-kube-api-access-8qf6k\") pod \"openstack-baremetal-operator-controller-manager-544b9bb9-qg7kk\" (UID: \"e318ee27-6b61-4c03-b697-782b25461b09\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-544b9bb9-qg7kk" Nov 25 11:52:35 crc kubenswrapper[4706]: I1125 11:52:35.157038 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8h856\" (UniqueName: \"kubernetes.io/projected/063b2f44-faa1-4a58-b77b-f2140f569b01-kube-api-access-8h856\") pod \"octavia-operator-controller-manager-fd75fd47d-2tmzq\" (UID: 
\"063b2f44-faa1-4a58-b77b-f2140f569b01\") " pod="openstack-operators/octavia-operator-controller-manager-fd75fd47d-2tmzq" Nov 25 11:52:35 crc kubenswrapper[4706]: I1125 11:52:35.158162 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p7gkw\" (UniqueName: \"kubernetes.io/projected/1c035858-a349-4415-8a5d-f3f2edb7c84e-kube-api-access-p7gkw\") pod \"nova-operator-controller-manager-79556f57fc-f47gl\" (UID: \"1c035858-a349-4415-8a5d-f3f2edb7c84e\") " pod="openstack-operators/nova-operator-controller-manager-79556f57fc-f47gl" Nov 25 11:52:35 crc kubenswrapper[4706]: I1125 11:52:35.160932 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-82cq6\" (UniqueName: \"kubernetes.io/projected/eab1279c-c99a-450e-887b-d246a2ff01aa-kube-api-access-82cq6\") pod \"placement-operator-controller-manager-5db546f9d9-k7crl\" (UID: \"eab1279c-c99a-450e-887b-d246a2ff01aa\") " pod="openstack-operators/placement-operator-controller-manager-5db546f9d9-k7crl" Nov 25 11:52:35 crc kubenswrapper[4706]: I1125 11:52:35.167148 4706 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-cb6c4fdb7-bpcjw" Nov 25 11:52:35 crc kubenswrapper[4706]: I1125 11:52:35.208786 4706 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-7c57c8bbc4-tfn29" Nov 25 11:52:35 crc kubenswrapper[4706]: I1125 11:52:35.211914 4706 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-567f98c9d-8p5t2"] Nov 25 11:52:35 crc kubenswrapper[4706]: I1125 11:52:35.214525 4706 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-5db546f9d9-k7crl" Nov 25 11:52:35 crc kubenswrapper[4706]: I1125 11:52:35.216133 4706 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-567f98c9d-8p5t2" Nov 25 11:52:35 crc kubenswrapper[4706]: I1125 11:52:35.221684 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"telemetry-operator-controller-manager-dockercfg-sg9ch" Nov 25 11:52:35 crc kubenswrapper[4706]: I1125 11:52:35.223404 4706 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-6fdc4fcf86-rwbvj"] Nov 25 11:52:35 crc kubenswrapper[4706]: I1125 11:52:35.230077 4706 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-567f98c9d-8p5t2"] Nov 25 11:52:35 crc kubenswrapper[4706]: I1125 11:52:35.233086 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ppvxm\" (UniqueName: \"kubernetes.io/projected/a0668604-b184-4265-b9af-fc6f526d8351-kube-api-access-ppvxm\") pod \"swift-operator-controller-manager-6fdc4fcf86-rwbvj\" (UID: \"a0668604-b184-4265-b9af-fc6f526d8351\") " pod="openstack-operators/swift-operator-controller-manager-6fdc4fcf86-rwbvj" Nov 25 11:52:35 crc kubenswrapper[4706]: I1125 11:52:35.237670 4706 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/test-operator-controller-manager-5cb74df96-8rlr7"] Nov 25 11:52:35 crc kubenswrapper[4706]: I1125 11:52:35.243173 4706 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/test-operator-controller-manager-5cb74df96-8rlr7" Nov 25 11:52:35 crc kubenswrapper[4706]: I1125 11:52:35.249908 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"test-operator-controller-manager-dockercfg-cjb6d" Nov 25 11:52:35 crc kubenswrapper[4706]: I1125 11:52:35.266679 4706 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-5cb74df96-8rlr7"] Nov 25 11:52:35 crc kubenswrapper[4706]: I1125 11:52:35.318751 4706 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/watcher-operator-controller-manager-864885998-9s7hm"] Nov 25 11:52:35 crc kubenswrapper[4706]: I1125 11:52:35.320037 4706 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-864885998-9s7hm" Nov 25 11:52:35 crc kubenswrapper[4706]: I1125 11:52:35.331262 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"watcher-operator-controller-manager-dockercfg-sh56x" Nov 25 11:52:35 crc kubenswrapper[4706]: I1125 11:52:35.337437 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ppvxm\" (UniqueName: \"kubernetes.io/projected/a0668604-b184-4265-b9af-fc6f526d8351-kube-api-access-ppvxm\") pod \"swift-operator-controller-manager-6fdc4fcf86-rwbvj\" (UID: \"a0668604-b184-4265-b9af-fc6f526d8351\") " pod="openstack-operators/swift-operator-controller-manager-6fdc4fcf86-rwbvj" Nov 25 11:52:35 crc kubenswrapper[4706]: I1125 11:52:35.337811 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5clp4\" (UniqueName: \"kubernetes.io/projected/a7a52f28-6bc4-481d-8513-16dbb7b37ae1-kube-api-access-5clp4\") pod \"telemetry-operator-controller-manager-567f98c9d-8p5t2\" (UID: \"a7a52f28-6bc4-481d-8513-16dbb7b37ae1\") " 
pod="openstack-operators/telemetry-operator-controller-manager-567f98c9d-8p5t2" Nov 25 11:52:35 crc kubenswrapper[4706]: I1125 11:52:35.337927 4706 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-79556f57fc-f47gl" Nov 25 11:52:35 crc kubenswrapper[4706]: I1125 11:52:35.354131 4706 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-864885998-9s7hm"] Nov 25 11:52:35 crc kubenswrapper[4706]: I1125 11:52:35.361564 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ppvxm\" (UniqueName: \"kubernetes.io/projected/a0668604-b184-4265-b9af-fc6f526d8351-kube-api-access-ppvxm\") pod \"swift-operator-controller-manager-6fdc4fcf86-rwbvj\" (UID: \"a0668604-b184-4265-b9af-fc6f526d8351\") " pod="openstack-operators/swift-operator-controller-manager-6fdc4fcf86-rwbvj" Nov 25 11:52:35 crc kubenswrapper[4706]: I1125 11:52:35.373601 4706 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-fd75fd47d-2tmzq" Nov 25 11:52:35 crc kubenswrapper[4706]: I1125 11:52:35.385393 4706 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-manager-9cb9fb586-5854z"] Nov 25 11:52:35 crc kubenswrapper[4706]: I1125 11:52:35.386645 4706 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-9cb9fb586-5854z" Nov 25 11:52:35 crc kubenswrapper[4706]: I1125 11:52:35.389355 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-manager-dockercfg-ljpcz" Nov 25 11:52:35 crc kubenswrapper[4706]: I1125 11:52:35.389951 4706 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-66cf5c67ff-nc6f7" Nov 25 11:52:35 crc kubenswrapper[4706]: I1125 11:52:35.390535 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"metrics-server-cert" Nov 25 11:52:35 crc kubenswrapper[4706]: I1125 11:52:35.395168 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"webhook-server-cert" Nov 25 11:52:35 crc kubenswrapper[4706]: I1125 11:52:35.402175 4706 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-9cb9fb586-5854z"] Nov 25 11:52:35 crc kubenswrapper[4706]: I1125 11:52:35.407411 4706 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-x9x4q"] Nov 25 11:52:35 crc kubenswrapper[4706]: I1125 11:52:35.413219 4706 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-x9x4q" Nov 25 11:52:35 crc kubenswrapper[4706]: I1125 11:52:35.416645 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"rabbitmq-cluster-operator-controller-manager-dockercfg-ncwsm" Nov 25 11:52:35 crc kubenswrapper[4706]: I1125 11:52:35.424215 4706 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-x9x4q"] Nov 25 11:52:35 crc kubenswrapper[4706]: I1125 11:52:35.438966 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/2a90e9e4-814b-4c09-a6d3-f7ad3792f6b1-metrics-certs\") pod \"openstack-operator-controller-manager-9cb9fb586-5854z\" (UID: \"2a90e9e4-814b-4c09-a6d3-f7ad3792f6b1\") " pod="openstack-operators/openstack-operator-controller-manager-9cb9fb586-5854z" Nov 25 11:52:35 crc kubenswrapper[4706]: I1125 11:52:35.439029 4706 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-smqh6\" (UniqueName: \"kubernetes.io/projected/6b8e15c0-a70f-4b4c-8836-a2c4e7b23f60-kube-api-access-smqh6\") pod \"watcher-operator-controller-manager-864885998-9s7hm\" (UID: \"6b8e15c0-a70f-4b4c-8836-a2c4e7b23f60\") " pod="openstack-operators/watcher-operator-controller-manager-864885998-9s7hm" Nov 25 11:52:35 crc kubenswrapper[4706]: I1125 11:52:35.439056 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dhvbm\" (UniqueName: \"kubernetes.io/projected/d256078e-afd5-4218-ad5c-d5211eb846a8-kube-api-access-dhvbm\") pod \"test-operator-controller-manager-5cb74df96-8rlr7\" (UID: \"d256078e-afd5-4218-ad5c-d5211eb846a8\") " pod="openstack-operators/test-operator-controller-manager-5cb74df96-8rlr7" Nov 25 11:52:35 crc kubenswrapper[4706]: I1125 11:52:35.439077 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ctj5p\" (UniqueName: \"kubernetes.io/projected/5726a389-32eb-4f0c-938b-6f2ddbb762e7-kube-api-access-ctj5p\") pod \"rabbitmq-cluster-operator-manager-668c99d594-x9x4q\" (UID: \"5726a389-32eb-4f0c-938b-6f2ddbb762e7\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-x9x4q" Nov 25 11:52:35 crc kubenswrapper[4706]: I1125 11:52:35.439125 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/2a90e9e4-814b-4c09-a6d3-f7ad3792f6b1-webhook-certs\") pod \"openstack-operator-controller-manager-9cb9fb586-5854z\" (UID: \"2a90e9e4-814b-4c09-a6d3-f7ad3792f6b1\") " pod="openstack-operators/openstack-operator-controller-manager-9cb9fb586-5854z" Nov 25 11:52:35 crc kubenswrapper[4706]: I1125 11:52:35.439184 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5clp4\" 
(UniqueName: \"kubernetes.io/projected/a7a52f28-6bc4-481d-8513-16dbb7b37ae1-kube-api-access-5clp4\") pod \"telemetry-operator-controller-manager-567f98c9d-8p5t2\" (UID: \"a7a52f28-6bc4-481d-8513-16dbb7b37ae1\") " pod="openstack-operators/telemetry-operator-controller-manager-567f98c9d-8p5t2" Nov 25 11:52:35 crc kubenswrapper[4706]: I1125 11:52:35.439206 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w8bp9\" (UniqueName: \"kubernetes.io/projected/2a90e9e4-814b-4c09-a6d3-f7ad3792f6b1-kube-api-access-w8bp9\") pod \"openstack-operator-controller-manager-9cb9fb586-5854z\" (UID: \"2a90e9e4-814b-4c09-a6d3-f7ad3792f6b1\") " pod="openstack-operators/openstack-operator-controller-manager-9cb9fb586-5854z" Nov 25 11:52:35 crc kubenswrapper[4706]: I1125 11:52:35.454345 4706 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-79856dc55c-4bsmv"] Nov 25 11:52:35 crc kubenswrapper[4706]: I1125 11:52:35.459173 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5clp4\" (UniqueName: \"kubernetes.io/projected/a7a52f28-6bc4-481d-8513-16dbb7b37ae1-kube-api-access-5clp4\") pod \"telemetry-operator-controller-manager-567f98c9d-8p5t2\" (UID: \"a7a52f28-6bc4-481d-8513-16dbb7b37ae1\") " pod="openstack-operators/telemetry-operator-controller-manager-567f98c9d-8p5t2" Nov 25 11:52:35 crc kubenswrapper[4706]: I1125 11:52:35.540762 4706 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-6fdc4fcf86-rwbvj" Nov 25 11:52:35 crc kubenswrapper[4706]: I1125 11:52:35.541332 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-smqh6\" (UniqueName: \"kubernetes.io/projected/6b8e15c0-a70f-4b4c-8836-a2c4e7b23f60-kube-api-access-smqh6\") pod \"watcher-operator-controller-manager-864885998-9s7hm\" (UID: \"6b8e15c0-a70f-4b4c-8836-a2c4e7b23f60\") " pod="openstack-operators/watcher-operator-controller-manager-864885998-9s7hm" Nov 25 11:52:35 crc kubenswrapper[4706]: I1125 11:52:35.541374 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dhvbm\" (UniqueName: \"kubernetes.io/projected/d256078e-afd5-4218-ad5c-d5211eb846a8-kube-api-access-dhvbm\") pod \"test-operator-controller-manager-5cb74df96-8rlr7\" (UID: \"d256078e-afd5-4218-ad5c-d5211eb846a8\") " pod="openstack-operators/test-operator-controller-manager-5cb74df96-8rlr7" Nov 25 11:52:35 crc kubenswrapper[4706]: I1125 11:52:35.541391 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ctj5p\" (UniqueName: \"kubernetes.io/projected/5726a389-32eb-4f0c-938b-6f2ddbb762e7-kube-api-access-ctj5p\") pod \"rabbitmq-cluster-operator-manager-668c99d594-x9x4q\" (UID: \"5726a389-32eb-4f0c-938b-6f2ddbb762e7\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-x9x4q" Nov 25 11:52:35 crc kubenswrapper[4706]: I1125 11:52:35.541429 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/2a90e9e4-814b-4c09-a6d3-f7ad3792f6b1-webhook-certs\") pod \"openstack-operator-controller-manager-9cb9fb586-5854z\" (UID: \"2a90e9e4-814b-4c09-a6d3-f7ad3792f6b1\") " pod="openstack-operators/openstack-operator-controller-manager-9cb9fb586-5854z" Nov 25 11:52:35 crc kubenswrapper[4706]: I1125 11:52:35.541494 4706 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w8bp9\" (UniqueName: \"kubernetes.io/projected/2a90e9e4-814b-4c09-a6d3-f7ad3792f6b1-kube-api-access-w8bp9\") pod \"openstack-operator-controller-manager-9cb9fb586-5854z\" (UID: \"2a90e9e4-814b-4c09-a6d3-f7ad3792f6b1\") " pod="openstack-operators/openstack-operator-controller-manager-9cb9fb586-5854z" Nov 25 11:52:35 crc kubenswrapper[4706]: I1125 11:52:35.541521 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/2a90e9e4-814b-4c09-a6d3-f7ad3792f6b1-metrics-certs\") pod \"openstack-operator-controller-manager-9cb9fb586-5854z\" (UID: \"2a90e9e4-814b-4c09-a6d3-f7ad3792f6b1\") " pod="openstack-operators/openstack-operator-controller-manager-9cb9fb586-5854z" Nov 25 11:52:35 crc kubenswrapper[4706]: E1125 11:52:35.541676 4706 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Nov 25 11:52:35 crc kubenswrapper[4706]: E1125 11:52:35.542797 4706 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2a90e9e4-814b-4c09-a6d3-f7ad3792f6b1-metrics-certs podName:2a90e9e4-814b-4c09-a6d3-f7ad3792f6b1 nodeName:}" failed. No retries permitted until 2025-11-25 11:52:36.042772596 +0000 UTC m=+964.957329997 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/2a90e9e4-814b-4c09-a6d3-f7ad3792f6b1-metrics-certs") pod "openstack-operator-controller-manager-9cb9fb586-5854z" (UID: "2a90e9e4-814b-4c09-a6d3-f7ad3792f6b1") : secret "metrics-server-cert" not found Nov 25 11:52:35 crc kubenswrapper[4706]: E1125 11:52:35.543002 4706 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Nov 25 11:52:35 crc kubenswrapper[4706]: E1125 11:52:35.543103 4706 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2a90e9e4-814b-4c09-a6d3-f7ad3792f6b1-webhook-certs podName:2a90e9e4-814b-4c09-a6d3-f7ad3792f6b1 nodeName:}" failed. No retries permitted until 2025-11-25 11:52:36.043093864 +0000 UTC m=+964.957651245 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/2a90e9e4-814b-4c09-a6d3-f7ad3792f6b1-webhook-certs") pod "openstack-operator-controller-manager-9cb9fb586-5854z" (UID: "2a90e9e4-814b-4c09-a6d3-f7ad3792f6b1") : secret "webhook-server-cert" not found Nov 25 11:52:35 crc kubenswrapper[4706]: I1125 11:52:35.567255 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dhvbm\" (UniqueName: \"kubernetes.io/projected/d256078e-afd5-4218-ad5c-d5211eb846a8-kube-api-access-dhvbm\") pod \"test-operator-controller-manager-5cb74df96-8rlr7\" (UID: \"d256078e-afd5-4218-ad5c-d5211eb846a8\") " pod="openstack-operators/test-operator-controller-manager-5cb74df96-8rlr7" Nov 25 11:52:35 crc kubenswrapper[4706]: I1125 11:52:35.574497 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-smqh6\" (UniqueName: \"kubernetes.io/projected/6b8e15c0-a70f-4b4c-8836-a2c4e7b23f60-kube-api-access-smqh6\") pod \"watcher-operator-controller-manager-864885998-9s7hm\" (UID: \"6b8e15c0-a70f-4b4c-8836-a2c4e7b23f60\") " 
pod="openstack-operators/watcher-operator-controller-manager-864885998-9s7hm" Nov 25 11:52:35 crc kubenswrapper[4706]: I1125 11:52:35.578356 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w8bp9\" (UniqueName: \"kubernetes.io/projected/2a90e9e4-814b-4c09-a6d3-f7ad3792f6b1-kube-api-access-w8bp9\") pod \"openstack-operator-controller-manager-9cb9fb586-5854z\" (UID: \"2a90e9e4-814b-4c09-a6d3-f7ad3792f6b1\") " pod="openstack-operators/openstack-operator-controller-manager-9cb9fb586-5854z" Nov 25 11:52:35 crc kubenswrapper[4706]: I1125 11:52:35.586864 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ctj5p\" (UniqueName: \"kubernetes.io/projected/5726a389-32eb-4f0c-938b-6f2ddbb762e7-kube-api-access-ctj5p\") pod \"rabbitmq-cluster-operator-manager-668c99d594-x9x4q\" (UID: \"5726a389-32eb-4f0c-938b-6f2ddbb762e7\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-x9x4q" Nov 25 11:52:35 crc kubenswrapper[4706]: I1125 11:52:35.591674 4706 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-567f98c9d-8p5t2" Nov 25 11:52:35 crc kubenswrapper[4706]: I1125 11:52:35.628685 4706 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/test-operator-controller-manager-5cb74df96-8rlr7" Nov 25 11:52:35 crc kubenswrapper[4706]: I1125 11:52:35.643940 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/e318ee27-6b61-4c03-b697-782b25461b09-cert\") pod \"openstack-baremetal-operator-controller-manager-544b9bb9-qg7kk\" (UID: \"e318ee27-6b61-4c03-b697-782b25461b09\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-544b9bb9-qg7kk" Nov 25 11:52:35 crc kubenswrapper[4706]: E1125 11:52:35.644153 4706 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Nov 25 11:52:35 crc kubenswrapper[4706]: E1125 11:52:35.644215 4706 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e318ee27-6b61-4c03-b697-782b25461b09-cert podName:e318ee27-6b61-4c03-b697-782b25461b09 nodeName:}" failed. No retries permitted until 2025-11-25 11:52:36.644197826 +0000 UTC m=+965.558755207 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/e318ee27-6b61-4c03-b697-782b25461b09-cert") pod "openstack-baremetal-operator-controller-manager-544b9bb9-qg7kk" (UID: "e318ee27-6b61-4c03-b697-782b25461b09") : secret "openstack-baremetal-operator-webhook-server-cert" not found Nov 25 11:52:35 crc kubenswrapper[4706]: I1125 11:52:35.645030 4706 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-86dc4d89c8-jh5hc"] Nov 25 11:52:35 crc kubenswrapper[4706]: I1125 11:52:35.712228 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-79856dc55c-4bsmv" event={"ID":"ee655c82-6748-4bba-9da4-dcf73e0cff37","Type":"ContainerStarted","Data":"0af0cacb0f0abf55166ef9b5ad72135f09790d82fd1787e21c3eb0d60ede90f4"} Nov 25 11:52:35 crc kubenswrapper[4706]: I1125 11:52:35.714203 4706 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-7d695c9b56-hqsp5"] Nov 25 11:52:35 crc kubenswrapper[4706]: I1125 11:52:35.754679 4706 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-864885998-9s7hm" Nov 25 11:52:35 crc kubenswrapper[4706]: W1125 11:52:35.759670 4706 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod23155e14_a775_48c5_adf9_55dcfd008040.slice/crio-977fcbece86f283db16475b7e0c44b3b1ef56a58fa59a6eb720e30e9af49d78b WatchSource:0}: Error finding container 977fcbece86f283db16475b7e0c44b3b1ef56a58fa59a6eb720e30e9af49d78b: Status 404 returned error can't find the container with id 977fcbece86f283db16475b7e0c44b3b1ef56a58fa59a6eb720e30e9af49d78b Nov 25 11:52:35 crc kubenswrapper[4706]: I1125 11:52:35.794984 4706 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-x9x4q" Nov 25 11:52:36 crc kubenswrapper[4706]: I1125 11:52:36.051443 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/2a90e9e4-814b-4c09-a6d3-f7ad3792f6b1-webhook-certs\") pod \"openstack-operator-controller-manager-9cb9fb586-5854z\" (UID: \"2a90e9e4-814b-4c09-a6d3-f7ad3792f6b1\") " pod="openstack-operators/openstack-operator-controller-manager-9cb9fb586-5854z" Nov 25 11:52:36 crc kubenswrapper[4706]: I1125 11:52:36.051982 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/2a90e9e4-814b-4c09-a6d3-f7ad3792f6b1-metrics-certs\") pod \"openstack-operator-controller-manager-9cb9fb586-5854z\" (UID: \"2a90e9e4-814b-4c09-a6d3-f7ad3792f6b1\") " pod="openstack-operators/openstack-operator-controller-manager-9cb9fb586-5854z" Nov 25 11:52:36 crc kubenswrapper[4706]: E1125 11:52:36.051638 4706 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Nov 25 11:52:36 crc kubenswrapper[4706]: E1125 11:52:36.052309 4706 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2a90e9e4-814b-4c09-a6d3-f7ad3792f6b1-webhook-certs podName:2a90e9e4-814b-4c09-a6d3-f7ad3792f6b1 nodeName:}" failed. No retries permitted until 2025-11-25 11:52:37.052273196 +0000 UTC m=+965.966830577 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/2a90e9e4-814b-4c09-a6d3-f7ad3792f6b1-webhook-certs") pod "openstack-operator-controller-manager-9cb9fb586-5854z" (UID: "2a90e9e4-814b-4c09-a6d3-f7ad3792f6b1") : secret "webhook-server-cert" not found Nov 25 11:52:36 crc kubenswrapper[4706]: E1125 11:52:36.052378 4706 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Nov 25 11:52:36 crc kubenswrapper[4706]: E1125 11:52:36.052464 4706 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2a90e9e4-814b-4c09-a6d3-f7ad3792f6b1-metrics-certs podName:2a90e9e4-814b-4c09-a6d3-f7ad3792f6b1 nodeName:}" failed. No retries permitted until 2025-11-25 11:52:37.05244037 +0000 UTC m=+965.966997941 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/2a90e9e4-814b-4c09-a6d3-f7ad3792f6b1-metrics-certs") pod "openstack-operator-controller-manager-9cb9fb586-5854z" (UID: "2a90e9e4-814b-4c09-a6d3-f7ad3792f6b1") : secret "metrics-server-cert" not found Nov 25 11:52:36 crc kubenswrapper[4706]: I1125 11:52:36.207795 4706 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-d5cc86f4b-rfz7f"] Nov 25 11:52:36 crc kubenswrapper[4706]: I1125 11:52:36.228338 4706 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-774b86978c-9bz4f"] Nov 25 11:52:36 crc kubenswrapper[4706]: I1125 11:52:36.237289 4706 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-748dc6576f-nf6gr"] Nov 25 11:52:36 crc kubenswrapper[4706]: W1125 11:52:36.239622 4706 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4857e509_acac_422c_87e8_2662708da599.slice/crio-c4b37ed4cbecc4c3a3f4b1de274811de6e320140ee0512363b8ecd6709f17819 WatchSource:0}: Error finding container c4b37ed4cbecc4c3a3f4b1de274811de6e320140ee0512363b8ecd6709f17819: Status 404 returned error can't find the container with id c4b37ed4cbecc4c3a3f4b1de274811de6e320140ee0512363b8ecd6709f17819 Nov 25 11:52:36 crc kubenswrapper[4706]: I1125 11:52:36.251433 4706 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-68b95954c9-t6c78"] Nov 25 11:52:36 crc kubenswrapper[4706]: I1125 11:52:36.257019 4706 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-cb6c4fdb7-bpcjw"] Nov 25 11:52:36 crc kubenswrapper[4706]: W1125 11:52:36.296509 4706 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9e5a3424_dd89_4411_872f_70447506cf73.slice/crio-dcc5e377a1b3449a7de019c829eeef66de3de9919616ceb1a65f1f4966160471 WatchSource:0}: Error finding container dcc5e377a1b3449a7de019c829eeef66de3de9919616ceb1a65f1f4966160471: Status 404 returned error can't find the container with id dcc5e377a1b3449a7de019c829eeef66de3de9919616ceb1a65f1f4966160471 Nov 25 11:52:36 crc kubenswrapper[4706]: I1125 11:52:36.316638 4706 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-5bfcdc958c-l4m6r"] Nov 25 11:52:36 crc kubenswrapper[4706]: I1125 11:52:36.329557 4706 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-58bb8d67cc-fslzs"] Nov 25 11:52:36 crc kubenswrapper[4706]: E1125 11:52:36.334530 4706 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:manager,Image:quay.io/openstack-k8s-operators/placement-operator@sha256:4094e7fc11a33e8e2b6768a053cafaf5b122446d23f9113d43d520cb64e9776c,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-82cq6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod placement-operator-controller-manager-5db546f9d9-k7crl_openstack-operators(eab1279c-c99a-450e-887b-d246a2ff01aa): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Nov 25 11:52:36 crc kubenswrapper[4706]: E1125 11:52:36.337704 4706 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0,Command:[],Args:[--secure-listen-address=0.0.0.0:8443 --upstream=http://127.0.0.1:8080/ --logtostderr=true --v=0],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{134217728 0} {} BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-82cq6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod placement-operator-controller-manager-5db546f9d9-k7crl_openstack-operators(eab1279c-c99a-450e-887b-d246a2ff01aa): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Nov 25 11:52:36 crc kubenswrapper[4706]: E1125 11:52:36.340513 4706 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ErrImagePull: \"pull QPS exceeded\"]" pod="openstack-operators/placement-operator-controller-manager-5db546f9d9-k7crl" podUID="eab1279c-c99a-450e-887b-d246a2ff01aa" Nov 25 11:52:36 crc kubenswrapper[4706]: I1125 11:52:36.344182 4706 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-68c9694994-zx4v6"] Nov 25 11:52:36 crc kubenswrapper[4706]: I1125 11:52:36.349352 4706 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-5db546f9d9-k7crl"] Nov 25 11:52:36 crc kubenswrapper[4706]: I1125 11:52:36.353357 4706 kubelet.go:2428] "SyncLoop UPDATE" 
source="api" pods=["openstack-operators/neutron-operator-controller-manager-7c57c8bbc4-tfn29"] Nov 25 11:52:36 crc kubenswrapper[4706]: I1125 11:52:36.357935 4706 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-66cf5c67ff-nc6f7"] Nov 25 11:52:36 crc kubenswrapper[4706]: I1125 11:52:36.429121 4706 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-6fdc4fcf86-rwbvj"] Nov 25 11:52:36 crc kubenswrapper[4706]: I1125 11:52:36.436483 4706 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-fd75fd47d-2tmzq"] Nov 25 11:52:36 crc kubenswrapper[4706]: I1125 11:52:36.451358 4706 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-5cb74df96-8rlr7"] Nov 25 11:52:36 crc kubenswrapper[4706]: W1125 11:52:36.456453 4706 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod063b2f44_faa1_4a58_b77b_f2140f569b01.slice/crio-ba15a9c49e667d1e702e67fefa60beef14bc5a2bffef74a43828c76f2626a122 WatchSource:0}: Error finding container ba15a9c49e667d1e702e67fefa60beef14bc5a2bffef74a43828c76f2626a122: Status 404 returned error can't find the container with id ba15a9c49e667d1e702e67fefa60beef14bc5a2bffef74a43828c76f2626a122 Nov 25 11:52:36 crc kubenswrapper[4706]: I1125 11:52:36.457369 4706 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-864885998-9s7hm"] Nov 25 11:52:36 crc kubenswrapper[4706]: E1125 11:52:36.469473 4706 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/octavia-operator@sha256:442c269d79163f8da75505019c02e9f0815837aaadcaddacb8e6c12df297ca13,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-8h856,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod octavia-operator-controller-manager-fd75fd47d-2tmzq_openstack-operators(063b2f44-faa1-4a58-b77b-f2140f569b01): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Nov 25 11:52:36 crc kubenswrapper[4706]: I1125 11:52:36.475314 4706 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-x9x4q"] Nov 25 11:52:36 crc kubenswrapper[4706]: E1125 11:52:36.475830 4706 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0,Command:[],Args:[--secure-listen-address=0.0.0.0:8443 --upstream=http://127.0.0.1:8080/ --logtostderr=true --v=0],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{134217728 0} {} BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-8h856,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod octavia-operator-controller-manager-fd75fd47d-2tmzq_openstack-operators(063b2f44-faa1-4a58-b77b-f2140f569b01): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Nov 25 11:52:36 crc kubenswrapper[4706]: E1125 11:52:36.476015 4706 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/test-operator@sha256:82207e753574d4be246f86c4b074500d66cf20214aa80f0a8525cf3287a35e6d,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-dhvbm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod test-operator-controller-manager-5cb74df96-8rlr7_openstack-operators(d256078e-afd5-4218-ad5c-d5211eb846a8): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Nov 25 11:52:36 crc kubenswrapper[4706]: E1125 11:52:36.477423 4706 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ErrImagePull: \"pull QPS exceeded\"]" 
pod="openstack-operators/octavia-operator-controller-manager-fd75fd47d-2tmzq" podUID="063b2f44-faa1-4a58-b77b-f2140f569b01" Nov 25 11:52:36 crc kubenswrapper[4706]: E1125 11:52:36.478763 4706 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0,Command:[],Args:[--secure-listen-address=0.0.0.0:8443 --upstream=http://127.0.0.1:8080/ --logtostderr=true --v=0],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{134217728 0} {} BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-dhvbm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod test-operator-controller-manager-5cb74df96-8rlr7_openstack-operators(d256078e-afd5-4218-ad5c-d5211eb846a8): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Nov 25 11:52:36 crc kubenswrapper[4706]: E1125 11:52:36.478932 4706 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:manager,Image:quay.io/openstack-k8s-operators/watcher-operator@sha256:4838402d41d42c56613d43dc5041aae475a2b18e6172491d6c4d4a78a580697f,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-smqh6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod watcher-operator-controller-manager-864885998-9s7hm_openstack-operators(6b8e15c0-a70f-4b4c-8836-a2c4e7b23f60): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Nov 25 11:52:36 crc kubenswrapper[4706]: E1125 11:52:36.480584 4706 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ErrImagePull: \"pull QPS exceeded\"]" pod="openstack-operators/test-operator-controller-manager-5cb74df96-8rlr7" podUID="d256078e-afd5-4218-ad5c-d5211eb846a8" Nov 25 11:52:36 crc kubenswrapper[4706]: E1125 11:52:36.482605 4706 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0,Command:[],Args:[--secure-listen-address=0.0.0.0:8443 --upstream=http://127.0.0.1:8080/ --logtostderr=true --v=0],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{134217728 0} {} 
BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-smqh6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod watcher-operator-controller-manager-864885998-9s7hm_openstack-operators(6b8e15c0-a70f-4b4c-8836-a2c4e7b23f60): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Nov 25 11:52:36 crc kubenswrapper[4706]: I1125 11:52:36.483609 4706 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-567f98c9d-8p5t2"] Nov 25 11:52:36 crc kubenswrapper[4706]: E1125 11:52:36.483685 4706 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ErrImagePull: \"pull QPS exceeded\"]" pod="openstack-operators/watcher-operator-controller-manager-864885998-9s7hm" podUID="6b8e15c0-a70f-4b4c-8836-a2c4e7b23f60" Nov 25 11:52:36 crc kubenswrapper[4706]: W1125 11:52:36.485819 4706 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda7a52f28_6bc4_481d_8513_16dbb7b37ae1.slice/crio-87b4985744e90be24a9368a92752736529e15a757fa7a90e2d2ee5455e32d2d1 WatchSource:0}: Error finding container 87b4985744e90be24a9368a92752736529e15a757fa7a90e2d2ee5455e32d2d1: Status 404 returned error can't find the container with id 87b4985744e90be24a9368a92752736529e15a757fa7a90e2d2ee5455e32d2d1 Nov 25 11:52:36 crc kubenswrapper[4706]: E1125 11:52:36.491043 4706 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/telemetry-operator@sha256:5324a6d2f76fc3041023b0cbd09a733ef2b59f310d390e4d6483d219eb96494f,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-5clp4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod telemetry-operator-controller-manager-567f98c9d-8p5t2_openstack-operators(a7a52f28-6bc4-481d-8513-16dbb7b37ae1): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Nov 25 11:52:36 crc kubenswrapper[4706]: I1125 11:52:36.491877 4706 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-79556f57fc-f47gl"] Nov 25 11:52:36 crc kubenswrapper[4706]: W1125 11:52:36.494507 4706 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1c035858_a349_4415_8a5d_f3f2edb7c84e.slice/crio-a386fa66ab96d13e153e1c335ab390ea7d63d6a7fb6c56d79dc520e8adda7812 WatchSource:0}: Error finding container a386fa66ab96d13e153e1c335ab390ea7d63d6a7fb6c56d79dc520e8adda7812: Status 404 returned error can't find the container with id a386fa66ab96d13e153e1c335ab390ea7d63d6a7fb6c56d79dc520e8adda7812 Nov 25 11:52:36 crc kubenswrapper[4706]: E1125 11:52:36.494506 4706 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0,Command:[],Args:[--secure-listen-address=0.0.0.0:8443 
--upstream=http://127.0.0.1:8080/ --logtostderr=true --v=0],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{134217728 0} {} BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-5clp4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod telemetry-operator-controller-manager-567f98c9d-8p5t2_openstack-operators(a7a52f28-6bc4-481d-8513-16dbb7b37ae1): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Nov 25 11:52:36 crc kubenswrapper[4706]: E1125 11:52:36.494713 4706 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:operator,Image:quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2,Command:[/manager],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:9782,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OPERATOR_NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{200 -3} {} 200m DecimalSI},memory: {{524288000 0} {} 500Mi BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-ctj5p,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cluster-operator-manager-668c99d594-x9x4q_openstack-operators(5726a389-32eb-4f0c-938b-6f2ddbb762e7): 
ErrImagePull: pull QPS exceeded" logger="UnhandledError" Nov 25 11:52:36 crc kubenswrapper[4706]: E1125 11:52:36.495922 4706 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ErrImagePull: \"pull QPS exceeded\"]" pod="openstack-operators/telemetry-operator-controller-manager-567f98c9d-8p5t2" podUID="a7a52f28-6bc4-481d-8513-16dbb7b37ae1" Nov 25 11:52:36 crc kubenswrapper[4706]: E1125 11:52:36.495977 4706 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-x9x4q" podUID="5726a389-32eb-4f0c-938b-6f2ddbb762e7" Nov 25 11:52:36 crc kubenswrapper[4706]: E1125 11:52:36.498804 4706 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/nova-operator@sha256:c053e34316044f14929e16e4f0d97f9f1b24cb68b5e22b925ca74c66aaaed0a7,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-p7gkw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod nova-operator-controller-manager-79556f57fc-f47gl_openstack-operators(1c035858-a349-4415-8a5d-f3f2edb7c84e): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Nov 25 11:52:36 crc kubenswrapper[4706]: E1125 11:52:36.502656 4706 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0,Command:[],Args:[--secure-listen-address=0.0.0.0:8443 --upstream=http://127.0.0.1:8080/ --logtostderr=true 
--v=0],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{134217728 0} {} BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-p7gkw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod nova-operator-controller-manager-79556f57fc-f47gl_openstack-operators(1c035858-a349-4415-8a5d-f3f2edb7c84e): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Nov 25 11:52:36 crc kubenswrapper[4706]: E1125 11:52:36.503844 4706 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ErrImagePull: \"pull QPS exceeded\"]" pod="openstack-operators/nova-operator-controller-manager-79556f57fc-f47gl" podUID="1c035858-a349-4415-8a5d-f3f2edb7c84e" Nov 25 11:52:36 crc kubenswrapper[4706]: I1125 11:52:36.663969 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: 
\"kubernetes.io/secret/e318ee27-6b61-4c03-b697-782b25461b09-cert\") pod \"openstack-baremetal-operator-controller-manager-544b9bb9-qg7kk\" (UID: \"e318ee27-6b61-4c03-b697-782b25461b09\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-544b9bb9-qg7kk" Nov 25 11:52:36 crc kubenswrapper[4706]: E1125 11:52:36.664539 4706 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Nov 25 11:52:36 crc kubenswrapper[4706]: E1125 11:52:36.664685 4706 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e318ee27-6b61-4c03-b697-782b25461b09-cert podName:e318ee27-6b61-4c03-b697-782b25461b09 nodeName:}" failed. No retries permitted until 2025-11-25 11:52:38.664655301 +0000 UTC m=+967.579212852 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/e318ee27-6b61-4c03-b697-782b25461b09-cert") pod "openstack-baremetal-operator-controller-manager-544b9bb9-qg7kk" (UID: "e318ee27-6b61-4c03-b697-782b25461b09") : secret "openstack-baremetal-operator-webhook-server-cert" not found Nov 25 11:52:36 crc kubenswrapper[4706]: I1125 11:52:36.720269 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-5cb74df96-8rlr7" event={"ID":"d256078e-afd5-4218-ad5c-d5211eb846a8","Type":"ContainerStarted","Data":"8207e812103e0230465169b99a1057ee7a93f339ee070712895be65ff7167d2d"} Nov 25 11:52:36 crc kubenswrapper[4706]: I1125 11:52:36.721664 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-7d695c9b56-hqsp5" event={"ID":"9fa65252-7bf5-4e83-beb7-dfcfa63db10d","Type":"ContainerStarted","Data":"f51b2e5d609891f54237aaf363b0bb74f78f3b77f5f9b3218bc92b026bc7f003"} Nov 25 11:52:36 crc kubenswrapper[4706]: I1125 11:52:36.722721 4706 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openstack-operators/octavia-operator-controller-manager-fd75fd47d-2tmzq" event={"ID":"063b2f44-faa1-4a58-b77b-f2140f569b01","Type":"ContainerStarted","Data":"ba15a9c49e667d1e702e67fefa60beef14bc5a2bffef74a43828c76f2626a122"} Nov 25 11:52:36 crc kubenswrapper[4706]: E1125 11:52:36.724887 4706 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/octavia-operator@sha256:442c269d79163f8da75505019c02e9f0815837aaadcaddacb8e6c12df297ca13\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/octavia-operator-controller-manager-fd75fd47d-2tmzq" podUID="063b2f44-faa1-4a58-b77b-f2140f569b01" Nov 25 11:52:36 crc kubenswrapper[4706]: E1125 11:52:36.725034 4706 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/test-operator@sha256:82207e753574d4be246f86c4b074500d66cf20214aa80f0a8525cf3287a35e6d\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/test-operator-controller-manager-5cb74df96-8rlr7" podUID="d256078e-afd5-4218-ad5c-d5211eb846a8" Nov 25 11:52:36 crc kubenswrapper[4706]: I1125 11:52:36.728090 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-7c57c8bbc4-tfn29" event={"ID":"3c582966-ab32-499d-8f1c-95c942dd6bb4","Type":"ContainerStarted","Data":"233c3723c1d26f63b4f5498a7fb0f9a100921489ff795a222c59ab837d63b129"} Nov 25 11:52:36 crc kubenswrapper[4706]: I1125 11:52:36.729708 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/glance-operator-controller-manager-68b95954c9-t6c78" event={"ID":"4857e509-acac-422c-87e8-2662708da599","Type":"ContainerStarted","Data":"c4b37ed4cbecc4c3a3f4b1de274811de6e320140ee0512363b8ecd6709f17819"} Nov 25 11:52:36 crc kubenswrapper[4706]: I1125 11:52:36.732103 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-567f98c9d-8p5t2" event={"ID":"a7a52f28-6bc4-481d-8513-16dbb7b37ae1","Type":"ContainerStarted","Data":"87b4985744e90be24a9368a92752736529e15a757fa7a90e2d2ee5455e32d2d1"} Nov 25 11:52:36 crc kubenswrapper[4706]: E1125 11:52:36.734116 4706 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/telemetry-operator@sha256:5324a6d2f76fc3041023b0cbd09a733ef2b59f310d390e4d6483d219eb96494f\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/telemetry-operator-controller-manager-567f98c9d-8p5t2" podUID="a7a52f28-6bc4-481d-8513-16dbb7b37ae1" Nov 25 11:52:36 crc kubenswrapper[4706]: I1125 11:52:36.734826 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-86dc4d89c8-jh5hc" event={"ID":"23155e14-a775-48c5-adf9-55dcfd008040","Type":"ContainerStarted","Data":"977fcbece86f283db16475b7e0c44b3b1ef56a58fa59a6eb720e30e9af49d78b"} Nov 25 11:52:36 crc kubenswrapper[4706]: I1125 11:52:36.739865 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-d5cc86f4b-rfz7f" event={"ID":"e204aa88-c108-491e-9a73-2fca5c2ef15c","Type":"ContainerStarted","Data":"9188e7b7c9b453f20ca280ab8ad429f26fc9cc5b2bb806e37bd76f16bad71cd1"} Nov 25 11:52:36 crc kubenswrapper[4706]: I1125 11:52:36.743804 4706 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-cb6c4fdb7-bpcjw" event={"ID":"62e72e86-38e3-4acc-8aa1-664684f27760","Type":"ContainerStarted","Data":"07d2d9183b55316beeddef7f062884d0c1667cccc4c9a5b9a1fa19f59103aa51"} Nov 25 11:52:36 crc kubenswrapper[4706]: I1125 11:52:36.745822 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-748dc6576f-nf6gr" event={"ID":"6c41fff9-feeb-4311-a7ce-7da3a71b3e9c","Type":"ContainerStarted","Data":"795b98331703cd729ccf1634655a42b7096c2bca8727f4f5430d1d080e8658bb"} Nov 25 11:52:36 crc kubenswrapper[4706]: I1125 11:52:36.748058 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-5bfcdc958c-l4m6r" event={"ID":"9e5a3424-dd89-4411-872f-70447506cf73","Type":"ContainerStarted","Data":"dcc5e377a1b3449a7de019c829eeef66de3de9919616ceb1a65f1f4966160471"} Nov 25 11:52:36 crc kubenswrapper[4706]: I1125 11:52:36.749822 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-6fdc4fcf86-rwbvj" event={"ID":"a0668604-b184-4265-b9af-fc6f526d8351","Type":"ContainerStarted","Data":"81c9c41d9615eae7d2135482dcc6e666d53922fd3ae6e7e221d15c09b6aec817"} Nov 25 11:52:36 crc kubenswrapper[4706]: I1125 11:52:36.752413 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-864885998-9s7hm" event={"ID":"6b8e15c0-a70f-4b4c-8836-a2c4e7b23f60","Type":"ContainerStarted","Data":"9795cdb21451ad29fad2dab3ef07dafc6201fb9d383bd9a900fda63edd3754d3"} Nov 25 11:52:36 crc kubenswrapper[4706]: I1125 11:52:36.754814 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-66cf5c67ff-nc6f7" 
event={"ID":"61b1ec50-3228-43bc-bb09-d74a7f02be52","Type":"ContainerStarted","Data":"5e00f5b423f76fd96637390d6cccba223d2d298e1e5beee060c947ebcd6cc620"} Nov 25 11:52:36 crc kubenswrapper[4706]: I1125 11:52:36.763474 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-58bb8d67cc-fslzs" event={"ID":"70fa0d16-065a-463f-8198-06a03414a128","Type":"ContainerStarted","Data":"063391d070072261fac9c66bcb08ca92911444a20c7fd9b16295c9c238384627"} Nov 25 11:52:36 crc kubenswrapper[4706]: E1125 11:52:36.764586 4706 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/watcher-operator@sha256:4838402d41d42c56613d43dc5041aae475a2b18e6172491d6c4d4a78a580697f\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/watcher-operator-controller-manager-864885998-9s7hm" podUID="6b8e15c0-a70f-4b4c-8836-a2c4e7b23f60" Nov 25 11:52:36 crc kubenswrapper[4706]: I1125 11:52:36.767625 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-5db546f9d9-k7crl" event={"ID":"eab1279c-c99a-450e-887b-d246a2ff01aa","Type":"ContainerStarted","Data":"2870ab993260ef1bce97b3e558e5e29e20a36a65be681b9e404559e2a266bf9d"} Nov 25 11:52:36 crc kubenswrapper[4706]: I1125 11:52:36.772552 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-774b86978c-9bz4f" event={"ID":"c6de3b19-c207-4c00-8350-de810fb1f555","Type":"ContainerStarted","Data":"ca57b8afaf4c125f920fb3eaa58afe19beb6f98038d816ccecf12f64181aeaaf"} Nov 25 11:52:36 crc kubenswrapper[4706]: E1125 11:52:36.777830 4706 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for 
\"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/placement-operator@sha256:4094e7fc11a33e8e2b6768a053cafaf5b122446d23f9113d43d520cb64e9776c\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/placement-operator-controller-manager-5db546f9d9-k7crl" podUID="eab1279c-c99a-450e-887b-d246a2ff01aa" Nov 25 11:52:36 crc kubenswrapper[4706]: I1125 11:52:36.779199 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-68c9694994-zx4v6" event={"ID":"72bbe536-121d-47c0-b473-2974b238f271","Type":"ContainerStarted","Data":"b46b154ff637ed6ab5b7271e68ebee6db042f97cb8c23e95b52fdae87d194395"} Nov 25 11:52:36 crc kubenswrapper[4706]: I1125 11:52:36.783110 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-x9x4q" event={"ID":"5726a389-32eb-4f0c-938b-6f2ddbb762e7","Type":"ContainerStarted","Data":"1663bd1155fe950bf0feb1c0b3f13b096d00bcc0f16c91199e600f0747910d52"} Nov 25 11:52:36 crc kubenswrapper[4706]: E1125 11:52:36.784564 4706 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-x9x4q" podUID="5726a389-32eb-4f0c-938b-6f2ddbb762e7" Nov 25 11:52:36 crc kubenswrapper[4706]: I1125 11:52:36.785983 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-79556f57fc-f47gl" 
event={"ID":"1c035858-a349-4415-8a5d-f3f2edb7c84e","Type":"ContainerStarted","Data":"a386fa66ab96d13e153e1c335ab390ea7d63d6a7fb6c56d79dc520e8adda7812"} Nov 25 11:52:36 crc kubenswrapper[4706]: E1125 11:52:36.789472 4706 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/nova-operator@sha256:c053e34316044f14929e16e4f0d97f9f1b24cb68b5e22b925ca74c66aaaed0a7\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/nova-operator-controller-manager-79556f57fc-f47gl" podUID="1c035858-a349-4415-8a5d-f3f2edb7c84e" Nov 25 11:52:37 crc kubenswrapper[4706]: I1125 11:52:37.070948 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/2a90e9e4-814b-4c09-a6d3-f7ad3792f6b1-metrics-certs\") pod \"openstack-operator-controller-manager-9cb9fb586-5854z\" (UID: \"2a90e9e4-814b-4c09-a6d3-f7ad3792f6b1\") " pod="openstack-operators/openstack-operator-controller-manager-9cb9fb586-5854z" Nov 25 11:52:37 crc kubenswrapper[4706]: I1125 11:52:37.071612 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/2a90e9e4-814b-4c09-a6d3-f7ad3792f6b1-webhook-certs\") pod \"openstack-operator-controller-manager-9cb9fb586-5854z\" (UID: \"2a90e9e4-814b-4c09-a6d3-f7ad3792f6b1\") " pod="openstack-operators/openstack-operator-controller-manager-9cb9fb586-5854z" Nov 25 11:52:37 crc kubenswrapper[4706]: E1125 11:52:37.072638 4706 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Nov 25 11:52:37 crc kubenswrapper[4706]: E1125 11:52:37.072776 4706 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/secret/2a90e9e4-814b-4c09-a6d3-f7ad3792f6b1-webhook-certs podName:2a90e9e4-814b-4c09-a6d3-f7ad3792f6b1 nodeName:}" failed. No retries permitted until 2025-11-25 11:52:39.072751432 +0000 UTC m=+967.987308813 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/2a90e9e4-814b-4c09-a6d3-f7ad3792f6b1-webhook-certs") pod "openstack-operator-controller-manager-9cb9fb586-5854z" (UID: "2a90e9e4-814b-4c09-a6d3-f7ad3792f6b1") : secret "webhook-server-cert" not found Nov 25 11:52:37 crc kubenswrapper[4706]: I1125 11:52:37.118270 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/2a90e9e4-814b-4c09-a6d3-f7ad3792f6b1-metrics-certs\") pod \"openstack-operator-controller-manager-9cb9fb586-5854z\" (UID: \"2a90e9e4-814b-4c09-a6d3-f7ad3792f6b1\") " pod="openstack-operators/openstack-operator-controller-manager-9cb9fb586-5854z" Nov 25 11:52:37 crc kubenswrapper[4706]: E1125 11:52:37.801995 4706 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-x9x4q" podUID="5726a389-32eb-4f0c-938b-6f2ddbb762e7" Nov 25 11:52:37 crc kubenswrapper[4706]: E1125 11:52:37.804406 4706 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/test-operator@sha256:82207e753574d4be246f86c4b074500d66cf20214aa80f0a8525cf3287a35e6d\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" 
pod="openstack-operators/test-operator-controller-manager-5cb74df96-8rlr7" podUID="d256078e-afd5-4218-ad5c-d5211eb846a8" Nov 25 11:52:37 crc kubenswrapper[4706]: E1125 11:52:37.804442 4706 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/nova-operator@sha256:c053e34316044f14929e16e4f0d97f9f1b24cb68b5e22b925ca74c66aaaed0a7\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/nova-operator-controller-manager-79556f57fc-f47gl" podUID="1c035858-a349-4415-8a5d-f3f2edb7c84e" Nov 25 11:52:37 crc kubenswrapper[4706]: E1125 11:52:37.804448 4706 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/watcher-operator@sha256:4838402d41d42c56613d43dc5041aae475a2b18e6172491d6c4d4a78a580697f\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/watcher-operator-controller-manager-864885998-9s7hm" podUID="6b8e15c0-a70f-4b4c-8836-a2c4e7b23f60" Nov 25 11:52:37 crc kubenswrapper[4706]: E1125 11:52:37.804478 4706 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/placement-operator@sha256:4094e7fc11a33e8e2b6768a053cafaf5b122446d23f9113d43d520cb64e9776c\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" 
pod="openstack-operators/placement-operator-controller-manager-5db546f9d9-k7crl" podUID="eab1279c-c99a-450e-887b-d246a2ff01aa" Nov 25 11:52:37 crc kubenswrapper[4706]: E1125 11:52:37.804491 4706 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/octavia-operator@sha256:442c269d79163f8da75505019c02e9f0815837aaadcaddacb8e6c12df297ca13\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/octavia-operator-controller-manager-fd75fd47d-2tmzq" podUID="063b2f44-faa1-4a58-b77b-f2140f569b01" Nov 25 11:52:37 crc kubenswrapper[4706]: E1125 11:52:37.804559 4706 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/telemetry-operator@sha256:5324a6d2f76fc3041023b0cbd09a733ef2b59f310d390e4d6483d219eb96494f\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/telemetry-operator-controller-manager-567f98c9d-8p5t2" podUID="a7a52f28-6bc4-481d-8513-16dbb7b37ae1" Nov 25 11:52:38 crc kubenswrapper[4706]: I1125 11:52:38.704490 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/e318ee27-6b61-4c03-b697-782b25461b09-cert\") pod \"openstack-baremetal-operator-controller-manager-544b9bb9-qg7kk\" (UID: \"e318ee27-6b61-4c03-b697-782b25461b09\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-544b9bb9-qg7kk" Nov 25 11:52:38 crc kubenswrapper[4706]: I1125 11:52:38.711281 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"cert\" (UniqueName: \"kubernetes.io/secret/e318ee27-6b61-4c03-b697-782b25461b09-cert\") pod \"openstack-baremetal-operator-controller-manager-544b9bb9-qg7kk\" (UID: \"e318ee27-6b61-4c03-b697-782b25461b09\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-544b9bb9-qg7kk" Nov 25 11:52:38 crc kubenswrapper[4706]: I1125 11:52:38.745668 4706 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-544b9bb9-qg7kk" Nov 25 11:52:39 crc kubenswrapper[4706]: I1125 11:52:39.110129 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/2a90e9e4-814b-4c09-a6d3-f7ad3792f6b1-webhook-certs\") pod \"openstack-operator-controller-manager-9cb9fb586-5854z\" (UID: \"2a90e9e4-814b-4c09-a6d3-f7ad3792f6b1\") " pod="openstack-operators/openstack-operator-controller-manager-9cb9fb586-5854z" Nov 25 11:52:39 crc kubenswrapper[4706]: I1125 11:52:39.114444 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/2a90e9e4-814b-4c09-a6d3-f7ad3792f6b1-webhook-certs\") pod \"openstack-operator-controller-manager-9cb9fb586-5854z\" (UID: \"2a90e9e4-814b-4c09-a6d3-f7ad3792f6b1\") " pod="openstack-operators/openstack-operator-controller-manager-9cb9fb586-5854z" Nov 25 11:52:39 crc kubenswrapper[4706]: I1125 11:52:39.390172 4706 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-9cb9fb586-5854z" Nov 25 11:52:46 crc kubenswrapper[4706]: I1125 11:52:46.375231 4706 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-9cb9fb586-5854z"] Nov 25 11:52:46 crc kubenswrapper[4706]: W1125 11:52:46.401552 4706 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2a90e9e4_814b_4c09_a6d3_f7ad3792f6b1.slice/crio-7b4d0969f8e4d3b7825d26ce9e81f31a5015945c49b81e9f6667b5a0805c93bd WatchSource:0}: Error finding container 7b4d0969f8e4d3b7825d26ce9e81f31a5015945c49b81e9f6667b5a0805c93bd: Status 404 returned error can't find the container with id 7b4d0969f8e4d3b7825d26ce9e81f31a5015945c49b81e9f6667b5a0805c93bd Nov 25 11:52:46 crc kubenswrapper[4706]: I1125 11:52:46.510191 4706 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-544b9bb9-qg7kk"] Nov 25 11:52:46 crc kubenswrapper[4706]: E1125 11:52:46.694368 4706 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0,Command:[],Args:[--secure-listen-address=0.0.0.0:8443 --upstream=http://127.0.0.1:8080/ --logtostderr=true --v=0],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{134217728 0} {} BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-krp7q,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod glance-operator-controller-manager-68b95954c9-t6c78_openstack-operators(4857e509-acac-422c-87e8-2662708da599): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Nov 25 11:52:46 crc kubenswrapper[4706]: E1125 11:52:46.695593 4706 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-rbac-proxy\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/glance-operator-controller-manager-68b95954c9-t6c78" podUID="4857e509-acac-422c-87e8-2662708da599" Nov 25 11:52:46 crc kubenswrapper[4706]: E1125 11:52:46.695997 4706 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0,Command:[],Args:[--secure-listen-address=0.0.0.0:8443 --upstream=http://127.0.0.1:8080/ --logtostderr=true --v=0],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{134217728 0} {} 
BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-2v27n,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod horizon-operator-controller-manager-68c9694994-zx4v6_openstack-operators(72bbe536-121d-47c0-b473-2974b238f271): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Nov 25 11:52:46 crc kubenswrapper[4706]: E1125 11:52:46.696507 4706 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0,Command:[],Args:[--secure-listen-address=0.0.0.0:8443 --upstream=http://127.0.0.1:8080/ --logtostderr=true --v=0],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{134217728 0} {} BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-r9zq6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod manila-operator-controller-manager-58bb8d67cc-fslzs_openstack-operators(70fa0d16-065a-463f-8198-06a03414a128): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Nov 25 11:52:46 crc kubenswrapper[4706]: E1125 11:52:46.697877 4706 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-rbac-proxy\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/manila-operator-controller-manager-58bb8d67cc-fslzs" podUID="70fa0d16-065a-463f-8198-06a03414a128" Nov 25 11:52:46 crc kubenswrapper[4706]: E1125 11:52:46.698017 4706 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-rbac-proxy\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/horizon-operator-controller-manager-68c9694994-zx4v6" podUID="72bbe536-121d-47c0-b473-2974b238f271" Nov 25 11:52:46 crc kubenswrapper[4706]: I1125 11:52:46.897632 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-cb6c4fdb7-bpcjw" 
event={"ID":"62e72e86-38e3-4acc-8aa1-664684f27760","Type":"ContainerStarted","Data":"7751303e456ce800516134fda61041e032417ff18b2955ce0bcf84b88c2a204d"} Nov 25 11:52:46 crc kubenswrapper[4706]: I1125 11:52:46.919061 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-d5cc86f4b-rfz7f" event={"ID":"e204aa88-c108-491e-9a73-2fca5c2ef15c","Type":"ContainerStarted","Data":"827f838f0fc8d981651efe078b754d226e3c5f8443dbf18eec0c9b627c35c189"} Nov 25 11:52:46 crc kubenswrapper[4706]: I1125 11:52:46.936159 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-58bb8d67cc-fslzs" event={"ID":"70fa0d16-065a-463f-8198-06a03414a128","Type":"ContainerStarted","Data":"c3ecece762956e22daadc0e6916cc065ea577f8be51b73cfea13e64948dd4ecc"} Nov 25 11:52:46 crc kubenswrapper[4706]: I1125 11:52:46.936402 4706 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/manila-operator-controller-manager-58bb8d67cc-fslzs" Nov 25 11:52:46 crc kubenswrapper[4706]: E1125 11:52:46.939052 4706 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"" pod="openstack-operators/manila-operator-controller-manager-58bb8d67cc-fslzs" podUID="70fa0d16-065a-463f-8198-06a03414a128" Nov 25 11:52:46 crc kubenswrapper[4706]: I1125 11:52:46.957555 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-774b86978c-9bz4f" event={"ID":"c6de3b19-c207-4c00-8350-de810fb1f555","Type":"ContainerStarted","Data":"d1cebeba280b3a9494646903e1229d60dc042d5fd7291dc89497ceb5c203f034"} Nov 25 11:52:46 crc kubenswrapper[4706]: I1125 11:52:46.973221 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/keystone-operator-controller-manager-748dc6576f-nf6gr" event={"ID":"6c41fff9-feeb-4311-a7ce-7da3a71b3e9c","Type":"ContainerStarted","Data":"68996614537b4d8b8f9cf530cc12d048f8db2259bff6001bebd61362965c380d"} Nov 25 11:52:47 crc kubenswrapper[4706]: I1125 11:52:47.012317 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-5bfcdc958c-l4m6r" event={"ID":"9e5a3424-dd89-4411-872f-70447506cf73","Type":"ContainerStarted","Data":"3202771902bb36a6847af0f308ec82e7314352f70b8b6e811ceb53ce40e0f466"} Nov 25 11:52:47 crc kubenswrapper[4706]: I1125 11:52:47.022613 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-6fdc4fcf86-rwbvj" event={"ID":"a0668604-b184-4265-b9af-fc6f526d8351","Type":"ContainerStarted","Data":"bcd613173c6ad5d898feaae3fdc682a81d560c9a5c1a5577993fb3dd790cd961"} Nov 25 11:52:47 crc kubenswrapper[4706]: I1125 11:52:47.045005 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-79856dc55c-4bsmv" event={"ID":"ee655c82-6748-4bba-9da4-dcf73e0cff37","Type":"ContainerStarted","Data":"312041d5294c4c4b83b3c55de78ab9601ca611ae7d1a7c6a837f2c832f489f4d"} Nov 25 11:52:47 crc kubenswrapper[4706]: I1125 11:52:47.059917 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-68c9694994-zx4v6" event={"ID":"72bbe536-121d-47c0-b473-2974b238f271","Type":"ContainerStarted","Data":"b546f8f61c11277a9e3ec051e9d83bfbe0186407b7fd51031bc317fe61e2643b"} Nov 25 11:52:47 crc kubenswrapper[4706]: I1125 11:52:47.061001 4706 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/horizon-operator-controller-manager-68c9694994-zx4v6" Nov 25 11:52:47 crc kubenswrapper[4706]: E1125 11:52:47.063504 4706 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"" pod="openstack-operators/horizon-operator-controller-manager-68c9694994-zx4v6" podUID="72bbe536-121d-47c0-b473-2974b238f271" Nov 25 11:52:47 crc kubenswrapper[4706]: I1125 11:52:47.072038 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-9cb9fb586-5854z" event={"ID":"2a90e9e4-814b-4c09-a6d3-f7ad3792f6b1","Type":"ContainerStarted","Data":"1e58195af2efe7fbff79413b9c95bbeec15ed12b8f39f76667ab5de3c4ffdf54"} Nov 25 11:52:47 crc kubenswrapper[4706]: I1125 11:52:47.072099 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-9cb9fb586-5854z" event={"ID":"2a90e9e4-814b-4c09-a6d3-f7ad3792f6b1","Type":"ContainerStarted","Data":"7b4d0969f8e4d3b7825d26ce9e81f31a5015945c49b81e9f6667b5a0805c93bd"} Nov 25 11:52:47 crc kubenswrapper[4706]: I1125 11:52:47.072909 4706 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-manager-9cb9fb586-5854z" Nov 25 11:52:47 crc kubenswrapper[4706]: I1125 11:52:47.088643 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-86dc4d89c8-jh5hc" event={"ID":"23155e14-a775-48c5-adf9-55dcfd008040","Type":"ContainerStarted","Data":"c2c4e1bb27ca7d9c5c5b1c7f8f4ed76c65b60e421b1f9b74443af46355e7dbac"} Nov 25 11:52:47 crc kubenswrapper[4706]: I1125 11:52:47.097873 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-66cf5c67ff-nc6f7" event={"ID":"61b1ec50-3228-43bc-bb09-d74a7f02be52","Type":"ContainerStarted","Data":"fb68eae3767f5e42de2dc8e408ae9722d3ce773a6ebbed0bfcd8c3393c4e1608"} Nov 25 11:52:47 crc kubenswrapper[4706]: I1125 11:52:47.101899 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/openstack-baremetal-operator-controller-manager-544b9bb9-qg7kk" event={"ID":"e318ee27-6b61-4c03-b697-782b25461b09","Type":"ContainerStarted","Data":"412927c3cd81f03321f343fc34215c42bc055527a6105b84082d53ae063bd772"} Nov 25 11:52:47 crc kubenswrapper[4706]: I1125 11:52:47.114292 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-7c57c8bbc4-tfn29" event={"ID":"3c582966-ab32-499d-8f1c-95c942dd6bb4","Type":"ContainerStarted","Data":"1c4344b8b04c4ceec82bad456d74fd47040eef6a9f76f1d60a95a4a90b0fdad9"} Nov 25 11:52:47 crc kubenswrapper[4706]: I1125 11:52:47.122425 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-7d695c9b56-hqsp5" event={"ID":"9fa65252-7bf5-4e83-beb7-dfcfa63db10d","Type":"ContainerStarted","Data":"126cca5a246b8e52e5ac0d4a31f6fa218a7942f9dad0193ce336826b864a793e"} Nov 25 11:52:47 crc kubenswrapper[4706]: I1125 11:52:47.137932 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-68b95954c9-t6c78" event={"ID":"4857e509-acac-422c-87e8-2662708da599","Type":"ContainerStarted","Data":"a3fab4850794bd28ca3ba88d877ddf98f3e4822e0f4620b74501334d09426807"} Nov 25 11:52:47 crc kubenswrapper[4706]: I1125 11:52:47.138969 4706 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/glance-operator-controller-manager-68b95954c9-t6c78" Nov 25 11:52:47 crc kubenswrapper[4706]: E1125 11:52:47.143857 4706 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"" pod="openstack-operators/glance-operator-controller-manager-68b95954c9-t6c78" podUID="4857e509-acac-422c-87e8-2662708da599" Nov 25 11:52:47 crc kubenswrapper[4706]: I1125 11:52:47.258770 4706 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-manager-9cb9fb586-5854z" podStartSLOduration=12.258750481 podStartE2EDuration="12.258750481s" podCreationTimestamp="2025-11-25 11:52:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 11:52:47.177028354 +0000 UTC m=+976.091585755" watchObservedRunningTime="2025-11-25 11:52:47.258750481 +0000 UTC m=+976.173307862" Nov 25 11:52:48 crc kubenswrapper[4706]: E1125 11:52:48.146818 4706 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"" pod="openstack-operators/horizon-operator-controller-manager-68c9694994-zx4v6" podUID="72bbe536-121d-47c0-b473-2974b238f271" Nov 25 11:52:48 crc kubenswrapper[4706]: E1125 11:52:48.146874 4706 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"" pod="openstack-operators/manila-operator-controller-manager-58bb8d67cc-fslzs" podUID="70fa0d16-065a-463f-8198-06a03414a128" Nov 25 11:52:48 crc kubenswrapper[4706]: E1125 11:52:48.148245 4706 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"" pod="openstack-operators/glance-operator-controller-manager-68b95954c9-t6c78" podUID="4857e509-acac-422c-87e8-2662708da599" Nov 25 11:52:50 crc kubenswrapper[4706]: I1125 11:52:50.162039 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-cb6c4fdb7-bpcjw" 
event={"ID":"62e72e86-38e3-4acc-8aa1-664684f27760","Type":"ContainerStarted","Data":"909406bdc7b2e6320328db137c3cd115c29fd769402a815b405b020beb7b5635"} Nov 25 11:52:50 crc kubenswrapper[4706]: I1125 11:52:50.163520 4706 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/mariadb-operator-controller-manager-cb6c4fdb7-bpcjw" Nov 25 11:52:50 crc kubenswrapper[4706]: I1125 11:52:50.169772 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-66cf5c67ff-nc6f7" event={"ID":"61b1ec50-3228-43bc-bb09-d74a7f02be52","Type":"ContainerStarted","Data":"078ca2fb9fa8e364f58f7f90a8f7bf720568bbadc42a8a98a5cd1b2c79e36667"} Nov 25 11:52:50 crc kubenswrapper[4706]: I1125 11:52:50.169926 4706 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ovn-operator-controller-manager-66cf5c67ff-nc6f7" Nov 25 11:52:50 crc kubenswrapper[4706]: I1125 11:52:50.172353 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-5bfcdc958c-l4m6r" event={"ID":"9e5a3424-dd89-4411-872f-70447506cf73","Type":"ContainerStarted","Data":"f12154ea1c539d2fd959029f06c56bfb2d57d596485601289f358e86df9f83a7"} Nov 25 11:52:50 crc kubenswrapper[4706]: I1125 11:52:50.172464 4706 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ironic-operator-controller-manager-5bfcdc958c-l4m6r" Nov 25 11:52:50 crc kubenswrapper[4706]: I1125 11:52:50.175460 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-7c57c8bbc4-tfn29" event={"ID":"3c582966-ab32-499d-8f1c-95c942dd6bb4","Type":"ContainerStarted","Data":"62286e4354a17795edeba7682cd4436f131fa6d194511def37ebe24d31ac2f89"} Nov 25 11:52:50 crc kubenswrapper[4706]: I1125 11:52:50.176067 4706 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack-operators/neutron-operator-controller-manager-7c57c8bbc4-tfn29" Nov 25 11:52:50 crc kubenswrapper[4706]: I1125 11:52:50.181108 4706 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/mariadb-operator-controller-manager-cb6c4fdb7-bpcjw" podStartSLOduration=2.9890226650000002 podStartE2EDuration="16.181084367s" podCreationTimestamp="2025-11-25 11:52:34 +0000 UTC" firstStartedPulling="2025-11-25 11:52:36.246791646 +0000 UTC m=+965.161349027" lastFinishedPulling="2025-11-25 11:52:49.438853348 +0000 UTC m=+978.353410729" observedRunningTime="2025-11-25 11:52:50.181040786 +0000 UTC m=+979.095598167" watchObservedRunningTime="2025-11-25 11:52:50.181084367 +0000 UTC m=+979.095641748" Nov 25 11:52:50 crc kubenswrapper[4706]: I1125 11:52:50.181965 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-79856dc55c-4bsmv" event={"ID":"ee655c82-6748-4bba-9da4-dcf73e0cff37","Type":"ContainerStarted","Data":"f87e474fd1a60bf491197c3bafd7b6acf89f71a673ecd7a5f5a59eed94799905"} Nov 25 11:52:50 crc kubenswrapper[4706]: I1125 11:52:50.182650 4706 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/cinder-operator-controller-manager-79856dc55c-4bsmv" Nov 25 11:52:50 crc kubenswrapper[4706]: I1125 11:52:50.210465 4706 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ovn-operator-controller-manager-66cf5c67ff-nc6f7" podStartSLOduration=3.214528641 podStartE2EDuration="16.210428276s" podCreationTimestamp="2025-11-25 11:52:34 +0000 UTC" firstStartedPulling="2025-11-25 11:52:36.331686252 +0000 UTC m=+965.246243633" lastFinishedPulling="2025-11-25 11:52:49.327585887 +0000 UTC m=+978.242143268" observedRunningTime="2025-11-25 11:52:50.200607209 +0000 UTC m=+979.115164590" watchObservedRunningTime="2025-11-25 11:52:50.210428276 +0000 UTC m=+979.124985657" Nov 25 11:52:50 crc kubenswrapper[4706]: I1125 
11:52:50.225639 4706 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ironic-operator-controller-manager-5bfcdc958c-l4m6r" podStartSLOduration=2.911044327 podStartE2EDuration="16.225610808s" podCreationTimestamp="2025-11-25 11:52:34 +0000 UTC" firstStartedPulling="2025-11-25 11:52:36.298677666 +0000 UTC m=+965.213235047" lastFinishedPulling="2025-11-25 11:52:49.613244147 +0000 UTC m=+978.527801528" observedRunningTime="2025-11-25 11:52:50.22053732 +0000 UTC m=+979.135094701" watchObservedRunningTime="2025-11-25 11:52:50.225610808 +0000 UTC m=+979.140168189" Nov 25 11:52:50 crc kubenswrapper[4706]: I1125 11:52:50.244101 4706 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/neutron-operator-controller-manager-7c57c8bbc4-tfn29" podStartSLOduration=3.232160505 podStartE2EDuration="16.244078143s" podCreationTimestamp="2025-11-25 11:52:34 +0000 UTC" firstStartedPulling="2025-11-25 11:52:36.324220665 +0000 UTC m=+965.238778056" lastFinishedPulling="2025-11-25 11:52:49.336138313 +0000 UTC m=+978.250695694" observedRunningTime="2025-11-25 11:52:50.243442037 +0000 UTC m=+979.157999418" watchObservedRunningTime="2025-11-25 11:52:50.244078143 +0000 UTC m=+979.158635524" Nov 25 11:52:50 crc kubenswrapper[4706]: I1125 11:52:50.266123 4706 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/cinder-operator-controller-manager-79856dc55c-4bsmv" podStartSLOduration=2.4143251660000002 podStartE2EDuration="16.266090937s" podCreationTimestamp="2025-11-25 11:52:34 +0000 UTC" firstStartedPulling="2025-11-25 11:52:35.495098812 +0000 UTC m=+964.409656193" lastFinishedPulling="2025-11-25 11:52:49.346864583 +0000 UTC m=+978.261421964" observedRunningTime="2025-11-25 11:52:50.259683686 +0000 UTC m=+979.174241067" watchObservedRunningTime="2025-11-25 11:52:50.266090937 +0000 UTC m=+979.180648318" Nov 25 11:52:51 crc kubenswrapper[4706]: I1125 11:52:51.195451 4706 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-748dc6576f-nf6gr" event={"ID":"6c41fff9-feeb-4311-a7ce-7da3a71b3e9c","Type":"ContainerStarted","Data":"9fc5664fa6d0994357b5a46f85dd7e92b050e3e9a66323f840abebf7704e9bad"} Nov 25 11:52:51 crc kubenswrapper[4706]: I1125 11:52:51.196137 4706 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/keystone-operator-controller-manager-748dc6576f-nf6gr" Nov 25 11:52:51 crc kubenswrapper[4706]: I1125 11:52:51.201783 4706 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/keystone-operator-controller-manager-748dc6576f-nf6gr" Nov 25 11:52:51 crc kubenswrapper[4706]: I1125 11:52:51.203778 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-86dc4d89c8-jh5hc" event={"ID":"23155e14-a775-48c5-adf9-55dcfd008040","Type":"ContainerStarted","Data":"28600faa58808216e50cdd17f9c2b60f3ca377032505d0e75e201230212a9146"} Nov 25 11:52:51 crc kubenswrapper[4706]: I1125 11:52:51.204043 4706 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/barbican-operator-controller-manager-86dc4d89c8-jh5hc" Nov 25 11:52:51 crc kubenswrapper[4706]: I1125 11:52:51.213646 4706 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/barbican-operator-controller-manager-86dc4d89c8-jh5hc" Nov 25 11:52:51 crc kubenswrapper[4706]: I1125 11:52:51.219318 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-d5cc86f4b-rfz7f" event={"ID":"e204aa88-c108-491e-9a73-2fca5c2ef15c","Type":"ContainerStarted","Data":"b95c955db6e2fc084778bc374710b22aa00bb7d3266e85cd99cea398ed0fccab"} Nov 25 11:52:51 crc kubenswrapper[4706]: I1125 11:52:51.220371 4706 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack-operators/infra-operator-controller-manager-d5cc86f4b-rfz7f" Nov 25 11:52:51 crc kubenswrapper[4706]: I1125 11:52:51.222655 4706 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/keystone-operator-controller-manager-748dc6576f-nf6gr" podStartSLOduration=3.071616204 podStartE2EDuration="17.222637099s" podCreationTimestamp="2025-11-25 11:52:34 +0000 UTC" firstStartedPulling="2025-11-25 11:52:36.239205686 +0000 UTC m=+965.153763067" lastFinishedPulling="2025-11-25 11:52:50.390226571 +0000 UTC m=+979.304783962" observedRunningTime="2025-11-25 11:52:51.220747422 +0000 UTC m=+980.135304803" watchObservedRunningTime="2025-11-25 11:52:51.222637099 +0000 UTC m=+980.137194490" Nov 25 11:52:51 crc kubenswrapper[4706]: I1125 11:52:51.227807 4706 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/infra-operator-controller-manager-d5cc86f4b-rfz7f" Nov 25 11:52:51 crc kubenswrapper[4706]: I1125 11:52:51.249084 4706 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/infra-operator-controller-manager-d5cc86f4b-rfz7f" podStartSLOduration=3.8464568850000003 podStartE2EDuration="17.249057404s" podCreationTimestamp="2025-11-25 11:52:34 +0000 UTC" firstStartedPulling="2025-11-25 11:52:36.233264637 +0000 UTC m=+965.147822018" lastFinishedPulling="2025-11-25 11:52:49.635865156 +0000 UTC m=+978.550422537" observedRunningTime="2025-11-25 11:52:51.241275058 +0000 UTC m=+980.155832459" watchObservedRunningTime="2025-11-25 11:52:51.249057404 +0000 UTC m=+980.163614785" Nov 25 11:52:51 crc kubenswrapper[4706]: I1125 11:52:51.253470 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-544b9bb9-qg7kk" event={"ID":"e318ee27-6b61-4c03-b697-782b25461b09","Type":"ContainerStarted","Data":"f0b8aa74316183cc399ae83e639bcc64ace04e246e00c86f9b222c3e9716d47b"} Nov 25 11:52:51 crc 
kubenswrapper[4706]: I1125 11:52:51.253547 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-544b9bb9-qg7kk" event={"ID":"e318ee27-6b61-4c03-b697-782b25461b09","Type":"ContainerStarted","Data":"3ff4e5f3eae0eb946dff910e13d82ce4a133911ccc1ff40a91d57e525b023640"} Nov 25 11:52:51 crc kubenswrapper[4706]: I1125 11:52:51.253592 4706 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-baremetal-operator-controller-manager-544b9bb9-qg7kk" Nov 25 11:52:51 crc kubenswrapper[4706]: I1125 11:52:51.272692 4706 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/barbican-operator-controller-manager-86dc4d89c8-jh5hc" podStartSLOduration=3.400918422 podStartE2EDuration="17.272665698s" podCreationTimestamp="2025-11-25 11:52:34 +0000 UTC" firstStartedPulling="2025-11-25 11:52:35.762064148 +0000 UTC m=+964.676621519" lastFinishedPulling="2025-11-25 11:52:49.633811414 +0000 UTC m=+978.548368795" observedRunningTime="2025-11-25 11:52:51.266518724 +0000 UTC m=+980.181076105" watchObservedRunningTime="2025-11-25 11:52:51.272665698 +0000 UTC m=+980.187223079" Nov 25 11:52:51 crc kubenswrapper[4706]: I1125 11:52:51.282554 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-6fdc4fcf86-rwbvj" event={"ID":"a0668604-b184-4265-b9af-fc6f526d8351","Type":"ContainerStarted","Data":"1666c6585c167e483bca773c15db1c48f80bf1b85792deaea96184bbb54e9880"} Nov 25 11:52:51 crc kubenswrapper[4706]: I1125 11:52:51.284411 4706 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/swift-operator-controller-manager-6fdc4fcf86-rwbvj" Nov 25 11:52:51 crc kubenswrapper[4706]: I1125 11:52:51.290587 4706 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/swift-operator-controller-manager-6fdc4fcf86-rwbvj" Nov 25 
11:52:51 crc kubenswrapper[4706]: I1125 11:52:51.303900 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-7d695c9b56-hqsp5" event={"ID":"9fa65252-7bf5-4e83-beb7-dfcfa63db10d","Type":"ContainerStarted","Data":"e0357f2e315f250b1477c01d8686f62cfe942c25208f95d015829c6b6dbaf484"} Nov 25 11:52:51 crc kubenswrapper[4706]: I1125 11:52:51.306429 4706 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/designate-operator-controller-manager-7d695c9b56-hqsp5" Nov 25 11:52:51 crc kubenswrapper[4706]: I1125 11:52:51.309988 4706 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/designate-operator-controller-manager-7d695c9b56-hqsp5" Nov 25 11:52:51 crc kubenswrapper[4706]: I1125 11:52:51.320154 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-774b86978c-9bz4f" event={"ID":"c6de3b19-c207-4c00-8350-de810fb1f555","Type":"ContainerStarted","Data":"73abfb1625a5fc4a66abc09328ad065b96ca56c5100ce7d7ac19169030a972a2"} Nov 25 11:52:51 crc kubenswrapper[4706]: I1125 11:52:51.320200 4706 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/heat-operator-controller-manager-774b86978c-9bz4f" Nov 25 11:52:51 crc kubenswrapper[4706]: I1125 11:52:51.326431 4706 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/swift-operator-controller-manager-6fdc4fcf86-rwbvj" podStartSLOduration=3.46186563 podStartE2EDuration="17.326400591s" podCreationTimestamp="2025-11-25 11:52:34 +0000 UTC" firstStartedPulling="2025-11-25 11:52:36.458789096 +0000 UTC m=+965.373346477" lastFinishedPulling="2025-11-25 11:52:50.323324057 +0000 UTC m=+979.237881438" observedRunningTime="2025-11-25 11:52:51.314144102 +0000 UTC m=+980.228701483" watchObservedRunningTime="2025-11-25 11:52:51.326400591 +0000 UTC m=+980.240957972" Nov 25 11:52:51 
crc kubenswrapper[4706]: I1125 11:52:51.333258 4706 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/cinder-operator-controller-manager-79856dc55c-4bsmv" Nov 25 11:52:51 crc kubenswrapper[4706]: I1125 11:52:51.333739 4706 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/mariadb-operator-controller-manager-cb6c4fdb7-bpcjw" Nov 25 11:52:51 crc kubenswrapper[4706]: I1125 11:52:51.334015 4706 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ovn-operator-controller-manager-66cf5c67ff-nc6f7" Nov 25 11:52:51 crc kubenswrapper[4706]: I1125 11:52:51.334248 4706 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/heat-operator-controller-manager-774b86978c-9bz4f" Nov 25 11:52:51 crc kubenswrapper[4706]: I1125 11:52:51.334344 4706 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/neutron-operator-controller-manager-7c57c8bbc4-tfn29" Nov 25 11:52:51 crc kubenswrapper[4706]: I1125 11:52:51.335834 4706 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ironic-operator-controller-manager-5bfcdc958c-l4m6r" Nov 25 11:52:51 crc kubenswrapper[4706]: I1125 11:52:51.351631 4706 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-baremetal-operator-controller-manager-544b9bb9-qg7kk" podStartSLOduration=13.446997378 podStartE2EDuration="17.351611805s" podCreationTimestamp="2025-11-25 11:52:34 +0000 UTC" firstStartedPulling="2025-11-25 11:52:46.549731858 +0000 UTC m=+975.464289239" lastFinishedPulling="2025-11-25 11:52:50.454346285 +0000 UTC m=+979.368903666" observedRunningTime="2025-11-25 11:52:51.348481796 +0000 UTC m=+980.263039197" watchObservedRunningTime="2025-11-25 11:52:51.351611805 +0000 UTC m=+980.266169186" Nov 25 11:52:51 crc kubenswrapper[4706]: I1125 
11:52:51.469052 4706 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/designate-operator-controller-manager-7d695c9b56-hqsp5" podStartSLOduration=2.909905182 podStartE2EDuration="17.46903317s" podCreationTimestamp="2025-11-25 11:52:34 +0000 UTC" firstStartedPulling="2025-11-25 11:52:35.77773254 +0000 UTC m=+964.692289921" lastFinishedPulling="2025-11-25 11:52:50.336860528 +0000 UTC m=+979.251417909" observedRunningTime="2025-11-25 11:52:51.465595074 +0000 UTC m=+980.380152455" watchObservedRunningTime="2025-11-25 11:52:51.46903317 +0000 UTC m=+980.383590551" Nov 25 11:52:51 crc kubenswrapper[4706]: I1125 11:52:51.572919 4706 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/heat-operator-controller-manager-774b86978c-9bz4f" podStartSLOduration=3.339612651 podStartE2EDuration="17.572901184s" podCreationTimestamp="2025-11-25 11:52:34 +0000 UTC" firstStartedPulling="2025-11-25 11:52:36.2229852 +0000 UTC m=+965.137542581" lastFinishedPulling="2025-11-25 11:52:50.456273733 +0000 UTC m=+979.370831114" observedRunningTime="2025-11-25 11:52:51.571807967 +0000 UTC m=+980.486365358" watchObservedRunningTime="2025-11-25 11:52:51.572901184 +0000 UTC m=+980.487458555" Nov 25 11:52:54 crc kubenswrapper[4706]: I1125 11:52:54.866213 4706 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/horizon-operator-controller-manager-68c9694994-zx4v6" Nov 25 11:52:55 crc kubenswrapper[4706]: I1125 11:52:55.049632 4706 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/glance-operator-controller-manager-68b95954c9-t6c78" Nov 25 11:52:55 crc kubenswrapper[4706]: I1125 11:52:55.140922 4706 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/manila-operator-controller-manager-58bb8d67cc-fslzs" Nov 25 11:52:58 crc kubenswrapper[4706]: I1125 11:52:58.756844 4706 kubelet.go:2542] 
"SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-baremetal-operator-controller-manager-544b9bb9-qg7kk" Nov 25 11:52:59 crc kubenswrapper[4706]: I1125 11:52:59.399259 4706 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-manager-9cb9fb586-5854z" Nov 25 11:53:01 crc kubenswrapper[4706]: I1125 11:53:01.125579 4706 patch_prober.go:28] interesting pod/machine-config-daemon-dhfpm container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 25 11:53:01 crc kubenswrapper[4706]: I1125 11:53:01.126005 4706 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dhfpm" podUID="0930887a-320c-4506-8c9c-f94d6d64516a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 25 11:53:01 crc kubenswrapper[4706]: I1125 11:53:01.126056 4706 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-dhfpm" Nov 25 11:53:01 crc kubenswrapper[4706]: I1125 11:53:01.126611 4706 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"fdd2404bf73191f443033ee21a4507eceb1c00713641b2459642f00fc3611d21"} pod="openshift-machine-config-operator/machine-config-daemon-dhfpm" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 25 11:53:01 crc kubenswrapper[4706]: I1125 11:53:01.126675 4706 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-dhfpm" podUID="0930887a-320c-4506-8c9c-f94d6d64516a" 
containerName="machine-config-daemon" containerID="cri-o://fdd2404bf73191f443033ee21a4507eceb1c00713641b2459642f00fc3611d21" gracePeriod=600 Nov 25 11:53:01 crc kubenswrapper[4706]: I1125 11:53:01.681350 4706 generic.go:334] "Generic (PLEG): container finished" podID="0930887a-320c-4506-8c9c-f94d6d64516a" containerID="fdd2404bf73191f443033ee21a4507eceb1c00713641b2459642f00fc3611d21" exitCode=0 Nov 25 11:53:01 crc kubenswrapper[4706]: I1125 11:53:01.682015 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-dhfpm" event={"ID":"0930887a-320c-4506-8c9c-f94d6d64516a","Type":"ContainerDied","Data":"fdd2404bf73191f443033ee21a4507eceb1c00713641b2459642f00fc3611d21"} Nov 25 11:53:01 crc kubenswrapper[4706]: I1125 11:53:01.682092 4706 scope.go:117] "RemoveContainer" containerID="683756e714349294998bf9e4fc9b79c9b932ba51c675e9492a76d30885edc873" Nov 25 11:53:01 crc kubenswrapper[4706]: I1125 11:53:01.696530 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-864885998-9s7hm" event={"ID":"6b8e15c0-a70f-4b4c-8836-a2c4e7b23f60","Type":"ContainerStarted","Data":"727cae160d2cb4b5f6c7224c124e4155d9df0a57e91d16999aed01ca19639ca4"} Nov 25 11:53:01 crc kubenswrapper[4706]: I1125 11:53:01.698377 4706 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/watcher-operator-controller-manager-864885998-9s7hm" Nov 25 11:53:01 crc kubenswrapper[4706]: I1125 11:53:01.753366 4706 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/watcher-operator-controller-manager-864885998-9s7hm" podStartSLOduration=3.526470373 podStartE2EDuration="27.753334733s" podCreationTimestamp="2025-11-25 11:52:34 +0000 UTC" firstStartedPulling="2025-11-25 11:52:36.478768116 +0000 UTC m=+965.393325497" lastFinishedPulling="2025-11-25 11:53:00.705632476 +0000 UTC m=+989.620189857" observedRunningTime="2025-11-25 
11:53:01.740361307 +0000 UTC m=+990.654918698" watchObservedRunningTime="2025-11-25 11:53:01.753334733 +0000 UTC m=+990.667892114" Nov 25 11:53:01 crc kubenswrapper[4706]: I1125 11:53:01.755779 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-5db546f9d9-k7crl" event={"ID":"eab1279c-c99a-450e-887b-d246a2ff01aa","Type":"ContainerStarted","Data":"4ca4e7c0e055e838b2e9c0e7feb7c364b63e483df6de04458d27375c9312ad1f"} Nov 25 11:53:01 crc kubenswrapper[4706]: I1125 11:53:01.755844 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-5db546f9d9-k7crl" event={"ID":"eab1279c-c99a-450e-887b-d246a2ff01aa","Type":"ContainerStarted","Data":"6e494fc4eee18671df20af8ca16e5f73ab527d03d690991a98dcaed58360434d"} Nov 25 11:53:01 crc kubenswrapper[4706]: I1125 11:53:01.756717 4706 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/placement-operator-controller-manager-5db546f9d9-k7crl" Nov 25 11:53:01 crc kubenswrapper[4706]: I1125 11:53:01.790924 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-68b95954c9-t6c78" event={"ID":"4857e509-acac-422c-87e8-2662708da599","Type":"ContainerStarted","Data":"d34a1271000cf59c70dad22a8dce64ecb12b090b1b9e08cbd5598f2c5de5463a"} Nov 25 11:53:01 crc kubenswrapper[4706]: I1125 11:53:01.794745 4706 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/placement-operator-controller-manager-5db546f9d9-k7crl" podStartSLOduration=3.288496792 podStartE2EDuration="27.794724365s" podCreationTimestamp="2025-11-25 11:52:34 +0000 UTC" firstStartedPulling="2025-11-25 11:52:36.334063112 +0000 UTC m=+965.248620493" lastFinishedPulling="2025-11-25 11:53:00.840290685 +0000 UTC m=+989.754848066" observedRunningTime="2025-11-25 11:53:01.791688978 +0000 UTC m=+990.706246359" 
watchObservedRunningTime="2025-11-25 11:53:01.794724365 +0000 UTC m=+990.709281746" Nov 25 11:53:01 crc kubenswrapper[4706]: I1125 11:53:01.816060 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-68c9694994-zx4v6" event={"ID":"72bbe536-121d-47c0-b473-2974b238f271","Type":"ContainerStarted","Data":"fc86140dc3671c0c349966335c652abd094beb91b2ff5e99f0cfc0fd301c5cc1"} Nov 25 11:53:01 crc kubenswrapper[4706]: I1125 11:53:01.826694 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-x9x4q" event={"ID":"5726a389-32eb-4f0c-938b-6f2ddbb762e7","Type":"ContainerStarted","Data":"a4f76f11e3a12d3ed74cd38d05e887277ad85a13b0e7f5c7c2a40389bbde69f2"} Nov 25 11:53:01 crc kubenswrapper[4706]: I1125 11:53:01.831900 4706 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/glance-operator-controller-manager-68b95954c9-t6c78" podStartSLOduration=18.127650686 podStartE2EDuration="27.83188066s" podCreationTimestamp="2025-11-25 11:52:34 +0000 UTC" firstStartedPulling="2025-11-25 11:52:36.249421622 +0000 UTC m=+965.163979003" lastFinishedPulling="2025-11-25 11:52:45.953651596 +0000 UTC m=+974.868208977" observedRunningTime="2025-11-25 11:53:01.826263548 +0000 UTC m=+990.740820929" watchObservedRunningTime="2025-11-25 11:53:01.83188066 +0000 UTC m=+990.746438041" Nov 25 11:53:01 crc kubenswrapper[4706]: I1125 11:53:01.832673 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-5cb74df96-8rlr7" event={"ID":"d256078e-afd5-4218-ad5c-d5211eb846a8","Type":"ContainerStarted","Data":"f598571c9af3c528456b4d48c688d467bb4a6bd6f39e79cfac7762152ff566a9"} Nov 25 11:53:01 crc kubenswrapper[4706]: I1125 11:53:01.833939 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-58bb8d67cc-fslzs" 
event={"ID":"70fa0d16-065a-463f-8198-06a03414a128","Type":"ContainerStarted","Data":"16712a4cadd3093a281240400173186d534267b1ce37f5b5e4f91945e990757f"} Nov 25 11:53:01 crc kubenswrapper[4706]: I1125 11:53:01.853185 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-fd75fd47d-2tmzq" event={"ID":"063b2f44-faa1-4a58-b77b-f2140f569b01","Type":"ContainerStarted","Data":"49818e0aa017978b9575f26dea8f4372beabc3340d17d74cd665f3be1e9757ce"} Nov 25 11:53:01 crc kubenswrapper[4706]: I1125 11:53:01.872023 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-567f98c9d-8p5t2" event={"ID":"a7a52f28-6bc4-481d-8513-16dbb7b37ae1","Type":"ContainerStarted","Data":"77644a2d6098f260cde2c4b6551e02ad0c9a9044dcbb8ac87b2c7404dbfc82b3"} Nov 25 11:53:01 crc kubenswrapper[4706]: I1125 11:53:01.880191 4706 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/horizon-operator-controller-manager-68c9694994-zx4v6" podStartSLOduration=18.252788087 podStartE2EDuration="27.880167145s" podCreationTimestamp="2025-11-25 11:52:34 +0000 UTC" firstStartedPulling="2025-11-25 11:52:36.315523068 +0000 UTC m=+965.230080449" lastFinishedPulling="2025-11-25 11:52:45.942902126 +0000 UTC m=+974.857459507" observedRunningTime="2025-11-25 11:53:01.879641172 +0000 UTC m=+990.794198553" watchObservedRunningTime="2025-11-25 11:53:01.880167145 +0000 UTC m=+990.794724516" Nov 25 11:53:01 crc kubenswrapper[4706]: I1125 11:53:01.945210 4706 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/manila-operator-controller-manager-58bb8d67cc-fslzs" podStartSLOduration=18.33760527 podStartE2EDuration="27.945176491s" podCreationTimestamp="2025-11-25 11:52:34 +0000 UTC" firstStartedPulling="2025-11-25 11:52:36.331021046 +0000 UTC m=+965.245578427" lastFinishedPulling="2025-11-25 11:52:45.938592267 +0000 UTC 
m=+974.853149648" observedRunningTime="2025-11-25 11:53:01.913760291 +0000 UTC m=+990.828317682" watchObservedRunningTime="2025-11-25 11:53:01.945176491 +0000 UTC m=+990.859733872" Nov 25 11:53:01 crc kubenswrapper[4706]: I1125 11:53:01.962209 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-79556f57fc-f47gl" event={"ID":"1c035858-a349-4415-8a5d-f3f2edb7c84e","Type":"ContainerStarted","Data":"b5668e24c52cbb8f3ecf02f7fbbebb42713a3ff64e9d059836e36053d49db4a1"} Nov 25 11:53:01 crc kubenswrapper[4706]: I1125 11:53:01.963699 4706 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/nova-operator-controller-manager-79556f57fc-f47gl" Nov 25 11:53:01 crc kubenswrapper[4706]: I1125 11:53:01.976347 4706 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-x9x4q" podStartSLOduration=2.759977798 podStartE2EDuration="26.976320225s" podCreationTimestamp="2025-11-25 11:52:35 +0000 UTC" firstStartedPulling="2025-11-25 11:52:36.494410988 +0000 UTC m=+965.408968369" lastFinishedPulling="2025-11-25 11:53:00.710753415 +0000 UTC m=+989.625310796" observedRunningTime="2025-11-25 11:53:01.965906213 +0000 UTC m=+990.880463594" watchObservedRunningTime="2025-11-25 11:53:01.976320225 +0000 UTC m=+990.890877606" Nov 25 11:53:02 crc kubenswrapper[4706]: I1125 11:53:02.941257 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-fd75fd47d-2tmzq" event={"ID":"063b2f44-faa1-4a58-b77b-f2140f569b01","Type":"ContainerStarted","Data":"8f469dda2c5c82d5354dc540571deee131fd88880c89be9b6932e55e95bbbb39"} Nov 25 11:53:02 crc kubenswrapper[4706]: I1125 11:53:02.941790 4706 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/octavia-operator-controller-manager-fd75fd47d-2tmzq" Nov 25 11:53:02 crc kubenswrapper[4706]: I1125 
11:53:02.943449 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-864885998-9s7hm" event={"ID":"6b8e15c0-a70f-4b4c-8836-a2c4e7b23f60","Type":"ContainerStarted","Data":"250ca636ff19397314bc74461b12edf3f880e12349b741b0b9d3a9caa3b8d99c"} Nov 25 11:53:02 crc kubenswrapper[4706]: I1125 11:53:02.945645 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-567f98c9d-8p5t2" event={"ID":"a7a52f28-6bc4-481d-8513-16dbb7b37ae1","Type":"ContainerStarted","Data":"4f9f86eef068bffca16bd9b97fd852a048e7bce5a60c7d45407300bdba1d391d"} Nov 25 11:53:02 crc kubenswrapper[4706]: I1125 11:53:02.945733 4706 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/telemetry-operator-controller-manager-567f98c9d-8p5t2" Nov 25 11:53:02 crc kubenswrapper[4706]: I1125 11:53:02.948072 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-79556f57fc-f47gl" event={"ID":"1c035858-a349-4415-8a5d-f3f2edb7c84e","Type":"ContainerStarted","Data":"a6ad3fa033239c7ad81c49797f64f49ced7bd0315da57ce0b1d46aedb3f35236"} Nov 25 11:53:02 crc kubenswrapper[4706]: I1125 11:53:02.950472 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-dhfpm" event={"ID":"0930887a-320c-4506-8c9c-f94d6d64516a","Type":"ContainerStarted","Data":"11a32543eabb96f028f5772afd04ba615397c2a8e9b4fc94ea299c44af45edfc"} Nov 25 11:53:02 crc kubenswrapper[4706]: I1125 11:53:02.956238 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-5cb74df96-8rlr7" event={"ID":"d256078e-afd5-4218-ad5c-d5211eb846a8","Type":"ContainerStarted","Data":"a67f42b8266d70cbcff6663ab7d7f4381eb53b1451d8f06f3b99a487559175b7"} Nov 25 11:53:02 crc kubenswrapper[4706]: I1125 11:53:02.956282 4706 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="openstack-operators/test-operator-controller-manager-5cb74df96-8rlr7" Nov 25 11:53:02 crc kubenswrapper[4706]: I1125 11:53:02.965396 4706 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/nova-operator-controller-manager-79556f57fc-f47gl" podStartSLOduration=4.79915497 podStartE2EDuration="28.965370186s" podCreationTimestamp="2025-11-25 11:52:34 +0000 UTC" firstStartedPulling="2025-11-25 11:52:36.498647884 +0000 UTC m=+965.413205265" lastFinishedPulling="2025-11-25 11:53:00.6648631 +0000 UTC m=+989.579420481" observedRunningTime="2025-11-25 11:53:02.013726966 +0000 UTC m=+990.928284347" watchObservedRunningTime="2025-11-25 11:53:02.965370186 +0000 UTC m=+991.879927567" Nov 25 11:53:02 crc kubenswrapper[4706]: I1125 11:53:02.991099 4706 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/octavia-operator-controller-manager-fd75fd47d-2tmzq" podStartSLOduration=6.066696614 podStartE2EDuration="28.991075923s" podCreationTimestamp="2025-11-25 11:52:34 +0000 UTC" firstStartedPulling="2025-11-25 11:52:36.469206937 +0000 UTC m=+965.383764318" lastFinishedPulling="2025-11-25 11:52:59.393586246 +0000 UTC m=+988.308143627" observedRunningTime="2025-11-25 11:53:02.964937786 +0000 UTC m=+991.879495167" watchObservedRunningTime="2025-11-25 11:53:02.991075923 +0000 UTC m=+991.905633304" Nov 25 11:53:02 crc kubenswrapper[4706]: I1125 11:53:02.992997 4706 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/telemetry-operator-controller-manager-567f98c9d-8p5t2" podStartSLOduration=4.619785846 podStartE2EDuration="28.992984321s" podCreationTimestamp="2025-11-25 11:52:34 +0000 UTC" firstStartedPulling="2025-11-25 11:52:36.490781307 +0000 UTC m=+965.405338688" lastFinishedPulling="2025-11-25 11:53:00.863979782 +0000 UTC m=+989.778537163" observedRunningTime="2025-11-25 11:53:02.990608292 +0000 UTC m=+991.905165673" 
watchObservedRunningTime="2025-11-25 11:53:02.992984321 +0000 UTC m=+991.907541702" Nov 25 11:53:03 crc kubenswrapper[4706]: I1125 11:53:03.029710 4706 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/test-operator-controller-manager-5cb74df96-8rlr7" podStartSLOduration=4.79942077 podStartE2EDuration="29.029685115s" podCreationTimestamp="2025-11-25 11:52:34 +0000 UTC" firstStartedPulling="2025-11-25 11:52:36.475923575 +0000 UTC m=+965.390480956" lastFinishedPulling="2025-11-25 11:53:00.70618792 +0000 UTC m=+989.620745301" observedRunningTime="2025-11-25 11:53:03.025355376 +0000 UTC m=+991.939912757" watchObservedRunningTime="2025-11-25 11:53:03.029685115 +0000 UTC m=+991.944242496" Nov 25 11:53:15 crc kubenswrapper[4706]: I1125 11:53:15.217860 4706 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/placement-operator-controller-manager-5db546f9d9-k7crl" Nov 25 11:53:15 crc kubenswrapper[4706]: I1125 11:53:15.342242 4706 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/nova-operator-controller-manager-79556f57fc-f47gl" Nov 25 11:53:15 crc kubenswrapper[4706]: I1125 11:53:15.378112 4706 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/octavia-operator-controller-manager-fd75fd47d-2tmzq" Nov 25 11:53:15 crc kubenswrapper[4706]: I1125 11:53:15.596033 4706 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/telemetry-operator-controller-manager-567f98c9d-8p5t2" Nov 25 11:53:15 crc kubenswrapper[4706]: I1125 11:53:15.634338 4706 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/test-operator-controller-manager-5cb74df96-8rlr7" Nov 25 11:53:15 crc kubenswrapper[4706]: I1125 11:53:15.759861 4706 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack-operators/watcher-operator-controller-manager-864885998-9s7hm" Nov 25 11:53:32 crc kubenswrapper[4706]: I1125 11:53:32.798137 4706 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-rf649"] Nov 25 11:53:32 crc kubenswrapper[4706]: I1125 11:53:32.800031 4706 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-rf649" Nov 25 11:53:32 crc kubenswrapper[4706]: I1125 11:53:32.803094 4706 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns" Nov 25 11:53:32 crc kubenswrapper[4706]: I1125 11:53:32.803120 4706 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openshift-service-ca.crt" Nov 25 11:53:32 crc kubenswrapper[4706]: I1125 11:53:32.803214 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dnsmasq-dns-dockercfg-5qhcc" Nov 25 11:53:32 crc kubenswrapper[4706]: I1125 11:53:32.803513 4706 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"kube-root-ca.crt" Nov 25 11:53:32 crc kubenswrapper[4706]: I1125 11:53:32.835726 4706 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-rf649"] Nov 25 11:53:32 crc kubenswrapper[4706]: I1125 11:53:32.894668 4706 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-8nl6d"] Nov 25 11:53:32 crc kubenswrapper[4706]: I1125 11:53:32.896075 4706 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-8nl6d" Nov 25 11:53:32 crc kubenswrapper[4706]: I1125 11:53:32.898913 4706 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-svc" Nov 25 11:53:32 crc kubenswrapper[4706]: I1125 11:53:32.905841 4706 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-8nl6d"] Nov 25 11:53:32 crc kubenswrapper[4706]: I1125 11:53:32.906362 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/257a89c8-b58c-44ea-9e51-b40a35f5e08f-config\") pod \"dnsmasq-dns-675f4bcbfc-rf649\" (UID: \"257a89c8-b58c-44ea-9e51-b40a35f5e08f\") " pod="openstack/dnsmasq-dns-675f4bcbfc-rf649" Nov 25 11:53:32 crc kubenswrapper[4706]: I1125 11:53:32.906533 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7fws6\" (UniqueName: \"kubernetes.io/projected/257a89c8-b58c-44ea-9e51-b40a35f5e08f-kube-api-access-7fws6\") pod \"dnsmasq-dns-675f4bcbfc-rf649\" (UID: \"257a89c8-b58c-44ea-9e51-b40a35f5e08f\") " pod="openstack/dnsmasq-dns-675f4bcbfc-rf649" Nov 25 11:53:33 crc kubenswrapper[4706]: I1125 11:53:33.008394 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7fws6\" (UniqueName: \"kubernetes.io/projected/257a89c8-b58c-44ea-9e51-b40a35f5e08f-kube-api-access-7fws6\") pod \"dnsmasq-dns-675f4bcbfc-rf649\" (UID: \"257a89c8-b58c-44ea-9e51-b40a35f5e08f\") " pod="openstack/dnsmasq-dns-675f4bcbfc-rf649" Nov 25 11:53:33 crc kubenswrapper[4706]: I1125 11:53:33.009097 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/88a1c39b-1b4a-4227-bb11-a80bdb52b74b-config\") pod \"dnsmasq-dns-78dd6ddcc-8nl6d\" (UID: \"88a1c39b-1b4a-4227-bb11-a80bdb52b74b\") " pod="openstack/dnsmasq-dns-78dd6ddcc-8nl6d" Nov 
25 11:53:33 crc kubenswrapper[4706]: I1125 11:53:33.009189 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/257a89c8-b58c-44ea-9e51-b40a35f5e08f-config\") pod \"dnsmasq-dns-675f4bcbfc-rf649\" (UID: \"257a89c8-b58c-44ea-9e51-b40a35f5e08f\") " pod="openstack/dnsmasq-dns-675f4bcbfc-rf649" Nov 25 11:53:33 crc kubenswrapper[4706]: I1125 11:53:33.009288 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/88a1c39b-1b4a-4227-bb11-a80bdb52b74b-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-8nl6d\" (UID: \"88a1c39b-1b4a-4227-bb11-a80bdb52b74b\") " pod="openstack/dnsmasq-dns-78dd6ddcc-8nl6d" Nov 25 11:53:33 crc kubenswrapper[4706]: I1125 11:53:33.009430 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qklzt\" (UniqueName: \"kubernetes.io/projected/88a1c39b-1b4a-4227-bb11-a80bdb52b74b-kube-api-access-qklzt\") pod \"dnsmasq-dns-78dd6ddcc-8nl6d\" (UID: \"88a1c39b-1b4a-4227-bb11-a80bdb52b74b\") " pod="openstack/dnsmasq-dns-78dd6ddcc-8nl6d" Nov 25 11:53:33 crc kubenswrapper[4706]: I1125 11:53:33.010351 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/257a89c8-b58c-44ea-9e51-b40a35f5e08f-config\") pod \"dnsmasq-dns-675f4bcbfc-rf649\" (UID: \"257a89c8-b58c-44ea-9e51-b40a35f5e08f\") " pod="openstack/dnsmasq-dns-675f4bcbfc-rf649" Nov 25 11:53:33 crc kubenswrapper[4706]: I1125 11:53:33.032877 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7fws6\" (UniqueName: \"kubernetes.io/projected/257a89c8-b58c-44ea-9e51-b40a35f5e08f-kube-api-access-7fws6\") pod \"dnsmasq-dns-675f4bcbfc-rf649\" (UID: \"257a89c8-b58c-44ea-9e51-b40a35f5e08f\") " pod="openstack/dnsmasq-dns-675f4bcbfc-rf649" Nov 25 11:53:33 crc kubenswrapper[4706]: I1125 
11:53:33.110682 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/88a1c39b-1b4a-4227-bb11-a80bdb52b74b-config\") pod \"dnsmasq-dns-78dd6ddcc-8nl6d\" (UID: \"88a1c39b-1b4a-4227-bb11-a80bdb52b74b\") " pod="openstack/dnsmasq-dns-78dd6ddcc-8nl6d" Nov 25 11:53:33 crc kubenswrapper[4706]: I1125 11:53:33.110767 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/88a1c39b-1b4a-4227-bb11-a80bdb52b74b-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-8nl6d\" (UID: \"88a1c39b-1b4a-4227-bb11-a80bdb52b74b\") " pod="openstack/dnsmasq-dns-78dd6ddcc-8nl6d" Nov 25 11:53:33 crc kubenswrapper[4706]: I1125 11:53:33.110818 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qklzt\" (UniqueName: \"kubernetes.io/projected/88a1c39b-1b4a-4227-bb11-a80bdb52b74b-kube-api-access-qklzt\") pod \"dnsmasq-dns-78dd6ddcc-8nl6d\" (UID: \"88a1c39b-1b4a-4227-bb11-a80bdb52b74b\") " pod="openstack/dnsmasq-dns-78dd6ddcc-8nl6d" Nov 25 11:53:33 crc kubenswrapper[4706]: I1125 11:53:33.112361 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/88a1c39b-1b4a-4227-bb11-a80bdb52b74b-config\") pod \"dnsmasq-dns-78dd6ddcc-8nl6d\" (UID: \"88a1c39b-1b4a-4227-bb11-a80bdb52b74b\") " pod="openstack/dnsmasq-dns-78dd6ddcc-8nl6d" Nov 25 11:53:33 crc kubenswrapper[4706]: I1125 11:53:33.112762 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/88a1c39b-1b4a-4227-bb11-a80bdb52b74b-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-8nl6d\" (UID: \"88a1c39b-1b4a-4227-bb11-a80bdb52b74b\") " pod="openstack/dnsmasq-dns-78dd6ddcc-8nl6d" Nov 25 11:53:33 crc kubenswrapper[4706]: I1125 11:53:33.121108 4706 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-rf649" Nov 25 11:53:33 crc kubenswrapper[4706]: I1125 11:53:33.143994 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qklzt\" (UniqueName: \"kubernetes.io/projected/88a1c39b-1b4a-4227-bb11-a80bdb52b74b-kube-api-access-qklzt\") pod \"dnsmasq-dns-78dd6ddcc-8nl6d\" (UID: \"88a1c39b-1b4a-4227-bb11-a80bdb52b74b\") " pod="openstack/dnsmasq-dns-78dd6ddcc-8nl6d" Nov 25 11:53:33 crc kubenswrapper[4706]: I1125 11:53:33.214413 4706 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-8nl6d" Nov 25 11:53:33 crc kubenswrapper[4706]: I1125 11:53:33.559546 4706 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-rf649"] Nov 25 11:53:33 crc kubenswrapper[4706]: W1125 11:53:33.561958 4706 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod257a89c8_b58c_44ea_9e51_b40a35f5e08f.slice/crio-a2f81a92dd331de7b779062ab6c1ad2a28b448fa5e8c2e6537bbf8d552091ae4 WatchSource:0}: Error finding container a2f81a92dd331de7b779062ab6c1ad2a28b448fa5e8c2e6537bbf8d552091ae4: Status 404 returned error can't find the container with id a2f81a92dd331de7b779062ab6c1ad2a28b448fa5e8c2e6537bbf8d552091ae4 Nov 25 11:53:33 crc kubenswrapper[4706]: I1125 11:53:33.564649 4706 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 25 11:53:33 crc kubenswrapper[4706]: I1125 11:53:33.679231 4706 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-8nl6d"] Nov 25 11:53:34 crc kubenswrapper[4706]: I1125 11:53:34.201969 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78dd6ddcc-8nl6d" event={"ID":"88a1c39b-1b4a-4227-bb11-a80bdb52b74b","Type":"ContainerStarted","Data":"8e1e05197eca252c97bf1d56b4dcf76fb24810b987aba5e5a8fa2792b367956c"} Nov 25 
11:53:34 crc kubenswrapper[4706]: I1125 11:53:34.203596 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-675f4bcbfc-rf649" event={"ID":"257a89c8-b58c-44ea-9e51-b40a35f5e08f","Type":"ContainerStarted","Data":"a2f81a92dd331de7b779062ab6c1ad2a28b448fa5e8c2e6537bbf8d552091ae4"} Nov 25 11:53:35 crc kubenswrapper[4706]: I1125 11:53:35.990765 4706 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-rf649"] Nov 25 11:53:36 crc kubenswrapper[4706]: I1125 11:53:36.022509 4706 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-zfbpp"] Nov 25 11:53:36 crc kubenswrapper[4706]: I1125 11:53:36.024642 4706 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-zfbpp" Nov 25 11:53:36 crc kubenswrapper[4706]: I1125 11:53:36.043512 4706 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-zfbpp"] Nov 25 11:53:36 crc kubenswrapper[4706]: I1125 11:53:36.170057 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d1f830dd-11b4-4ef5-bec1-796d0c51c8bb-config\") pod \"dnsmasq-dns-666b6646f7-zfbpp\" (UID: \"d1f830dd-11b4-4ef5-bec1-796d0c51c8bb\") " pod="openstack/dnsmasq-dns-666b6646f7-zfbpp" Nov 25 11:53:36 crc kubenswrapper[4706]: I1125 11:53:36.170640 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-97qnn\" (UniqueName: \"kubernetes.io/projected/d1f830dd-11b4-4ef5-bec1-796d0c51c8bb-kube-api-access-97qnn\") pod \"dnsmasq-dns-666b6646f7-zfbpp\" (UID: \"d1f830dd-11b4-4ef5-bec1-796d0c51c8bb\") " pod="openstack/dnsmasq-dns-666b6646f7-zfbpp" Nov 25 11:53:36 crc kubenswrapper[4706]: I1125 11:53:36.170679 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: 
\"kubernetes.io/configmap/d1f830dd-11b4-4ef5-bec1-796d0c51c8bb-dns-svc\") pod \"dnsmasq-dns-666b6646f7-zfbpp\" (UID: \"d1f830dd-11b4-4ef5-bec1-796d0c51c8bb\") " pod="openstack/dnsmasq-dns-666b6646f7-zfbpp" Nov 25 11:53:36 crc kubenswrapper[4706]: I1125 11:53:36.272518 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-97qnn\" (UniqueName: \"kubernetes.io/projected/d1f830dd-11b4-4ef5-bec1-796d0c51c8bb-kube-api-access-97qnn\") pod \"dnsmasq-dns-666b6646f7-zfbpp\" (UID: \"d1f830dd-11b4-4ef5-bec1-796d0c51c8bb\") " pod="openstack/dnsmasq-dns-666b6646f7-zfbpp" Nov 25 11:53:36 crc kubenswrapper[4706]: I1125 11:53:36.272581 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d1f830dd-11b4-4ef5-bec1-796d0c51c8bb-dns-svc\") pod \"dnsmasq-dns-666b6646f7-zfbpp\" (UID: \"d1f830dd-11b4-4ef5-bec1-796d0c51c8bb\") " pod="openstack/dnsmasq-dns-666b6646f7-zfbpp" Nov 25 11:53:36 crc kubenswrapper[4706]: I1125 11:53:36.272627 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d1f830dd-11b4-4ef5-bec1-796d0c51c8bb-config\") pod \"dnsmasq-dns-666b6646f7-zfbpp\" (UID: \"d1f830dd-11b4-4ef5-bec1-796d0c51c8bb\") " pod="openstack/dnsmasq-dns-666b6646f7-zfbpp" Nov 25 11:53:36 crc kubenswrapper[4706]: I1125 11:53:36.273618 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d1f830dd-11b4-4ef5-bec1-796d0c51c8bb-config\") pod \"dnsmasq-dns-666b6646f7-zfbpp\" (UID: \"d1f830dd-11b4-4ef5-bec1-796d0c51c8bb\") " pod="openstack/dnsmasq-dns-666b6646f7-zfbpp" Nov 25 11:53:36 crc kubenswrapper[4706]: I1125 11:53:36.273873 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d1f830dd-11b4-4ef5-bec1-796d0c51c8bb-dns-svc\") pod \"dnsmasq-dns-666b6646f7-zfbpp\" (UID: 
\"d1f830dd-11b4-4ef5-bec1-796d0c51c8bb\") " pod="openstack/dnsmasq-dns-666b6646f7-zfbpp" Nov 25 11:53:36 crc kubenswrapper[4706]: I1125 11:53:36.307224 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-97qnn\" (UniqueName: \"kubernetes.io/projected/d1f830dd-11b4-4ef5-bec1-796d0c51c8bb-kube-api-access-97qnn\") pod \"dnsmasq-dns-666b6646f7-zfbpp\" (UID: \"d1f830dd-11b4-4ef5-bec1-796d0c51c8bb\") " pod="openstack/dnsmasq-dns-666b6646f7-zfbpp" Nov 25 11:53:36 crc kubenswrapper[4706]: I1125 11:53:36.351488 4706 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-zfbpp" Nov 25 11:53:36 crc kubenswrapper[4706]: I1125 11:53:36.364339 4706 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-8nl6d"] Nov 25 11:53:36 crc kubenswrapper[4706]: I1125 11:53:36.390925 4706 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-h7s7b"] Nov 25 11:53:36 crc kubenswrapper[4706]: I1125 11:53:36.393461 4706 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-h7s7b" Nov 25 11:53:36 crc kubenswrapper[4706]: I1125 11:53:36.408565 4706 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-h7s7b"] Nov 25 11:53:36 crc kubenswrapper[4706]: I1125 11:53:36.475747 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9a9e827f-acb8-4b85-90f6-c5cd8634f430-config\") pod \"dnsmasq-dns-57d769cc4f-h7s7b\" (UID: \"9a9e827f-acb8-4b85-90f6-c5cd8634f430\") " pod="openstack/dnsmasq-dns-57d769cc4f-h7s7b" Nov 25 11:53:36 crc kubenswrapper[4706]: I1125 11:53:36.475799 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-495qr\" (UniqueName: \"kubernetes.io/projected/9a9e827f-acb8-4b85-90f6-c5cd8634f430-kube-api-access-495qr\") pod \"dnsmasq-dns-57d769cc4f-h7s7b\" (UID: \"9a9e827f-acb8-4b85-90f6-c5cd8634f430\") " pod="openstack/dnsmasq-dns-57d769cc4f-h7s7b" Nov 25 11:53:36 crc kubenswrapper[4706]: I1125 11:53:36.475827 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9a9e827f-acb8-4b85-90f6-c5cd8634f430-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-h7s7b\" (UID: \"9a9e827f-acb8-4b85-90f6-c5cd8634f430\") " pod="openstack/dnsmasq-dns-57d769cc4f-h7s7b" Nov 25 11:53:36 crc kubenswrapper[4706]: I1125 11:53:36.576924 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9a9e827f-acb8-4b85-90f6-c5cd8634f430-config\") pod \"dnsmasq-dns-57d769cc4f-h7s7b\" (UID: \"9a9e827f-acb8-4b85-90f6-c5cd8634f430\") " pod="openstack/dnsmasq-dns-57d769cc4f-h7s7b" Nov 25 11:53:36 crc kubenswrapper[4706]: I1125 11:53:36.577031 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-495qr\" (UniqueName: 
\"kubernetes.io/projected/9a9e827f-acb8-4b85-90f6-c5cd8634f430-kube-api-access-495qr\") pod \"dnsmasq-dns-57d769cc4f-h7s7b\" (UID: \"9a9e827f-acb8-4b85-90f6-c5cd8634f430\") " pod="openstack/dnsmasq-dns-57d769cc4f-h7s7b" Nov 25 11:53:36 crc kubenswrapper[4706]: I1125 11:53:36.577079 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9a9e827f-acb8-4b85-90f6-c5cd8634f430-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-h7s7b\" (UID: \"9a9e827f-acb8-4b85-90f6-c5cd8634f430\") " pod="openstack/dnsmasq-dns-57d769cc4f-h7s7b" Nov 25 11:53:36 crc kubenswrapper[4706]: I1125 11:53:36.578659 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9a9e827f-acb8-4b85-90f6-c5cd8634f430-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-h7s7b\" (UID: \"9a9e827f-acb8-4b85-90f6-c5cd8634f430\") " pod="openstack/dnsmasq-dns-57d769cc4f-h7s7b" Nov 25 11:53:36 crc kubenswrapper[4706]: I1125 11:53:36.579835 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9a9e827f-acb8-4b85-90f6-c5cd8634f430-config\") pod \"dnsmasq-dns-57d769cc4f-h7s7b\" (UID: \"9a9e827f-acb8-4b85-90f6-c5cd8634f430\") " pod="openstack/dnsmasq-dns-57d769cc4f-h7s7b" Nov 25 11:53:36 crc kubenswrapper[4706]: I1125 11:53:36.623794 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-495qr\" (UniqueName: \"kubernetes.io/projected/9a9e827f-acb8-4b85-90f6-c5cd8634f430-kube-api-access-495qr\") pod \"dnsmasq-dns-57d769cc4f-h7s7b\" (UID: \"9a9e827f-acb8-4b85-90f6-c5cd8634f430\") " pod="openstack/dnsmasq-dns-57d769cc4f-h7s7b" Nov 25 11:53:36 crc kubenswrapper[4706]: I1125 11:53:36.752670 4706 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-h7s7b" Nov 25 11:53:36 crc kubenswrapper[4706]: I1125 11:53:36.973050 4706 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-zfbpp"] Nov 25 11:53:37 crc kubenswrapper[4706]: W1125 11:53:37.001366 4706 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd1f830dd_11b4_4ef5_bec1_796d0c51c8bb.slice/crio-997b3e4a9423d2495f6042b0acefc68df9a33827c62d123d4cf399a42e6dc366 WatchSource:0}: Error finding container 997b3e4a9423d2495f6042b0acefc68df9a33827c62d123d4cf399a42e6dc366: Status 404 returned error can't find the container with id 997b3e4a9423d2495f6042b0acefc68df9a33827c62d123d4cf399a42e6dc366 Nov 25 11:53:37 crc kubenswrapper[4706]: I1125 11:53:37.171535 4706 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Nov 25 11:53:37 crc kubenswrapper[4706]: I1125 11:53:37.175201 4706 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Nov 25 11:53:37 crc kubenswrapper[4706]: I1125 11:53:37.182822 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Nov 25 11:53:37 crc kubenswrapper[4706]: I1125 11:53:37.183040 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-q944t" Nov 25 11:53:37 crc kubenswrapper[4706]: I1125 11:53:37.183207 4706 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Nov 25 11:53:37 crc kubenswrapper[4706]: I1125 11:53:37.184346 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Nov 25 11:53:37 crc kubenswrapper[4706]: I1125 11:53:37.185134 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc" Nov 25 11:53:37 crc kubenswrapper[4706]: I1125 11:53:37.185536 4706 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Nov 25 11:53:37 crc kubenswrapper[4706]: I1125 11:53:37.185683 4706 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data" Nov 25 11:53:37 crc kubenswrapper[4706]: I1125 11:53:37.191089 4706 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Nov 25 11:53:37 crc kubenswrapper[4706]: I1125 11:53:37.243830 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-zfbpp" event={"ID":"d1f830dd-11b4-4ef5-bec1-796d0c51c8bb","Type":"ContainerStarted","Data":"997b3e4a9423d2495f6042b0acefc68df9a33827c62d123d4cf399a42e6dc366"} Nov 25 11:53:37 crc kubenswrapper[4706]: I1125 11:53:37.253485 4706 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-h7s7b"] Nov 25 11:53:37 crc kubenswrapper[4706]: I1125 11:53:37.288786 4706 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/ed6df424-6b86-44a1-8157-ca1f33167065-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"ed6df424-6b86-44a1-8157-ca1f33167065\") " pod="openstack/rabbitmq-server-0" Nov 25 11:53:37 crc kubenswrapper[4706]: I1125 11:53:37.288870 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/ed6df424-6b86-44a1-8157-ca1f33167065-server-conf\") pod \"rabbitmq-server-0\" (UID: \"ed6df424-6b86-44a1-8157-ca1f33167065\") " pod="openstack/rabbitmq-server-0" Nov 25 11:53:37 crc kubenswrapper[4706]: I1125 11:53:37.288915 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/ed6df424-6b86-44a1-8157-ca1f33167065-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"ed6df424-6b86-44a1-8157-ca1f33167065\") " pod="openstack/rabbitmq-server-0" Nov 25 11:53:37 crc kubenswrapper[4706]: I1125 11:53:37.288984 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/ed6df424-6b86-44a1-8157-ca1f33167065-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"ed6df424-6b86-44a1-8157-ca1f33167065\") " pod="openstack/rabbitmq-server-0" Nov 25 11:53:37 crc kubenswrapper[4706]: I1125 11:53:37.289003 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/ed6df424-6b86-44a1-8157-ca1f33167065-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"ed6df424-6b86-44a1-8157-ca1f33167065\") " pod="openstack/rabbitmq-server-0" Nov 25 11:53:37 crc kubenswrapper[4706]: I1125 11:53:37.289057 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/ed6df424-6b86-44a1-8157-ca1f33167065-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"ed6df424-6b86-44a1-8157-ca1f33167065\") " pod="openstack/rabbitmq-server-0" Nov 25 11:53:37 crc kubenswrapper[4706]: I1125 11:53:37.289080 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/ed6df424-6b86-44a1-8157-ca1f33167065-pod-info\") pod \"rabbitmq-server-0\" (UID: \"ed6df424-6b86-44a1-8157-ca1f33167065\") " pod="openstack/rabbitmq-server-0" Nov 25 11:53:37 crc kubenswrapper[4706]: I1125 11:53:37.289133 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/ed6df424-6b86-44a1-8157-ca1f33167065-config-data\") pod \"rabbitmq-server-0\" (UID: \"ed6df424-6b86-44a1-8157-ca1f33167065\") " pod="openstack/rabbitmq-server-0" Nov 25 11:53:37 crc kubenswrapper[4706]: I1125 11:53:37.289233 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"rabbitmq-server-0\" (UID: \"ed6df424-6b86-44a1-8157-ca1f33167065\") " pod="openstack/rabbitmq-server-0" Nov 25 11:53:37 crc kubenswrapper[4706]: I1125 11:53:37.289338 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pcjq7\" (UniqueName: \"kubernetes.io/projected/ed6df424-6b86-44a1-8157-ca1f33167065-kube-api-access-pcjq7\") pod \"rabbitmq-server-0\" (UID: \"ed6df424-6b86-44a1-8157-ca1f33167065\") " pod="openstack/rabbitmq-server-0" Nov 25 11:53:37 crc kubenswrapper[4706]: I1125 11:53:37.289364 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: 
\"kubernetes.io/projected/ed6df424-6b86-44a1-8157-ca1f33167065-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"ed6df424-6b86-44a1-8157-ca1f33167065\") " pod="openstack/rabbitmq-server-0" Nov 25 11:53:37 crc kubenswrapper[4706]: I1125 11:53:37.390849 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pcjq7\" (UniqueName: \"kubernetes.io/projected/ed6df424-6b86-44a1-8157-ca1f33167065-kube-api-access-pcjq7\") pod \"rabbitmq-server-0\" (UID: \"ed6df424-6b86-44a1-8157-ca1f33167065\") " pod="openstack/rabbitmq-server-0" Nov 25 11:53:37 crc kubenswrapper[4706]: I1125 11:53:37.390916 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/ed6df424-6b86-44a1-8157-ca1f33167065-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"ed6df424-6b86-44a1-8157-ca1f33167065\") " pod="openstack/rabbitmq-server-0" Nov 25 11:53:37 crc kubenswrapper[4706]: I1125 11:53:37.390949 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/ed6df424-6b86-44a1-8157-ca1f33167065-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"ed6df424-6b86-44a1-8157-ca1f33167065\") " pod="openstack/rabbitmq-server-0" Nov 25 11:53:37 crc kubenswrapper[4706]: I1125 11:53:37.391000 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/ed6df424-6b86-44a1-8157-ca1f33167065-server-conf\") pod \"rabbitmq-server-0\" (UID: \"ed6df424-6b86-44a1-8157-ca1f33167065\") " pod="openstack/rabbitmq-server-0" Nov 25 11:53:37 crc kubenswrapper[4706]: I1125 11:53:37.391067 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/ed6df424-6b86-44a1-8157-ca1f33167065-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: 
\"ed6df424-6b86-44a1-8157-ca1f33167065\") " pod="openstack/rabbitmq-server-0" Nov 25 11:53:37 crc kubenswrapper[4706]: I1125 11:53:37.391104 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/ed6df424-6b86-44a1-8157-ca1f33167065-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"ed6df424-6b86-44a1-8157-ca1f33167065\") " pod="openstack/rabbitmq-server-0" Nov 25 11:53:37 crc kubenswrapper[4706]: I1125 11:53:37.391134 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/ed6df424-6b86-44a1-8157-ca1f33167065-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"ed6df424-6b86-44a1-8157-ca1f33167065\") " pod="openstack/rabbitmq-server-0" Nov 25 11:53:37 crc kubenswrapper[4706]: I1125 11:53:37.391175 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/ed6df424-6b86-44a1-8157-ca1f33167065-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"ed6df424-6b86-44a1-8157-ca1f33167065\") " pod="openstack/rabbitmq-server-0" Nov 25 11:53:37 crc kubenswrapper[4706]: I1125 11:53:37.391207 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/ed6df424-6b86-44a1-8157-ca1f33167065-pod-info\") pod \"rabbitmq-server-0\" (UID: \"ed6df424-6b86-44a1-8157-ca1f33167065\") " pod="openstack/rabbitmq-server-0" Nov 25 11:53:37 crc kubenswrapper[4706]: I1125 11:53:37.391257 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/ed6df424-6b86-44a1-8157-ca1f33167065-config-data\") pod \"rabbitmq-server-0\" (UID: \"ed6df424-6b86-44a1-8157-ca1f33167065\") " pod="openstack/rabbitmq-server-0" Nov 25 11:53:37 crc kubenswrapper[4706]: I1125 11:53:37.391326 4706 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"rabbitmq-server-0\" (UID: \"ed6df424-6b86-44a1-8157-ca1f33167065\") " pod="openstack/rabbitmq-server-0" Nov 25 11:53:37 crc kubenswrapper[4706]: I1125 11:53:37.391864 4706 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"rabbitmq-server-0\" (UID: \"ed6df424-6b86-44a1-8157-ca1f33167065\") device mount path \"/mnt/openstack/pv09\"" pod="openstack/rabbitmq-server-0" Nov 25 11:53:37 crc kubenswrapper[4706]: I1125 11:53:37.391928 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/ed6df424-6b86-44a1-8157-ca1f33167065-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"ed6df424-6b86-44a1-8157-ca1f33167065\") " pod="openstack/rabbitmq-server-0" Nov 25 11:53:37 crc kubenswrapper[4706]: I1125 11:53:37.393745 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/ed6df424-6b86-44a1-8157-ca1f33167065-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"ed6df424-6b86-44a1-8157-ca1f33167065\") " pod="openstack/rabbitmq-server-0" Nov 25 11:53:37 crc kubenswrapper[4706]: I1125 11:53:37.394235 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/ed6df424-6b86-44a1-8157-ca1f33167065-config-data\") pod \"rabbitmq-server-0\" (UID: \"ed6df424-6b86-44a1-8157-ca1f33167065\") " pod="openstack/rabbitmq-server-0" Nov 25 11:53:37 crc kubenswrapper[4706]: I1125 11:53:37.395074 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/ed6df424-6b86-44a1-8157-ca1f33167065-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: 
\"ed6df424-6b86-44a1-8157-ca1f33167065\") " pod="openstack/rabbitmq-server-0" Nov 25 11:53:37 crc kubenswrapper[4706]: I1125 11:53:37.395099 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/ed6df424-6b86-44a1-8157-ca1f33167065-server-conf\") pod \"rabbitmq-server-0\" (UID: \"ed6df424-6b86-44a1-8157-ca1f33167065\") " pod="openstack/rabbitmq-server-0" Nov 25 11:53:37 crc kubenswrapper[4706]: I1125 11:53:37.397623 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/ed6df424-6b86-44a1-8157-ca1f33167065-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"ed6df424-6b86-44a1-8157-ca1f33167065\") " pod="openstack/rabbitmq-server-0" Nov 25 11:53:37 crc kubenswrapper[4706]: I1125 11:53:37.404899 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/ed6df424-6b86-44a1-8157-ca1f33167065-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"ed6df424-6b86-44a1-8157-ca1f33167065\") " pod="openstack/rabbitmq-server-0" Nov 25 11:53:37 crc kubenswrapper[4706]: I1125 11:53:37.405473 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/ed6df424-6b86-44a1-8157-ca1f33167065-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"ed6df424-6b86-44a1-8157-ca1f33167065\") " pod="openstack/rabbitmq-server-0" Nov 25 11:53:37 crc kubenswrapper[4706]: I1125 11:53:37.412435 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/ed6df424-6b86-44a1-8157-ca1f33167065-pod-info\") pod \"rabbitmq-server-0\" (UID: \"ed6df424-6b86-44a1-8157-ca1f33167065\") " pod="openstack/rabbitmq-server-0" Nov 25 11:53:37 crc kubenswrapper[4706]: I1125 11:53:37.414368 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"rabbitmq-server-0\" (UID: \"ed6df424-6b86-44a1-8157-ca1f33167065\") " pod="openstack/rabbitmq-server-0" Nov 25 11:53:37 crc kubenswrapper[4706]: I1125 11:53:37.427311 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pcjq7\" (UniqueName: \"kubernetes.io/projected/ed6df424-6b86-44a1-8157-ca1f33167065-kube-api-access-pcjq7\") pod \"rabbitmq-server-0\" (UID: \"ed6df424-6b86-44a1-8157-ca1f33167065\") " pod="openstack/rabbitmq-server-0" Nov 25 11:53:37 crc kubenswrapper[4706]: I1125 11:53:37.509371 4706 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Nov 25 11:53:37 crc kubenswrapper[4706]: I1125 11:53:37.523747 4706 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Nov 25 11:53:37 crc kubenswrapper[4706]: I1125 11:53:37.529099 4706 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Nov 25 11:53:37 crc kubenswrapper[4706]: I1125 11:53:37.535914 4706 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Nov 25 11:53:37 crc kubenswrapper[4706]: I1125 11:53:37.535939 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Nov 25 11:53:37 crc kubenswrapper[4706]: I1125 11:53:37.536408 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Nov 25 11:53:37 crc kubenswrapper[4706]: I1125 11:53:37.536464 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-b2nhx" Nov 25 11:53:37 crc kubenswrapper[4706]: I1125 11:53:37.536729 4706 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Nov 25 11:53:37 crc kubenswrapper[4706]: I1125 11:53:37.536740 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Nov 25 11:53:37 crc kubenswrapper[4706]: I1125 11:53:37.536828 4706 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Nov 25 11:53:37 crc kubenswrapper[4706]: I1125 11:53:37.543975 4706 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Nov 25 11:53:37 crc kubenswrapper[4706]: I1125 11:53:37.594274 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"557c84e6-ab5c-40c1-a3e1-68b513874f9b\") " pod="openstack/rabbitmq-cell1-server-0" Nov 25 11:53:37 crc kubenswrapper[4706]: I1125 11:53:37.594350 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" 
(UniqueName: \"kubernetes.io/configmap/557c84e6-ab5c-40c1-a3e1-68b513874f9b-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"557c84e6-ab5c-40c1-a3e1-68b513874f9b\") " pod="openstack/rabbitmq-cell1-server-0" Nov 25 11:53:37 crc kubenswrapper[4706]: I1125 11:53:37.594381 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zhwj9\" (UniqueName: \"kubernetes.io/projected/557c84e6-ab5c-40c1-a3e1-68b513874f9b-kube-api-access-zhwj9\") pod \"rabbitmq-cell1-server-0\" (UID: \"557c84e6-ab5c-40c1-a3e1-68b513874f9b\") " pod="openstack/rabbitmq-cell1-server-0" Nov 25 11:53:37 crc kubenswrapper[4706]: I1125 11:53:37.594409 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/557c84e6-ab5c-40c1-a3e1-68b513874f9b-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"557c84e6-ab5c-40c1-a3e1-68b513874f9b\") " pod="openstack/rabbitmq-cell1-server-0" Nov 25 11:53:37 crc kubenswrapper[4706]: I1125 11:53:37.594445 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/557c84e6-ab5c-40c1-a3e1-68b513874f9b-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"557c84e6-ab5c-40c1-a3e1-68b513874f9b\") " pod="openstack/rabbitmq-cell1-server-0" Nov 25 11:53:37 crc kubenswrapper[4706]: I1125 11:53:37.594466 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/557c84e6-ab5c-40c1-a3e1-68b513874f9b-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"557c84e6-ab5c-40c1-a3e1-68b513874f9b\") " pod="openstack/rabbitmq-cell1-server-0" Nov 25 11:53:37 crc kubenswrapper[4706]: I1125 11:53:37.594494 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"pod-info\" (UniqueName: \"kubernetes.io/downward-api/557c84e6-ab5c-40c1-a3e1-68b513874f9b-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"557c84e6-ab5c-40c1-a3e1-68b513874f9b\") " pod="openstack/rabbitmq-cell1-server-0" Nov 25 11:53:37 crc kubenswrapper[4706]: I1125 11:53:37.594516 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/557c84e6-ab5c-40c1-a3e1-68b513874f9b-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"557c84e6-ab5c-40c1-a3e1-68b513874f9b\") " pod="openstack/rabbitmq-cell1-server-0" Nov 25 11:53:37 crc kubenswrapper[4706]: I1125 11:53:37.594547 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/557c84e6-ab5c-40c1-a3e1-68b513874f9b-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"557c84e6-ab5c-40c1-a3e1-68b513874f9b\") " pod="openstack/rabbitmq-cell1-server-0" Nov 25 11:53:37 crc kubenswrapper[4706]: I1125 11:53:37.594575 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/557c84e6-ab5c-40c1-a3e1-68b513874f9b-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"557c84e6-ab5c-40c1-a3e1-68b513874f9b\") " pod="openstack/rabbitmq-cell1-server-0" Nov 25 11:53:37 crc kubenswrapper[4706]: I1125 11:53:37.594596 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/557c84e6-ab5c-40c1-a3e1-68b513874f9b-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"557c84e6-ab5c-40c1-a3e1-68b513874f9b\") " pod="openstack/rabbitmq-cell1-server-0" Nov 25 11:53:37 crc kubenswrapper[4706]: I1125 11:53:37.696328 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/configmap/557c84e6-ab5c-40c1-a3e1-68b513874f9b-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"557c84e6-ab5c-40c1-a3e1-68b513874f9b\") " pod="openstack/rabbitmq-cell1-server-0" Nov 25 11:53:37 crc kubenswrapper[4706]: I1125 11:53:37.696409 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/557c84e6-ab5c-40c1-a3e1-68b513874f9b-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"557c84e6-ab5c-40c1-a3e1-68b513874f9b\") " pod="openstack/rabbitmq-cell1-server-0" Nov 25 11:53:37 crc kubenswrapper[4706]: I1125 11:53:37.696438 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/557c84e6-ab5c-40c1-a3e1-68b513874f9b-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"557c84e6-ab5c-40c1-a3e1-68b513874f9b\") " pod="openstack/rabbitmq-cell1-server-0" Nov 25 11:53:37 crc kubenswrapper[4706]: I1125 11:53:37.696462 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/557c84e6-ab5c-40c1-a3e1-68b513874f9b-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"557c84e6-ab5c-40c1-a3e1-68b513874f9b\") " pod="openstack/rabbitmq-cell1-server-0" Nov 25 11:53:37 crc kubenswrapper[4706]: I1125 11:53:37.696486 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"557c84e6-ab5c-40c1-a3e1-68b513874f9b\") " pod="openstack/rabbitmq-cell1-server-0" Nov 25 11:53:37 crc kubenswrapper[4706]: I1125 11:53:37.696506 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/557c84e6-ab5c-40c1-a3e1-68b513874f9b-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: 
\"557c84e6-ab5c-40c1-a3e1-68b513874f9b\") " pod="openstack/rabbitmq-cell1-server-0" Nov 25 11:53:37 crc kubenswrapper[4706]: I1125 11:53:37.696537 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zhwj9\" (UniqueName: \"kubernetes.io/projected/557c84e6-ab5c-40c1-a3e1-68b513874f9b-kube-api-access-zhwj9\") pod \"rabbitmq-cell1-server-0\" (UID: \"557c84e6-ab5c-40c1-a3e1-68b513874f9b\") " pod="openstack/rabbitmq-cell1-server-0" Nov 25 11:53:37 crc kubenswrapper[4706]: I1125 11:53:37.696561 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/557c84e6-ab5c-40c1-a3e1-68b513874f9b-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"557c84e6-ab5c-40c1-a3e1-68b513874f9b\") " pod="openstack/rabbitmq-cell1-server-0" Nov 25 11:53:37 crc kubenswrapper[4706]: I1125 11:53:37.696594 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/557c84e6-ab5c-40c1-a3e1-68b513874f9b-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"557c84e6-ab5c-40c1-a3e1-68b513874f9b\") " pod="openstack/rabbitmq-cell1-server-0" Nov 25 11:53:37 crc kubenswrapper[4706]: I1125 11:53:37.696624 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/557c84e6-ab5c-40c1-a3e1-68b513874f9b-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"557c84e6-ab5c-40c1-a3e1-68b513874f9b\") " pod="openstack/rabbitmq-cell1-server-0" Nov 25 11:53:37 crc kubenswrapper[4706]: I1125 11:53:37.696655 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/557c84e6-ab5c-40c1-a3e1-68b513874f9b-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"557c84e6-ab5c-40c1-a3e1-68b513874f9b\") " pod="openstack/rabbitmq-cell1-server-0" Nov 25 
11:53:37 crc kubenswrapper[4706]: I1125 11:53:37.697654 4706 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"557c84e6-ab5c-40c1-a3e1-68b513874f9b\") device mount path \"/mnt/openstack/pv11\"" pod="openstack/rabbitmq-cell1-server-0" Nov 25 11:53:37 crc kubenswrapper[4706]: I1125 11:53:37.698514 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/557c84e6-ab5c-40c1-a3e1-68b513874f9b-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"557c84e6-ab5c-40c1-a3e1-68b513874f9b\") " pod="openstack/rabbitmq-cell1-server-0" Nov 25 11:53:37 crc kubenswrapper[4706]: I1125 11:53:37.698991 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/557c84e6-ab5c-40c1-a3e1-68b513874f9b-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"557c84e6-ab5c-40c1-a3e1-68b513874f9b\") " pod="openstack/rabbitmq-cell1-server-0" Nov 25 11:53:37 crc kubenswrapper[4706]: I1125 11:53:37.699661 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/557c84e6-ab5c-40c1-a3e1-68b513874f9b-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"557c84e6-ab5c-40c1-a3e1-68b513874f9b\") " pod="openstack/rabbitmq-cell1-server-0" Nov 25 11:53:37 crc kubenswrapper[4706]: I1125 11:53:37.699801 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/557c84e6-ab5c-40c1-a3e1-68b513874f9b-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"557c84e6-ab5c-40c1-a3e1-68b513874f9b\") " pod="openstack/rabbitmq-cell1-server-0" Nov 25 11:53:37 crc kubenswrapper[4706]: I1125 11:53:37.700725 4706 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/557c84e6-ab5c-40c1-a3e1-68b513874f9b-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"557c84e6-ab5c-40c1-a3e1-68b513874f9b\") " pod="openstack/rabbitmq-cell1-server-0" Nov 25 11:53:37 crc kubenswrapper[4706]: I1125 11:53:37.703542 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/557c84e6-ab5c-40c1-a3e1-68b513874f9b-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"557c84e6-ab5c-40c1-a3e1-68b513874f9b\") " pod="openstack/rabbitmq-cell1-server-0" Nov 25 11:53:37 crc kubenswrapper[4706]: I1125 11:53:37.705257 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/557c84e6-ab5c-40c1-a3e1-68b513874f9b-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"557c84e6-ab5c-40c1-a3e1-68b513874f9b\") " pod="openstack/rabbitmq-cell1-server-0" Nov 25 11:53:37 crc kubenswrapper[4706]: I1125 11:53:37.705321 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/557c84e6-ab5c-40c1-a3e1-68b513874f9b-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"557c84e6-ab5c-40c1-a3e1-68b513874f9b\") " pod="openstack/rabbitmq-cell1-server-0" Nov 25 11:53:37 crc kubenswrapper[4706]: I1125 11:53:37.709505 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/557c84e6-ab5c-40c1-a3e1-68b513874f9b-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"557c84e6-ab5c-40c1-a3e1-68b513874f9b\") " pod="openstack/rabbitmq-cell1-server-0" Nov 25 11:53:37 crc kubenswrapper[4706]: I1125 11:53:37.720942 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zhwj9\" (UniqueName: \"kubernetes.io/projected/557c84e6-ab5c-40c1-a3e1-68b513874f9b-kube-api-access-zhwj9\") pod 
\"rabbitmq-cell1-server-0\" (UID: \"557c84e6-ab5c-40c1-a3e1-68b513874f9b\") " pod="openstack/rabbitmq-cell1-server-0" Nov 25 11:53:37 crc kubenswrapper[4706]: I1125 11:53:37.764442 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"557c84e6-ab5c-40c1-a3e1-68b513874f9b\") " pod="openstack/rabbitmq-cell1-server-0" Nov 25 11:53:37 crc kubenswrapper[4706]: I1125 11:53:37.861366 4706 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Nov 25 11:53:38 crc kubenswrapper[4706]: I1125 11:53:38.847228 4706 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-galera-0"] Nov 25 11:53:38 crc kubenswrapper[4706]: I1125 11:53:38.848964 4706 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0" Nov 25 11:53:38 crc kubenswrapper[4706]: I1125 11:53:38.850980 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-svc" Nov 25 11:53:38 crc kubenswrapper[4706]: I1125 11:53:38.851880 4706 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-scripts" Nov 25 11:53:38 crc kubenswrapper[4706]: I1125 11:53:38.851880 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-dockercfg-5qcxg" Nov 25 11:53:38 crc kubenswrapper[4706]: I1125 11:53:38.852071 4706 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config-data" Nov 25 11:53:38 crc kubenswrapper[4706]: I1125 11:53:38.863942 4706 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Nov 25 11:53:38 crc kubenswrapper[4706]: I1125 11:53:38.866195 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"combined-ca-bundle" Nov 25 11:53:38 crc 
kubenswrapper[4706]: I1125 11:53:38.946444 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/64ca6766-8491-40bc-a14e-eb866edf3fe8-config-data-default\") pod \"openstack-galera-0\" (UID: \"64ca6766-8491-40bc-a14e-eb866edf3fe8\") " pod="openstack/openstack-galera-0" Nov 25 11:53:38 crc kubenswrapper[4706]: I1125 11:53:38.946537 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/64ca6766-8491-40bc-a14e-eb866edf3fe8-config-data-generated\") pod \"openstack-galera-0\" (UID: \"64ca6766-8491-40bc-a14e-eb866edf3fe8\") " pod="openstack/openstack-galera-0" Nov 25 11:53:38 crc kubenswrapper[4706]: I1125 11:53:38.946704 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"openstack-galera-0\" (UID: \"64ca6766-8491-40bc-a14e-eb866edf3fe8\") " pod="openstack/openstack-galera-0" Nov 25 11:53:38 crc kubenswrapper[4706]: I1125 11:53:38.946796 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/64ca6766-8491-40bc-a14e-eb866edf3fe8-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"64ca6766-8491-40bc-a14e-eb866edf3fe8\") " pod="openstack/openstack-galera-0" Nov 25 11:53:38 crc kubenswrapper[4706]: I1125 11:53:38.946841 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/64ca6766-8491-40bc-a14e-eb866edf3fe8-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"64ca6766-8491-40bc-a14e-eb866edf3fe8\") " pod="openstack/openstack-galera-0" Nov 25 11:53:38 crc kubenswrapper[4706]: I1125 11:53:38.946874 4706 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/64ca6766-8491-40bc-a14e-eb866edf3fe8-kolla-config\") pod \"openstack-galera-0\" (UID: \"64ca6766-8491-40bc-a14e-eb866edf3fe8\") " pod="openstack/openstack-galera-0" Nov 25 11:53:38 crc kubenswrapper[4706]: I1125 11:53:38.946943 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/64ca6766-8491-40bc-a14e-eb866edf3fe8-operator-scripts\") pod \"openstack-galera-0\" (UID: \"64ca6766-8491-40bc-a14e-eb866edf3fe8\") " pod="openstack/openstack-galera-0" Nov 25 11:53:38 crc kubenswrapper[4706]: I1125 11:53:38.947186 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-294gx\" (UniqueName: \"kubernetes.io/projected/64ca6766-8491-40bc-a14e-eb866edf3fe8-kube-api-access-294gx\") pod \"openstack-galera-0\" (UID: \"64ca6766-8491-40bc-a14e-eb866edf3fe8\") " pod="openstack/openstack-galera-0" Nov 25 11:53:39 crc kubenswrapper[4706]: I1125 11:53:39.048644 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/64ca6766-8491-40bc-a14e-eb866edf3fe8-config-data-default\") pod \"openstack-galera-0\" (UID: \"64ca6766-8491-40bc-a14e-eb866edf3fe8\") " pod="openstack/openstack-galera-0" Nov 25 11:53:39 crc kubenswrapper[4706]: I1125 11:53:39.048742 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/64ca6766-8491-40bc-a14e-eb866edf3fe8-config-data-generated\") pod \"openstack-galera-0\" (UID: \"64ca6766-8491-40bc-a14e-eb866edf3fe8\") " pod="openstack/openstack-galera-0" Nov 25 11:53:39 crc kubenswrapper[4706]: I1125 11:53:39.048791 4706 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"openstack-galera-0\" (UID: \"64ca6766-8491-40bc-a14e-eb866edf3fe8\") " pod="openstack/openstack-galera-0" Nov 25 11:53:39 crc kubenswrapper[4706]: I1125 11:53:39.048821 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/64ca6766-8491-40bc-a14e-eb866edf3fe8-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"64ca6766-8491-40bc-a14e-eb866edf3fe8\") " pod="openstack/openstack-galera-0" Nov 25 11:53:39 crc kubenswrapper[4706]: I1125 11:53:39.048861 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/64ca6766-8491-40bc-a14e-eb866edf3fe8-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"64ca6766-8491-40bc-a14e-eb866edf3fe8\") " pod="openstack/openstack-galera-0" Nov 25 11:53:39 crc kubenswrapper[4706]: I1125 11:53:39.048913 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/64ca6766-8491-40bc-a14e-eb866edf3fe8-kolla-config\") pod \"openstack-galera-0\" (UID: \"64ca6766-8491-40bc-a14e-eb866edf3fe8\") " pod="openstack/openstack-galera-0" Nov 25 11:53:39 crc kubenswrapper[4706]: I1125 11:53:39.048955 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/64ca6766-8491-40bc-a14e-eb866edf3fe8-operator-scripts\") pod \"openstack-galera-0\" (UID: \"64ca6766-8491-40bc-a14e-eb866edf3fe8\") " pod="openstack/openstack-galera-0" Nov 25 11:53:39 crc kubenswrapper[4706]: I1125 11:53:39.049029 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-294gx\" (UniqueName: \"kubernetes.io/projected/64ca6766-8491-40bc-a14e-eb866edf3fe8-kube-api-access-294gx\") pod \"openstack-galera-0\" 
(UID: \"64ca6766-8491-40bc-a14e-eb866edf3fe8\") " pod="openstack/openstack-galera-0" Nov 25 11:53:39 crc kubenswrapper[4706]: I1125 11:53:39.049387 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/64ca6766-8491-40bc-a14e-eb866edf3fe8-config-data-generated\") pod \"openstack-galera-0\" (UID: \"64ca6766-8491-40bc-a14e-eb866edf3fe8\") " pod="openstack/openstack-galera-0" Nov 25 11:53:39 crc kubenswrapper[4706]: I1125 11:53:39.049617 4706 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"openstack-galera-0\" (UID: \"64ca6766-8491-40bc-a14e-eb866edf3fe8\") device mount path \"/mnt/openstack/pv05\"" pod="openstack/openstack-galera-0" Nov 25 11:53:39 crc kubenswrapper[4706]: I1125 11:53:39.049929 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/64ca6766-8491-40bc-a14e-eb866edf3fe8-kolla-config\") pod \"openstack-galera-0\" (UID: \"64ca6766-8491-40bc-a14e-eb866edf3fe8\") " pod="openstack/openstack-galera-0" Nov 25 11:53:39 crc kubenswrapper[4706]: I1125 11:53:39.049962 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/64ca6766-8491-40bc-a14e-eb866edf3fe8-config-data-default\") pod \"openstack-galera-0\" (UID: \"64ca6766-8491-40bc-a14e-eb866edf3fe8\") " pod="openstack/openstack-galera-0" Nov 25 11:53:39 crc kubenswrapper[4706]: I1125 11:53:39.052692 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/64ca6766-8491-40bc-a14e-eb866edf3fe8-operator-scripts\") pod \"openstack-galera-0\" (UID: \"64ca6766-8491-40bc-a14e-eb866edf3fe8\") " pod="openstack/openstack-galera-0" Nov 25 11:53:39 crc kubenswrapper[4706]: I1125 11:53:39.064710 
4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/64ca6766-8491-40bc-a14e-eb866edf3fe8-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"64ca6766-8491-40bc-a14e-eb866edf3fe8\") " pod="openstack/openstack-galera-0" Nov 25 11:53:39 crc kubenswrapper[4706]: I1125 11:53:39.068078 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-294gx\" (UniqueName: \"kubernetes.io/projected/64ca6766-8491-40bc-a14e-eb866edf3fe8-kube-api-access-294gx\") pod \"openstack-galera-0\" (UID: \"64ca6766-8491-40bc-a14e-eb866edf3fe8\") " pod="openstack/openstack-galera-0" Nov 25 11:53:39 crc kubenswrapper[4706]: I1125 11:53:39.068811 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/64ca6766-8491-40bc-a14e-eb866edf3fe8-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"64ca6766-8491-40bc-a14e-eb866edf3fe8\") " pod="openstack/openstack-galera-0" Nov 25 11:53:39 crc kubenswrapper[4706]: I1125 11:53:39.069261 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"openstack-galera-0\" (UID: \"64ca6766-8491-40bc-a14e-eb866edf3fe8\") " pod="openstack/openstack-galera-0" Nov 25 11:53:39 crc kubenswrapper[4706]: I1125 11:53:39.188141 4706 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0" Nov 25 11:53:40 crc kubenswrapper[4706]: I1125 11:53:40.332135 4706 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-cell1-galera-0"] Nov 25 11:53:40 crc kubenswrapper[4706]: I1125 11:53:40.338252 4706 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-cell1-galera-0" Nov 25 11:53:40 crc kubenswrapper[4706]: I1125 11:53:40.341244 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-cell1-dockercfg-fcl7v" Nov 25 11:53:40 crc kubenswrapper[4706]: I1125 11:53:40.341285 4706 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-scripts" Nov 25 11:53:40 crc kubenswrapper[4706]: I1125 11:53:40.341442 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-cell1-svc" Nov 25 11:53:40 crc kubenswrapper[4706]: I1125 11:53:40.341466 4706 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-config-data" Nov 25 11:53:40 crc kubenswrapper[4706]: I1125 11:53:40.352780 4706 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Nov 25 11:53:40 crc kubenswrapper[4706]: I1125 11:53:40.486054 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fmwh6\" (UniqueName: \"kubernetes.io/projected/49e77cd2-5940-4ae6-9418-d069ce012ad7-kube-api-access-fmwh6\") pod \"openstack-cell1-galera-0\" (UID: \"49e77cd2-5940-4ae6-9418-d069ce012ad7\") " pod="openstack/openstack-cell1-galera-0" Nov 25 11:53:40 crc kubenswrapper[4706]: I1125 11:53:40.486114 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/49e77cd2-5940-4ae6-9418-d069ce012ad7-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"49e77cd2-5940-4ae6-9418-d069ce012ad7\") " pod="openstack/openstack-cell1-galera-0" Nov 25 11:53:40 crc kubenswrapper[4706]: I1125 11:53:40.486154 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage03-crc\" (UniqueName: 
\"kubernetes.io/local-volume/local-storage03-crc\") pod \"openstack-cell1-galera-0\" (UID: \"49e77cd2-5940-4ae6-9418-d069ce012ad7\") " pod="openstack/openstack-cell1-galera-0" Nov 25 11:53:40 crc kubenswrapper[4706]: I1125 11:53:40.486446 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/49e77cd2-5940-4ae6-9418-d069ce012ad7-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"49e77cd2-5940-4ae6-9418-d069ce012ad7\") " pod="openstack/openstack-cell1-galera-0" Nov 25 11:53:40 crc kubenswrapper[4706]: I1125 11:53:40.486476 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/49e77cd2-5940-4ae6-9418-d069ce012ad7-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"49e77cd2-5940-4ae6-9418-d069ce012ad7\") " pod="openstack/openstack-cell1-galera-0" Nov 25 11:53:40 crc kubenswrapper[4706]: I1125 11:53:40.486507 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/49e77cd2-5940-4ae6-9418-d069ce012ad7-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"49e77cd2-5940-4ae6-9418-d069ce012ad7\") " pod="openstack/openstack-cell1-galera-0" Nov 25 11:53:40 crc kubenswrapper[4706]: I1125 11:53:40.486551 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/49e77cd2-5940-4ae6-9418-d069ce012ad7-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"49e77cd2-5940-4ae6-9418-d069ce012ad7\") " pod="openstack/openstack-cell1-galera-0" Nov 25 11:53:40 crc kubenswrapper[4706]: I1125 11:53:40.486589 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: 
\"kubernetes.io/configmap/49e77cd2-5940-4ae6-9418-d069ce012ad7-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"49e77cd2-5940-4ae6-9418-d069ce012ad7\") " pod="openstack/openstack-cell1-galera-0" Nov 25 11:53:40 crc kubenswrapper[4706]: I1125 11:53:40.584420 4706 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/memcached-0"] Nov 25 11:53:40 crc kubenswrapper[4706]: I1125 11:53:40.585561 4706 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0" Nov 25 11:53:40 crc kubenswrapper[4706]: I1125 11:53:40.588585 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/49e77cd2-5940-4ae6-9418-d069ce012ad7-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"49e77cd2-5940-4ae6-9418-d069ce012ad7\") " pod="openstack/openstack-cell1-galera-0" Nov 25 11:53:40 crc kubenswrapper[4706]: I1125 11:53:40.588629 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"openstack-cell1-galera-0\" (UID: \"49e77cd2-5940-4ae6-9418-d069ce012ad7\") " pod="openstack/openstack-cell1-galera-0" Nov 25 11:53:40 crc kubenswrapper[4706]: I1125 11:53:40.588684 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/49e77cd2-5940-4ae6-9418-d069ce012ad7-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"49e77cd2-5940-4ae6-9418-d069ce012ad7\") " pod="openstack/openstack-cell1-galera-0" Nov 25 11:53:40 crc kubenswrapper[4706]: I1125 11:53:40.588704 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/49e77cd2-5940-4ae6-9418-d069ce012ad7-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"49e77cd2-5940-4ae6-9418-d069ce012ad7\") " 
pod="openstack/openstack-cell1-galera-0" Nov 25 11:53:40 crc kubenswrapper[4706]: I1125 11:53:40.588728 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/49e77cd2-5940-4ae6-9418-d069ce012ad7-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"49e77cd2-5940-4ae6-9418-d069ce012ad7\") " pod="openstack/openstack-cell1-galera-0" Nov 25 11:53:40 crc kubenswrapper[4706]: I1125 11:53:40.588762 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/49e77cd2-5940-4ae6-9418-d069ce012ad7-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"49e77cd2-5940-4ae6-9418-d069ce012ad7\") " pod="openstack/openstack-cell1-galera-0" Nov 25 11:53:40 crc kubenswrapper[4706]: I1125 11:53:40.588798 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/49e77cd2-5940-4ae6-9418-d069ce012ad7-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"49e77cd2-5940-4ae6-9418-d069ce012ad7\") " pod="openstack/openstack-cell1-galera-0" Nov 25 11:53:40 crc kubenswrapper[4706]: I1125 11:53:40.588840 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fmwh6\" (UniqueName: \"kubernetes.io/projected/49e77cd2-5940-4ae6-9418-d069ce012ad7-kube-api-access-fmwh6\") pod \"openstack-cell1-galera-0\" (UID: \"49e77cd2-5940-4ae6-9418-d069ce012ad7\") " pod="openstack/openstack-cell1-galera-0" Nov 25 11:53:40 crc kubenswrapper[4706]: I1125 11:53:40.589940 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/49e77cd2-5940-4ae6-9418-d069ce012ad7-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"49e77cd2-5940-4ae6-9418-d069ce012ad7\") " pod="openstack/openstack-cell1-galera-0" Nov 25 11:53:40 crc 
kubenswrapper[4706]: I1125 11:53:40.590201 4706 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"openstack-cell1-galera-0\" (UID: \"49e77cd2-5940-4ae6-9418-d069ce012ad7\") device mount path \"/mnt/openstack/pv03\"" pod="openstack/openstack-cell1-galera-0" Nov 25 11:53:40 crc kubenswrapper[4706]: I1125 11:53:40.595251 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-memcached-svc" Nov 25 11:53:40 crc kubenswrapper[4706]: I1125 11:53:40.595575 4706 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"memcached-config-data" Nov 25 11:53:40 crc kubenswrapper[4706]: I1125 11:53:40.601906 4706 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Nov 25 11:53:40 crc kubenswrapper[4706]: I1125 11:53:40.602105 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"memcached-memcached-dockercfg-qnhsx" Nov 25 11:53:40 crc kubenswrapper[4706]: I1125 11:53:40.605820 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/49e77cd2-5940-4ae6-9418-d069ce012ad7-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"49e77cd2-5940-4ae6-9418-d069ce012ad7\") " pod="openstack/openstack-cell1-galera-0" Nov 25 11:53:40 crc kubenswrapper[4706]: I1125 11:53:40.606383 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/49e77cd2-5940-4ae6-9418-d069ce012ad7-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"49e77cd2-5940-4ae6-9418-d069ce012ad7\") " pod="openstack/openstack-cell1-galera-0" Nov 25 11:53:40 crc kubenswrapper[4706]: I1125 11:53:40.606971 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: 
\"kubernetes.io/empty-dir/49e77cd2-5940-4ae6-9418-d069ce012ad7-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"49e77cd2-5940-4ae6-9418-d069ce012ad7\") " pod="openstack/openstack-cell1-galera-0" Nov 25 11:53:40 crc kubenswrapper[4706]: I1125 11:53:40.610147 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/49e77cd2-5940-4ae6-9418-d069ce012ad7-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"49e77cd2-5940-4ae6-9418-d069ce012ad7\") " pod="openstack/openstack-cell1-galera-0" Nov 25 11:53:40 crc kubenswrapper[4706]: I1125 11:53:40.610628 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/49e77cd2-5940-4ae6-9418-d069ce012ad7-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"49e77cd2-5940-4ae6-9418-d069ce012ad7\") " pod="openstack/openstack-cell1-galera-0" Nov 25 11:53:40 crc kubenswrapper[4706]: I1125 11:53:40.615500 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"openstack-cell1-galera-0\" (UID: \"49e77cd2-5940-4ae6-9418-d069ce012ad7\") " pod="openstack/openstack-cell1-galera-0" Nov 25 11:53:40 crc kubenswrapper[4706]: I1125 11:53:40.616318 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fmwh6\" (UniqueName: \"kubernetes.io/projected/49e77cd2-5940-4ae6-9418-d069ce012ad7-kube-api-access-fmwh6\") pod \"openstack-cell1-galera-0\" (UID: \"49e77cd2-5940-4ae6-9418-d069ce012ad7\") " pod="openstack/openstack-cell1-galera-0" Nov 25 11:53:40 crc kubenswrapper[4706]: I1125 11:53:40.667858 4706 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-cell1-galera-0" Nov 25 11:53:40 crc kubenswrapper[4706]: I1125 11:53:40.690588 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/37118d82-a55d-4a10-8b2c-6e5cf036474c-memcached-tls-certs\") pod \"memcached-0\" (UID: \"37118d82-a55d-4a10-8b2c-6e5cf036474c\") " pod="openstack/memcached-0" Nov 25 11:53:40 crc kubenswrapper[4706]: I1125 11:53:40.690637 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g8m26\" (UniqueName: \"kubernetes.io/projected/37118d82-a55d-4a10-8b2c-6e5cf036474c-kube-api-access-g8m26\") pod \"memcached-0\" (UID: \"37118d82-a55d-4a10-8b2c-6e5cf036474c\") " pod="openstack/memcached-0" Nov 25 11:53:40 crc kubenswrapper[4706]: I1125 11:53:40.690654 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/37118d82-a55d-4a10-8b2c-6e5cf036474c-kolla-config\") pod \"memcached-0\" (UID: \"37118d82-a55d-4a10-8b2c-6e5cf036474c\") " pod="openstack/memcached-0" Nov 25 11:53:40 crc kubenswrapper[4706]: I1125 11:53:40.690830 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/37118d82-a55d-4a10-8b2c-6e5cf036474c-config-data\") pod \"memcached-0\" (UID: \"37118d82-a55d-4a10-8b2c-6e5cf036474c\") " pod="openstack/memcached-0" Nov 25 11:53:40 crc kubenswrapper[4706]: I1125 11:53:40.690898 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/37118d82-a55d-4a10-8b2c-6e5cf036474c-combined-ca-bundle\") pod \"memcached-0\" (UID: \"37118d82-a55d-4a10-8b2c-6e5cf036474c\") " pod="openstack/memcached-0" Nov 25 11:53:40 crc kubenswrapper[4706]: I1125 
11:53:40.794471 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/37118d82-a55d-4a10-8b2c-6e5cf036474c-config-data\") pod \"memcached-0\" (UID: \"37118d82-a55d-4a10-8b2c-6e5cf036474c\") " pod="openstack/memcached-0" Nov 25 11:53:40 crc kubenswrapper[4706]: I1125 11:53:40.794533 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/37118d82-a55d-4a10-8b2c-6e5cf036474c-combined-ca-bundle\") pod \"memcached-0\" (UID: \"37118d82-a55d-4a10-8b2c-6e5cf036474c\") " pod="openstack/memcached-0" Nov 25 11:53:40 crc kubenswrapper[4706]: I1125 11:53:40.794605 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/37118d82-a55d-4a10-8b2c-6e5cf036474c-memcached-tls-certs\") pod \"memcached-0\" (UID: \"37118d82-a55d-4a10-8b2c-6e5cf036474c\") " pod="openstack/memcached-0" Nov 25 11:53:40 crc kubenswrapper[4706]: I1125 11:53:40.794625 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g8m26\" (UniqueName: \"kubernetes.io/projected/37118d82-a55d-4a10-8b2c-6e5cf036474c-kube-api-access-g8m26\") pod \"memcached-0\" (UID: \"37118d82-a55d-4a10-8b2c-6e5cf036474c\") " pod="openstack/memcached-0" Nov 25 11:53:40 crc kubenswrapper[4706]: I1125 11:53:40.794643 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/37118d82-a55d-4a10-8b2c-6e5cf036474c-kolla-config\") pod \"memcached-0\" (UID: \"37118d82-a55d-4a10-8b2c-6e5cf036474c\") " pod="openstack/memcached-0" Nov 25 11:53:40 crc kubenswrapper[4706]: I1125 11:53:40.795350 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/37118d82-a55d-4a10-8b2c-6e5cf036474c-kolla-config\") pod 
\"memcached-0\" (UID: \"37118d82-a55d-4a10-8b2c-6e5cf036474c\") " pod="openstack/memcached-0" Nov 25 11:53:40 crc kubenswrapper[4706]: I1125 11:53:40.795833 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/37118d82-a55d-4a10-8b2c-6e5cf036474c-config-data\") pod \"memcached-0\" (UID: \"37118d82-a55d-4a10-8b2c-6e5cf036474c\") " pod="openstack/memcached-0" Nov 25 11:53:40 crc kubenswrapper[4706]: I1125 11:53:40.809314 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/37118d82-a55d-4a10-8b2c-6e5cf036474c-memcached-tls-certs\") pod \"memcached-0\" (UID: \"37118d82-a55d-4a10-8b2c-6e5cf036474c\") " pod="openstack/memcached-0" Nov 25 11:53:40 crc kubenswrapper[4706]: I1125 11:53:40.817869 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/37118d82-a55d-4a10-8b2c-6e5cf036474c-combined-ca-bundle\") pod \"memcached-0\" (UID: \"37118d82-a55d-4a10-8b2c-6e5cf036474c\") " pod="openstack/memcached-0" Nov 25 11:53:40 crc kubenswrapper[4706]: I1125 11:53:40.833908 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g8m26\" (UniqueName: \"kubernetes.io/projected/37118d82-a55d-4a10-8b2c-6e5cf036474c-kube-api-access-g8m26\") pod \"memcached-0\" (UID: \"37118d82-a55d-4a10-8b2c-6e5cf036474c\") " pod="openstack/memcached-0" Nov 25 11:53:40 crc kubenswrapper[4706]: I1125 11:53:40.957542 4706 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/memcached-0" Nov 25 11:53:41 crc kubenswrapper[4706]: W1125 11:53:41.169592 4706 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9a9e827f_acb8_4b85_90f6_c5cd8634f430.slice/crio-a623898070181ffd5e670c3c6ef8362ec3559af24564dc55c573fdf4f3bdd0ae WatchSource:0}: Error finding container a623898070181ffd5e670c3c6ef8362ec3559af24564dc55c573fdf4f3bdd0ae: Status 404 returned error can't find the container with id a623898070181ffd5e670c3c6ef8362ec3559af24564dc55c573fdf4f3bdd0ae Nov 25 11:53:41 crc kubenswrapper[4706]: I1125 11:53:41.284522 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-h7s7b" event={"ID":"9a9e827f-acb8-4b85-90f6-c5cd8634f430","Type":"ContainerStarted","Data":"a623898070181ffd5e670c3c6ef8362ec3559af24564dc55c573fdf4f3bdd0ae"} Nov 25 11:53:42 crc kubenswrapper[4706]: I1125 11:53:42.738839 4706 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Nov 25 11:53:42 crc kubenswrapper[4706]: I1125 11:53:42.740471 4706 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Nov 25 11:53:42 crc kubenswrapper[4706]: I1125 11:53:42.742618 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-ceilometer-dockercfg-ktrdc" Nov 25 11:53:42 crc kubenswrapper[4706]: I1125 11:53:42.757156 4706 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Nov 25 11:53:42 crc kubenswrapper[4706]: I1125 11:53:42.822209 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-znrhd\" (UniqueName: \"kubernetes.io/projected/36bf3efe-847b-4896-878f-1f06e582bf01-kube-api-access-znrhd\") pod \"kube-state-metrics-0\" (UID: \"36bf3efe-847b-4896-878f-1f06e582bf01\") " pod="openstack/kube-state-metrics-0" Nov 25 11:53:42 crc kubenswrapper[4706]: I1125 11:53:42.924522 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-znrhd\" (UniqueName: \"kubernetes.io/projected/36bf3efe-847b-4896-878f-1f06e582bf01-kube-api-access-znrhd\") pod \"kube-state-metrics-0\" (UID: \"36bf3efe-847b-4896-878f-1f06e582bf01\") " pod="openstack/kube-state-metrics-0" Nov 25 11:53:42 crc kubenswrapper[4706]: I1125 11:53:42.955906 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-znrhd\" (UniqueName: \"kubernetes.io/projected/36bf3efe-847b-4896-878f-1f06e582bf01-kube-api-access-znrhd\") pod \"kube-state-metrics-0\" (UID: \"36bf3efe-847b-4896-878f-1f06e582bf01\") " pod="openstack/kube-state-metrics-0" Nov 25 11:53:43 crc kubenswrapper[4706]: I1125 11:53:43.062792 4706 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Nov 25 11:53:46 crc kubenswrapper[4706]: I1125 11:53:46.092060 4706 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-kd65v"] Nov 25 11:53:46 crc kubenswrapper[4706]: I1125 11:53:46.093645 4706 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-kd65v" Nov 25 11:53:46 crc kubenswrapper[4706]: I1125 11:53:46.096421 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovncontroller-ovndbs" Nov 25 11:53:46 crc kubenswrapper[4706]: I1125 11:53:46.097393 4706 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-scripts" Nov 25 11:53:46 crc kubenswrapper[4706]: I1125 11:53:46.097685 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncontroller-ovncontroller-dockercfg-nf5qj" Nov 25 11:53:46 crc kubenswrapper[4706]: I1125 11:53:46.104654 4706 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-ovs-q8rmg"] Nov 25 11:53:46 crc kubenswrapper[4706]: I1125 11:53:46.107389 4706 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-ovs-q8rmg" Nov 25 11:53:46 crc kubenswrapper[4706]: I1125 11:53:46.109568 4706 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-kd65v"] Nov 25 11:53:46 crc kubenswrapper[4706]: I1125 11:53:46.120217 4706 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-q8rmg"] Nov 25 11:53:46 crc kubenswrapper[4706]: I1125 11:53:46.179188 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/a2035192-0066-4761-b5a8-2684c95f20ff-var-lib\") pod \"ovn-controller-ovs-q8rmg\" (UID: \"a2035192-0066-4761-b5a8-2684c95f20ff\") " pod="openstack/ovn-controller-ovs-q8rmg" Nov 25 11:53:46 crc kubenswrapper[4706]: I1125 11:53:46.179247 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/a2035192-0066-4761-b5a8-2684c95f20ff-var-log\") pod \"ovn-controller-ovs-q8rmg\" (UID: \"a2035192-0066-4761-b5a8-2684c95f20ff\") " pod="openstack/ovn-controller-ovs-q8rmg" Nov 25 11:53:46 crc kubenswrapper[4706]: I1125 11:53:46.179287 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/23b72526-ef77-4128-a880-6df46f5db440-var-log-ovn\") pod \"ovn-controller-kd65v\" (UID: \"23b72526-ef77-4128-a880-6df46f5db440\") " pod="openstack/ovn-controller-kd65v" Nov 25 11:53:46 crc kubenswrapper[4706]: I1125 11:53:46.179500 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/23b72526-ef77-4128-a880-6df46f5db440-var-run-ovn\") pod \"ovn-controller-kd65v\" (UID: \"23b72526-ef77-4128-a880-6df46f5db440\") " pod="openstack/ovn-controller-kd65v" Nov 25 11:53:46 crc kubenswrapper[4706]: I1125 11:53:46.179546 4706 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wz6nj\" (UniqueName: \"kubernetes.io/projected/23b72526-ef77-4128-a880-6df46f5db440-kube-api-access-wz6nj\") pod \"ovn-controller-kd65v\" (UID: \"23b72526-ef77-4128-a880-6df46f5db440\") " pod="openstack/ovn-controller-kd65v" Nov 25 11:53:46 crc kubenswrapper[4706]: I1125 11:53:46.179652 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/23b72526-ef77-4128-a880-6df46f5db440-var-run\") pod \"ovn-controller-kd65v\" (UID: \"23b72526-ef77-4128-a880-6df46f5db440\") " pod="openstack/ovn-controller-kd65v" Nov 25 11:53:46 crc kubenswrapper[4706]: I1125 11:53:46.179728 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/23b72526-ef77-4128-a880-6df46f5db440-scripts\") pod \"ovn-controller-kd65v\" (UID: \"23b72526-ef77-4128-a880-6df46f5db440\") " pod="openstack/ovn-controller-kd65v" Nov 25 11:53:46 crc kubenswrapper[4706]: I1125 11:53:46.179756 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/a2035192-0066-4761-b5a8-2684c95f20ff-etc-ovs\") pod \"ovn-controller-ovs-q8rmg\" (UID: \"a2035192-0066-4761-b5a8-2684c95f20ff\") " pod="openstack/ovn-controller-ovs-q8rmg" Nov 25 11:53:46 crc kubenswrapper[4706]: I1125 11:53:46.179778 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/23b72526-ef77-4128-a880-6df46f5db440-ovn-controller-tls-certs\") pod \"ovn-controller-kd65v\" (UID: \"23b72526-ef77-4128-a880-6df46f5db440\") " pod="openstack/ovn-controller-kd65v" Nov 25 11:53:46 crc kubenswrapper[4706]: I1125 11:53:46.179981 4706 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/a2035192-0066-4761-b5a8-2684c95f20ff-var-run\") pod \"ovn-controller-ovs-q8rmg\" (UID: \"a2035192-0066-4761-b5a8-2684c95f20ff\") " pod="openstack/ovn-controller-ovs-q8rmg" Nov 25 11:53:46 crc kubenswrapper[4706]: I1125 11:53:46.180044 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a2035192-0066-4761-b5a8-2684c95f20ff-scripts\") pod \"ovn-controller-ovs-q8rmg\" (UID: \"a2035192-0066-4761-b5a8-2684c95f20ff\") " pod="openstack/ovn-controller-ovs-q8rmg" Nov 25 11:53:46 crc kubenswrapper[4706]: I1125 11:53:46.180090 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/23b72526-ef77-4128-a880-6df46f5db440-combined-ca-bundle\") pod \"ovn-controller-kd65v\" (UID: \"23b72526-ef77-4128-a880-6df46f5db440\") " pod="openstack/ovn-controller-kd65v" Nov 25 11:53:46 crc kubenswrapper[4706]: I1125 11:53:46.180129 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zxkft\" (UniqueName: \"kubernetes.io/projected/a2035192-0066-4761-b5a8-2684c95f20ff-kube-api-access-zxkft\") pod \"ovn-controller-ovs-q8rmg\" (UID: \"a2035192-0066-4761-b5a8-2684c95f20ff\") " pod="openstack/ovn-controller-ovs-q8rmg" Nov 25 11:53:46 crc kubenswrapper[4706]: I1125 11:53:46.281838 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/a2035192-0066-4761-b5a8-2684c95f20ff-var-run\") pod \"ovn-controller-ovs-q8rmg\" (UID: \"a2035192-0066-4761-b5a8-2684c95f20ff\") " pod="openstack/ovn-controller-ovs-q8rmg" Nov 25 11:53:46 crc kubenswrapper[4706]: I1125 11:53:46.281905 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"scripts\" (UniqueName: \"kubernetes.io/configmap/a2035192-0066-4761-b5a8-2684c95f20ff-scripts\") pod \"ovn-controller-ovs-q8rmg\" (UID: \"a2035192-0066-4761-b5a8-2684c95f20ff\") " pod="openstack/ovn-controller-ovs-q8rmg" Nov 25 11:53:46 crc kubenswrapper[4706]: I1125 11:53:46.281946 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/23b72526-ef77-4128-a880-6df46f5db440-combined-ca-bundle\") pod \"ovn-controller-kd65v\" (UID: \"23b72526-ef77-4128-a880-6df46f5db440\") " pod="openstack/ovn-controller-kd65v" Nov 25 11:53:46 crc kubenswrapper[4706]: I1125 11:53:46.281983 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zxkft\" (UniqueName: \"kubernetes.io/projected/a2035192-0066-4761-b5a8-2684c95f20ff-kube-api-access-zxkft\") pod \"ovn-controller-ovs-q8rmg\" (UID: \"a2035192-0066-4761-b5a8-2684c95f20ff\") " pod="openstack/ovn-controller-ovs-q8rmg" Nov 25 11:53:46 crc kubenswrapper[4706]: I1125 11:53:46.282059 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/a2035192-0066-4761-b5a8-2684c95f20ff-var-lib\") pod \"ovn-controller-ovs-q8rmg\" (UID: \"a2035192-0066-4761-b5a8-2684c95f20ff\") " pod="openstack/ovn-controller-ovs-q8rmg" Nov 25 11:53:46 crc kubenswrapper[4706]: I1125 11:53:46.282092 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/a2035192-0066-4761-b5a8-2684c95f20ff-var-log\") pod \"ovn-controller-ovs-q8rmg\" (UID: \"a2035192-0066-4761-b5a8-2684c95f20ff\") " pod="openstack/ovn-controller-ovs-q8rmg" Nov 25 11:53:46 crc kubenswrapper[4706]: I1125 11:53:46.282125 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/23b72526-ef77-4128-a880-6df46f5db440-var-log-ovn\") pod 
\"ovn-controller-kd65v\" (UID: \"23b72526-ef77-4128-a880-6df46f5db440\") " pod="openstack/ovn-controller-kd65v" Nov 25 11:53:46 crc kubenswrapper[4706]: I1125 11:53:46.282159 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/23b72526-ef77-4128-a880-6df46f5db440-var-run-ovn\") pod \"ovn-controller-kd65v\" (UID: \"23b72526-ef77-4128-a880-6df46f5db440\") " pod="openstack/ovn-controller-kd65v" Nov 25 11:53:46 crc kubenswrapper[4706]: I1125 11:53:46.282178 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wz6nj\" (UniqueName: \"kubernetes.io/projected/23b72526-ef77-4128-a880-6df46f5db440-kube-api-access-wz6nj\") pod \"ovn-controller-kd65v\" (UID: \"23b72526-ef77-4128-a880-6df46f5db440\") " pod="openstack/ovn-controller-kd65v" Nov 25 11:53:46 crc kubenswrapper[4706]: I1125 11:53:46.282216 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/23b72526-ef77-4128-a880-6df46f5db440-var-run\") pod \"ovn-controller-kd65v\" (UID: \"23b72526-ef77-4128-a880-6df46f5db440\") " pod="openstack/ovn-controller-kd65v" Nov 25 11:53:46 crc kubenswrapper[4706]: I1125 11:53:46.282251 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/23b72526-ef77-4128-a880-6df46f5db440-scripts\") pod \"ovn-controller-kd65v\" (UID: \"23b72526-ef77-4128-a880-6df46f5db440\") " pod="openstack/ovn-controller-kd65v" Nov 25 11:53:46 crc kubenswrapper[4706]: I1125 11:53:46.282270 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/a2035192-0066-4761-b5a8-2684c95f20ff-etc-ovs\") pod \"ovn-controller-ovs-q8rmg\" (UID: \"a2035192-0066-4761-b5a8-2684c95f20ff\") " pod="openstack/ovn-controller-ovs-q8rmg" Nov 25 11:53:46 crc kubenswrapper[4706]: I1125 
11:53:46.282293 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/23b72526-ef77-4128-a880-6df46f5db440-ovn-controller-tls-certs\") pod \"ovn-controller-kd65v\" (UID: \"23b72526-ef77-4128-a880-6df46f5db440\") " pod="openstack/ovn-controller-kd65v" Nov 25 11:53:46 crc kubenswrapper[4706]: I1125 11:53:46.284197 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/23b72526-ef77-4128-a880-6df46f5db440-var-log-ovn\") pod \"ovn-controller-kd65v\" (UID: \"23b72526-ef77-4128-a880-6df46f5db440\") " pod="openstack/ovn-controller-kd65v" Nov 25 11:53:46 crc kubenswrapper[4706]: I1125 11:53:46.284220 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/a2035192-0066-4761-b5a8-2684c95f20ff-var-run\") pod \"ovn-controller-ovs-q8rmg\" (UID: \"a2035192-0066-4761-b5a8-2684c95f20ff\") " pod="openstack/ovn-controller-ovs-q8rmg" Nov 25 11:53:46 crc kubenswrapper[4706]: I1125 11:53:46.284473 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/a2035192-0066-4761-b5a8-2684c95f20ff-var-lib\") pod \"ovn-controller-ovs-q8rmg\" (UID: \"a2035192-0066-4761-b5a8-2684c95f20ff\") " pod="openstack/ovn-controller-ovs-q8rmg" Nov 25 11:53:46 crc kubenswrapper[4706]: I1125 11:53:46.284961 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/a2035192-0066-4761-b5a8-2684c95f20ff-var-log\") pod \"ovn-controller-ovs-q8rmg\" (UID: \"a2035192-0066-4761-b5a8-2684c95f20ff\") " pod="openstack/ovn-controller-ovs-q8rmg" Nov 25 11:53:46 crc kubenswrapper[4706]: I1125 11:53:46.284997 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: 
\"kubernetes.io/configmap/a2035192-0066-4761-b5a8-2684c95f20ff-scripts\") pod \"ovn-controller-ovs-q8rmg\" (UID: \"a2035192-0066-4761-b5a8-2684c95f20ff\") " pod="openstack/ovn-controller-ovs-q8rmg" Nov 25 11:53:46 crc kubenswrapper[4706]: I1125 11:53:46.285019 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/23b72526-ef77-4128-a880-6df46f5db440-var-run\") pod \"ovn-controller-kd65v\" (UID: \"23b72526-ef77-4128-a880-6df46f5db440\") " pod="openstack/ovn-controller-kd65v" Nov 25 11:53:46 crc kubenswrapper[4706]: I1125 11:53:46.285126 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/23b72526-ef77-4128-a880-6df46f5db440-var-run-ovn\") pod \"ovn-controller-kd65v\" (UID: \"23b72526-ef77-4128-a880-6df46f5db440\") " pod="openstack/ovn-controller-kd65v" Nov 25 11:53:46 crc kubenswrapper[4706]: I1125 11:53:46.286146 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/a2035192-0066-4761-b5a8-2684c95f20ff-etc-ovs\") pod \"ovn-controller-ovs-q8rmg\" (UID: \"a2035192-0066-4761-b5a8-2684c95f20ff\") " pod="openstack/ovn-controller-ovs-q8rmg" Nov 25 11:53:46 crc kubenswrapper[4706]: I1125 11:53:46.287153 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/23b72526-ef77-4128-a880-6df46f5db440-scripts\") pod \"ovn-controller-kd65v\" (UID: \"23b72526-ef77-4128-a880-6df46f5db440\") " pod="openstack/ovn-controller-kd65v" Nov 25 11:53:46 crc kubenswrapper[4706]: I1125 11:53:46.293842 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/23b72526-ef77-4128-a880-6df46f5db440-ovn-controller-tls-certs\") pod \"ovn-controller-kd65v\" (UID: \"23b72526-ef77-4128-a880-6df46f5db440\") " pod="openstack/ovn-controller-kd65v" 
Nov 25 11:53:46 crc kubenswrapper[4706]: I1125 11:53:46.294347 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/23b72526-ef77-4128-a880-6df46f5db440-combined-ca-bundle\") pod \"ovn-controller-kd65v\" (UID: \"23b72526-ef77-4128-a880-6df46f5db440\") " pod="openstack/ovn-controller-kd65v" Nov 25 11:53:46 crc kubenswrapper[4706]: I1125 11:53:46.305182 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zxkft\" (UniqueName: \"kubernetes.io/projected/a2035192-0066-4761-b5a8-2684c95f20ff-kube-api-access-zxkft\") pod \"ovn-controller-ovs-q8rmg\" (UID: \"a2035192-0066-4761-b5a8-2684c95f20ff\") " pod="openstack/ovn-controller-ovs-q8rmg" Nov 25 11:53:46 crc kubenswrapper[4706]: I1125 11:53:46.314392 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wz6nj\" (UniqueName: \"kubernetes.io/projected/23b72526-ef77-4128-a880-6df46f5db440-kube-api-access-wz6nj\") pod \"ovn-controller-kd65v\" (UID: \"23b72526-ef77-4128-a880-6df46f5db440\") " pod="openstack/ovn-controller-kd65v" Nov 25 11:53:46 crc kubenswrapper[4706]: I1125 11:53:46.415787 4706 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-kd65v" Nov 25 11:53:46 crc kubenswrapper[4706]: I1125 11:53:46.432318 4706 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-q8rmg" Nov 25 11:53:46 crc kubenswrapper[4706]: I1125 11:53:46.491021 4706 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-sb-0"] Nov 25 11:53:46 crc kubenswrapper[4706]: I1125 11:53:46.496135 4706 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-sb-0" Nov 25 11:53:46 crc kubenswrapper[4706]: I1125 11:53:46.498801 4706 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-config" Nov 25 11:53:46 crc kubenswrapper[4706]: I1125 11:53:46.500575 4706 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-scripts" Nov 25 11:53:46 crc kubenswrapper[4706]: I1125 11:53:46.501082 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-sb-ovndbs" Nov 25 11:53:46 crc kubenswrapper[4706]: I1125 11:53:46.501252 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovn-metrics" Nov 25 11:53:46 crc kubenswrapper[4706]: I1125 11:53:46.503372 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-sb-dockercfg-bg9qd" Nov 25 11:53:46 crc kubenswrapper[4706]: I1125 11:53:46.509691 4706 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Nov 25 11:53:46 crc kubenswrapper[4706]: I1125 11:53:46.688056 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"ovsdbserver-sb-0\" (UID: \"752cf7db-684f-4a5a-8a03-717e69810056\") " pod="openstack/ovsdbserver-sb-0" Nov 25 11:53:46 crc kubenswrapper[4706]: I1125 11:53:46.688100 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/752cf7db-684f-4a5a-8a03-717e69810056-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"752cf7db-684f-4a5a-8a03-717e69810056\") " pod="openstack/ovsdbserver-sb-0" Nov 25 11:53:46 crc kubenswrapper[4706]: I1125 11:53:46.688137 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"config\" (UniqueName: \"kubernetes.io/configmap/752cf7db-684f-4a5a-8a03-717e69810056-config\") pod \"ovsdbserver-sb-0\" (UID: \"752cf7db-684f-4a5a-8a03-717e69810056\") " pod="openstack/ovsdbserver-sb-0" Nov 25 11:53:46 crc kubenswrapper[4706]: I1125 11:53:46.688153 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/752cf7db-684f-4a5a-8a03-717e69810056-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"752cf7db-684f-4a5a-8a03-717e69810056\") " pod="openstack/ovsdbserver-sb-0" Nov 25 11:53:46 crc kubenswrapper[4706]: I1125 11:53:46.688182 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/752cf7db-684f-4a5a-8a03-717e69810056-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"752cf7db-684f-4a5a-8a03-717e69810056\") " pod="openstack/ovsdbserver-sb-0" Nov 25 11:53:46 crc kubenswrapper[4706]: I1125 11:53:46.688205 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dz87n\" (UniqueName: \"kubernetes.io/projected/752cf7db-684f-4a5a-8a03-717e69810056-kube-api-access-dz87n\") pod \"ovsdbserver-sb-0\" (UID: \"752cf7db-684f-4a5a-8a03-717e69810056\") " pod="openstack/ovsdbserver-sb-0" Nov 25 11:53:46 crc kubenswrapper[4706]: I1125 11:53:46.688225 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/752cf7db-684f-4a5a-8a03-717e69810056-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"752cf7db-684f-4a5a-8a03-717e69810056\") " pod="openstack/ovsdbserver-sb-0" Nov 25 11:53:46 crc kubenswrapper[4706]: I1125 11:53:46.688244 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: 
\"kubernetes.io/empty-dir/752cf7db-684f-4a5a-8a03-717e69810056-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"752cf7db-684f-4a5a-8a03-717e69810056\") " pod="openstack/ovsdbserver-sb-0" Nov 25 11:53:46 crc kubenswrapper[4706]: I1125 11:53:46.789856 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"ovsdbserver-sb-0\" (UID: \"752cf7db-684f-4a5a-8a03-717e69810056\") " pod="openstack/ovsdbserver-sb-0" Nov 25 11:53:46 crc kubenswrapper[4706]: I1125 11:53:46.790411 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/752cf7db-684f-4a5a-8a03-717e69810056-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"752cf7db-684f-4a5a-8a03-717e69810056\") " pod="openstack/ovsdbserver-sb-0" Nov 25 11:53:46 crc kubenswrapper[4706]: I1125 11:53:46.790243 4706 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"ovsdbserver-sb-0\" (UID: \"752cf7db-684f-4a5a-8a03-717e69810056\") device mount path \"/mnt/openstack/pv10\"" pod="openstack/ovsdbserver-sb-0" Nov 25 11:53:46 crc kubenswrapper[4706]: I1125 11:53:46.790503 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/752cf7db-684f-4a5a-8a03-717e69810056-config\") pod \"ovsdbserver-sb-0\" (UID: \"752cf7db-684f-4a5a-8a03-717e69810056\") " pod="openstack/ovsdbserver-sb-0" Nov 25 11:53:46 crc kubenswrapper[4706]: I1125 11:53:46.790561 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/752cf7db-684f-4a5a-8a03-717e69810056-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"752cf7db-684f-4a5a-8a03-717e69810056\") " pod="openstack/ovsdbserver-sb-0" Nov 25 
11:53:46 crc kubenswrapper[4706]: I1125 11:53:46.790608 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/752cf7db-684f-4a5a-8a03-717e69810056-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"752cf7db-684f-4a5a-8a03-717e69810056\") " pod="openstack/ovsdbserver-sb-0" Nov 25 11:53:46 crc kubenswrapper[4706]: I1125 11:53:46.790644 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dz87n\" (UniqueName: \"kubernetes.io/projected/752cf7db-684f-4a5a-8a03-717e69810056-kube-api-access-dz87n\") pod \"ovsdbserver-sb-0\" (UID: \"752cf7db-684f-4a5a-8a03-717e69810056\") " pod="openstack/ovsdbserver-sb-0" Nov 25 11:53:46 crc kubenswrapper[4706]: I1125 11:53:46.790680 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/752cf7db-684f-4a5a-8a03-717e69810056-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"752cf7db-684f-4a5a-8a03-717e69810056\") " pod="openstack/ovsdbserver-sb-0" Nov 25 11:53:46 crc kubenswrapper[4706]: I1125 11:53:46.790744 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/752cf7db-684f-4a5a-8a03-717e69810056-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"752cf7db-684f-4a5a-8a03-717e69810056\") " pod="openstack/ovsdbserver-sb-0" Nov 25 11:53:46 crc kubenswrapper[4706]: I1125 11:53:46.791417 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/752cf7db-684f-4a5a-8a03-717e69810056-config\") pod \"ovsdbserver-sb-0\" (UID: \"752cf7db-684f-4a5a-8a03-717e69810056\") " pod="openstack/ovsdbserver-sb-0" Nov 25 11:53:46 crc kubenswrapper[4706]: I1125 11:53:46.791432 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: 
\"kubernetes.io/empty-dir/752cf7db-684f-4a5a-8a03-717e69810056-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"752cf7db-684f-4a5a-8a03-717e69810056\") " pod="openstack/ovsdbserver-sb-0" Nov 25 11:53:46 crc kubenswrapper[4706]: I1125 11:53:46.792763 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/752cf7db-684f-4a5a-8a03-717e69810056-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"752cf7db-684f-4a5a-8a03-717e69810056\") " pod="openstack/ovsdbserver-sb-0" Nov 25 11:53:46 crc kubenswrapper[4706]: I1125 11:53:46.796332 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/752cf7db-684f-4a5a-8a03-717e69810056-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"752cf7db-684f-4a5a-8a03-717e69810056\") " pod="openstack/ovsdbserver-sb-0" Nov 25 11:53:46 crc kubenswrapper[4706]: I1125 11:53:46.796375 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/752cf7db-684f-4a5a-8a03-717e69810056-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"752cf7db-684f-4a5a-8a03-717e69810056\") " pod="openstack/ovsdbserver-sb-0" Nov 25 11:53:46 crc kubenswrapper[4706]: I1125 11:53:46.798407 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/752cf7db-684f-4a5a-8a03-717e69810056-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"752cf7db-684f-4a5a-8a03-717e69810056\") " pod="openstack/ovsdbserver-sb-0" Nov 25 11:53:46 crc kubenswrapper[4706]: I1125 11:53:46.810970 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"ovsdbserver-sb-0\" (UID: \"752cf7db-684f-4a5a-8a03-717e69810056\") " pod="openstack/ovsdbserver-sb-0" Nov 25 11:53:46 crc 
kubenswrapper[4706]: I1125 11:53:46.814595 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dz87n\" (UniqueName: \"kubernetes.io/projected/752cf7db-684f-4a5a-8a03-717e69810056-kube-api-access-dz87n\") pod \"ovsdbserver-sb-0\" (UID: \"752cf7db-684f-4a5a-8a03-717e69810056\") " pod="openstack/ovsdbserver-sb-0" Nov 25 11:53:46 crc kubenswrapper[4706]: I1125 11:53:46.817752 4706 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0" Nov 25 11:53:49 crc kubenswrapper[4706]: I1125 11:53:49.898179 4706 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-nb-0"] Nov 25 11:53:49 crc kubenswrapper[4706]: I1125 11:53:49.900253 4706 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0" Nov 25 11:53:49 crc kubenswrapper[4706]: I1125 11:53:49.906004 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-nb-dockercfg-bnpw5" Nov 25 11:53:49 crc kubenswrapper[4706]: I1125 11:53:49.906284 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-nb-ovndbs" Nov 25 11:53:49 crc kubenswrapper[4706]: I1125 11:53:49.906805 4706 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-config" Nov 25 11:53:49 crc kubenswrapper[4706]: I1125 11:53:49.909823 4706 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-scripts" Nov 25 11:53:49 crc kubenswrapper[4706]: I1125 11:53:49.914313 4706 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Nov 25 11:53:49 crc kubenswrapper[4706]: I1125 11:53:49.961782 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rz2pj\" (UniqueName: \"kubernetes.io/projected/3c49be9b-0e12-4db2-82be-3415441f57d4-kube-api-access-rz2pj\") pod 
\"ovsdbserver-nb-0\" (UID: \"3c49be9b-0e12-4db2-82be-3415441f57d4\") " pod="openstack/ovsdbserver-nb-0" Nov 25 11:53:49 crc kubenswrapper[4706]: I1125 11:53:49.962015 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/3c49be9b-0e12-4db2-82be-3415441f57d4-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"3c49be9b-0e12-4db2-82be-3415441f57d4\") " pod="openstack/ovsdbserver-nb-0" Nov 25 11:53:49 crc kubenswrapper[4706]: I1125 11:53:49.962134 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/3c49be9b-0e12-4db2-82be-3415441f57d4-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"3c49be9b-0e12-4db2-82be-3415441f57d4\") " pod="openstack/ovsdbserver-nb-0" Nov 25 11:53:49 crc kubenswrapper[4706]: I1125 11:53:49.962220 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"ovsdbserver-nb-0\" (UID: \"3c49be9b-0e12-4db2-82be-3415441f57d4\") " pod="openstack/ovsdbserver-nb-0" Nov 25 11:53:49 crc kubenswrapper[4706]: I1125 11:53:49.962372 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3c49be9b-0e12-4db2-82be-3415441f57d4-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"3c49be9b-0e12-4db2-82be-3415441f57d4\") " pod="openstack/ovsdbserver-nb-0" Nov 25 11:53:49 crc kubenswrapper[4706]: I1125 11:53:49.962477 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/3c49be9b-0e12-4db2-82be-3415441f57d4-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"3c49be9b-0e12-4db2-82be-3415441f57d4\") " 
pod="openstack/ovsdbserver-nb-0" Nov 25 11:53:49 crc kubenswrapper[4706]: I1125 11:53:49.962573 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/3c49be9b-0e12-4db2-82be-3415441f57d4-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"3c49be9b-0e12-4db2-82be-3415441f57d4\") " pod="openstack/ovsdbserver-nb-0" Nov 25 11:53:49 crc kubenswrapper[4706]: I1125 11:53:49.962705 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3c49be9b-0e12-4db2-82be-3415441f57d4-config\") pod \"ovsdbserver-nb-0\" (UID: \"3c49be9b-0e12-4db2-82be-3415441f57d4\") " pod="openstack/ovsdbserver-nb-0" Nov 25 11:53:50 crc kubenswrapper[4706]: I1125 11:53:50.065483 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rz2pj\" (UniqueName: \"kubernetes.io/projected/3c49be9b-0e12-4db2-82be-3415441f57d4-kube-api-access-rz2pj\") pod \"ovsdbserver-nb-0\" (UID: \"3c49be9b-0e12-4db2-82be-3415441f57d4\") " pod="openstack/ovsdbserver-nb-0" Nov 25 11:53:50 crc kubenswrapper[4706]: I1125 11:53:50.065541 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/3c49be9b-0e12-4db2-82be-3415441f57d4-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"3c49be9b-0e12-4db2-82be-3415441f57d4\") " pod="openstack/ovsdbserver-nb-0" Nov 25 11:53:50 crc kubenswrapper[4706]: I1125 11:53:50.065570 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/3c49be9b-0e12-4db2-82be-3415441f57d4-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"3c49be9b-0e12-4db2-82be-3415441f57d4\") " pod="openstack/ovsdbserver-nb-0" Nov 25 11:53:50 crc kubenswrapper[4706]: I1125 11:53:50.065617 4706 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"ovsdbserver-nb-0\" (UID: \"3c49be9b-0e12-4db2-82be-3415441f57d4\") " pod="openstack/ovsdbserver-nb-0" Nov 25 11:53:50 crc kubenswrapper[4706]: I1125 11:53:50.065745 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3c49be9b-0e12-4db2-82be-3415441f57d4-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"3c49be9b-0e12-4db2-82be-3415441f57d4\") " pod="openstack/ovsdbserver-nb-0" Nov 25 11:53:50 crc kubenswrapper[4706]: I1125 11:53:50.065788 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/3c49be9b-0e12-4db2-82be-3415441f57d4-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"3c49be9b-0e12-4db2-82be-3415441f57d4\") " pod="openstack/ovsdbserver-nb-0" Nov 25 11:53:50 crc kubenswrapper[4706]: I1125 11:53:50.065830 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/3c49be9b-0e12-4db2-82be-3415441f57d4-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"3c49be9b-0e12-4db2-82be-3415441f57d4\") " pod="openstack/ovsdbserver-nb-0" Nov 25 11:53:50 crc kubenswrapper[4706]: I1125 11:53:50.065852 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3c49be9b-0e12-4db2-82be-3415441f57d4-config\") pod \"ovsdbserver-nb-0\" (UID: \"3c49be9b-0e12-4db2-82be-3415441f57d4\") " pod="openstack/ovsdbserver-nb-0" Nov 25 11:53:50 crc kubenswrapper[4706]: I1125 11:53:50.066647 4706 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"ovsdbserver-nb-0\" (UID: 
\"3c49be9b-0e12-4db2-82be-3415441f57d4\") device mount path \"/mnt/openstack/pv07\"" pod="openstack/ovsdbserver-nb-0" Nov 25 11:53:50 crc kubenswrapper[4706]: I1125 11:53:50.066764 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/3c49be9b-0e12-4db2-82be-3415441f57d4-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"3c49be9b-0e12-4db2-82be-3415441f57d4\") " pod="openstack/ovsdbserver-nb-0" Nov 25 11:53:50 crc kubenswrapper[4706]: I1125 11:53:50.067879 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/3c49be9b-0e12-4db2-82be-3415441f57d4-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"3c49be9b-0e12-4db2-82be-3415441f57d4\") " pod="openstack/ovsdbserver-nb-0" Nov 25 11:53:50 crc kubenswrapper[4706]: I1125 11:53:50.068901 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3c49be9b-0e12-4db2-82be-3415441f57d4-config\") pod \"ovsdbserver-nb-0\" (UID: \"3c49be9b-0e12-4db2-82be-3415441f57d4\") " pod="openstack/ovsdbserver-nb-0" Nov 25 11:53:50 crc kubenswrapper[4706]: I1125 11:53:50.074091 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/3c49be9b-0e12-4db2-82be-3415441f57d4-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"3c49be9b-0e12-4db2-82be-3415441f57d4\") " pod="openstack/ovsdbserver-nb-0" Nov 25 11:53:50 crc kubenswrapper[4706]: I1125 11:53:50.074384 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3c49be9b-0e12-4db2-82be-3415441f57d4-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"3c49be9b-0e12-4db2-82be-3415441f57d4\") " pod="openstack/ovsdbserver-nb-0" Nov 25 11:53:50 crc kubenswrapper[4706]: I1125 11:53:50.091438 4706 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"ovsdbserver-nb-0\" (UID: \"3c49be9b-0e12-4db2-82be-3415441f57d4\") " pod="openstack/ovsdbserver-nb-0" Nov 25 11:53:50 crc kubenswrapper[4706]: I1125 11:53:50.094035 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/3c49be9b-0e12-4db2-82be-3415441f57d4-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"3c49be9b-0e12-4db2-82be-3415441f57d4\") " pod="openstack/ovsdbserver-nb-0" Nov 25 11:53:50 crc kubenswrapper[4706]: I1125 11:53:50.100443 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rz2pj\" (UniqueName: \"kubernetes.io/projected/3c49be9b-0e12-4db2-82be-3415441f57d4-kube-api-access-rz2pj\") pod \"ovsdbserver-nb-0\" (UID: \"3c49be9b-0e12-4db2-82be-3415441f57d4\") " pod="openstack/ovsdbserver-nb-0" Nov 25 11:53:50 crc kubenswrapper[4706]: I1125 11:53:50.245326 4706 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-nb-0" Nov 25 11:53:51 crc kubenswrapper[4706]: I1125 11:53:51.844756 4706 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Nov 25 11:53:52 crc kubenswrapper[4706]: W1125 11:53:52.299927 4706 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod64ca6766_8491_40bc_a14e_eb866edf3fe8.slice/crio-0ede942c509609b03c1e8ef95c8179f4984ff7bb9bb810773063871d405d637c WatchSource:0}: Error finding container 0ede942c509609b03c1e8ef95c8179f4984ff7bb9bb810773063871d405d637c: Status 404 returned error can't find the container with id 0ede942c509609b03c1e8ef95c8179f4984ff7bb9bb810773063871d405d637c Nov 25 11:53:52 crc kubenswrapper[4706]: E1125 11:53:52.314032 4706 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Nov 25 11:53:52 crc kubenswrapper[4706]: E1125 11:53:52.314274 4706 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:ndfhb5h667h568h584h5f9h58dh565h664h587h597h577h64bh5c4h66fh647hbdh68ch5c5h68dh686h5f7h64hd7hc6h55fh57bh98h57fh87h5fh57fq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-qklzt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-78dd6ddcc-8nl6d_openstack(88a1c39b-1b4a-4227-bb11-a80bdb52b74b): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 25 11:53:52 crc kubenswrapper[4706]: E1125 11:53:52.316766 4706 pod_workers.go:1301] "Error 
syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-78dd6ddcc-8nl6d" podUID="88a1c39b-1b4a-4227-bb11-a80bdb52b74b" Nov 25 11:53:52 crc kubenswrapper[4706]: E1125 11:53:52.319984 4706 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Nov 25 11:53:52 crc kubenswrapper[4706]: E1125 11:53:52.320193 4706 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nffh5bdhf4h5f8h79h55h77h58fh56dh7bh6fh578hbch55dh68h56bhd9h65dh57ch658hc9h566h666h688h58h65dh684h5d7h6ch575h5d6h88q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7fws6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-675f4bcbfc-rf649_openstack(257a89c8-b58c-44ea-9e51-b40a35f5e08f): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 25 11:53:52 crc kubenswrapper[4706]: E1125 11:53:52.321530 4706 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" 
pod="openstack/dnsmasq-dns-675f4bcbfc-rf649" podUID="257a89c8-b58c-44ea-9e51-b40a35f5e08f" Nov 25 11:53:52 crc kubenswrapper[4706]: I1125 11:53:52.386337 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"64ca6766-8491-40bc-a14e-eb866edf3fe8","Type":"ContainerStarted","Data":"0ede942c509609b03c1e8ef95c8179f4984ff7bb9bb810773063871d405d637c"} Nov 25 11:53:52 crc kubenswrapper[4706]: I1125 11:53:52.925232 4706 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-rf649" Nov 25 11:53:52 crc kubenswrapper[4706]: I1125 11:53:52.992489 4706 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Nov 25 11:53:53 crc kubenswrapper[4706]: I1125 11:53:53.016891 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/257a89c8-b58c-44ea-9e51-b40a35f5e08f-config\") pod \"257a89c8-b58c-44ea-9e51-b40a35f5e08f\" (UID: \"257a89c8-b58c-44ea-9e51-b40a35f5e08f\") " Nov 25 11:53:53 crc kubenswrapper[4706]: I1125 11:53:53.017016 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7fws6\" (UniqueName: \"kubernetes.io/projected/257a89c8-b58c-44ea-9e51-b40a35f5e08f-kube-api-access-7fws6\") pod \"257a89c8-b58c-44ea-9e51-b40a35f5e08f\" (UID: \"257a89c8-b58c-44ea-9e51-b40a35f5e08f\") " Nov 25 11:53:53 crc kubenswrapper[4706]: I1125 11:53:53.017745 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/257a89c8-b58c-44ea-9e51-b40a35f5e08f-config" (OuterVolumeSpecName: "config") pod "257a89c8-b58c-44ea-9e51-b40a35f5e08f" (UID: "257a89c8-b58c-44ea-9e51-b40a35f5e08f"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 11:53:53 crc kubenswrapper[4706]: I1125 11:53:53.024902 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/257a89c8-b58c-44ea-9e51-b40a35f5e08f-kube-api-access-7fws6" (OuterVolumeSpecName: "kube-api-access-7fws6") pod "257a89c8-b58c-44ea-9e51-b40a35f5e08f" (UID: "257a89c8-b58c-44ea-9e51-b40a35f5e08f"). InnerVolumeSpecName "kube-api-access-7fws6". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 11:53:53 crc kubenswrapper[4706]: I1125 11:53:53.042252 4706 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-8nl6d" Nov 25 11:53:53 crc kubenswrapper[4706]: I1125 11:53:53.066899 4706 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Nov 25 11:53:53 crc kubenswrapper[4706]: I1125 11:53:53.118720 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/88a1c39b-1b4a-4227-bb11-a80bdb52b74b-config\") pod \"88a1c39b-1b4a-4227-bb11-a80bdb52b74b\" (UID: \"88a1c39b-1b4a-4227-bb11-a80bdb52b74b\") " Nov 25 11:53:53 crc kubenswrapper[4706]: I1125 11:53:53.118794 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qklzt\" (UniqueName: \"kubernetes.io/projected/88a1c39b-1b4a-4227-bb11-a80bdb52b74b-kube-api-access-qklzt\") pod \"88a1c39b-1b4a-4227-bb11-a80bdb52b74b\" (UID: \"88a1c39b-1b4a-4227-bb11-a80bdb52b74b\") " Nov 25 11:53:53 crc kubenswrapper[4706]: I1125 11:53:53.118825 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/88a1c39b-1b4a-4227-bb11-a80bdb52b74b-dns-svc\") pod \"88a1c39b-1b4a-4227-bb11-a80bdb52b74b\" (UID: \"88a1c39b-1b4a-4227-bb11-a80bdb52b74b\") " Nov 25 11:53:53 crc kubenswrapper[4706]: I1125 11:53:53.119612 4706 
reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/257a89c8-b58c-44ea-9e51-b40a35f5e08f-config\") on node \"crc\" DevicePath \"\"" Nov 25 11:53:53 crc kubenswrapper[4706]: I1125 11:53:53.119640 4706 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7fws6\" (UniqueName: \"kubernetes.io/projected/257a89c8-b58c-44ea-9e51-b40a35f5e08f-kube-api-access-7fws6\") on node \"crc\" DevicePath \"\"" Nov 25 11:53:53 crc kubenswrapper[4706]: I1125 11:53:53.120009 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/88a1c39b-1b4a-4227-bb11-a80bdb52b74b-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "88a1c39b-1b4a-4227-bb11-a80bdb52b74b" (UID: "88a1c39b-1b4a-4227-bb11-a80bdb52b74b"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 11:53:53 crc kubenswrapper[4706]: I1125 11:53:53.120059 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/88a1c39b-1b4a-4227-bb11-a80bdb52b74b-config" (OuterVolumeSpecName: "config") pod "88a1c39b-1b4a-4227-bb11-a80bdb52b74b" (UID: "88a1c39b-1b4a-4227-bb11-a80bdb52b74b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 11:53:53 crc kubenswrapper[4706]: I1125 11:53:53.123709 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/88a1c39b-1b4a-4227-bb11-a80bdb52b74b-kube-api-access-qklzt" (OuterVolumeSpecName: "kube-api-access-qklzt") pod "88a1c39b-1b4a-4227-bb11-a80bdb52b74b" (UID: "88a1c39b-1b4a-4227-bb11-a80bdb52b74b"). InnerVolumeSpecName "kube-api-access-qklzt". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 11:53:53 crc kubenswrapper[4706]: I1125 11:53:53.221637 4706 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/88a1c39b-1b4a-4227-bb11-a80bdb52b74b-config\") on node \"crc\" DevicePath \"\"" Nov 25 11:53:53 crc kubenswrapper[4706]: I1125 11:53:53.222125 4706 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qklzt\" (UniqueName: \"kubernetes.io/projected/88a1c39b-1b4a-4227-bb11-a80bdb52b74b-kube-api-access-qklzt\") on node \"crc\" DevicePath \"\"" Nov 25 11:53:53 crc kubenswrapper[4706]: I1125 11:53:53.222167 4706 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/88a1c39b-1b4a-4227-bb11-a80bdb52b74b-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 25 11:53:53 crc kubenswrapper[4706]: I1125 11:53:53.396505 4706 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-8nl6d" Nov 25 11:53:53 crc kubenswrapper[4706]: I1125 11:53:53.396496 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78dd6ddcc-8nl6d" event={"ID":"88a1c39b-1b4a-4227-bb11-a80bdb52b74b","Type":"ContainerDied","Data":"8e1e05197eca252c97bf1d56b4dcf76fb24810b987aba5e5a8fa2792b367956c"} Nov 25 11:53:53 crc kubenswrapper[4706]: I1125 11:53:53.403176 4706 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-rf649" Nov 25 11:53:53 crc kubenswrapper[4706]: I1125 11:53:53.403230 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-675f4bcbfc-rf649" event={"ID":"257a89c8-b58c-44ea-9e51-b40a35f5e08f","Type":"ContainerDied","Data":"a2f81a92dd331de7b779062ab6c1ad2a28b448fa5e8c2e6537bbf8d552091ae4"} Nov 25 11:53:53 crc kubenswrapper[4706]: I1125 11:53:53.413081 4706 generic.go:334] "Generic (PLEG): container finished" podID="9a9e827f-acb8-4b85-90f6-c5cd8634f430" containerID="7ac48e2fd686e5b1d32a9362e9d6ea4dfee03d3cf448f17bf2abf4488c269da4" exitCode=0 Nov 25 11:53:53 crc kubenswrapper[4706]: I1125 11:53:53.413170 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-h7s7b" event={"ID":"9a9e827f-acb8-4b85-90f6-c5cd8634f430","Type":"ContainerDied","Data":"7ac48e2fd686e5b1d32a9362e9d6ea4dfee03d3cf448f17bf2abf4488c269da4"} Nov 25 11:53:53 crc kubenswrapper[4706]: I1125 11:53:53.413960 4706 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Nov 25 11:53:53 crc kubenswrapper[4706]: I1125 11:53:53.415232 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"ed6df424-6b86-44a1-8157-ca1f33167065","Type":"ContainerStarted","Data":"2c62b2da6cecc1094593b01a658bff0960e2926bf47f53eedc829086b96fc4bf"} Nov 25 11:53:53 crc kubenswrapper[4706]: I1125 11:53:53.424472 4706 generic.go:334] "Generic (PLEG): container finished" podID="d1f830dd-11b4-4ef5-bec1-796d0c51c8bb" containerID="6cbb9d165dc382e1af0082038b60778ab403fd295b501fb238f3a0d51d58aa8e" exitCode=0 Nov 25 11:53:53 crc kubenswrapper[4706]: I1125 11:53:53.424562 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-zfbpp" event={"ID":"d1f830dd-11b4-4ef5-bec1-796d0c51c8bb","Type":"ContainerDied","Data":"6cbb9d165dc382e1af0082038b60778ab403fd295b501fb238f3a0d51d58aa8e"} Nov 25 11:53:53 crc 
kubenswrapper[4706]: I1125 11:53:53.425425 4706 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Nov 25 11:53:53 crc kubenswrapper[4706]: I1125 11:53:53.426983 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"557c84e6-ab5c-40c1-a3e1-68b513874f9b","Type":"ContainerStarted","Data":"7fe2413dd3808510c21fe3331bee85b8d76dabd55d8dc71416b890443ce1c08e"} Nov 25 11:53:53 crc kubenswrapper[4706]: I1125 11:53:53.438736 4706 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Nov 25 11:53:53 crc kubenswrapper[4706]: I1125 11:53:53.446416 4706 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-kd65v"] Nov 25 11:53:53 crc kubenswrapper[4706]: W1125 11:53:53.559728 4706 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod23b72526_ef77_4128_a880_6df46f5db440.slice/crio-deade87f739d8d85fb2c0c648338cd950eec91487d6fe60d91acf58a27792b64 WatchSource:0}: Error finding container deade87f739d8d85fb2c0c648338cd950eec91487d6fe60d91acf58a27792b64: Status 404 returned error can't find the container with id deade87f739d8d85fb2c0c648338cd950eec91487d6fe60d91acf58a27792b64 Nov 25 11:53:53 crc kubenswrapper[4706]: W1125 11:53:53.574642 4706 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3c49be9b_0e12_4db2_82be_3415441f57d4.slice/crio-6080e20e2c6b58d6ab7f9c0756189f8651b0f678ed863c1f555d21354f3f04d9 WatchSource:0}: Error finding container 6080e20e2c6b58d6ab7f9c0756189f8651b0f678ed863c1f555d21354f3f04d9: Status 404 returned error can't find the container with id 6080e20e2c6b58d6ab7f9c0756189f8651b0f678ed863c1f555d21354f3f04d9 Nov 25 11:53:53 crc kubenswrapper[4706]: I1125 11:53:53.583111 4706 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Nov 25 
11:53:53 crc kubenswrapper[4706]: I1125 11:53:53.631551 4706 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-q8rmg"] Nov 25 11:53:53 crc kubenswrapper[4706]: W1125 11:53:53.656118 4706 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda2035192_0066_4761_b5a8_2684c95f20ff.slice/crio-60c34e277835089b5c55d34271c6ea76501b3f1545ee634c9e3e3f06d412bbed WatchSource:0}: Error finding container 60c34e277835089b5c55d34271c6ea76501b3f1545ee634c9e3e3f06d412bbed: Status 404 returned error can't find the container with id 60c34e277835089b5c55d34271c6ea76501b3f1545ee634c9e3e3f06d412bbed Nov 25 11:53:53 crc kubenswrapper[4706]: I1125 11:53:53.696602 4706 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-rf649"] Nov 25 11:53:53 crc kubenswrapper[4706]: I1125 11:53:53.701820 4706 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-rf649"] Nov 25 11:53:53 crc kubenswrapper[4706]: I1125 11:53:53.730572 4706 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-8nl6d"] Nov 25 11:53:53 crc kubenswrapper[4706]: I1125 11:53:53.737468 4706 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-8nl6d"] Nov 25 11:53:53 crc kubenswrapper[4706]: E1125 11:53:53.769467 4706 log.go:32] "CreateContainer in sandbox from runtime service failed" err=< Nov 25 11:53:53 crc kubenswrapper[4706]: rpc error: code = Unknown desc = container create failed: mount `/var/lib/kubelet/pods/d1f830dd-11b4-4ef5-bec1-796d0c51c8bb/volume-subpaths/dns-svc/dnsmasq-dns/1` to `etc/dnsmasq.d/hosts/dns-svc`: No such file or directory Nov 25 11:53:53 crc kubenswrapper[4706]: > podSandboxID="997b3e4a9423d2495f6042b0acefc68df9a33827c62d123d4cf399a42e6dc366" Nov 25 11:53:53 crc kubenswrapper[4706]: E1125 11:53:53.769684 4706 kuberuntime_manager.go:1274] "Unhandled Error" err=< 
Nov 25 11:53:53 crc kubenswrapper[4706]: container &Container{Name:dnsmasq-dns,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n68chd6h679hbfh55fhc6h5ffh5d8h94h56ch589hb4hc5h57bh677hcdh655h8dh667h675h654h66ch567h8fh659h5b4h675h566h55bh54h67dh6dq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-97qnn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:nil,TCPSocket:&TCPSocketAction{Port:{0 5353 },Host:,},GRPC:nil,},InitialDelaySeconds:3,TimeoutSeconds:5,PeriodSeconds:3,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:nil,TCPSocket:&TCPSocketAction{Port:{0 5353 
},Host:,},GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-666b6646f7-zfbpp_openstack(d1f830dd-11b4-4ef5-bec1-796d0c51c8bb): CreateContainerError: container create failed: mount `/var/lib/kubelet/pods/d1f830dd-11b4-4ef5-bec1-796d0c51c8bb/volume-subpaths/dns-svc/dnsmasq-dns/1` to `etc/dnsmasq.d/hosts/dns-svc`: No such file or directory Nov 25 11:53:53 crc kubenswrapper[4706]: > logger="UnhandledError" Nov 25 11:53:53 crc kubenswrapper[4706]: E1125 11:53:53.770936 4706 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dnsmasq-dns\" with CreateContainerError: \"container create failed: mount `/var/lib/kubelet/pods/d1f830dd-11b4-4ef5-bec1-796d0c51c8bb/volume-subpaths/dns-svc/dnsmasq-dns/1` to `etc/dnsmasq.d/hosts/dns-svc`: No such file or directory\\n\"" pod="openstack/dnsmasq-dns-666b6646f7-zfbpp" podUID="d1f830dd-11b4-4ef5-bec1-796d0c51c8bb" Nov 25 11:53:53 crc kubenswrapper[4706]: I1125 11:53:53.948049 4706 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="257a89c8-b58c-44ea-9e51-b40a35f5e08f" path="/var/lib/kubelet/pods/257a89c8-b58c-44ea-9e51-b40a35f5e08f/volumes" Nov 25 11:53:53 crc kubenswrapper[4706]: I1125 11:53:53.949823 4706 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="88a1c39b-1b4a-4227-bb11-a80bdb52b74b" path="/var/lib/kubelet/pods/88a1c39b-1b4a-4227-bb11-a80bdb52b74b/volumes" Nov 25 11:53:54 crc kubenswrapper[4706]: I1125 11:53:54.283142 4706 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Nov 25 11:53:54 crc kubenswrapper[4706]: W1125 11:53:54.295550 4706 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod752cf7db_684f_4a5a_8a03_717e69810056.slice/crio-a472256a518ac38733743d1a1fa506ec0dc827d6c8b82543684e1eea90b22236 WatchSource:0}: Error finding container a472256a518ac38733743d1a1fa506ec0dc827d6c8b82543684e1eea90b22236: Status 404 returned error can't find the container with id a472256a518ac38733743d1a1fa506ec0dc827d6c8b82543684e1eea90b22236 Nov 25 11:53:54 crc kubenswrapper[4706]: I1125 11:53:54.437335 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"752cf7db-684f-4a5a-8a03-717e69810056","Type":"ContainerStarted","Data":"a472256a518ac38733743d1a1fa506ec0dc827d6c8b82543684e1eea90b22236"} Nov 25 11:53:54 crc kubenswrapper[4706]: I1125 11:53:54.439037 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-q8rmg" event={"ID":"a2035192-0066-4761-b5a8-2684c95f20ff","Type":"ContainerStarted","Data":"60c34e277835089b5c55d34271c6ea76501b3f1545ee634c9e3e3f06d412bbed"} Nov 25 11:53:54 crc kubenswrapper[4706]: I1125 11:53:54.442794 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-h7s7b" event={"ID":"9a9e827f-acb8-4b85-90f6-c5cd8634f430","Type":"ContainerStarted","Data":"1e13c0a80d21fdc0c68f90a2ea58a3ce1b4b4b184832ad2d127608fa42479869"} Nov 25 11:53:54 crc kubenswrapper[4706]: I1125 11:53:54.443496 4706 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-57d769cc4f-h7s7b" Nov 25 11:53:54 
crc kubenswrapper[4706]: I1125 11:53:54.446865 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"37118d82-a55d-4a10-8b2c-6e5cf036474c","Type":"ContainerStarted","Data":"257c5dfe5f978a54a825f7f983ec837b1966b2be6af332902e650626ea4e1c2c"} Nov 25 11:53:54 crc kubenswrapper[4706]: I1125 11:53:54.449749 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"36bf3efe-847b-4896-878f-1f06e582bf01","Type":"ContainerStarted","Data":"9969127691be8ba0b6f14ea55005e7b6663f2b9d0e14d10df92856e820083c36"} Nov 25 11:53:54 crc kubenswrapper[4706]: I1125 11:53:54.453289 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"3c49be9b-0e12-4db2-82be-3415441f57d4","Type":"ContainerStarted","Data":"6080e20e2c6b58d6ab7f9c0756189f8651b0f678ed863c1f555d21354f3f04d9"} Nov 25 11:53:54 crc kubenswrapper[4706]: I1125 11:53:54.456908 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"49e77cd2-5940-4ae6-9418-d069ce012ad7","Type":"ContainerStarted","Data":"910b7830d84b9d842cd18c059b3852d6e0ac44d81a05d46b194b66ec8e7b9e53"} Nov 25 11:53:54 crc kubenswrapper[4706]: I1125 11:53:54.466319 4706 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-57d769cc4f-h7s7b" podStartSLOduration=7.172930133 podStartE2EDuration="18.466277071s" podCreationTimestamp="2025-11-25 11:53:36 +0000 UTC" firstStartedPulling="2025-11-25 11:53:41.172376575 +0000 UTC m=+1030.086933976" lastFinishedPulling="2025-11-25 11:53:52.465723533 +0000 UTC m=+1041.380280914" observedRunningTime="2025-11-25 11:53:54.464808754 +0000 UTC m=+1043.379366145" watchObservedRunningTime="2025-11-25 11:53:54.466277071 +0000 UTC m=+1043.380834452" Nov 25 11:53:54 crc kubenswrapper[4706]: I1125 11:53:54.470168 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-kd65v" 
event={"ID":"23b72526-ef77-4128-a880-6df46f5db440","Type":"ContainerStarted","Data":"deade87f739d8d85fb2c0c648338cd950eec91487d6fe60d91acf58a27792b64"} Nov 25 11:53:59 crc kubenswrapper[4706]: I1125 11:53:59.228916 4706 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-metrics-9sjfp"] Nov 25 11:53:59 crc kubenswrapper[4706]: I1125 11:53:59.230923 4706 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-metrics-9sjfp" Nov 25 11:53:59 crc kubenswrapper[4706]: I1125 11:53:59.238397 4706 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-9sjfp"] Nov 25 11:53:59 crc kubenswrapper[4706]: I1125 11:53:59.238429 4706 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-metrics-config" Nov 25 11:53:59 crc kubenswrapper[4706]: I1125 11:53:59.352269 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/39f1459f-1764-4a48-8363-b32ac9350cdb-ovn-rundir\") pod \"ovn-controller-metrics-9sjfp\" (UID: \"39f1459f-1764-4a48-8363-b32ac9350cdb\") " pod="openstack/ovn-controller-metrics-9sjfp" Nov 25 11:53:59 crc kubenswrapper[4706]: I1125 11:53:59.352694 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/39f1459f-1764-4a48-8363-b32ac9350cdb-config\") pod \"ovn-controller-metrics-9sjfp\" (UID: \"39f1459f-1764-4a48-8363-b32ac9350cdb\") " pod="openstack/ovn-controller-metrics-9sjfp" Nov 25 11:53:59 crc kubenswrapper[4706]: I1125 11:53:59.352743 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4whtq\" (UniqueName: \"kubernetes.io/projected/39f1459f-1764-4a48-8363-b32ac9350cdb-kube-api-access-4whtq\") pod \"ovn-controller-metrics-9sjfp\" (UID: 
\"39f1459f-1764-4a48-8363-b32ac9350cdb\") " pod="openstack/ovn-controller-metrics-9sjfp" Nov 25 11:53:59 crc kubenswrapper[4706]: I1125 11:53:59.352781 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/39f1459f-1764-4a48-8363-b32ac9350cdb-ovs-rundir\") pod \"ovn-controller-metrics-9sjfp\" (UID: \"39f1459f-1764-4a48-8363-b32ac9350cdb\") " pod="openstack/ovn-controller-metrics-9sjfp" Nov 25 11:53:59 crc kubenswrapper[4706]: I1125 11:53:59.352802 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/39f1459f-1764-4a48-8363-b32ac9350cdb-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-9sjfp\" (UID: \"39f1459f-1764-4a48-8363-b32ac9350cdb\") " pod="openstack/ovn-controller-metrics-9sjfp" Nov 25 11:53:59 crc kubenswrapper[4706]: I1125 11:53:59.352842 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/39f1459f-1764-4a48-8363-b32ac9350cdb-combined-ca-bundle\") pod \"ovn-controller-metrics-9sjfp\" (UID: \"39f1459f-1764-4a48-8363-b32ac9350cdb\") " pod="openstack/ovn-controller-metrics-9sjfp" Nov 25 11:53:59 crc kubenswrapper[4706]: I1125 11:53:59.395454 4706 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-zfbpp"] Nov 25 11:53:59 crc kubenswrapper[4706]: I1125 11:53:59.428294 4706 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7f896c8c65-zk4cz"] Nov 25 11:53:59 crc kubenswrapper[4706]: I1125 11:53:59.431464 4706 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7f896c8c65-zk4cz" Nov 25 11:53:59 crc kubenswrapper[4706]: I1125 11:53:59.438244 4706 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-sb" Nov 25 11:53:59 crc kubenswrapper[4706]: I1125 11:53:59.447228 4706 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7f896c8c65-zk4cz"] Nov 25 11:53:59 crc kubenswrapper[4706]: I1125 11:53:59.453633 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/39f1459f-1764-4a48-8363-b32ac9350cdb-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-9sjfp\" (UID: \"39f1459f-1764-4a48-8363-b32ac9350cdb\") " pod="openstack/ovn-controller-metrics-9sjfp" Nov 25 11:53:59 crc kubenswrapper[4706]: I1125 11:53:59.453943 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/39f1459f-1764-4a48-8363-b32ac9350cdb-combined-ca-bundle\") pod \"ovn-controller-metrics-9sjfp\" (UID: \"39f1459f-1764-4a48-8363-b32ac9350cdb\") " pod="openstack/ovn-controller-metrics-9sjfp" Nov 25 11:53:59 crc kubenswrapper[4706]: I1125 11:53:59.454077 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/39f1459f-1764-4a48-8363-b32ac9350cdb-ovn-rundir\") pod \"ovn-controller-metrics-9sjfp\" (UID: \"39f1459f-1764-4a48-8363-b32ac9350cdb\") " pod="openstack/ovn-controller-metrics-9sjfp" Nov 25 11:53:59 crc kubenswrapper[4706]: I1125 11:53:59.454217 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/39f1459f-1764-4a48-8363-b32ac9350cdb-config\") pod \"ovn-controller-metrics-9sjfp\" (UID: \"39f1459f-1764-4a48-8363-b32ac9350cdb\") " pod="openstack/ovn-controller-metrics-9sjfp" Nov 25 11:53:59 crc kubenswrapper[4706]: I1125 
11:53:59.454377 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4whtq\" (UniqueName: \"kubernetes.io/projected/39f1459f-1764-4a48-8363-b32ac9350cdb-kube-api-access-4whtq\") pod \"ovn-controller-metrics-9sjfp\" (UID: \"39f1459f-1764-4a48-8363-b32ac9350cdb\") " pod="openstack/ovn-controller-metrics-9sjfp" Nov 25 11:53:59 crc kubenswrapper[4706]: I1125 11:53:59.454983 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/39f1459f-1764-4a48-8363-b32ac9350cdb-ovs-rundir\") pod \"ovn-controller-metrics-9sjfp\" (UID: \"39f1459f-1764-4a48-8363-b32ac9350cdb\") " pod="openstack/ovn-controller-metrics-9sjfp" Nov 25 11:53:59 crc kubenswrapper[4706]: I1125 11:53:59.455466 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/39f1459f-1764-4a48-8363-b32ac9350cdb-ovs-rundir\") pod \"ovn-controller-metrics-9sjfp\" (UID: \"39f1459f-1764-4a48-8363-b32ac9350cdb\") " pod="openstack/ovn-controller-metrics-9sjfp" Nov 25 11:53:59 crc kubenswrapper[4706]: I1125 11:53:59.456391 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/39f1459f-1764-4a48-8363-b32ac9350cdb-ovn-rundir\") pod \"ovn-controller-metrics-9sjfp\" (UID: \"39f1459f-1764-4a48-8363-b32ac9350cdb\") " pod="openstack/ovn-controller-metrics-9sjfp" Nov 25 11:53:59 crc kubenswrapper[4706]: I1125 11:53:59.457569 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/39f1459f-1764-4a48-8363-b32ac9350cdb-config\") pod \"ovn-controller-metrics-9sjfp\" (UID: \"39f1459f-1764-4a48-8363-b32ac9350cdb\") " pod="openstack/ovn-controller-metrics-9sjfp" Nov 25 11:53:59 crc kubenswrapper[4706]: I1125 11:53:59.463929 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/39f1459f-1764-4a48-8363-b32ac9350cdb-combined-ca-bundle\") pod \"ovn-controller-metrics-9sjfp\" (UID: \"39f1459f-1764-4a48-8363-b32ac9350cdb\") " pod="openstack/ovn-controller-metrics-9sjfp" Nov 25 11:53:59 crc kubenswrapper[4706]: I1125 11:53:59.475285 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/39f1459f-1764-4a48-8363-b32ac9350cdb-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-9sjfp\" (UID: \"39f1459f-1764-4a48-8363-b32ac9350cdb\") " pod="openstack/ovn-controller-metrics-9sjfp" Nov 25 11:53:59 crc kubenswrapper[4706]: I1125 11:53:59.482088 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4whtq\" (UniqueName: \"kubernetes.io/projected/39f1459f-1764-4a48-8363-b32ac9350cdb-kube-api-access-4whtq\") pod \"ovn-controller-metrics-9sjfp\" (UID: \"39f1459f-1764-4a48-8363-b32ac9350cdb\") " pod="openstack/ovn-controller-metrics-9sjfp" Nov 25 11:53:59 crc kubenswrapper[4706]: I1125 11:53:59.557561 4706 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-metrics-9sjfp" Nov 25 11:53:59 crc kubenswrapper[4706]: I1125 11:53:59.560560 4706 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-h7s7b"] Nov 25 11:53:59 crc kubenswrapper[4706]: I1125 11:53:59.561874 4706 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-57d769cc4f-h7s7b" podUID="9a9e827f-acb8-4b85-90f6-c5cd8634f430" containerName="dnsmasq-dns" containerID="cri-o://1e13c0a80d21fdc0c68f90a2ea58a3ce1b4b4b184832ad2d127608fa42479869" gracePeriod=10 Nov 25 11:53:59 crc kubenswrapper[4706]: I1125 11:53:59.562011 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/db081b9a-d1e3-42e7-904e-acb26e50cfd4-ovsdbserver-sb\") pod \"dnsmasq-dns-7f896c8c65-zk4cz\" (UID: \"db081b9a-d1e3-42e7-904e-acb26e50cfd4\") " pod="openstack/dnsmasq-dns-7f896c8c65-zk4cz" Nov 25 11:53:59 crc kubenswrapper[4706]: I1125 11:53:59.562096 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/db081b9a-d1e3-42e7-904e-acb26e50cfd4-dns-svc\") pod \"dnsmasq-dns-7f896c8c65-zk4cz\" (UID: \"db081b9a-d1e3-42e7-904e-acb26e50cfd4\") " pod="openstack/dnsmasq-dns-7f896c8c65-zk4cz" Nov 25 11:53:59 crc kubenswrapper[4706]: I1125 11:53:59.562181 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/db081b9a-d1e3-42e7-904e-acb26e50cfd4-config\") pod \"dnsmasq-dns-7f896c8c65-zk4cz\" (UID: \"db081b9a-d1e3-42e7-904e-acb26e50cfd4\") " pod="openstack/dnsmasq-dns-7f896c8c65-zk4cz" Nov 25 11:53:59 crc kubenswrapper[4706]: I1125 11:53:59.562261 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vjdsj\" (UniqueName: 
\"kubernetes.io/projected/db081b9a-d1e3-42e7-904e-acb26e50cfd4-kube-api-access-vjdsj\") pod \"dnsmasq-dns-7f896c8c65-zk4cz\" (UID: \"db081b9a-d1e3-42e7-904e-acb26e50cfd4\") " pod="openstack/dnsmasq-dns-7f896c8c65-zk4cz" Nov 25 11:53:59 crc kubenswrapper[4706]: I1125 11:53:59.568379 4706 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-57d769cc4f-h7s7b" Nov 25 11:53:59 crc kubenswrapper[4706]: I1125 11:53:59.625043 4706 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-zzvxf"] Nov 25 11:53:59 crc kubenswrapper[4706]: I1125 11:53:59.629257 4706 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-86db49b7ff-zzvxf" Nov 25 11:53:59 crc kubenswrapper[4706]: I1125 11:53:59.636237 4706 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-nb" Nov 25 11:53:59 crc kubenswrapper[4706]: I1125 11:53:59.655509 4706 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-zzvxf"] Nov 25 11:53:59 crc kubenswrapper[4706]: I1125 11:53:59.667145 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/db081b9a-d1e3-42e7-904e-acb26e50cfd4-ovsdbserver-sb\") pod \"dnsmasq-dns-7f896c8c65-zk4cz\" (UID: \"db081b9a-d1e3-42e7-904e-acb26e50cfd4\") " pod="openstack/dnsmasq-dns-7f896c8c65-zk4cz" Nov 25 11:53:59 crc kubenswrapper[4706]: I1125 11:53:59.667235 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/db081b9a-d1e3-42e7-904e-acb26e50cfd4-dns-svc\") pod \"dnsmasq-dns-7f896c8c65-zk4cz\" (UID: \"db081b9a-d1e3-42e7-904e-acb26e50cfd4\") " pod="openstack/dnsmasq-dns-7f896c8c65-zk4cz" Nov 25 11:53:59 crc kubenswrapper[4706]: I1125 11:53:59.667324 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" 
(UniqueName: \"kubernetes.io/configmap/db081b9a-d1e3-42e7-904e-acb26e50cfd4-config\") pod \"dnsmasq-dns-7f896c8c65-zk4cz\" (UID: \"db081b9a-d1e3-42e7-904e-acb26e50cfd4\") " pod="openstack/dnsmasq-dns-7f896c8c65-zk4cz" Nov 25 11:53:59 crc kubenswrapper[4706]: I1125 11:53:59.671486 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/db081b9a-d1e3-42e7-904e-acb26e50cfd4-dns-svc\") pod \"dnsmasq-dns-7f896c8c65-zk4cz\" (UID: \"db081b9a-d1e3-42e7-904e-acb26e50cfd4\") " pod="openstack/dnsmasq-dns-7f896c8c65-zk4cz" Nov 25 11:53:59 crc kubenswrapper[4706]: I1125 11:53:59.671574 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/db081b9a-d1e3-42e7-904e-acb26e50cfd4-ovsdbserver-sb\") pod \"dnsmasq-dns-7f896c8c65-zk4cz\" (UID: \"db081b9a-d1e3-42e7-904e-acb26e50cfd4\") " pod="openstack/dnsmasq-dns-7f896c8c65-zk4cz" Nov 25 11:53:59 crc kubenswrapper[4706]: I1125 11:53:59.672039 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/db081b9a-d1e3-42e7-904e-acb26e50cfd4-config\") pod \"dnsmasq-dns-7f896c8c65-zk4cz\" (UID: \"db081b9a-d1e3-42e7-904e-acb26e50cfd4\") " pod="openstack/dnsmasq-dns-7f896c8c65-zk4cz" Nov 25 11:53:59 crc kubenswrapper[4706]: I1125 11:53:59.672080 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vjdsj\" (UniqueName: \"kubernetes.io/projected/db081b9a-d1e3-42e7-904e-acb26e50cfd4-kube-api-access-vjdsj\") pod \"dnsmasq-dns-7f896c8c65-zk4cz\" (UID: \"db081b9a-d1e3-42e7-904e-acb26e50cfd4\") " pod="openstack/dnsmasq-dns-7f896c8c65-zk4cz" Nov 25 11:53:59 crc kubenswrapper[4706]: I1125 11:53:59.698032 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vjdsj\" (UniqueName: \"kubernetes.io/projected/db081b9a-d1e3-42e7-904e-acb26e50cfd4-kube-api-access-vjdsj\") 
pod \"dnsmasq-dns-7f896c8c65-zk4cz\" (UID: \"db081b9a-d1e3-42e7-904e-acb26e50cfd4\") " pod="openstack/dnsmasq-dns-7f896c8c65-zk4cz" Nov 25 11:53:59 crc kubenswrapper[4706]: I1125 11:53:59.774454 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1ed007e8-82f1-4ff7-9f34-ce6656e77cfb-ovsdbserver-nb\") pod \"dnsmasq-dns-86db49b7ff-zzvxf\" (UID: \"1ed007e8-82f1-4ff7-9f34-ce6656e77cfb\") " pod="openstack/dnsmasq-dns-86db49b7ff-zzvxf" Nov 25 11:53:59 crc kubenswrapper[4706]: I1125 11:53:59.774574 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1ed007e8-82f1-4ff7-9f34-ce6656e77cfb-config\") pod \"dnsmasq-dns-86db49b7ff-zzvxf\" (UID: \"1ed007e8-82f1-4ff7-9f34-ce6656e77cfb\") " pod="openstack/dnsmasq-dns-86db49b7ff-zzvxf" Nov 25 11:53:59 crc kubenswrapper[4706]: I1125 11:53:59.775465 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1ed007e8-82f1-4ff7-9f34-ce6656e77cfb-dns-svc\") pod \"dnsmasq-dns-86db49b7ff-zzvxf\" (UID: \"1ed007e8-82f1-4ff7-9f34-ce6656e77cfb\") " pod="openstack/dnsmasq-dns-86db49b7ff-zzvxf" Nov 25 11:53:59 crc kubenswrapper[4706]: I1125 11:53:59.775722 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1ed007e8-82f1-4ff7-9f34-ce6656e77cfb-ovsdbserver-sb\") pod \"dnsmasq-dns-86db49b7ff-zzvxf\" (UID: \"1ed007e8-82f1-4ff7-9f34-ce6656e77cfb\") " pod="openstack/dnsmasq-dns-86db49b7ff-zzvxf" Nov 25 11:53:59 crc kubenswrapper[4706]: I1125 11:53:59.775826 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rj4lp\" (UniqueName: 
\"kubernetes.io/projected/1ed007e8-82f1-4ff7-9f34-ce6656e77cfb-kube-api-access-rj4lp\") pod \"dnsmasq-dns-86db49b7ff-zzvxf\" (UID: \"1ed007e8-82f1-4ff7-9f34-ce6656e77cfb\") " pod="openstack/dnsmasq-dns-86db49b7ff-zzvxf" Nov 25 11:53:59 crc kubenswrapper[4706]: I1125 11:53:59.828262 4706 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7f896c8c65-zk4cz" Nov 25 11:53:59 crc kubenswrapper[4706]: I1125 11:53:59.879311 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1ed007e8-82f1-4ff7-9f34-ce6656e77cfb-ovsdbserver-nb\") pod \"dnsmasq-dns-86db49b7ff-zzvxf\" (UID: \"1ed007e8-82f1-4ff7-9f34-ce6656e77cfb\") " pod="openstack/dnsmasq-dns-86db49b7ff-zzvxf" Nov 25 11:53:59 crc kubenswrapper[4706]: I1125 11:53:59.878337 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1ed007e8-82f1-4ff7-9f34-ce6656e77cfb-ovsdbserver-nb\") pod \"dnsmasq-dns-86db49b7ff-zzvxf\" (UID: \"1ed007e8-82f1-4ff7-9f34-ce6656e77cfb\") " pod="openstack/dnsmasq-dns-86db49b7ff-zzvxf" Nov 25 11:53:59 crc kubenswrapper[4706]: I1125 11:53:59.879503 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1ed007e8-82f1-4ff7-9f34-ce6656e77cfb-config\") pod \"dnsmasq-dns-86db49b7ff-zzvxf\" (UID: \"1ed007e8-82f1-4ff7-9f34-ce6656e77cfb\") " pod="openstack/dnsmasq-dns-86db49b7ff-zzvxf" Nov 25 11:53:59 crc kubenswrapper[4706]: I1125 11:53:59.879569 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1ed007e8-82f1-4ff7-9f34-ce6656e77cfb-dns-svc\") pod \"dnsmasq-dns-86db49b7ff-zzvxf\" (UID: \"1ed007e8-82f1-4ff7-9f34-ce6656e77cfb\") " pod="openstack/dnsmasq-dns-86db49b7ff-zzvxf" Nov 25 11:53:59 crc kubenswrapper[4706]: I1125 11:53:59.879671 4706 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1ed007e8-82f1-4ff7-9f34-ce6656e77cfb-ovsdbserver-sb\") pod \"dnsmasq-dns-86db49b7ff-zzvxf\" (UID: \"1ed007e8-82f1-4ff7-9f34-ce6656e77cfb\") " pod="openstack/dnsmasq-dns-86db49b7ff-zzvxf" Nov 25 11:53:59 crc kubenswrapper[4706]: I1125 11:53:59.879749 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rj4lp\" (UniqueName: \"kubernetes.io/projected/1ed007e8-82f1-4ff7-9f34-ce6656e77cfb-kube-api-access-rj4lp\") pod \"dnsmasq-dns-86db49b7ff-zzvxf\" (UID: \"1ed007e8-82f1-4ff7-9f34-ce6656e77cfb\") " pod="openstack/dnsmasq-dns-86db49b7ff-zzvxf" Nov 25 11:53:59 crc kubenswrapper[4706]: I1125 11:53:59.880716 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1ed007e8-82f1-4ff7-9f34-ce6656e77cfb-config\") pod \"dnsmasq-dns-86db49b7ff-zzvxf\" (UID: \"1ed007e8-82f1-4ff7-9f34-ce6656e77cfb\") " pod="openstack/dnsmasq-dns-86db49b7ff-zzvxf" Nov 25 11:53:59 crc kubenswrapper[4706]: I1125 11:53:59.880800 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1ed007e8-82f1-4ff7-9f34-ce6656e77cfb-dns-svc\") pod \"dnsmasq-dns-86db49b7ff-zzvxf\" (UID: \"1ed007e8-82f1-4ff7-9f34-ce6656e77cfb\") " pod="openstack/dnsmasq-dns-86db49b7ff-zzvxf" Nov 25 11:53:59 crc kubenswrapper[4706]: I1125 11:53:59.881080 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1ed007e8-82f1-4ff7-9f34-ce6656e77cfb-ovsdbserver-sb\") pod \"dnsmasq-dns-86db49b7ff-zzvxf\" (UID: \"1ed007e8-82f1-4ff7-9f34-ce6656e77cfb\") " pod="openstack/dnsmasq-dns-86db49b7ff-zzvxf" Nov 25 11:53:59 crc kubenswrapper[4706]: I1125 11:53:59.903856 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rj4lp\" 
(UniqueName: \"kubernetes.io/projected/1ed007e8-82f1-4ff7-9f34-ce6656e77cfb-kube-api-access-rj4lp\") pod \"dnsmasq-dns-86db49b7ff-zzvxf\" (UID: \"1ed007e8-82f1-4ff7-9f34-ce6656e77cfb\") " pod="openstack/dnsmasq-dns-86db49b7ff-zzvxf" Nov 25 11:53:59 crc kubenswrapper[4706]: I1125 11:53:59.957877 4706 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-86db49b7ff-zzvxf" Nov 25 11:54:00 crc kubenswrapper[4706]: I1125 11:54:00.544817 4706 generic.go:334] "Generic (PLEG): container finished" podID="9a9e827f-acb8-4b85-90f6-c5cd8634f430" containerID="1e13c0a80d21fdc0c68f90a2ea58a3ce1b4b4b184832ad2d127608fa42479869" exitCode=0 Nov 25 11:54:00 crc kubenswrapper[4706]: I1125 11:54:00.544876 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-h7s7b" event={"ID":"9a9e827f-acb8-4b85-90f6-c5cd8634f430","Type":"ContainerDied","Data":"1e13c0a80d21fdc0c68f90a2ea58a3ce1b4b4b184832ad2d127608fa42479869"} Nov 25 11:54:01 crc kubenswrapper[4706]: I1125 11:54:01.755062 4706 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-57d769cc4f-h7s7b" podUID="9a9e827f-acb8-4b85-90f6-c5cd8634f430" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.96:5353: connect: connection refused" Nov 25 11:54:02 crc kubenswrapper[4706]: I1125 11:54:02.894103 4706 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-9sjfp"] Nov 25 11:54:03 crc kubenswrapper[4706]: I1125 11:54:03.012523 4706 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-h7s7b" Nov 25 11:54:03 crc kubenswrapper[4706]: I1125 11:54:03.046725 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-495qr\" (UniqueName: \"kubernetes.io/projected/9a9e827f-acb8-4b85-90f6-c5cd8634f430-kube-api-access-495qr\") pod \"9a9e827f-acb8-4b85-90f6-c5cd8634f430\" (UID: \"9a9e827f-acb8-4b85-90f6-c5cd8634f430\") " Nov 25 11:54:03 crc kubenswrapper[4706]: I1125 11:54:03.046777 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9a9e827f-acb8-4b85-90f6-c5cd8634f430-dns-svc\") pod \"9a9e827f-acb8-4b85-90f6-c5cd8634f430\" (UID: \"9a9e827f-acb8-4b85-90f6-c5cd8634f430\") " Nov 25 11:54:03 crc kubenswrapper[4706]: I1125 11:54:03.071372 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9a9e827f-acb8-4b85-90f6-c5cd8634f430-kube-api-access-495qr" (OuterVolumeSpecName: "kube-api-access-495qr") pod "9a9e827f-acb8-4b85-90f6-c5cd8634f430" (UID: "9a9e827f-acb8-4b85-90f6-c5cd8634f430"). InnerVolumeSpecName "kube-api-access-495qr". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 11:54:03 crc kubenswrapper[4706]: I1125 11:54:03.130010 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9a9e827f-acb8-4b85-90f6-c5cd8634f430-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "9a9e827f-acb8-4b85-90f6-c5cd8634f430" (UID: "9a9e827f-acb8-4b85-90f6-c5cd8634f430"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 11:54:03 crc kubenswrapper[4706]: I1125 11:54:03.147591 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9a9e827f-acb8-4b85-90f6-c5cd8634f430-config\") pod \"9a9e827f-acb8-4b85-90f6-c5cd8634f430\" (UID: \"9a9e827f-acb8-4b85-90f6-c5cd8634f430\") " Nov 25 11:54:03 crc kubenswrapper[4706]: I1125 11:54:03.147864 4706 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-495qr\" (UniqueName: \"kubernetes.io/projected/9a9e827f-acb8-4b85-90f6-c5cd8634f430-kube-api-access-495qr\") on node \"crc\" DevicePath \"\"" Nov 25 11:54:03 crc kubenswrapper[4706]: I1125 11:54:03.147882 4706 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9a9e827f-acb8-4b85-90f6-c5cd8634f430-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 25 11:54:03 crc kubenswrapper[4706]: I1125 11:54:03.220945 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9a9e827f-acb8-4b85-90f6-c5cd8634f430-config" (OuterVolumeSpecName: "config") pod "9a9e827f-acb8-4b85-90f6-c5cd8634f430" (UID: "9a9e827f-acb8-4b85-90f6-c5cd8634f430"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 11:54:03 crc kubenswrapper[4706]: I1125 11:54:03.249452 4706 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9a9e827f-acb8-4b85-90f6-c5cd8634f430-config\") on node \"crc\" DevicePath \"\"" Nov 25 11:54:03 crc kubenswrapper[4706]: I1125 11:54:03.578847 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-9sjfp" event={"ID":"39f1459f-1764-4a48-8363-b32ac9350cdb","Type":"ContainerStarted","Data":"ad7648b7af2dd88c324ee95ccd9802bd132e12023f6b5c32f91e99f7150041e4"} Nov 25 11:54:03 crc kubenswrapper[4706]: I1125 11:54:03.580755 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-h7s7b" event={"ID":"9a9e827f-acb8-4b85-90f6-c5cd8634f430","Type":"ContainerDied","Data":"a623898070181ffd5e670c3c6ef8362ec3559af24564dc55c573fdf4f3bdd0ae"} Nov 25 11:54:03 crc kubenswrapper[4706]: I1125 11:54:03.580827 4706 scope.go:117] "RemoveContainer" containerID="1e13c0a80d21fdc0c68f90a2ea58a3ce1b4b4b184832ad2d127608fa42479869" Nov 25 11:54:03 crc kubenswrapper[4706]: I1125 11:54:03.580984 4706 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-h7s7b" Nov 25 11:54:03 crc kubenswrapper[4706]: I1125 11:54:03.615230 4706 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-h7s7b"] Nov 25 11:54:03 crc kubenswrapper[4706]: I1125 11:54:03.620600 4706 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-h7s7b"] Nov 25 11:54:03 crc kubenswrapper[4706]: I1125 11:54:03.844490 4706 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7f896c8c65-zk4cz"] Nov 25 11:54:03 crc kubenswrapper[4706]: I1125 11:54:03.896616 4706 scope.go:117] "RemoveContainer" containerID="7ac48e2fd686e5b1d32a9362e9d6ea4dfee03d3cf448f17bf2abf4488c269da4" Nov 25 11:54:03 crc kubenswrapper[4706]: I1125 11:54:03.933918 4706 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9a9e827f-acb8-4b85-90f6-c5cd8634f430" path="/var/lib/kubelet/pods/9a9e827f-acb8-4b85-90f6-c5cd8634f430/volumes" Nov 25 11:54:04 crc kubenswrapper[4706]: I1125 11:54:04.241701 4706 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-zzvxf"] Nov 25 11:54:04 crc kubenswrapper[4706]: I1125 11:54:04.591469 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-zzvxf" event={"ID":"1ed007e8-82f1-4ff7-9f34-ce6656e77cfb","Type":"ContainerStarted","Data":"7b367b2497056a6d1abb97e124931f5296f0dc47309265c76a40d056483f56c4"} Nov 25 11:54:04 crc kubenswrapper[4706]: I1125 11:54:04.596521 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-zfbpp" event={"ID":"d1f830dd-11b4-4ef5-bec1-796d0c51c8bb","Type":"ContainerStarted","Data":"ae49ec0d0298a89bd24477d9682dd0728b1d78bfe214587b2466b7935975edb8"} Nov 25 11:54:04 crc kubenswrapper[4706]: I1125 11:54:04.596578 4706 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-666b6646f7-zfbpp" 
podUID="d1f830dd-11b4-4ef5-bec1-796d0c51c8bb" containerName="dnsmasq-dns" containerID="cri-o://ae49ec0d0298a89bd24477d9682dd0728b1d78bfe214587b2466b7935975edb8" gracePeriod=10 Nov 25 11:54:04 crc kubenswrapper[4706]: I1125 11:54:04.596842 4706 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-666b6646f7-zfbpp" Nov 25 11:54:04 crc kubenswrapper[4706]: I1125 11:54:04.598979 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7f896c8c65-zk4cz" event={"ID":"db081b9a-d1e3-42e7-904e-acb26e50cfd4","Type":"ContainerStarted","Data":"2384c4b1dbd69f0c13718c7ccfddd25288f89ed14f136bcac014948df2e4aa2b"} Nov 25 11:54:04 crc kubenswrapper[4706]: I1125 11:54:04.617264 4706 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-666b6646f7-zfbpp" podStartSLOduration=13.170211586 podStartE2EDuration="28.617239228s" podCreationTimestamp="2025-11-25 11:53:36 +0000 UTC" firstStartedPulling="2025-11-25 11:53:37.007724295 +0000 UTC m=+1025.922281676" lastFinishedPulling="2025-11-25 11:53:52.454751947 +0000 UTC m=+1041.369309318" observedRunningTime="2025-11-25 11:54:04.614041458 +0000 UTC m=+1053.528598859" watchObservedRunningTime="2025-11-25 11:54:04.617239228 +0000 UTC m=+1053.531796609" Nov 25 11:54:05 crc kubenswrapper[4706]: I1125 11:54:05.613398 4706 generic.go:334] "Generic (PLEG): container finished" podID="d1f830dd-11b4-4ef5-bec1-796d0c51c8bb" containerID="ae49ec0d0298a89bd24477d9682dd0728b1d78bfe214587b2466b7935975edb8" exitCode=0 Nov 25 11:54:05 crc kubenswrapper[4706]: I1125 11:54:05.613483 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-zfbpp" event={"ID":"d1f830dd-11b4-4ef5-bec1-796d0c51c8bb","Type":"ContainerDied","Data":"ae49ec0d0298a89bd24477d9682dd0728b1d78bfe214587b2466b7935975edb8"} Nov 25 11:54:05 crc kubenswrapper[4706]: I1125 11:54:05.615555 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/openstack-galera-0" event={"ID":"64ca6766-8491-40bc-a14e-eb866edf3fe8","Type":"ContainerStarted","Data":"1cf3d99765328eecf21ed05c6b15ee504d82ef7b2748c535fadf58b550590766"} Nov 25 11:54:05 crc kubenswrapper[4706]: I1125 11:54:05.617198 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7f896c8c65-zk4cz" event={"ID":"db081b9a-d1e3-42e7-904e-acb26e50cfd4","Type":"ContainerStarted","Data":"d799a3d852ad28f81b6d19d958c2c9410e9f4cc7ff83c5edfc98077eccc1778a"} Nov 25 11:54:05 crc kubenswrapper[4706]: I1125 11:54:05.905865 4706 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-zfbpp" Nov 25 11:54:06 crc kubenswrapper[4706]: I1125 11:54:06.015502 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d1f830dd-11b4-4ef5-bec1-796d0c51c8bb-config\") pod \"d1f830dd-11b4-4ef5-bec1-796d0c51c8bb\" (UID: \"d1f830dd-11b4-4ef5-bec1-796d0c51c8bb\") " Nov 25 11:54:06 crc kubenswrapper[4706]: I1125 11:54:06.015560 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-97qnn\" (UniqueName: \"kubernetes.io/projected/d1f830dd-11b4-4ef5-bec1-796d0c51c8bb-kube-api-access-97qnn\") pod \"d1f830dd-11b4-4ef5-bec1-796d0c51c8bb\" (UID: \"d1f830dd-11b4-4ef5-bec1-796d0c51c8bb\") " Nov 25 11:54:06 crc kubenswrapper[4706]: I1125 11:54:06.015704 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d1f830dd-11b4-4ef5-bec1-796d0c51c8bb-dns-svc\") pod \"d1f830dd-11b4-4ef5-bec1-796d0c51c8bb\" (UID: \"d1f830dd-11b4-4ef5-bec1-796d0c51c8bb\") " Nov 25 11:54:06 crc kubenswrapper[4706]: I1125 11:54:06.028940 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d1f830dd-11b4-4ef5-bec1-796d0c51c8bb-kube-api-access-97qnn" (OuterVolumeSpecName: 
"kube-api-access-97qnn") pod "d1f830dd-11b4-4ef5-bec1-796d0c51c8bb" (UID: "d1f830dd-11b4-4ef5-bec1-796d0c51c8bb"). InnerVolumeSpecName "kube-api-access-97qnn". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 11:54:06 crc kubenswrapper[4706]: I1125 11:54:06.065218 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d1f830dd-11b4-4ef5-bec1-796d0c51c8bb-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "d1f830dd-11b4-4ef5-bec1-796d0c51c8bb" (UID: "d1f830dd-11b4-4ef5-bec1-796d0c51c8bb"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 11:54:06 crc kubenswrapper[4706]: I1125 11:54:06.069709 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d1f830dd-11b4-4ef5-bec1-796d0c51c8bb-config" (OuterVolumeSpecName: "config") pod "d1f830dd-11b4-4ef5-bec1-796d0c51c8bb" (UID: "d1f830dd-11b4-4ef5-bec1-796d0c51c8bb"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 11:54:06 crc kubenswrapper[4706]: I1125 11:54:06.117716 4706 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d1f830dd-11b4-4ef5-bec1-796d0c51c8bb-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 25 11:54:06 crc kubenswrapper[4706]: I1125 11:54:06.117761 4706 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d1f830dd-11b4-4ef5-bec1-796d0c51c8bb-config\") on node \"crc\" DevicePath \"\"" Nov 25 11:54:06 crc kubenswrapper[4706]: I1125 11:54:06.117771 4706 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-97qnn\" (UniqueName: \"kubernetes.io/projected/d1f830dd-11b4-4ef5-bec1-796d0c51c8bb-kube-api-access-97qnn\") on node \"crc\" DevicePath \"\"" Nov 25 11:54:06 crc kubenswrapper[4706]: I1125 11:54:06.653948 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" 
event={"ID":"3c49be9b-0e12-4db2-82be-3415441f57d4","Type":"ContainerStarted","Data":"d0156237c1c89c6da0009b5615fec4897b1af588e7b101a66a7a3f632b8c84ef"} Nov 25 11:54:06 crc kubenswrapper[4706]: I1125 11:54:06.656074 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-zfbpp" event={"ID":"d1f830dd-11b4-4ef5-bec1-796d0c51c8bb","Type":"ContainerDied","Data":"997b3e4a9423d2495f6042b0acefc68df9a33827c62d123d4cf399a42e6dc366"} Nov 25 11:54:06 crc kubenswrapper[4706]: I1125 11:54:06.656113 4706 scope.go:117] "RemoveContainer" containerID="ae49ec0d0298a89bd24477d9682dd0728b1d78bfe214587b2466b7935975edb8" Nov 25 11:54:06 crc kubenswrapper[4706]: I1125 11:54:06.656146 4706 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-zfbpp" Nov 25 11:54:06 crc kubenswrapper[4706]: I1125 11:54:06.659640 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7f896c8c65-zk4cz" event={"ID":"db081b9a-d1e3-42e7-904e-acb26e50cfd4","Type":"ContainerDied","Data":"d799a3d852ad28f81b6d19d958c2c9410e9f4cc7ff83c5edfc98077eccc1778a"} Nov 25 11:54:06 crc kubenswrapper[4706]: I1125 11:54:06.659522 4706 generic.go:334] "Generic (PLEG): container finished" podID="db081b9a-d1e3-42e7-904e-acb26e50cfd4" containerID="d799a3d852ad28f81b6d19d958c2c9410e9f4cc7ff83c5edfc98077eccc1778a" exitCode=0 Nov 25 11:54:06 crc kubenswrapper[4706]: I1125 11:54:06.665684 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"ed6df424-6b86-44a1-8157-ca1f33167065","Type":"ContainerStarted","Data":"472e1a1470dd4c66501e097ee3e8181de9d16ed619b7ecc940dc21ed60c2dd09"} Nov 25 11:54:06 crc kubenswrapper[4706]: I1125 11:54:06.727993 4706 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-zfbpp"] Nov 25 11:54:06 crc kubenswrapper[4706]: I1125 11:54:06.735748 4706 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openstack/dnsmasq-dns-666b6646f7-zfbpp"] Nov 25 11:54:07 crc kubenswrapper[4706]: I1125 11:54:07.673520 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"557c84e6-ab5c-40c1-a3e1-68b513874f9b","Type":"ContainerStarted","Data":"e103b920c3e3166a3cec4818cbdc4804339d57762b5c16546942f4fc4d6c3c61"} Nov 25 11:54:07 crc kubenswrapper[4706]: I1125 11:54:07.931756 4706 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d1f830dd-11b4-4ef5-bec1-796d0c51c8bb" path="/var/lib/kubelet/pods/d1f830dd-11b4-4ef5-bec1-796d0c51c8bb/volumes" Nov 25 11:54:11 crc kubenswrapper[4706]: I1125 11:54:11.402284 4706 scope.go:117] "RemoveContainer" containerID="6cbb9d165dc382e1af0082038b60778ab403fd295b501fb238f3a0d51d58aa8e" Nov 25 11:54:11 crc kubenswrapper[4706]: I1125 11:54:11.768420 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"37118d82-a55d-4a10-8b2c-6e5cf036474c","Type":"ContainerStarted","Data":"7a7f6d066a158d7578ef0681f444b1c643a98015393df584ea078d2e69553502"} Nov 25 11:54:11 crc kubenswrapper[4706]: I1125 11:54:11.769383 4706 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/memcached-0" Nov 25 11:54:11 crc kubenswrapper[4706]: I1125 11:54:11.780131 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"49e77cd2-5940-4ae6-9418-d069ce012ad7","Type":"ContainerStarted","Data":"ee7ca1551bd72f1daaf40ac0815cb177b2dd3be0f26a9fda7528346aa12153b7"} Nov 25 11:54:11 crc kubenswrapper[4706]: I1125 11:54:11.782737 4706 generic.go:334] "Generic (PLEG): container finished" podID="1ed007e8-82f1-4ff7-9f34-ce6656e77cfb" containerID="223b93109e8f853c659aabf71fd41a099f0f1663fbf920d5663a699b50dd8ae9" exitCode=0 Nov 25 11:54:11 crc kubenswrapper[4706]: I1125 11:54:11.782790 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-zzvxf" 
event={"ID":"1ed007e8-82f1-4ff7-9f34-ce6656e77cfb","Type":"ContainerDied","Data":"223b93109e8f853c659aabf71fd41a099f0f1663fbf920d5663a699b50dd8ae9"} Nov 25 11:54:11 crc kubenswrapper[4706]: I1125 11:54:11.807221 4706 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/memcached-0" podStartSLOduration=21.583555329 podStartE2EDuration="31.807188106s" podCreationTimestamp="2025-11-25 11:53:40 +0000 UTC" firstStartedPulling="2025-11-25 11:53:53.57024403 +0000 UTC m=+1042.484801411" lastFinishedPulling="2025-11-25 11:54:03.793876797 +0000 UTC m=+1052.708434188" observedRunningTime="2025-11-25 11:54:11.798342424 +0000 UTC m=+1060.712899805" watchObservedRunningTime="2025-11-25 11:54:11.807188106 +0000 UTC m=+1060.721745487" Nov 25 11:54:11 crc kubenswrapper[4706]: I1125 11:54:11.836602 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-kd65v" event={"ID":"23b72526-ef77-4128-a880-6df46f5db440","Type":"ContainerStarted","Data":"2e0b1d6be2049ede0b661033176ca16f31807472a6b609d5d46282227baaede3"} Nov 25 11:54:11 crc kubenswrapper[4706]: I1125 11:54:11.837748 4706 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-kd65v" Nov 25 11:54:11 crc kubenswrapper[4706]: I1125 11:54:11.852715 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"752cf7db-684f-4a5a-8a03-717e69810056","Type":"ContainerStarted","Data":"ab659db8d22db278f218c9a9682ea0d970ef7aeb42820ce047386562a298fe42"} Nov 25 11:54:11 crc kubenswrapper[4706]: I1125 11:54:11.868097 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7f896c8c65-zk4cz" event={"ID":"db081b9a-d1e3-42e7-904e-acb26e50cfd4","Type":"ContainerStarted","Data":"32d8033a6ca940c84dba99e29ace5bd982b7f9f34b1abeb1de7ae072abb193a2"} Nov 25 11:54:11 crc kubenswrapper[4706]: I1125 11:54:11.868551 4706 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack/dnsmasq-dns-7f896c8c65-zk4cz" Nov 25 11:54:11 crc kubenswrapper[4706]: I1125 11:54:11.871163 4706 generic.go:334] "Generic (PLEG): container finished" podID="a2035192-0066-4761-b5a8-2684c95f20ff" containerID="4375e3500e75928629cfd69490ed65bbdec4f1bfe7d4db27cd1efd8d332c16f3" exitCode=0 Nov 25 11:54:11 crc kubenswrapper[4706]: I1125 11:54:11.871324 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-q8rmg" event={"ID":"a2035192-0066-4761-b5a8-2684c95f20ff","Type":"ContainerDied","Data":"4375e3500e75928629cfd69490ed65bbdec4f1bfe7d4db27cd1efd8d332c16f3"} Nov 25 11:54:11 crc kubenswrapper[4706]: I1125 11:54:11.925302 4706 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-kd65v" podStartSLOduration=15.269802613 podStartE2EDuration="25.925280288s" podCreationTimestamp="2025-11-25 11:53:46 +0000 UTC" firstStartedPulling="2025-11-25 11:53:53.570364993 +0000 UTC m=+1042.484922374" lastFinishedPulling="2025-11-25 11:54:04.225842668 +0000 UTC m=+1053.140400049" observedRunningTime="2025-11-25 11:54:11.924234972 +0000 UTC m=+1060.838792343" watchObservedRunningTime="2025-11-25 11:54:11.925280288 +0000 UTC m=+1060.839837669" Nov 25 11:54:11 crc kubenswrapper[4706]: I1125 11:54:11.958814 4706 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-7f896c8c65-zk4cz" podStartSLOduration=12.958791922 podStartE2EDuration="12.958791922s" podCreationTimestamp="2025-11-25 11:53:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 11:54:11.947632601 +0000 UTC m=+1060.862190002" watchObservedRunningTime="2025-11-25 11:54:11.958791922 +0000 UTC m=+1060.873349303" Nov 25 11:54:12 crc kubenswrapper[4706]: I1125 11:54:12.880835 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-9sjfp" 
event={"ID":"39f1459f-1764-4a48-8363-b32ac9350cdb","Type":"ContainerStarted","Data":"9f053a38cf0d54ba105bc8bfebfd9f5be4cd94d17a83a362a1601c7989ff3b7f"} Nov 25 11:54:12 crc kubenswrapper[4706]: I1125 11:54:12.882752 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"752cf7db-684f-4a5a-8a03-717e69810056","Type":"ContainerStarted","Data":"a75726049784d59f6ae7bbc53b26d8b8b608ed4087e38ccb5c618eab304ec2f9"} Nov 25 11:54:12 crc kubenswrapper[4706]: I1125 11:54:12.885678 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-q8rmg" event={"ID":"a2035192-0066-4761-b5a8-2684c95f20ff","Type":"ContainerStarted","Data":"e973ad2b06362c9f888142ac73eb48ff70915c40c6af82fe6263b4d4e060afe6"} Nov 25 11:54:12 crc kubenswrapper[4706]: I1125 11:54:12.885719 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-q8rmg" event={"ID":"a2035192-0066-4761-b5a8-2684c95f20ff","Type":"ContainerStarted","Data":"10693509fc152943dbf7ba0dc9aca17b6820ed6222c52a1f9972bb9c4fdb0575"} Nov 25 11:54:12 crc kubenswrapper[4706]: I1125 11:54:12.886289 4706 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-q8rmg" Nov 25 11:54:12 crc kubenswrapper[4706]: I1125 11:54:12.886335 4706 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-q8rmg" Nov 25 11:54:12 crc kubenswrapper[4706]: I1125 11:54:12.887469 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"36bf3efe-847b-4896-878f-1f06e582bf01","Type":"ContainerStarted","Data":"e3a2aec33179eda68bbe52b4ebd5be3cb84488f80e0c9546e1dbb54750bc1521"} Nov 25 11:54:12 crc kubenswrapper[4706]: I1125 11:54:12.887836 4706 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Nov 25 11:54:12 crc kubenswrapper[4706]: I1125 11:54:12.890111 4706 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"3c49be9b-0e12-4db2-82be-3415441f57d4","Type":"ContainerStarted","Data":"d5d058d6c98c81368c000ea917b680810ff0db4a133d8a42ddc824ea09e5fb5a"} Nov 25 11:54:12 crc kubenswrapper[4706]: I1125 11:54:12.893257 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-zzvxf" event={"ID":"1ed007e8-82f1-4ff7-9f34-ce6656e77cfb","Type":"ContainerStarted","Data":"5b5db715ec28bff8f551a921a98c6c811d7b69b01abba5fe68fa9717d1e20bb2"} Nov 25 11:54:12 crc kubenswrapper[4706]: I1125 11:54:12.893326 4706 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-86db49b7ff-zzvxf" Nov 25 11:54:12 crc kubenswrapper[4706]: I1125 11:54:12.933123 4706 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-metrics-9sjfp" podStartSLOduration=5.85817557 podStartE2EDuration="13.933101651s" podCreationTimestamp="2025-11-25 11:53:59 +0000 UTC" firstStartedPulling="2025-11-25 11:54:03.439953579 +0000 UTC m=+1052.354510960" lastFinishedPulling="2025-11-25 11:54:11.51487966 +0000 UTC m=+1060.429437041" observedRunningTime="2025-11-25 11:54:12.911683442 +0000 UTC m=+1061.826240833" watchObservedRunningTime="2025-11-25 11:54:12.933101651 +0000 UTC m=+1061.847659042" Nov 25 11:54:12 crc kubenswrapper[4706]: I1125 11:54:12.998780 4706 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-ovs-q8rmg" podStartSLOduration=16.919019888 podStartE2EDuration="26.998754343s" podCreationTimestamp="2025-11-25 11:53:46 +0000 UTC" firstStartedPulling="2025-11-25 11:53:53.659874996 +0000 UTC m=+1042.574432377" lastFinishedPulling="2025-11-25 11:54:03.739609451 +0000 UTC m=+1052.654166832" observedRunningTime="2025-11-25 11:54:12.986614088 +0000 UTC m=+1061.901171499" watchObservedRunningTime="2025-11-25 11:54:12.998754343 +0000 UTC m=+1061.913311724" Nov 25 11:54:13 crc kubenswrapper[4706]: I1125 
11:54:13.005640 4706 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=14.420757408 podStartE2EDuration="31.005620796s" podCreationTimestamp="2025-11-25 11:53:42 +0000 UTC" firstStartedPulling="2025-11-25 11:53:53.513962384 +0000 UTC m=+1042.428519765" lastFinishedPulling="2025-11-25 11:54:10.098825772 +0000 UTC m=+1059.013383153" observedRunningTime="2025-11-25 11:54:12.934672251 +0000 UTC m=+1061.849229622" watchObservedRunningTime="2025-11-25 11:54:13.005620796 +0000 UTC m=+1061.920178177" Nov 25 11:54:13 crc kubenswrapper[4706]: I1125 11:54:13.015036 4706 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-sb-0" podStartSLOduration=10.500274592 podStartE2EDuration="28.015014103s" podCreationTimestamp="2025-11-25 11:53:45 +0000 UTC" firstStartedPulling="2025-11-25 11:53:54.298896168 +0000 UTC m=+1043.213453549" lastFinishedPulling="2025-11-25 11:54:11.813635679 +0000 UTC m=+1060.728193060" observedRunningTime="2025-11-25 11:54:13.010841078 +0000 UTC m=+1061.925398469" watchObservedRunningTime="2025-11-25 11:54:13.015014103 +0000 UTC m=+1061.929571484" Nov 25 11:54:13 crc kubenswrapper[4706]: I1125 11:54:13.036386 4706 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-86db49b7ff-zzvxf" podStartSLOduration=14.03636665 podStartE2EDuration="14.03636665s" podCreationTimestamp="2025-11-25 11:53:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 11:54:13.034246667 +0000 UTC m=+1061.948804048" watchObservedRunningTime="2025-11-25 11:54:13.03636665 +0000 UTC m=+1061.950924031" Nov 25 11:54:13 crc kubenswrapper[4706]: I1125 11:54:13.063603 4706 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-nb-0" podStartSLOduration=7.11078415 podStartE2EDuration="25.063579505s" 
podCreationTimestamp="2025-11-25 11:53:48 +0000 UTC" firstStartedPulling="2025-11-25 11:53:53.57777065 +0000 UTC m=+1042.492328031" lastFinishedPulling="2025-11-25 11:54:11.530566005 +0000 UTC m=+1060.445123386" observedRunningTime="2025-11-25 11:54:13.060265101 +0000 UTC m=+1061.974822502" watchObservedRunningTime="2025-11-25 11:54:13.063579505 +0000 UTC m=+1061.978136896" Nov 25 11:54:13 crc kubenswrapper[4706]: I1125 11:54:13.819204 4706 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-sb-0" Nov 25 11:54:14 crc kubenswrapper[4706]: I1125 11:54:14.245678 4706 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-nb-0" Nov 25 11:54:14 crc kubenswrapper[4706]: I1125 11:54:14.290283 4706 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-nb-0" Nov 25 11:54:14 crc kubenswrapper[4706]: I1125 11:54:14.913796 4706 generic.go:334] "Generic (PLEG): container finished" podID="49e77cd2-5940-4ae6-9418-d069ce012ad7" containerID="ee7ca1551bd72f1daaf40ac0815cb177b2dd3be0f26a9fda7528346aa12153b7" exitCode=0 Nov 25 11:54:14 crc kubenswrapper[4706]: I1125 11:54:14.913849 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"49e77cd2-5940-4ae6-9418-d069ce012ad7","Type":"ContainerDied","Data":"ee7ca1551bd72f1daaf40ac0815cb177b2dd3be0f26a9fda7528346aa12153b7"} Nov 25 11:54:14 crc kubenswrapper[4706]: I1125 11:54:14.916083 4706 generic.go:334] "Generic (PLEG): container finished" podID="64ca6766-8491-40bc-a14e-eb866edf3fe8" containerID="1cf3d99765328eecf21ed05c6b15ee504d82ef7b2748c535fadf58b550590766" exitCode=0 Nov 25 11:54:14 crc kubenswrapper[4706]: I1125 11:54:14.916201 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" 
event={"ID":"64ca6766-8491-40bc-a14e-eb866edf3fe8","Type":"ContainerDied","Data":"1cf3d99765328eecf21ed05c6b15ee504d82ef7b2748c535fadf58b550590766"} Nov 25 11:54:14 crc kubenswrapper[4706]: I1125 11:54:14.917149 4706 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-nb-0" Nov 25 11:54:14 crc kubenswrapper[4706]: I1125 11:54:14.981108 4706 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-nb-0" Nov 25 11:54:15 crc kubenswrapper[4706]: I1125 11:54:15.932836 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"49e77cd2-5940-4ae6-9418-d069ce012ad7","Type":"ContainerStarted","Data":"05579a99a07b8e384e9b23909d135c0e25a71cd9669f331514856fd5fd7ee9e4"} Nov 25 11:54:15 crc kubenswrapper[4706]: I1125 11:54:15.934952 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"64ca6766-8491-40bc-a14e-eb866edf3fe8","Type":"ContainerStarted","Data":"0a4fa7f40651260ef96a4ba3f9a9fbb57868a77a9f057eda5521c56e048deed9"} Nov 25 11:54:15 crc kubenswrapper[4706]: I1125 11:54:15.958872 4706 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-cell1-galera-0" podStartSLOduration=26.731680684 podStartE2EDuration="36.95885434s" podCreationTimestamp="2025-11-25 11:53:39 +0000 UTC" firstStartedPulling="2025-11-25 11:53:53.570396014 +0000 UTC m=+1042.484953395" lastFinishedPulling="2025-11-25 11:54:03.79756967 +0000 UTC m=+1052.712127051" observedRunningTime="2025-11-25 11:54:15.953914466 +0000 UTC m=+1064.868471847" watchObservedRunningTime="2025-11-25 11:54:15.95885434 +0000 UTC m=+1064.873411721" Nov 25 11:54:15 crc kubenswrapper[4706]: I1125 11:54:15.958956 4706 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/memcached-0" Nov 25 11:54:15 crc kubenswrapper[4706]: I1125 11:54:15.981269 4706 pod_startup_latency_tracker.go:104] "Observed 
pod startup duration" pod="openstack/openstack-galera-0" podStartSLOduration=28.523812543 podStartE2EDuration="38.981246214s" podCreationTimestamp="2025-11-25 11:53:37 +0000 UTC" firstStartedPulling="2025-11-25 11:53:52.336182772 +0000 UTC m=+1041.250740153" lastFinishedPulling="2025-11-25 11:54:02.793616443 +0000 UTC m=+1051.708173824" observedRunningTime="2025-11-25 11:54:15.973593451 +0000 UTC m=+1064.888150832" watchObservedRunningTime="2025-11-25 11:54:15.981246214 +0000 UTC m=+1064.895803595" Nov 25 11:54:16 crc kubenswrapper[4706]: I1125 11:54:16.818790 4706 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-sb-0" Nov 25 11:54:16 crc kubenswrapper[4706]: I1125 11:54:16.859525 4706 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-sb-0" Nov 25 11:54:16 crc kubenswrapper[4706]: I1125 11:54:16.986188 4706 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-sb-0" Nov 25 11:54:17 crc kubenswrapper[4706]: I1125 11:54:17.172211 4706 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-northd-0"] Nov 25 11:54:17 crc kubenswrapper[4706]: E1125 11:54:17.172636 4706 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d1f830dd-11b4-4ef5-bec1-796d0c51c8bb" containerName="init" Nov 25 11:54:17 crc kubenswrapper[4706]: I1125 11:54:17.172656 4706 state_mem.go:107] "Deleted CPUSet assignment" podUID="d1f830dd-11b4-4ef5-bec1-796d0c51c8bb" containerName="init" Nov 25 11:54:17 crc kubenswrapper[4706]: E1125 11:54:17.172679 4706 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d1f830dd-11b4-4ef5-bec1-796d0c51c8bb" containerName="dnsmasq-dns" Nov 25 11:54:17 crc kubenswrapper[4706]: I1125 11:54:17.172689 4706 state_mem.go:107] "Deleted CPUSet assignment" podUID="d1f830dd-11b4-4ef5-bec1-796d0c51c8bb" containerName="dnsmasq-dns" Nov 25 11:54:17 crc kubenswrapper[4706]: E1125 11:54:17.172707 4706 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9a9e827f-acb8-4b85-90f6-c5cd8634f430" containerName="init" Nov 25 11:54:17 crc kubenswrapper[4706]: I1125 11:54:17.172716 4706 state_mem.go:107] "Deleted CPUSet assignment" podUID="9a9e827f-acb8-4b85-90f6-c5cd8634f430" containerName="init" Nov 25 11:54:17 crc kubenswrapper[4706]: E1125 11:54:17.172728 4706 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9a9e827f-acb8-4b85-90f6-c5cd8634f430" containerName="dnsmasq-dns" Nov 25 11:54:17 crc kubenswrapper[4706]: I1125 11:54:17.172736 4706 state_mem.go:107] "Deleted CPUSet assignment" podUID="9a9e827f-acb8-4b85-90f6-c5cd8634f430" containerName="dnsmasq-dns" Nov 25 11:54:17 crc kubenswrapper[4706]: I1125 11:54:17.172980 4706 memory_manager.go:354] "RemoveStaleState removing state" podUID="9a9e827f-acb8-4b85-90f6-c5cd8634f430" containerName="dnsmasq-dns" Nov 25 11:54:17 crc kubenswrapper[4706]: I1125 11:54:17.172998 4706 memory_manager.go:354] "RemoveStaleState removing state" podUID="d1f830dd-11b4-4ef5-bec1-796d0c51c8bb" containerName="dnsmasq-dns" Nov 25 11:54:17 crc kubenswrapper[4706]: I1125 11:54:17.173972 4706 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-northd-0" Nov 25 11:54:17 crc kubenswrapper[4706]: I1125 11:54:17.175995 4706 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-scripts" Nov 25 11:54:17 crc kubenswrapper[4706]: I1125 11:54:17.176291 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovnnorthd-ovndbs" Nov 25 11:54:17 crc kubenswrapper[4706]: I1125 11:54:17.179426 4706 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-config" Nov 25 11:54:17 crc kubenswrapper[4706]: I1125 11:54:17.180140 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovnnorthd-ovnnorthd-dockercfg-xmlmw" Nov 25 11:54:17 crc kubenswrapper[4706]: I1125 11:54:17.184209 4706 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Nov 25 11:54:17 crc kubenswrapper[4706]: I1125 11:54:17.247566 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k5gs6\" (UniqueName: \"kubernetes.io/projected/655006b1-956d-49e9-b15f-c00cd945c024-kube-api-access-k5gs6\") pod \"ovn-northd-0\" (UID: \"655006b1-956d-49e9-b15f-c00cd945c024\") " pod="openstack/ovn-northd-0" Nov 25 11:54:17 crc kubenswrapper[4706]: I1125 11:54:17.247655 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/655006b1-956d-49e9-b15f-c00cd945c024-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"655006b1-956d-49e9-b15f-c00cd945c024\") " pod="openstack/ovn-northd-0" Nov 25 11:54:17 crc kubenswrapper[4706]: I1125 11:54:17.247706 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/655006b1-956d-49e9-b15f-c00cd945c024-scripts\") pod \"ovn-northd-0\" (UID: \"655006b1-956d-49e9-b15f-c00cd945c024\") " 
pod="openstack/ovn-northd-0" Nov 25 11:54:17 crc kubenswrapper[4706]: I1125 11:54:17.247736 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/655006b1-956d-49e9-b15f-c00cd945c024-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"655006b1-956d-49e9-b15f-c00cd945c024\") " pod="openstack/ovn-northd-0" Nov 25 11:54:17 crc kubenswrapper[4706]: I1125 11:54:17.247814 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/655006b1-956d-49e9-b15f-c00cd945c024-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"655006b1-956d-49e9-b15f-c00cd945c024\") " pod="openstack/ovn-northd-0" Nov 25 11:54:17 crc kubenswrapper[4706]: I1125 11:54:17.247897 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/655006b1-956d-49e9-b15f-c00cd945c024-config\") pod \"ovn-northd-0\" (UID: \"655006b1-956d-49e9-b15f-c00cd945c024\") " pod="openstack/ovn-northd-0" Nov 25 11:54:17 crc kubenswrapper[4706]: I1125 11:54:17.247934 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/655006b1-956d-49e9-b15f-c00cd945c024-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"655006b1-956d-49e9-b15f-c00cd945c024\") " pod="openstack/ovn-northd-0" Nov 25 11:54:17 crc kubenswrapper[4706]: I1125 11:54:17.349811 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/655006b1-956d-49e9-b15f-c00cd945c024-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"655006b1-956d-49e9-b15f-c00cd945c024\") " pod="openstack/ovn-northd-0" Nov 25 11:54:17 crc kubenswrapper[4706]: I1125 11:54:17.349917 4706 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/655006b1-956d-49e9-b15f-c00cd945c024-config\") pod \"ovn-northd-0\" (UID: \"655006b1-956d-49e9-b15f-c00cd945c024\") " pod="openstack/ovn-northd-0" Nov 25 11:54:17 crc kubenswrapper[4706]: I1125 11:54:17.349944 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/655006b1-956d-49e9-b15f-c00cd945c024-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"655006b1-956d-49e9-b15f-c00cd945c024\") " pod="openstack/ovn-northd-0" Nov 25 11:54:17 crc kubenswrapper[4706]: I1125 11:54:17.350015 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k5gs6\" (UniqueName: \"kubernetes.io/projected/655006b1-956d-49e9-b15f-c00cd945c024-kube-api-access-k5gs6\") pod \"ovn-northd-0\" (UID: \"655006b1-956d-49e9-b15f-c00cd945c024\") " pod="openstack/ovn-northd-0" Nov 25 11:54:17 crc kubenswrapper[4706]: I1125 11:54:17.350050 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/655006b1-956d-49e9-b15f-c00cd945c024-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"655006b1-956d-49e9-b15f-c00cd945c024\") " pod="openstack/ovn-northd-0" Nov 25 11:54:17 crc kubenswrapper[4706]: I1125 11:54:17.350081 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/655006b1-956d-49e9-b15f-c00cd945c024-scripts\") pod \"ovn-northd-0\" (UID: \"655006b1-956d-49e9-b15f-c00cd945c024\") " pod="openstack/ovn-northd-0" Nov 25 11:54:17 crc kubenswrapper[4706]: I1125 11:54:17.350104 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/655006b1-956d-49e9-b15f-c00cd945c024-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: 
\"655006b1-956d-49e9-b15f-c00cd945c024\") " pod="openstack/ovn-northd-0" Nov 25 11:54:17 crc kubenswrapper[4706]: I1125 11:54:17.350567 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/655006b1-956d-49e9-b15f-c00cd945c024-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"655006b1-956d-49e9-b15f-c00cd945c024\") " pod="openstack/ovn-northd-0" Nov 25 11:54:17 crc kubenswrapper[4706]: I1125 11:54:17.351117 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/655006b1-956d-49e9-b15f-c00cd945c024-config\") pod \"ovn-northd-0\" (UID: \"655006b1-956d-49e9-b15f-c00cd945c024\") " pod="openstack/ovn-northd-0" Nov 25 11:54:17 crc kubenswrapper[4706]: I1125 11:54:17.351124 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/655006b1-956d-49e9-b15f-c00cd945c024-scripts\") pod \"ovn-northd-0\" (UID: \"655006b1-956d-49e9-b15f-c00cd945c024\") " pod="openstack/ovn-northd-0" Nov 25 11:54:17 crc kubenswrapper[4706]: I1125 11:54:17.355951 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/655006b1-956d-49e9-b15f-c00cd945c024-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"655006b1-956d-49e9-b15f-c00cd945c024\") " pod="openstack/ovn-northd-0" Nov 25 11:54:17 crc kubenswrapper[4706]: I1125 11:54:17.356770 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/655006b1-956d-49e9-b15f-c00cd945c024-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"655006b1-956d-49e9-b15f-c00cd945c024\") " pod="openstack/ovn-northd-0" Nov 25 11:54:17 crc kubenswrapper[4706]: I1125 11:54:17.356803 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/655006b1-956d-49e9-b15f-c00cd945c024-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"655006b1-956d-49e9-b15f-c00cd945c024\") " pod="openstack/ovn-northd-0" Nov 25 11:54:17 crc kubenswrapper[4706]: I1125 11:54:17.369151 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k5gs6\" (UniqueName: \"kubernetes.io/projected/655006b1-956d-49e9-b15f-c00cd945c024-kube-api-access-k5gs6\") pod \"ovn-northd-0\" (UID: \"655006b1-956d-49e9-b15f-c00cd945c024\") " pod="openstack/ovn-northd-0" Nov 25 11:54:17 crc kubenswrapper[4706]: I1125 11:54:17.493063 4706 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0" Nov 25 11:54:17 crc kubenswrapper[4706]: I1125 11:54:17.931935 4706 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Nov 25 11:54:17 crc kubenswrapper[4706]: W1125 11:54:17.934114 4706 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod655006b1_956d_49e9_b15f_c00cd945c024.slice/crio-da00f47c2c7933f246da0ea22e3be4106d5b6a1ccc521c619f5a47edb44c79d5 WatchSource:0}: Error finding container da00f47c2c7933f246da0ea22e3be4106d5b6a1ccc521c619f5a47edb44c79d5: Status 404 returned error can't find the container with id da00f47c2c7933f246da0ea22e3be4106d5b6a1ccc521c619f5a47edb44c79d5 Nov 25 11:54:17 crc kubenswrapper[4706]: I1125 11:54:17.948870 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"655006b1-956d-49e9-b15f-c00cd945c024","Type":"ContainerStarted","Data":"da00f47c2c7933f246da0ea22e3be4106d5b6a1ccc521c619f5a47edb44c79d5"} Nov 25 11:54:19 crc kubenswrapper[4706]: I1125 11:54:19.188892 4706 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-galera-0" Nov 25 11:54:19 crc kubenswrapper[4706]: I1125 11:54:19.189156 4706 kubelet.go:2542] "SyncLoop (probe)" probe="startup" 
status="unhealthy" pod="openstack/openstack-galera-0" Nov 25 11:54:19 crc kubenswrapper[4706]: I1125 11:54:19.829505 4706 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-7f896c8c65-zk4cz" Nov 25 11:54:19 crc kubenswrapper[4706]: I1125 11:54:19.960274 4706 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-86db49b7ff-zzvxf" Nov 25 11:54:19 crc kubenswrapper[4706]: I1125 11:54:19.967670 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"655006b1-956d-49e9-b15f-c00cd945c024","Type":"ContainerStarted","Data":"adf083763a6858506bc15ef5f2df9d2a058070ce8b29d004d86186cb46931ccf"} Nov 25 11:54:19 crc kubenswrapper[4706]: I1125 11:54:19.967724 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"655006b1-956d-49e9-b15f-c00cd945c024","Type":"ContainerStarted","Data":"2a0c34a35cfe973619d21f04786aaf4eb4c88ccd5c2770e152cca64af2c50896"} Nov 25 11:54:19 crc kubenswrapper[4706]: I1125 11:54:19.968532 4706 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-northd-0" Nov 25 11:54:20 crc kubenswrapper[4706]: I1125 11:54:20.022424 4706 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7f896c8c65-zk4cz"] Nov 25 11:54:20 crc kubenswrapper[4706]: I1125 11:54:20.022675 4706 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-7f896c8c65-zk4cz" podUID="db081b9a-d1e3-42e7-904e-acb26e50cfd4" containerName="dnsmasq-dns" containerID="cri-o://32d8033a6ca940c84dba99e29ace5bd982b7f9f34b1abeb1de7ae072abb193a2" gracePeriod=10 Nov 25 11:54:20 crc kubenswrapper[4706]: I1125 11:54:20.025865 4706 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-northd-0" podStartSLOduration=1.8080421549999999 podStartE2EDuration="3.025849533s" podCreationTimestamp="2025-11-25 11:54:17 +0000 UTC" 
firstStartedPulling="2025-11-25 11:54:17.936966183 +0000 UTC m=+1066.851523564" lastFinishedPulling="2025-11-25 11:54:19.154773561 +0000 UTC m=+1068.069330942" observedRunningTime="2025-11-25 11:54:20.013490582 +0000 UTC m=+1068.928047973" watchObservedRunningTime="2025-11-25 11:54:20.025849533 +0000 UTC m=+1068.940406914" Nov 25 11:54:20 crc kubenswrapper[4706]: I1125 11:54:20.502687 4706 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7f896c8c65-zk4cz" Nov 25 11:54:20 crc kubenswrapper[4706]: I1125 11:54:20.599791 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vjdsj\" (UniqueName: \"kubernetes.io/projected/db081b9a-d1e3-42e7-904e-acb26e50cfd4-kube-api-access-vjdsj\") pod \"db081b9a-d1e3-42e7-904e-acb26e50cfd4\" (UID: \"db081b9a-d1e3-42e7-904e-acb26e50cfd4\") " Nov 25 11:54:20 crc kubenswrapper[4706]: I1125 11:54:20.599855 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/db081b9a-d1e3-42e7-904e-acb26e50cfd4-dns-svc\") pod \"db081b9a-d1e3-42e7-904e-acb26e50cfd4\" (UID: \"db081b9a-d1e3-42e7-904e-acb26e50cfd4\") " Nov 25 11:54:20 crc kubenswrapper[4706]: I1125 11:54:20.599883 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/db081b9a-d1e3-42e7-904e-acb26e50cfd4-ovsdbserver-sb\") pod \"db081b9a-d1e3-42e7-904e-acb26e50cfd4\" (UID: \"db081b9a-d1e3-42e7-904e-acb26e50cfd4\") " Nov 25 11:54:20 crc kubenswrapper[4706]: I1125 11:54:20.599951 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/db081b9a-d1e3-42e7-904e-acb26e50cfd4-config\") pod \"db081b9a-d1e3-42e7-904e-acb26e50cfd4\" (UID: \"db081b9a-d1e3-42e7-904e-acb26e50cfd4\") " Nov 25 11:54:20 crc kubenswrapper[4706]: I1125 11:54:20.605163 4706 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/db081b9a-d1e3-42e7-904e-acb26e50cfd4-kube-api-access-vjdsj" (OuterVolumeSpecName: "kube-api-access-vjdsj") pod "db081b9a-d1e3-42e7-904e-acb26e50cfd4" (UID: "db081b9a-d1e3-42e7-904e-acb26e50cfd4"). InnerVolumeSpecName "kube-api-access-vjdsj". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 11:54:20 crc kubenswrapper[4706]: I1125 11:54:20.645185 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/db081b9a-d1e3-42e7-904e-acb26e50cfd4-config" (OuterVolumeSpecName: "config") pod "db081b9a-d1e3-42e7-904e-acb26e50cfd4" (UID: "db081b9a-d1e3-42e7-904e-acb26e50cfd4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 11:54:20 crc kubenswrapper[4706]: I1125 11:54:20.645610 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/db081b9a-d1e3-42e7-904e-acb26e50cfd4-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "db081b9a-d1e3-42e7-904e-acb26e50cfd4" (UID: "db081b9a-d1e3-42e7-904e-acb26e50cfd4"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 11:54:20 crc kubenswrapper[4706]: I1125 11:54:20.653158 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/db081b9a-d1e3-42e7-904e-acb26e50cfd4-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "db081b9a-d1e3-42e7-904e-acb26e50cfd4" (UID: "db081b9a-d1e3-42e7-904e-acb26e50cfd4"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 11:54:20 crc kubenswrapper[4706]: I1125 11:54:20.670845 4706 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-cell1-galera-0" Nov 25 11:54:20 crc kubenswrapper[4706]: I1125 11:54:20.670904 4706 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-cell1-galera-0" Nov 25 11:54:20 crc kubenswrapper[4706]: I1125 11:54:20.701706 4706 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vjdsj\" (UniqueName: \"kubernetes.io/projected/db081b9a-d1e3-42e7-904e-acb26e50cfd4-kube-api-access-vjdsj\") on node \"crc\" DevicePath \"\"" Nov 25 11:54:20 crc kubenswrapper[4706]: I1125 11:54:20.701751 4706 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/db081b9a-d1e3-42e7-904e-acb26e50cfd4-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 25 11:54:20 crc kubenswrapper[4706]: I1125 11:54:20.701766 4706 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/db081b9a-d1e3-42e7-904e-acb26e50cfd4-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 25 11:54:20 crc kubenswrapper[4706]: I1125 11:54:20.701779 4706 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/db081b9a-d1e3-42e7-904e-acb26e50cfd4-config\") on node \"crc\" DevicePath \"\"" Nov 25 11:54:20 crc kubenswrapper[4706]: I1125 11:54:20.976392 4706 generic.go:334] "Generic (PLEG): container finished" podID="db081b9a-d1e3-42e7-904e-acb26e50cfd4" containerID="32d8033a6ca940c84dba99e29ace5bd982b7f9f34b1abeb1de7ae072abb193a2" exitCode=0 Nov 25 11:54:20 crc kubenswrapper[4706]: I1125 11:54:20.976465 4706 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7f896c8c65-zk4cz" Nov 25 11:54:20 crc kubenswrapper[4706]: I1125 11:54:20.976464 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7f896c8c65-zk4cz" event={"ID":"db081b9a-d1e3-42e7-904e-acb26e50cfd4","Type":"ContainerDied","Data":"32d8033a6ca940c84dba99e29ace5bd982b7f9f34b1abeb1de7ae072abb193a2"} Nov 25 11:54:20 crc kubenswrapper[4706]: I1125 11:54:20.976590 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7f896c8c65-zk4cz" event={"ID":"db081b9a-d1e3-42e7-904e-acb26e50cfd4","Type":"ContainerDied","Data":"2384c4b1dbd69f0c13718c7ccfddd25288f89ed14f136bcac014948df2e4aa2b"} Nov 25 11:54:20 crc kubenswrapper[4706]: I1125 11:54:20.976611 4706 scope.go:117] "RemoveContainer" containerID="32d8033a6ca940c84dba99e29ace5bd982b7f9f34b1abeb1de7ae072abb193a2" Nov 25 11:54:20 crc kubenswrapper[4706]: I1125 11:54:20.997538 4706 scope.go:117] "RemoveContainer" containerID="d799a3d852ad28f81b6d19d958c2c9410e9f4cc7ff83c5edfc98077eccc1778a" Nov 25 11:54:21 crc kubenswrapper[4706]: I1125 11:54:21.015823 4706 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7f896c8c65-zk4cz"] Nov 25 11:54:21 crc kubenswrapper[4706]: I1125 11:54:21.026101 4706 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7f896c8c65-zk4cz"] Nov 25 11:54:21 crc kubenswrapper[4706]: I1125 11:54:21.029583 4706 scope.go:117] "RemoveContainer" containerID="32d8033a6ca940c84dba99e29ace5bd982b7f9f34b1abeb1de7ae072abb193a2" Nov 25 11:54:21 crc kubenswrapper[4706]: E1125 11:54:21.030158 4706 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"32d8033a6ca940c84dba99e29ace5bd982b7f9f34b1abeb1de7ae072abb193a2\": container with ID starting with 32d8033a6ca940c84dba99e29ace5bd982b7f9f34b1abeb1de7ae072abb193a2 not found: ID does not exist" 
containerID="32d8033a6ca940c84dba99e29ace5bd982b7f9f34b1abeb1de7ae072abb193a2" Nov 25 11:54:21 crc kubenswrapper[4706]: I1125 11:54:21.030191 4706 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"32d8033a6ca940c84dba99e29ace5bd982b7f9f34b1abeb1de7ae072abb193a2"} err="failed to get container status \"32d8033a6ca940c84dba99e29ace5bd982b7f9f34b1abeb1de7ae072abb193a2\": rpc error: code = NotFound desc = could not find container \"32d8033a6ca940c84dba99e29ace5bd982b7f9f34b1abeb1de7ae072abb193a2\": container with ID starting with 32d8033a6ca940c84dba99e29ace5bd982b7f9f34b1abeb1de7ae072abb193a2 not found: ID does not exist" Nov 25 11:54:21 crc kubenswrapper[4706]: I1125 11:54:21.030213 4706 scope.go:117] "RemoveContainer" containerID="d799a3d852ad28f81b6d19d958c2c9410e9f4cc7ff83c5edfc98077eccc1778a" Nov 25 11:54:21 crc kubenswrapper[4706]: E1125 11:54:21.030550 4706 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d799a3d852ad28f81b6d19d958c2c9410e9f4cc7ff83c5edfc98077eccc1778a\": container with ID starting with d799a3d852ad28f81b6d19d958c2c9410e9f4cc7ff83c5edfc98077eccc1778a not found: ID does not exist" containerID="d799a3d852ad28f81b6d19d958c2c9410e9f4cc7ff83c5edfc98077eccc1778a" Nov 25 11:54:21 crc kubenswrapper[4706]: I1125 11:54:21.030573 4706 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d799a3d852ad28f81b6d19d958c2c9410e9f4cc7ff83c5edfc98077eccc1778a"} err="failed to get container status \"d799a3d852ad28f81b6d19d958c2c9410e9f4cc7ff83c5edfc98077eccc1778a\": rpc error: code = NotFound desc = could not find container \"d799a3d852ad28f81b6d19d958c2c9410e9f4cc7ff83c5edfc98077eccc1778a\": container with ID starting with d799a3d852ad28f81b6d19d958c2c9410e9f4cc7ff83c5edfc98077eccc1778a not found: ID does not exist" Nov 25 11:54:21 crc kubenswrapper[4706]: I1125 11:54:21.377074 4706 kubelet.go:2542] 
"SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-galera-0" Nov 25 11:54:21 crc kubenswrapper[4706]: I1125 11:54:21.471091 4706 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-galera-0" Nov 25 11:54:21 crc kubenswrapper[4706]: I1125 11:54:21.934601 4706 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="db081b9a-d1e3-42e7-904e-acb26e50cfd4" path="/var/lib/kubelet/pods/db081b9a-d1e3-42e7-904e-acb26e50cfd4/volumes" Nov 25 11:54:22 crc kubenswrapper[4706]: I1125 11:54:22.850761 4706 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-cell1-galera-0" Nov 25 11:54:22 crc kubenswrapper[4706]: I1125 11:54:22.983560 4706 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-cell1-galera-0" Nov 25 11:54:22 crc kubenswrapper[4706]: I1125 11:54:22.989527 4706 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-698758b865-vjh52"] Nov 25 11:54:22 crc kubenswrapper[4706]: E1125 11:54:22.989850 4706 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="db081b9a-d1e3-42e7-904e-acb26e50cfd4" containerName="init" Nov 25 11:54:22 crc kubenswrapper[4706]: I1125 11:54:22.989867 4706 state_mem.go:107] "Deleted CPUSet assignment" podUID="db081b9a-d1e3-42e7-904e-acb26e50cfd4" containerName="init" Nov 25 11:54:22 crc kubenswrapper[4706]: E1125 11:54:22.989895 4706 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="db081b9a-d1e3-42e7-904e-acb26e50cfd4" containerName="dnsmasq-dns" Nov 25 11:54:22 crc kubenswrapper[4706]: I1125 11:54:22.989903 4706 state_mem.go:107] "Deleted CPUSet assignment" podUID="db081b9a-d1e3-42e7-904e-acb26e50cfd4" containerName="dnsmasq-dns" Nov 25 11:54:22 crc kubenswrapper[4706]: I1125 11:54:22.990105 4706 memory_manager.go:354] "RemoveStaleState removing state" podUID="db081b9a-d1e3-42e7-904e-acb26e50cfd4" containerName="dnsmasq-dns" Nov 25 
11:54:22 crc kubenswrapper[4706]: I1125 11:54:22.990980 4706 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-698758b865-vjh52" Nov 25 11:54:23 crc kubenswrapper[4706]: I1125 11:54:23.004097 4706 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-698758b865-vjh52"] Nov 25 11:54:23 crc kubenswrapper[4706]: I1125 11:54:23.056164 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dxpz2\" (UniqueName: \"kubernetes.io/projected/679831d3-04d7-4b95-8690-837698ce07f3-kube-api-access-dxpz2\") pod \"dnsmasq-dns-698758b865-vjh52\" (UID: \"679831d3-04d7-4b95-8690-837698ce07f3\") " pod="openstack/dnsmasq-dns-698758b865-vjh52" Nov 25 11:54:23 crc kubenswrapper[4706]: I1125 11:54:23.056227 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/679831d3-04d7-4b95-8690-837698ce07f3-ovsdbserver-nb\") pod \"dnsmasq-dns-698758b865-vjh52\" (UID: \"679831d3-04d7-4b95-8690-837698ce07f3\") " pod="openstack/dnsmasq-dns-698758b865-vjh52" Nov 25 11:54:23 crc kubenswrapper[4706]: I1125 11:54:23.056282 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/679831d3-04d7-4b95-8690-837698ce07f3-dns-svc\") pod \"dnsmasq-dns-698758b865-vjh52\" (UID: \"679831d3-04d7-4b95-8690-837698ce07f3\") " pod="openstack/dnsmasq-dns-698758b865-vjh52" Nov 25 11:54:23 crc kubenswrapper[4706]: I1125 11:54:23.056352 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/679831d3-04d7-4b95-8690-837698ce07f3-ovsdbserver-sb\") pod \"dnsmasq-dns-698758b865-vjh52\" (UID: \"679831d3-04d7-4b95-8690-837698ce07f3\") " pod="openstack/dnsmasq-dns-698758b865-vjh52" Nov 25 11:54:23 crc 
kubenswrapper[4706]: I1125 11:54:23.056446 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/679831d3-04d7-4b95-8690-837698ce07f3-config\") pod \"dnsmasq-dns-698758b865-vjh52\" (UID: \"679831d3-04d7-4b95-8690-837698ce07f3\") " pod="openstack/dnsmasq-dns-698758b865-vjh52" Nov 25 11:54:23 crc kubenswrapper[4706]: I1125 11:54:23.081389 4706 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Nov 25 11:54:23 crc kubenswrapper[4706]: I1125 11:54:23.158462 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dxpz2\" (UniqueName: \"kubernetes.io/projected/679831d3-04d7-4b95-8690-837698ce07f3-kube-api-access-dxpz2\") pod \"dnsmasq-dns-698758b865-vjh52\" (UID: \"679831d3-04d7-4b95-8690-837698ce07f3\") " pod="openstack/dnsmasq-dns-698758b865-vjh52" Nov 25 11:54:23 crc kubenswrapper[4706]: I1125 11:54:23.158700 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/679831d3-04d7-4b95-8690-837698ce07f3-ovsdbserver-nb\") pod \"dnsmasq-dns-698758b865-vjh52\" (UID: \"679831d3-04d7-4b95-8690-837698ce07f3\") " pod="openstack/dnsmasq-dns-698758b865-vjh52" Nov 25 11:54:23 crc kubenswrapper[4706]: I1125 11:54:23.159275 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/679831d3-04d7-4b95-8690-837698ce07f3-dns-svc\") pod \"dnsmasq-dns-698758b865-vjh52\" (UID: \"679831d3-04d7-4b95-8690-837698ce07f3\") " pod="openstack/dnsmasq-dns-698758b865-vjh52" Nov 25 11:54:23 crc kubenswrapper[4706]: I1125 11:54:23.159402 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/679831d3-04d7-4b95-8690-837698ce07f3-ovsdbserver-sb\") pod 
\"dnsmasq-dns-698758b865-vjh52\" (UID: \"679831d3-04d7-4b95-8690-837698ce07f3\") " pod="openstack/dnsmasq-dns-698758b865-vjh52" Nov 25 11:54:23 crc kubenswrapper[4706]: I1125 11:54:23.159795 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/679831d3-04d7-4b95-8690-837698ce07f3-ovsdbserver-nb\") pod \"dnsmasq-dns-698758b865-vjh52\" (UID: \"679831d3-04d7-4b95-8690-837698ce07f3\") " pod="openstack/dnsmasq-dns-698758b865-vjh52" Nov 25 11:54:23 crc kubenswrapper[4706]: I1125 11:54:23.159829 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/679831d3-04d7-4b95-8690-837698ce07f3-dns-svc\") pod \"dnsmasq-dns-698758b865-vjh52\" (UID: \"679831d3-04d7-4b95-8690-837698ce07f3\") " pod="openstack/dnsmasq-dns-698758b865-vjh52" Nov 25 11:54:23 crc kubenswrapper[4706]: I1125 11:54:23.160268 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/679831d3-04d7-4b95-8690-837698ce07f3-ovsdbserver-sb\") pod \"dnsmasq-dns-698758b865-vjh52\" (UID: \"679831d3-04d7-4b95-8690-837698ce07f3\") " pod="openstack/dnsmasq-dns-698758b865-vjh52" Nov 25 11:54:23 crc kubenswrapper[4706]: I1125 11:54:23.160489 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/679831d3-04d7-4b95-8690-837698ce07f3-config\") pod \"dnsmasq-dns-698758b865-vjh52\" (UID: \"679831d3-04d7-4b95-8690-837698ce07f3\") " pod="openstack/dnsmasq-dns-698758b865-vjh52" Nov 25 11:54:23 crc kubenswrapper[4706]: I1125 11:54:23.161139 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/679831d3-04d7-4b95-8690-837698ce07f3-config\") pod \"dnsmasq-dns-698758b865-vjh52\" (UID: \"679831d3-04d7-4b95-8690-837698ce07f3\") " pod="openstack/dnsmasq-dns-698758b865-vjh52" Nov 
25 11:54:23 crc kubenswrapper[4706]: I1125 11:54:23.184502 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dxpz2\" (UniqueName: \"kubernetes.io/projected/679831d3-04d7-4b95-8690-837698ce07f3-kube-api-access-dxpz2\") pod \"dnsmasq-dns-698758b865-vjh52\" (UID: \"679831d3-04d7-4b95-8690-837698ce07f3\") " pod="openstack/dnsmasq-dns-698758b865-vjh52" Nov 25 11:54:23 crc kubenswrapper[4706]: I1125 11:54:23.307565 4706 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-698758b865-vjh52" Nov 25 11:54:23 crc kubenswrapper[4706]: W1125 11:54:23.763470 4706 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod679831d3_04d7_4b95_8690_837698ce07f3.slice/crio-c3f6a5f679463ef34538b9dab611b7a615c6fdd9f040ca1556fec027f2e42735 WatchSource:0}: Error finding container c3f6a5f679463ef34538b9dab611b7a615c6fdd9f040ca1556fec027f2e42735: Status 404 returned error can't find the container with id c3f6a5f679463ef34538b9dab611b7a615c6fdd9f040ca1556fec027f2e42735 Nov 25 11:54:23 crc kubenswrapper[4706]: I1125 11:54:23.763615 4706 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-698758b865-vjh52"] Nov 25 11:54:23 crc kubenswrapper[4706]: I1125 11:54:23.999830 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-vjh52" event={"ID":"679831d3-04d7-4b95-8690-837698ce07f3","Type":"ContainerStarted","Data":"83dad321de8f13a6f3ba95b0c99abee0113e3a4da07314955a6416398af6f575"} Nov 25 11:54:24 crc kubenswrapper[4706]: I1125 11:54:23.999879 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-vjh52" event={"ID":"679831d3-04d7-4b95-8690-837698ce07f3","Type":"ContainerStarted","Data":"c3f6a5f679463ef34538b9dab611b7a615c6fdd9f040ca1556fec027f2e42735"} Nov 25 11:54:24 crc kubenswrapper[4706]: I1125 11:54:24.154544 4706 kubelet.go:2421] 
"SyncLoop ADD" source="api" pods=["openstack/swift-storage-0"] Nov 25 11:54:24 crc kubenswrapper[4706]: I1125 11:54:24.174228 4706 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"] Nov 25 11:54:24 crc kubenswrapper[4706]: I1125 11:54:24.174571 4706 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-storage-0" Nov 25 11:54:24 crc kubenswrapper[4706]: I1125 11:54:24.201951 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-conf" Nov 25 11:54:24 crc kubenswrapper[4706]: I1125 11:54:24.202238 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-swift-dockercfg-gts76" Nov 25 11:54:24 crc kubenswrapper[4706]: I1125 11:54:24.202417 4706 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-files" Nov 25 11:54:24 crc kubenswrapper[4706]: I1125 11:54:24.203923 4706 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-storage-config-data" Nov 25 11:54:24 crc kubenswrapper[4706]: I1125 11:54:24.301611 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/9225b01e-1067-47de-812a-d9be36adf9d0-etc-swift\") pod \"swift-storage-0\" (UID: \"9225b01e-1067-47de-812a-d9be36adf9d0\") " pod="openstack/swift-storage-0" Nov 25 11:54:24 crc kubenswrapper[4706]: I1125 11:54:24.302028 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"swift-storage-0\" (UID: \"9225b01e-1067-47de-812a-d9be36adf9d0\") " pod="openstack/swift-storage-0" Nov 25 11:54:24 crc kubenswrapper[4706]: I1125 11:54:24.302103 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cache\" (UniqueName: 
\"kubernetes.io/empty-dir/9225b01e-1067-47de-812a-d9be36adf9d0-cache\") pod \"swift-storage-0\" (UID: \"9225b01e-1067-47de-812a-d9be36adf9d0\") " pod="openstack/swift-storage-0" Nov 25 11:54:24 crc kubenswrapper[4706]: I1125 11:54:24.302137 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/9225b01e-1067-47de-812a-d9be36adf9d0-lock\") pod \"swift-storage-0\" (UID: \"9225b01e-1067-47de-812a-d9be36adf9d0\") " pod="openstack/swift-storage-0" Nov 25 11:54:24 crc kubenswrapper[4706]: I1125 11:54:24.302162 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zsp68\" (UniqueName: \"kubernetes.io/projected/9225b01e-1067-47de-812a-d9be36adf9d0-kube-api-access-zsp68\") pod \"swift-storage-0\" (UID: \"9225b01e-1067-47de-812a-d9be36adf9d0\") " pod="openstack/swift-storage-0" Nov 25 11:54:24 crc kubenswrapper[4706]: I1125 11:54:24.404122 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"swift-storage-0\" (UID: \"9225b01e-1067-47de-812a-d9be36adf9d0\") " pod="openstack/swift-storage-0" Nov 25 11:54:24 crc kubenswrapper[4706]: I1125 11:54:24.404227 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/9225b01e-1067-47de-812a-d9be36adf9d0-cache\") pod \"swift-storage-0\" (UID: \"9225b01e-1067-47de-812a-d9be36adf9d0\") " pod="openstack/swift-storage-0" Nov 25 11:54:24 crc kubenswrapper[4706]: I1125 11:54:24.404262 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/9225b01e-1067-47de-812a-d9be36adf9d0-lock\") pod \"swift-storage-0\" (UID: \"9225b01e-1067-47de-812a-d9be36adf9d0\") " pod="openstack/swift-storage-0" Nov 25 11:54:24 crc 
kubenswrapper[4706]: I1125 11:54:24.404314 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zsp68\" (UniqueName: \"kubernetes.io/projected/9225b01e-1067-47de-812a-d9be36adf9d0-kube-api-access-zsp68\") pod \"swift-storage-0\" (UID: \"9225b01e-1067-47de-812a-d9be36adf9d0\") " pod="openstack/swift-storage-0" Nov 25 11:54:24 crc kubenswrapper[4706]: I1125 11:54:24.404352 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/9225b01e-1067-47de-812a-d9be36adf9d0-etc-swift\") pod \"swift-storage-0\" (UID: \"9225b01e-1067-47de-812a-d9be36adf9d0\") " pod="openstack/swift-storage-0" Nov 25 11:54:24 crc kubenswrapper[4706]: E1125 11:54:24.404543 4706 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Nov 25 11:54:24 crc kubenswrapper[4706]: E1125 11:54:24.404559 4706 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Nov 25 11:54:24 crc kubenswrapper[4706]: E1125 11:54:24.404618 4706 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9225b01e-1067-47de-812a-d9be36adf9d0-etc-swift podName:9225b01e-1067-47de-812a-d9be36adf9d0 nodeName:}" failed. No retries permitted until 2025-11-25 11:54:24.904597212 +0000 UTC m=+1073.819154593 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/9225b01e-1067-47de-812a-d9be36adf9d0-etc-swift") pod "swift-storage-0" (UID: "9225b01e-1067-47de-812a-d9be36adf9d0") : configmap "swift-ring-files" not found Nov 25 11:54:24 crc kubenswrapper[4706]: I1125 11:54:24.404732 4706 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"swift-storage-0\" (UID: \"9225b01e-1067-47de-812a-d9be36adf9d0\") device mount path \"/mnt/openstack/pv08\"" pod="openstack/swift-storage-0" Nov 25 11:54:24 crc kubenswrapper[4706]: I1125 11:54:24.404990 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/9225b01e-1067-47de-812a-d9be36adf9d0-lock\") pod \"swift-storage-0\" (UID: \"9225b01e-1067-47de-812a-d9be36adf9d0\") " pod="openstack/swift-storage-0" Nov 25 11:54:24 crc kubenswrapper[4706]: I1125 11:54:24.405185 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/9225b01e-1067-47de-812a-d9be36adf9d0-cache\") pod \"swift-storage-0\" (UID: \"9225b01e-1067-47de-812a-d9be36adf9d0\") " pod="openstack/swift-storage-0" Nov 25 11:54:24 crc kubenswrapper[4706]: I1125 11:54:24.420723 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zsp68\" (UniqueName: \"kubernetes.io/projected/9225b01e-1067-47de-812a-d9be36adf9d0-kube-api-access-zsp68\") pod \"swift-storage-0\" (UID: \"9225b01e-1067-47de-812a-d9be36adf9d0\") " pod="openstack/swift-storage-0" Nov 25 11:54:24 crc kubenswrapper[4706]: I1125 11:54:24.427716 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"swift-storage-0\" (UID: \"9225b01e-1067-47de-812a-d9be36adf9d0\") " 
pod="openstack/swift-storage-0" Nov 25 11:54:24 crc kubenswrapper[4706]: I1125 11:54:24.669000 4706 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-ring-rebalance-c75mt"] Nov 25 11:54:24 crc kubenswrapper[4706]: I1125 11:54:24.670423 4706 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-c75mt" Nov 25 11:54:24 crc kubenswrapper[4706]: I1125 11:54:24.678913 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Nov 25 11:54:24 crc kubenswrapper[4706]: I1125 11:54:24.680061 4706 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-scripts" Nov 25 11:54:24 crc kubenswrapper[4706]: I1125 11:54:24.682434 4706 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-config-data" Nov 25 11:54:24 crc kubenswrapper[4706]: I1125 11:54:24.688244 4706 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/swift-ring-rebalance-c75mt"] Nov 25 11:54:24 crc kubenswrapper[4706]: E1125 11:54:24.691834 4706 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[combined-ca-bundle dispersionconf etc-swift kube-api-access-78jjq ring-data-devices scripts swiftconf], unattached volumes=[], failed to process volumes=[combined-ca-bundle dispersionconf etc-swift kube-api-access-78jjq ring-data-devices scripts swiftconf]: context canceled" pod="openstack/swift-ring-rebalance-c75mt" podUID="a4318c64-8dbc-4c7e-94cd-05ee65d699e1" Nov 25 11:54:24 crc kubenswrapper[4706]: I1125 11:54:24.706914 4706 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-ring-rebalance-ww65d"] Nov 25 11:54:24 crc kubenswrapper[4706]: I1125 11:54:24.708686 4706 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-ww65d" Nov 25 11:54:24 crc kubenswrapper[4706]: I1125 11:54:24.729002 4706 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-ww65d"] Nov 25 11:54:24 crc kubenswrapper[4706]: I1125 11:54:24.773508 4706 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/swift-ring-rebalance-c75mt"] Nov 25 11:54:24 crc kubenswrapper[4706]: I1125 11:54:24.810831 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/687ee889-8ec7-4754-b45f-b0f087368a37-combined-ca-bundle\") pod \"swift-ring-rebalance-ww65d\" (UID: \"687ee889-8ec7-4754-b45f-b0f087368a37\") " pod="openstack/swift-ring-rebalance-ww65d" Nov 25 11:54:24 crc kubenswrapper[4706]: I1125 11:54:24.810912 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/a4318c64-8dbc-4c7e-94cd-05ee65d699e1-swiftconf\") pod \"swift-ring-rebalance-c75mt\" (UID: \"a4318c64-8dbc-4c7e-94cd-05ee65d699e1\") " pod="openstack/swift-ring-rebalance-c75mt" Nov 25 11:54:24 crc kubenswrapper[4706]: I1125 11:54:24.810942 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/687ee889-8ec7-4754-b45f-b0f087368a37-swiftconf\") pod \"swift-ring-rebalance-ww65d\" (UID: \"687ee889-8ec7-4754-b45f-b0f087368a37\") " pod="openstack/swift-ring-rebalance-ww65d" Nov 25 11:54:24 crc kubenswrapper[4706]: I1125 11:54:24.811041 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/687ee889-8ec7-4754-b45f-b0f087368a37-etc-swift\") pod \"swift-ring-rebalance-ww65d\" (UID: \"687ee889-8ec7-4754-b45f-b0f087368a37\") " pod="openstack/swift-ring-rebalance-ww65d" Nov 25 11:54:24 crc 
kubenswrapper[4706]: I1125 11:54:24.811076 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a4318c64-8dbc-4c7e-94cd-05ee65d699e1-scripts\") pod \"swift-ring-rebalance-c75mt\" (UID: \"a4318c64-8dbc-4c7e-94cd-05ee65d699e1\") " pod="openstack/swift-ring-rebalance-c75mt" Nov 25 11:54:24 crc kubenswrapper[4706]: I1125 11:54:24.811104 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a4318c64-8dbc-4c7e-94cd-05ee65d699e1-combined-ca-bundle\") pod \"swift-ring-rebalance-c75mt\" (UID: \"a4318c64-8dbc-4c7e-94cd-05ee65d699e1\") " pod="openstack/swift-ring-rebalance-c75mt" Nov 25 11:54:24 crc kubenswrapper[4706]: I1125 11:54:24.811125 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/a4318c64-8dbc-4c7e-94cd-05ee65d699e1-ring-data-devices\") pod \"swift-ring-rebalance-c75mt\" (UID: \"a4318c64-8dbc-4c7e-94cd-05ee65d699e1\") " pod="openstack/swift-ring-rebalance-c75mt" Nov 25 11:54:24 crc kubenswrapper[4706]: I1125 11:54:24.811167 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-78jjq\" (UniqueName: \"kubernetes.io/projected/a4318c64-8dbc-4c7e-94cd-05ee65d699e1-kube-api-access-78jjq\") pod \"swift-ring-rebalance-c75mt\" (UID: \"a4318c64-8dbc-4c7e-94cd-05ee65d699e1\") " pod="openstack/swift-ring-rebalance-c75mt" Nov 25 11:54:24 crc kubenswrapper[4706]: I1125 11:54:24.811201 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/a4318c64-8dbc-4c7e-94cd-05ee65d699e1-etc-swift\") pod \"swift-ring-rebalance-c75mt\" (UID: \"a4318c64-8dbc-4c7e-94cd-05ee65d699e1\") " 
pod="openstack/swift-ring-rebalance-c75mt" Nov 25 11:54:24 crc kubenswrapper[4706]: I1125 11:54:24.811231 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/a4318c64-8dbc-4c7e-94cd-05ee65d699e1-dispersionconf\") pod \"swift-ring-rebalance-c75mt\" (UID: \"a4318c64-8dbc-4c7e-94cd-05ee65d699e1\") " pod="openstack/swift-ring-rebalance-c75mt" Nov 25 11:54:24 crc kubenswrapper[4706]: I1125 11:54:24.811256 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/687ee889-8ec7-4754-b45f-b0f087368a37-scripts\") pod \"swift-ring-rebalance-ww65d\" (UID: \"687ee889-8ec7-4754-b45f-b0f087368a37\") " pod="openstack/swift-ring-rebalance-ww65d" Nov 25 11:54:24 crc kubenswrapper[4706]: I1125 11:54:24.811294 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lk5hz\" (UniqueName: \"kubernetes.io/projected/687ee889-8ec7-4754-b45f-b0f087368a37-kube-api-access-lk5hz\") pod \"swift-ring-rebalance-ww65d\" (UID: \"687ee889-8ec7-4754-b45f-b0f087368a37\") " pod="openstack/swift-ring-rebalance-ww65d" Nov 25 11:54:24 crc kubenswrapper[4706]: I1125 11:54:24.811337 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/687ee889-8ec7-4754-b45f-b0f087368a37-ring-data-devices\") pod \"swift-ring-rebalance-ww65d\" (UID: \"687ee889-8ec7-4754-b45f-b0f087368a37\") " pod="openstack/swift-ring-rebalance-ww65d" Nov 25 11:54:24 crc kubenswrapper[4706]: I1125 11:54:24.811451 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/687ee889-8ec7-4754-b45f-b0f087368a37-dispersionconf\") pod \"swift-ring-rebalance-ww65d\" (UID: 
\"687ee889-8ec7-4754-b45f-b0f087368a37\") " pod="openstack/swift-ring-rebalance-ww65d" Nov 25 11:54:24 crc kubenswrapper[4706]: I1125 11:54:24.912957 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/687ee889-8ec7-4754-b45f-b0f087368a37-ring-data-devices\") pod \"swift-ring-rebalance-ww65d\" (UID: \"687ee889-8ec7-4754-b45f-b0f087368a37\") " pod="openstack/swift-ring-rebalance-ww65d" Nov 25 11:54:24 crc kubenswrapper[4706]: I1125 11:54:24.913018 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/687ee889-8ec7-4754-b45f-b0f087368a37-dispersionconf\") pod \"swift-ring-rebalance-ww65d\" (UID: \"687ee889-8ec7-4754-b45f-b0f087368a37\") " pod="openstack/swift-ring-rebalance-ww65d" Nov 25 11:54:24 crc kubenswrapper[4706]: I1125 11:54:24.913058 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/687ee889-8ec7-4754-b45f-b0f087368a37-combined-ca-bundle\") pod \"swift-ring-rebalance-ww65d\" (UID: \"687ee889-8ec7-4754-b45f-b0f087368a37\") " pod="openstack/swift-ring-rebalance-ww65d" Nov 25 11:54:24 crc kubenswrapper[4706]: I1125 11:54:24.913084 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/a4318c64-8dbc-4c7e-94cd-05ee65d699e1-swiftconf\") pod \"swift-ring-rebalance-c75mt\" (UID: \"a4318c64-8dbc-4c7e-94cd-05ee65d699e1\") " pod="openstack/swift-ring-rebalance-c75mt" Nov 25 11:54:24 crc kubenswrapper[4706]: I1125 11:54:24.913102 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/687ee889-8ec7-4754-b45f-b0f087368a37-swiftconf\") pod \"swift-ring-rebalance-ww65d\" (UID: \"687ee889-8ec7-4754-b45f-b0f087368a37\") " pod="openstack/swift-ring-rebalance-ww65d" Nov 
25 11:54:24 crc kubenswrapper[4706]: I1125 11:54:24.913140 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/9225b01e-1067-47de-812a-d9be36adf9d0-etc-swift\") pod \"swift-storage-0\" (UID: \"9225b01e-1067-47de-812a-d9be36adf9d0\") " pod="openstack/swift-storage-0" Nov 25 11:54:24 crc kubenswrapper[4706]: I1125 11:54:24.913161 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/687ee889-8ec7-4754-b45f-b0f087368a37-etc-swift\") pod \"swift-ring-rebalance-ww65d\" (UID: \"687ee889-8ec7-4754-b45f-b0f087368a37\") " pod="openstack/swift-ring-rebalance-ww65d" Nov 25 11:54:24 crc kubenswrapper[4706]: I1125 11:54:24.913184 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a4318c64-8dbc-4c7e-94cd-05ee65d699e1-scripts\") pod \"swift-ring-rebalance-c75mt\" (UID: \"a4318c64-8dbc-4c7e-94cd-05ee65d699e1\") " pod="openstack/swift-ring-rebalance-c75mt" Nov 25 11:54:24 crc kubenswrapper[4706]: I1125 11:54:24.913202 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a4318c64-8dbc-4c7e-94cd-05ee65d699e1-combined-ca-bundle\") pod \"swift-ring-rebalance-c75mt\" (UID: \"a4318c64-8dbc-4c7e-94cd-05ee65d699e1\") " pod="openstack/swift-ring-rebalance-c75mt" Nov 25 11:54:24 crc kubenswrapper[4706]: I1125 11:54:24.913221 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/a4318c64-8dbc-4c7e-94cd-05ee65d699e1-ring-data-devices\") pod \"swift-ring-rebalance-c75mt\" (UID: \"a4318c64-8dbc-4c7e-94cd-05ee65d699e1\") " pod="openstack/swift-ring-rebalance-c75mt" Nov 25 11:54:24 crc kubenswrapper[4706]: I1125 11:54:24.913239 4706 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-78jjq\" (UniqueName: \"kubernetes.io/projected/a4318c64-8dbc-4c7e-94cd-05ee65d699e1-kube-api-access-78jjq\") pod \"swift-ring-rebalance-c75mt\" (UID: \"a4318c64-8dbc-4c7e-94cd-05ee65d699e1\") " pod="openstack/swift-ring-rebalance-c75mt" Nov 25 11:54:24 crc kubenswrapper[4706]: I1125 11:54:24.913265 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/a4318c64-8dbc-4c7e-94cd-05ee65d699e1-etc-swift\") pod \"swift-ring-rebalance-c75mt\" (UID: \"a4318c64-8dbc-4c7e-94cd-05ee65d699e1\") " pod="openstack/swift-ring-rebalance-c75mt" Nov 25 11:54:24 crc kubenswrapper[4706]: I1125 11:54:24.913287 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/a4318c64-8dbc-4c7e-94cd-05ee65d699e1-dispersionconf\") pod \"swift-ring-rebalance-c75mt\" (UID: \"a4318c64-8dbc-4c7e-94cd-05ee65d699e1\") " pod="openstack/swift-ring-rebalance-c75mt" Nov 25 11:54:24 crc kubenswrapper[4706]: I1125 11:54:24.913318 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/687ee889-8ec7-4754-b45f-b0f087368a37-scripts\") pod \"swift-ring-rebalance-ww65d\" (UID: \"687ee889-8ec7-4754-b45f-b0f087368a37\") " pod="openstack/swift-ring-rebalance-ww65d" Nov 25 11:54:24 crc kubenswrapper[4706]: I1125 11:54:24.913347 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lk5hz\" (UniqueName: \"kubernetes.io/projected/687ee889-8ec7-4754-b45f-b0f087368a37-kube-api-access-lk5hz\") pod \"swift-ring-rebalance-ww65d\" (UID: \"687ee889-8ec7-4754-b45f-b0f087368a37\") " pod="openstack/swift-ring-rebalance-ww65d" Nov 25 11:54:24 crc kubenswrapper[4706]: I1125 11:54:24.914214 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ring-data-devices\" 
(UniqueName: \"kubernetes.io/configmap/687ee889-8ec7-4754-b45f-b0f087368a37-ring-data-devices\") pod \"swift-ring-rebalance-ww65d\" (UID: \"687ee889-8ec7-4754-b45f-b0f087368a37\") " pod="openstack/swift-ring-rebalance-ww65d" Nov 25 11:54:24 crc kubenswrapper[4706]: I1125 11:54:24.915157 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a4318c64-8dbc-4c7e-94cd-05ee65d699e1-scripts\") pod \"swift-ring-rebalance-c75mt\" (UID: \"a4318c64-8dbc-4c7e-94cd-05ee65d699e1\") " pod="openstack/swift-ring-rebalance-c75mt" Nov 25 11:54:24 crc kubenswrapper[4706]: I1125 11:54:24.915416 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/687ee889-8ec7-4754-b45f-b0f087368a37-etc-swift\") pod \"swift-ring-rebalance-ww65d\" (UID: \"687ee889-8ec7-4754-b45f-b0f087368a37\") " pod="openstack/swift-ring-rebalance-ww65d" Nov 25 11:54:24 crc kubenswrapper[4706]: I1125 11:54:24.915945 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/a4318c64-8dbc-4c7e-94cd-05ee65d699e1-etc-swift\") pod \"swift-ring-rebalance-c75mt\" (UID: \"a4318c64-8dbc-4c7e-94cd-05ee65d699e1\") " pod="openstack/swift-ring-rebalance-c75mt" Nov 25 11:54:24 crc kubenswrapper[4706]: I1125 11:54:24.916489 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/687ee889-8ec7-4754-b45f-b0f087368a37-scripts\") pod \"swift-ring-rebalance-ww65d\" (UID: \"687ee889-8ec7-4754-b45f-b0f087368a37\") " pod="openstack/swift-ring-rebalance-ww65d" Nov 25 11:54:24 crc kubenswrapper[4706]: E1125 11:54:24.916611 4706 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Nov 25 11:54:24 crc kubenswrapper[4706]: E1125 11:54:24.916627 4706 projected.go:194] Error preparing data for projected volume etc-swift 
for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Nov 25 11:54:24 crc kubenswrapper[4706]: E1125 11:54:24.916667 4706 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9225b01e-1067-47de-812a-d9be36adf9d0-etc-swift podName:9225b01e-1067-47de-812a-d9be36adf9d0 nodeName:}" failed. No retries permitted until 2025-11-25 11:54:25.916653669 +0000 UTC m=+1074.831211050 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/9225b01e-1067-47de-812a-d9be36adf9d0-etc-swift") pod "swift-storage-0" (UID: "9225b01e-1067-47de-812a-d9be36adf9d0") : configmap "swift-ring-files" not found Nov 25 11:54:24 crc kubenswrapper[4706]: I1125 11:54:24.916716 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/a4318c64-8dbc-4c7e-94cd-05ee65d699e1-ring-data-devices\") pod \"swift-ring-rebalance-c75mt\" (UID: \"a4318c64-8dbc-4c7e-94cd-05ee65d699e1\") " pod="openstack/swift-ring-rebalance-c75mt" Nov 25 11:54:24 crc kubenswrapper[4706]: I1125 11:54:24.920163 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/687ee889-8ec7-4754-b45f-b0f087368a37-swiftconf\") pod \"swift-ring-rebalance-ww65d\" (UID: \"687ee889-8ec7-4754-b45f-b0f087368a37\") " pod="openstack/swift-ring-rebalance-ww65d" Nov 25 11:54:24 crc kubenswrapper[4706]: I1125 11:54:24.924180 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a4318c64-8dbc-4c7e-94cd-05ee65d699e1-combined-ca-bundle\") pod \"swift-ring-rebalance-c75mt\" (UID: \"a4318c64-8dbc-4c7e-94cd-05ee65d699e1\") " pod="openstack/swift-ring-rebalance-c75mt" Nov 25 11:54:24 crc kubenswrapper[4706]: I1125 11:54:24.926727 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"swiftconf\" (UniqueName: 
\"kubernetes.io/secret/a4318c64-8dbc-4c7e-94cd-05ee65d699e1-swiftconf\") pod \"swift-ring-rebalance-c75mt\" (UID: \"a4318c64-8dbc-4c7e-94cd-05ee65d699e1\") " pod="openstack/swift-ring-rebalance-c75mt" Nov 25 11:54:24 crc kubenswrapper[4706]: I1125 11:54:24.926825 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/a4318c64-8dbc-4c7e-94cd-05ee65d699e1-dispersionconf\") pod \"swift-ring-rebalance-c75mt\" (UID: \"a4318c64-8dbc-4c7e-94cd-05ee65d699e1\") " pod="openstack/swift-ring-rebalance-c75mt" Nov 25 11:54:24 crc kubenswrapper[4706]: I1125 11:54:24.927120 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/687ee889-8ec7-4754-b45f-b0f087368a37-dispersionconf\") pod \"swift-ring-rebalance-ww65d\" (UID: \"687ee889-8ec7-4754-b45f-b0f087368a37\") " pod="openstack/swift-ring-rebalance-ww65d" Nov 25 11:54:24 crc kubenswrapper[4706]: I1125 11:54:24.945125 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/687ee889-8ec7-4754-b45f-b0f087368a37-combined-ca-bundle\") pod \"swift-ring-rebalance-ww65d\" (UID: \"687ee889-8ec7-4754-b45f-b0f087368a37\") " pod="openstack/swift-ring-rebalance-ww65d" Nov 25 11:54:24 crc kubenswrapper[4706]: I1125 11:54:24.945872 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lk5hz\" (UniqueName: \"kubernetes.io/projected/687ee889-8ec7-4754-b45f-b0f087368a37-kube-api-access-lk5hz\") pod \"swift-ring-rebalance-ww65d\" (UID: \"687ee889-8ec7-4754-b45f-b0f087368a37\") " pod="openstack/swift-ring-rebalance-ww65d" Nov 25 11:54:24 crc kubenswrapper[4706]: I1125 11:54:24.951887 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-78jjq\" (UniqueName: \"kubernetes.io/projected/a4318c64-8dbc-4c7e-94cd-05ee65d699e1-kube-api-access-78jjq\") pod 
\"swift-ring-rebalance-c75mt\" (UID: \"a4318c64-8dbc-4c7e-94cd-05ee65d699e1\") " pod="openstack/swift-ring-rebalance-c75mt" Nov 25 11:54:25 crc kubenswrapper[4706]: I1125 11:54:25.008228 4706 generic.go:334] "Generic (PLEG): container finished" podID="679831d3-04d7-4b95-8690-837698ce07f3" containerID="83dad321de8f13a6f3ba95b0c99abee0113e3a4da07314955a6416398af6f575" exitCode=0 Nov 25 11:54:25 crc kubenswrapper[4706]: I1125 11:54:25.008317 4706 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-c75mt" Nov 25 11:54:25 crc kubenswrapper[4706]: I1125 11:54:25.009002 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-vjh52" event={"ID":"679831d3-04d7-4b95-8690-837698ce07f3","Type":"ContainerDied","Data":"83dad321de8f13a6f3ba95b0c99abee0113e3a4da07314955a6416398af6f575"} Nov 25 11:54:25 crc kubenswrapper[4706]: I1125 11:54:25.024265 4706 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-ww65d" Nov 25 11:54:25 crc kubenswrapper[4706]: I1125 11:54:25.024396 4706 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-c75mt" Nov 25 11:54:25 crc kubenswrapper[4706]: I1125 11:54:25.218106 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/a4318c64-8dbc-4c7e-94cd-05ee65d699e1-dispersionconf\") pod \"a4318c64-8dbc-4c7e-94cd-05ee65d699e1\" (UID: \"a4318c64-8dbc-4c7e-94cd-05ee65d699e1\") " Nov 25 11:54:25 crc kubenswrapper[4706]: I1125 11:54:25.219520 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/a4318c64-8dbc-4c7e-94cd-05ee65d699e1-swiftconf\") pod \"a4318c64-8dbc-4c7e-94cd-05ee65d699e1\" (UID: \"a4318c64-8dbc-4c7e-94cd-05ee65d699e1\") " Nov 25 11:54:25 crc kubenswrapper[4706]: I1125 11:54:25.219636 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a4318c64-8dbc-4c7e-94cd-05ee65d699e1-combined-ca-bundle\") pod \"a4318c64-8dbc-4c7e-94cd-05ee65d699e1\" (UID: \"a4318c64-8dbc-4c7e-94cd-05ee65d699e1\") " Nov 25 11:54:25 crc kubenswrapper[4706]: I1125 11:54:25.219732 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/a4318c64-8dbc-4c7e-94cd-05ee65d699e1-etc-swift\") pod \"a4318c64-8dbc-4c7e-94cd-05ee65d699e1\" (UID: \"a4318c64-8dbc-4c7e-94cd-05ee65d699e1\") " Nov 25 11:54:25 crc kubenswrapper[4706]: I1125 11:54:25.219774 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-78jjq\" (UniqueName: \"kubernetes.io/projected/a4318c64-8dbc-4c7e-94cd-05ee65d699e1-kube-api-access-78jjq\") pod \"a4318c64-8dbc-4c7e-94cd-05ee65d699e1\" (UID: \"a4318c64-8dbc-4c7e-94cd-05ee65d699e1\") " Nov 25 11:54:25 crc kubenswrapper[4706]: I1125 11:54:25.219817 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ring-data-devices\" 
(UniqueName: \"kubernetes.io/configmap/a4318c64-8dbc-4c7e-94cd-05ee65d699e1-ring-data-devices\") pod \"a4318c64-8dbc-4c7e-94cd-05ee65d699e1\" (UID: \"a4318c64-8dbc-4c7e-94cd-05ee65d699e1\") " Nov 25 11:54:25 crc kubenswrapper[4706]: I1125 11:54:25.219882 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a4318c64-8dbc-4c7e-94cd-05ee65d699e1-scripts\") pod \"a4318c64-8dbc-4c7e-94cd-05ee65d699e1\" (UID: \"a4318c64-8dbc-4c7e-94cd-05ee65d699e1\") " Nov 25 11:54:25 crc kubenswrapper[4706]: I1125 11:54:25.221259 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a4318c64-8dbc-4c7e-94cd-05ee65d699e1-scripts" (OuterVolumeSpecName: "scripts") pod "a4318c64-8dbc-4c7e-94cd-05ee65d699e1" (UID: "a4318c64-8dbc-4c7e-94cd-05ee65d699e1"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 11:54:25 crc kubenswrapper[4706]: I1125 11:54:25.221683 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a4318c64-8dbc-4c7e-94cd-05ee65d699e1-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "a4318c64-8dbc-4c7e-94cd-05ee65d699e1" (UID: "a4318c64-8dbc-4c7e-94cd-05ee65d699e1"). InnerVolumeSpecName "etc-swift". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 11:54:25 crc kubenswrapper[4706]: I1125 11:54:25.221975 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a4318c64-8dbc-4c7e-94cd-05ee65d699e1-ring-data-devices" (OuterVolumeSpecName: "ring-data-devices") pod "a4318c64-8dbc-4c7e-94cd-05ee65d699e1" (UID: "a4318c64-8dbc-4c7e-94cd-05ee65d699e1"). InnerVolumeSpecName "ring-data-devices". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 11:54:25 crc kubenswrapper[4706]: I1125 11:54:25.223464 4706 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/a4318c64-8dbc-4c7e-94cd-05ee65d699e1-etc-swift\") on node \"crc\" DevicePath \"\"" Nov 25 11:54:25 crc kubenswrapper[4706]: I1125 11:54:25.223484 4706 reconciler_common.go:293] "Volume detached for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/a4318c64-8dbc-4c7e-94cd-05ee65d699e1-ring-data-devices\") on node \"crc\" DevicePath \"\"" Nov 25 11:54:25 crc kubenswrapper[4706]: I1125 11:54:25.223522 4706 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a4318c64-8dbc-4c7e-94cd-05ee65d699e1-scripts\") on node \"crc\" DevicePath \"\"" Nov 25 11:54:25 crc kubenswrapper[4706]: I1125 11:54:25.225497 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a4318c64-8dbc-4c7e-94cd-05ee65d699e1-dispersionconf" (OuterVolumeSpecName: "dispersionconf") pod "a4318c64-8dbc-4c7e-94cd-05ee65d699e1" (UID: "a4318c64-8dbc-4c7e-94cd-05ee65d699e1"). InnerVolumeSpecName "dispersionconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 11:54:25 crc kubenswrapper[4706]: I1125 11:54:25.225616 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a4318c64-8dbc-4c7e-94cd-05ee65d699e1-swiftconf" (OuterVolumeSpecName: "swiftconf") pod "a4318c64-8dbc-4c7e-94cd-05ee65d699e1" (UID: "a4318c64-8dbc-4c7e-94cd-05ee65d699e1"). InnerVolumeSpecName "swiftconf". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 11:54:25 crc kubenswrapper[4706]: I1125 11:54:25.231417 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a4318c64-8dbc-4c7e-94cd-05ee65d699e1-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a4318c64-8dbc-4c7e-94cd-05ee65d699e1" (UID: "a4318c64-8dbc-4c7e-94cd-05ee65d699e1"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 11:54:25 crc kubenswrapper[4706]: I1125 11:54:25.232628 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a4318c64-8dbc-4c7e-94cd-05ee65d699e1-kube-api-access-78jjq" (OuterVolumeSpecName: "kube-api-access-78jjq") pod "a4318c64-8dbc-4c7e-94cd-05ee65d699e1" (UID: "a4318c64-8dbc-4c7e-94cd-05ee65d699e1"). InnerVolumeSpecName "kube-api-access-78jjq". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 11:54:25 crc kubenswrapper[4706]: I1125 11:54:25.325588 4706 reconciler_common.go:293] "Volume detached for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/a4318c64-8dbc-4c7e-94cd-05ee65d699e1-swiftconf\") on node \"crc\" DevicePath \"\"" Nov 25 11:54:25 crc kubenswrapper[4706]: I1125 11:54:25.325624 4706 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a4318c64-8dbc-4c7e-94cd-05ee65d699e1-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 11:54:25 crc kubenswrapper[4706]: I1125 11:54:25.325636 4706 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-78jjq\" (UniqueName: \"kubernetes.io/projected/a4318c64-8dbc-4c7e-94cd-05ee65d699e1-kube-api-access-78jjq\") on node \"crc\" DevicePath \"\"" Nov 25 11:54:25 crc kubenswrapper[4706]: I1125 11:54:25.325646 4706 reconciler_common.go:293] "Volume detached for volume \"dispersionconf\" (UniqueName: 
\"kubernetes.io/secret/a4318c64-8dbc-4c7e-94cd-05ee65d699e1-dispersionconf\") on node \"crc\" DevicePath \"\"" Nov 25 11:54:25 crc kubenswrapper[4706]: I1125 11:54:25.474830 4706 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-ww65d"] Nov 25 11:54:25 crc kubenswrapper[4706]: I1125 11:54:25.937169 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/9225b01e-1067-47de-812a-d9be36adf9d0-etc-swift\") pod \"swift-storage-0\" (UID: \"9225b01e-1067-47de-812a-d9be36adf9d0\") " pod="openstack/swift-storage-0" Nov 25 11:54:25 crc kubenswrapper[4706]: E1125 11:54:25.937505 4706 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Nov 25 11:54:25 crc kubenswrapper[4706]: E1125 11:54:25.937530 4706 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Nov 25 11:54:25 crc kubenswrapper[4706]: E1125 11:54:25.937589 4706 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9225b01e-1067-47de-812a-d9be36adf9d0-etc-swift podName:9225b01e-1067-47de-812a-d9be36adf9d0 nodeName:}" failed. No retries permitted until 2025-11-25 11:54:27.937570982 +0000 UTC m=+1076.852128363 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/9225b01e-1067-47de-812a-d9be36adf9d0-etc-swift") pod "swift-storage-0" (UID: "9225b01e-1067-47de-812a-d9be36adf9d0") : configmap "swift-ring-files" not found Nov 25 11:54:26 crc kubenswrapper[4706]: I1125 11:54:26.017039 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-vjh52" event={"ID":"679831d3-04d7-4b95-8690-837698ce07f3","Type":"ContainerStarted","Data":"2e6258e8f7c46131b8a759c0c9b3f24bd923e82de32b14ee29fc527c4524773f"} Nov 25 11:54:26 crc kubenswrapper[4706]: I1125 11:54:26.017487 4706 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-698758b865-vjh52" Nov 25 11:54:26 crc kubenswrapper[4706]: I1125 11:54:26.018225 4706 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-c75mt" Nov 25 11:54:26 crc kubenswrapper[4706]: I1125 11:54:26.018578 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-ww65d" event={"ID":"687ee889-8ec7-4754-b45f-b0f087368a37","Type":"ContainerStarted","Data":"cc7dc98fe0784e2c44472ca815af81c209095d2ede683615e8556167536da016"} Nov 25 11:54:26 crc kubenswrapper[4706]: I1125 11:54:26.054962 4706 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-698758b865-vjh52" podStartSLOduration=4.054936916 podStartE2EDuration="4.054936916s" podCreationTimestamp="2025-11-25 11:54:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 11:54:26.035497507 +0000 UTC m=+1074.950054908" watchObservedRunningTime="2025-11-25 11:54:26.054936916 +0000 UTC m=+1074.969494297" Nov 25 11:54:26 crc kubenswrapper[4706]: I1125 11:54:26.113369 4706 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/swift-ring-rebalance-c75mt"] Nov 25 11:54:26 crc 
kubenswrapper[4706]: I1125 11:54:26.120711 4706 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/swift-ring-rebalance-c75mt"] Nov 25 11:54:26 crc kubenswrapper[4706]: I1125 11:54:26.193879 4706 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-96a5-account-create-54vg5"] Nov 25 11:54:26 crc kubenswrapper[4706]: I1125 11:54:26.195928 4706 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-96a5-account-create-54vg5" Nov 25 11:54:26 crc kubenswrapper[4706]: I1125 11:54:26.198244 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-db-secret" Nov 25 11:54:26 crc kubenswrapper[4706]: I1125 11:54:26.212389 4706 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-96a5-account-create-54vg5"] Nov 25 11:54:26 crc kubenswrapper[4706]: I1125 11:54:26.233895 4706 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-create-bnr25"] Nov 25 11:54:26 crc kubenswrapper[4706]: I1125 11:54:26.235082 4706 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-create-bnr25" Nov 25 11:54:26 crc kubenswrapper[4706]: I1125 11:54:26.251640 4706 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-bnr25"] Nov 25 11:54:26 crc kubenswrapper[4706]: I1125 11:54:26.344162 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dmx7l\" (UniqueName: \"kubernetes.io/projected/e1d37c10-6fec-486b-9c0f-f28772cdd96a-kube-api-access-dmx7l\") pod \"glance-96a5-account-create-54vg5\" (UID: \"e1d37c10-6fec-486b-9c0f-f28772cdd96a\") " pod="openstack/glance-96a5-account-create-54vg5" Nov 25 11:54:26 crc kubenswrapper[4706]: I1125 11:54:26.344217 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e1d37c10-6fec-486b-9c0f-f28772cdd96a-operator-scripts\") pod \"glance-96a5-account-create-54vg5\" (UID: \"e1d37c10-6fec-486b-9c0f-f28772cdd96a\") " pod="openstack/glance-96a5-account-create-54vg5" Nov 25 11:54:26 crc kubenswrapper[4706]: I1125 11:54:26.344351 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8bb8bf03-9489-462f-a011-ce81bd934976-operator-scripts\") pod \"glance-db-create-bnr25\" (UID: \"8bb8bf03-9489-462f-a011-ce81bd934976\") " pod="openstack/glance-db-create-bnr25" Nov 25 11:54:26 crc kubenswrapper[4706]: I1125 11:54:26.344406 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kzkbh\" (UniqueName: \"kubernetes.io/projected/8bb8bf03-9489-462f-a011-ce81bd934976-kube-api-access-kzkbh\") pod \"glance-db-create-bnr25\" (UID: \"8bb8bf03-9489-462f-a011-ce81bd934976\") " pod="openstack/glance-db-create-bnr25" Nov 25 11:54:26 crc kubenswrapper[4706]: I1125 11:54:26.445925 4706 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8bb8bf03-9489-462f-a011-ce81bd934976-operator-scripts\") pod \"glance-db-create-bnr25\" (UID: \"8bb8bf03-9489-462f-a011-ce81bd934976\") " pod="openstack/glance-db-create-bnr25" Nov 25 11:54:26 crc kubenswrapper[4706]: I1125 11:54:26.446030 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kzkbh\" (UniqueName: \"kubernetes.io/projected/8bb8bf03-9489-462f-a011-ce81bd934976-kube-api-access-kzkbh\") pod \"glance-db-create-bnr25\" (UID: \"8bb8bf03-9489-462f-a011-ce81bd934976\") " pod="openstack/glance-db-create-bnr25" Nov 25 11:54:26 crc kubenswrapper[4706]: I1125 11:54:26.446092 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dmx7l\" (UniqueName: \"kubernetes.io/projected/e1d37c10-6fec-486b-9c0f-f28772cdd96a-kube-api-access-dmx7l\") pod \"glance-96a5-account-create-54vg5\" (UID: \"e1d37c10-6fec-486b-9c0f-f28772cdd96a\") " pod="openstack/glance-96a5-account-create-54vg5" Nov 25 11:54:26 crc kubenswrapper[4706]: I1125 11:54:26.446147 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e1d37c10-6fec-486b-9c0f-f28772cdd96a-operator-scripts\") pod \"glance-96a5-account-create-54vg5\" (UID: \"e1d37c10-6fec-486b-9c0f-f28772cdd96a\") " pod="openstack/glance-96a5-account-create-54vg5" Nov 25 11:54:26 crc kubenswrapper[4706]: I1125 11:54:26.447102 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e1d37c10-6fec-486b-9c0f-f28772cdd96a-operator-scripts\") pod \"glance-96a5-account-create-54vg5\" (UID: \"e1d37c10-6fec-486b-9c0f-f28772cdd96a\") " pod="openstack/glance-96a5-account-create-54vg5" Nov 25 11:54:26 crc kubenswrapper[4706]: I1125 11:54:26.447823 4706 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8bb8bf03-9489-462f-a011-ce81bd934976-operator-scripts\") pod \"glance-db-create-bnr25\" (UID: \"8bb8bf03-9489-462f-a011-ce81bd934976\") " pod="openstack/glance-db-create-bnr25" Nov 25 11:54:26 crc kubenswrapper[4706]: I1125 11:54:26.473129 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dmx7l\" (UniqueName: \"kubernetes.io/projected/e1d37c10-6fec-486b-9c0f-f28772cdd96a-kube-api-access-dmx7l\") pod \"glance-96a5-account-create-54vg5\" (UID: \"e1d37c10-6fec-486b-9c0f-f28772cdd96a\") " pod="openstack/glance-96a5-account-create-54vg5" Nov 25 11:54:26 crc kubenswrapper[4706]: I1125 11:54:26.473129 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kzkbh\" (UniqueName: \"kubernetes.io/projected/8bb8bf03-9489-462f-a011-ce81bd934976-kube-api-access-kzkbh\") pod \"glance-db-create-bnr25\" (UID: \"8bb8bf03-9489-462f-a011-ce81bd934976\") " pod="openstack/glance-db-create-bnr25" Nov 25 11:54:26 crc kubenswrapper[4706]: I1125 11:54:26.523831 4706 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-96a5-account-create-54vg5" Nov 25 11:54:26 crc kubenswrapper[4706]: I1125 11:54:26.553961 4706 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-create-bnr25" Nov 25 11:54:26 crc kubenswrapper[4706]: I1125 11:54:26.908991 4706 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-bnr25"] Nov 25 11:54:26 crc kubenswrapper[4706]: W1125 11:54:26.927285 4706 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8bb8bf03_9489_462f_a011_ce81bd934976.slice/crio-f7f430968ee5c0537234e9c4462fd21e44e8e6874831e908898edf23a351d272 WatchSource:0}: Error finding container f7f430968ee5c0537234e9c4462fd21e44e8e6874831e908898edf23a351d272: Status 404 returned error can't find the container with id f7f430968ee5c0537234e9c4462fd21e44e8e6874831e908898edf23a351d272 Nov 25 11:54:27 crc kubenswrapper[4706]: I1125 11:54:27.017349 4706 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-96a5-account-create-54vg5"] Nov 25 11:54:27 crc kubenswrapper[4706]: W1125 11:54:27.029379 4706 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode1d37c10_6fec_486b_9c0f_f28772cdd96a.slice/crio-3e536282ec8018965dc091b8b8c6048b5cc96c0350e7b4831e1f0f26f941e69e WatchSource:0}: Error finding container 3e536282ec8018965dc091b8b8c6048b5cc96c0350e7b4831e1f0f26f941e69e: Status 404 returned error can't find the container with id 3e536282ec8018965dc091b8b8c6048b5cc96c0350e7b4831e1f0f26f941e69e Nov 25 11:54:27 crc kubenswrapper[4706]: I1125 11:54:27.034530 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-bnr25" event={"ID":"8bb8bf03-9489-462f-a011-ce81bd934976","Type":"ContainerStarted","Data":"f7f430968ee5c0537234e9c4462fd21e44e8e6874831e908898edf23a351d272"} Nov 25 11:54:27 crc kubenswrapper[4706]: I1125 11:54:27.932518 4706 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a4318c64-8dbc-4c7e-94cd-05ee65d699e1" 
path="/var/lib/kubelet/pods/a4318c64-8dbc-4c7e-94cd-05ee65d699e1/volumes" Nov 25 11:54:27 crc kubenswrapper[4706]: I1125 11:54:27.973194 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/9225b01e-1067-47de-812a-d9be36adf9d0-etc-swift\") pod \"swift-storage-0\" (UID: \"9225b01e-1067-47de-812a-d9be36adf9d0\") " pod="openstack/swift-storage-0" Nov 25 11:54:27 crc kubenswrapper[4706]: E1125 11:54:27.973441 4706 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Nov 25 11:54:27 crc kubenswrapper[4706]: E1125 11:54:27.973499 4706 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Nov 25 11:54:27 crc kubenswrapper[4706]: E1125 11:54:27.973560 4706 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9225b01e-1067-47de-812a-d9be36adf9d0-etc-swift podName:9225b01e-1067-47de-812a-d9be36adf9d0 nodeName:}" failed. No retries permitted until 2025-11-25 11:54:31.973543001 +0000 UTC m=+1080.888100382 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/9225b01e-1067-47de-812a-d9be36adf9d0-etc-swift") pod "swift-storage-0" (UID: "9225b01e-1067-47de-812a-d9be36adf9d0") : configmap "swift-ring-files" not found Nov 25 11:54:28 crc kubenswrapper[4706]: I1125 11:54:28.043714 4706 generic.go:334] "Generic (PLEG): container finished" podID="e1d37c10-6fec-486b-9c0f-f28772cdd96a" containerID="c35ab05f881ad0005a2d6220cbb9ca002f4a2ef06da1ea0655e5b2f8eece3db4" exitCode=0 Nov 25 11:54:28 crc kubenswrapper[4706]: I1125 11:54:28.043768 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-96a5-account-create-54vg5" event={"ID":"e1d37c10-6fec-486b-9c0f-f28772cdd96a","Type":"ContainerDied","Data":"c35ab05f881ad0005a2d6220cbb9ca002f4a2ef06da1ea0655e5b2f8eece3db4"} Nov 25 11:54:28 crc kubenswrapper[4706]: I1125 11:54:28.043897 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-96a5-account-create-54vg5" event={"ID":"e1d37c10-6fec-486b-9c0f-f28772cdd96a","Type":"ContainerStarted","Data":"3e536282ec8018965dc091b8b8c6048b5cc96c0350e7b4831e1f0f26f941e69e"} Nov 25 11:54:28 crc kubenswrapper[4706]: I1125 11:54:28.046882 4706 generic.go:334] "Generic (PLEG): container finished" podID="8bb8bf03-9489-462f-a011-ce81bd934976" containerID="b95f19efba2e9dc7131a123c020f52012e88bdeb845402fde55128529af192eb" exitCode=0 Nov 25 11:54:28 crc kubenswrapper[4706]: I1125 11:54:28.046936 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-bnr25" event={"ID":"8bb8bf03-9489-462f-a011-ce81bd934976","Type":"ContainerDied","Data":"b95f19efba2e9dc7131a123c020f52012e88bdeb845402fde55128529af192eb"} Nov 25 11:54:30 crc kubenswrapper[4706]: I1125 11:54:30.163574 4706 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-96a5-account-create-54vg5" Nov 25 11:54:30 crc kubenswrapper[4706]: I1125 11:54:30.165462 4706 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-bnr25" Nov 25 11:54:30 crc kubenswrapper[4706]: I1125 11:54:30.250750 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kzkbh\" (UniqueName: \"kubernetes.io/projected/8bb8bf03-9489-462f-a011-ce81bd934976-kube-api-access-kzkbh\") pod \"8bb8bf03-9489-462f-a011-ce81bd934976\" (UID: \"8bb8bf03-9489-462f-a011-ce81bd934976\") " Nov 25 11:54:30 crc kubenswrapper[4706]: I1125 11:54:30.250837 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dmx7l\" (UniqueName: \"kubernetes.io/projected/e1d37c10-6fec-486b-9c0f-f28772cdd96a-kube-api-access-dmx7l\") pod \"e1d37c10-6fec-486b-9c0f-f28772cdd96a\" (UID: \"e1d37c10-6fec-486b-9c0f-f28772cdd96a\") " Nov 25 11:54:30 crc kubenswrapper[4706]: I1125 11:54:30.250923 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8bb8bf03-9489-462f-a011-ce81bd934976-operator-scripts\") pod \"8bb8bf03-9489-462f-a011-ce81bd934976\" (UID: \"8bb8bf03-9489-462f-a011-ce81bd934976\") " Nov 25 11:54:30 crc kubenswrapper[4706]: I1125 11:54:30.251001 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e1d37c10-6fec-486b-9c0f-f28772cdd96a-operator-scripts\") pod \"e1d37c10-6fec-486b-9c0f-f28772cdd96a\" (UID: \"e1d37c10-6fec-486b-9c0f-f28772cdd96a\") " Nov 25 11:54:30 crc kubenswrapper[4706]: I1125 11:54:30.251866 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8bb8bf03-9489-462f-a011-ce81bd934976-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod 
"8bb8bf03-9489-462f-a011-ce81bd934976" (UID: "8bb8bf03-9489-462f-a011-ce81bd934976"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 11:54:30 crc kubenswrapper[4706]: I1125 11:54:30.251904 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e1d37c10-6fec-486b-9c0f-f28772cdd96a-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "e1d37c10-6fec-486b-9c0f-f28772cdd96a" (UID: "e1d37c10-6fec-486b-9c0f-f28772cdd96a"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 11:54:30 crc kubenswrapper[4706]: I1125 11:54:30.255887 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e1d37c10-6fec-486b-9c0f-f28772cdd96a-kube-api-access-dmx7l" (OuterVolumeSpecName: "kube-api-access-dmx7l") pod "e1d37c10-6fec-486b-9c0f-f28772cdd96a" (UID: "e1d37c10-6fec-486b-9c0f-f28772cdd96a"). InnerVolumeSpecName "kube-api-access-dmx7l". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 11:54:30 crc kubenswrapper[4706]: I1125 11:54:30.256217 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8bb8bf03-9489-462f-a011-ce81bd934976-kube-api-access-kzkbh" (OuterVolumeSpecName: "kube-api-access-kzkbh") pod "8bb8bf03-9489-462f-a011-ce81bd934976" (UID: "8bb8bf03-9489-462f-a011-ce81bd934976"). InnerVolumeSpecName "kube-api-access-kzkbh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 11:54:30 crc kubenswrapper[4706]: I1125 11:54:30.352846 4706 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8bb8bf03-9489-462f-a011-ce81bd934976-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 25 11:54:30 crc kubenswrapper[4706]: I1125 11:54:30.353124 4706 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e1d37c10-6fec-486b-9c0f-f28772cdd96a-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 25 11:54:30 crc kubenswrapper[4706]: I1125 11:54:30.353142 4706 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kzkbh\" (UniqueName: \"kubernetes.io/projected/8bb8bf03-9489-462f-a011-ce81bd934976-kube-api-access-kzkbh\") on node \"crc\" DevicePath \"\"" Nov 25 11:54:30 crc kubenswrapper[4706]: I1125 11:54:30.353154 4706 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dmx7l\" (UniqueName: \"kubernetes.io/projected/e1d37c10-6fec-486b-9c0f-f28772cdd96a-kube-api-access-dmx7l\") on node \"crc\" DevicePath \"\"" Nov 25 11:54:30 crc kubenswrapper[4706]: I1125 11:54:30.523918 4706 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-create-mjpth"] Nov 25 11:54:30 crc kubenswrapper[4706]: E1125 11:54:30.524328 4706 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8bb8bf03-9489-462f-a011-ce81bd934976" containerName="mariadb-database-create" Nov 25 11:54:30 crc kubenswrapper[4706]: I1125 11:54:30.524347 4706 state_mem.go:107] "Deleted CPUSet assignment" podUID="8bb8bf03-9489-462f-a011-ce81bd934976" containerName="mariadb-database-create" Nov 25 11:54:30 crc kubenswrapper[4706]: E1125 11:54:30.524383 4706 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e1d37c10-6fec-486b-9c0f-f28772cdd96a" containerName="mariadb-account-create" Nov 25 11:54:30 crc kubenswrapper[4706]: I1125 
11:54:30.524389 4706 state_mem.go:107] "Deleted CPUSet assignment" podUID="e1d37c10-6fec-486b-9c0f-f28772cdd96a" containerName="mariadb-account-create" Nov 25 11:54:30 crc kubenswrapper[4706]: I1125 11:54:30.524528 4706 memory_manager.go:354] "RemoveStaleState removing state" podUID="e1d37c10-6fec-486b-9c0f-f28772cdd96a" containerName="mariadb-account-create" Nov 25 11:54:30 crc kubenswrapper[4706]: I1125 11:54:30.524551 4706 memory_manager.go:354] "RemoveStaleState removing state" podUID="8bb8bf03-9489-462f-a011-ce81bd934976" containerName="mariadb-database-create" Nov 25 11:54:30 crc kubenswrapper[4706]: I1125 11:54:30.525074 4706 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-mjpth" Nov 25 11:54:30 crc kubenswrapper[4706]: I1125 11:54:30.532091 4706 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-mjpth"] Nov 25 11:54:30 crc kubenswrapper[4706]: I1125 11:54:30.556571 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/696b1c53-9d80-42b1-bc7d-4699620c019a-operator-scripts\") pod \"keystone-db-create-mjpth\" (UID: \"696b1c53-9d80-42b1-bc7d-4699620c019a\") " pod="openstack/keystone-db-create-mjpth" Nov 25 11:54:30 crc kubenswrapper[4706]: I1125 11:54:30.556649 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s9l8q\" (UniqueName: \"kubernetes.io/projected/696b1c53-9d80-42b1-bc7d-4699620c019a-kube-api-access-s9l8q\") pod \"keystone-db-create-mjpth\" (UID: \"696b1c53-9d80-42b1-bc7d-4699620c019a\") " pod="openstack/keystone-db-create-mjpth" Nov 25 11:54:30 crc kubenswrapper[4706]: I1125 11:54:30.623873 4706 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-745c-account-create-khc42"] Nov 25 11:54:30 crc kubenswrapper[4706]: I1125 11:54:30.624874 4706 util.go:30] "No sandbox for pod 
can be found. Need to start a new one" pod="openstack/keystone-745c-account-create-khc42" Nov 25 11:54:30 crc kubenswrapper[4706]: I1125 11:54:30.626994 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-db-secret" Nov 25 11:54:30 crc kubenswrapper[4706]: I1125 11:54:30.638865 4706 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-745c-account-create-khc42"] Nov 25 11:54:30 crc kubenswrapper[4706]: I1125 11:54:30.658373 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/696b1c53-9d80-42b1-bc7d-4699620c019a-operator-scripts\") pod \"keystone-db-create-mjpth\" (UID: \"696b1c53-9d80-42b1-bc7d-4699620c019a\") " pod="openstack/keystone-db-create-mjpth" Nov 25 11:54:30 crc kubenswrapper[4706]: I1125 11:54:30.658457 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a6560bf6-0b62-465f-b3ef-f762b5eac76a-operator-scripts\") pod \"keystone-745c-account-create-khc42\" (UID: \"a6560bf6-0b62-465f-b3ef-f762b5eac76a\") " pod="openstack/keystone-745c-account-create-khc42" Nov 25 11:54:30 crc kubenswrapper[4706]: I1125 11:54:30.658515 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mnb2s\" (UniqueName: \"kubernetes.io/projected/a6560bf6-0b62-465f-b3ef-f762b5eac76a-kube-api-access-mnb2s\") pod \"keystone-745c-account-create-khc42\" (UID: \"a6560bf6-0b62-465f-b3ef-f762b5eac76a\") " pod="openstack/keystone-745c-account-create-khc42" Nov 25 11:54:30 crc kubenswrapper[4706]: I1125 11:54:30.658545 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s9l8q\" (UniqueName: \"kubernetes.io/projected/696b1c53-9d80-42b1-bc7d-4699620c019a-kube-api-access-s9l8q\") pod \"keystone-db-create-mjpth\" (UID: 
\"696b1c53-9d80-42b1-bc7d-4699620c019a\") " pod="openstack/keystone-db-create-mjpth" Nov 25 11:54:30 crc kubenswrapper[4706]: I1125 11:54:30.659123 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/696b1c53-9d80-42b1-bc7d-4699620c019a-operator-scripts\") pod \"keystone-db-create-mjpth\" (UID: \"696b1c53-9d80-42b1-bc7d-4699620c019a\") " pod="openstack/keystone-db-create-mjpth" Nov 25 11:54:30 crc kubenswrapper[4706]: I1125 11:54:30.677455 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s9l8q\" (UniqueName: \"kubernetes.io/projected/696b1c53-9d80-42b1-bc7d-4699620c019a-kube-api-access-s9l8q\") pod \"keystone-db-create-mjpth\" (UID: \"696b1c53-9d80-42b1-bc7d-4699620c019a\") " pod="openstack/keystone-db-create-mjpth" Nov 25 11:54:30 crc kubenswrapper[4706]: I1125 11:54:30.760604 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mnb2s\" (UniqueName: \"kubernetes.io/projected/a6560bf6-0b62-465f-b3ef-f762b5eac76a-kube-api-access-mnb2s\") pod \"keystone-745c-account-create-khc42\" (UID: \"a6560bf6-0b62-465f-b3ef-f762b5eac76a\") " pod="openstack/keystone-745c-account-create-khc42" Nov 25 11:54:30 crc kubenswrapper[4706]: I1125 11:54:30.760789 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a6560bf6-0b62-465f-b3ef-f762b5eac76a-operator-scripts\") pod \"keystone-745c-account-create-khc42\" (UID: \"a6560bf6-0b62-465f-b3ef-f762b5eac76a\") " pod="openstack/keystone-745c-account-create-khc42" Nov 25 11:54:30 crc kubenswrapper[4706]: I1125 11:54:30.761646 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a6560bf6-0b62-465f-b3ef-f762b5eac76a-operator-scripts\") pod \"keystone-745c-account-create-khc42\" (UID: 
\"a6560bf6-0b62-465f-b3ef-f762b5eac76a\") " pod="openstack/keystone-745c-account-create-khc42" Nov 25 11:54:30 crc kubenswrapper[4706]: I1125 11:54:30.781529 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mnb2s\" (UniqueName: \"kubernetes.io/projected/a6560bf6-0b62-465f-b3ef-f762b5eac76a-kube-api-access-mnb2s\") pod \"keystone-745c-account-create-khc42\" (UID: \"a6560bf6-0b62-465f-b3ef-f762b5eac76a\") " pod="openstack/keystone-745c-account-create-khc42" Nov 25 11:54:30 crc kubenswrapper[4706]: I1125 11:54:30.840653 4706 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-mjpth" Nov 25 11:54:30 crc kubenswrapper[4706]: I1125 11:54:30.931991 4706 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-create-fvcgj"] Nov 25 11:54:30 crc kubenswrapper[4706]: I1125 11:54:30.933471 4706 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-fvcgj" Nov 25 11:54:30 crc kubenswrapper[4706]: I1125 11:54:30.940446 4706 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-fvcgj"] Nov 25 11:54:30 crc kubenswrapper[4706]: I1125 11:54:30.943080 4706 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-745c-account-create-khc42" Nov 25 11:54:30 crc kubenswrapper[4706]: I1125 11:54:30.969479 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-trqc2\" (UniqueName: \"kubernetes.io/projected/a4f78f8e-f722-4335-8421-35d52edc3181-kube-api-access-trqc2\") pod \"placement-db-create-fvcgj\" (UID: \"a4f78f8e-f722-4335-8421-35d52edc3181\") " pod="openstack/placement-db-create-fvcgj" Nov 25 11:54:30 crc kubenswrapper[4706]: I1125 11:54:30.970019 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a4f78f8e-f722-4335-8421-35d52edc3181-operator-scripts\") pod \"placement-db-create-fvcgj\" (UID: \"a4f78f8e-f722-4335-8421-35d52edc3181\") " pod="openstack/placement-db-create-fvcgj" Nov 25 11:54:31 crc kubenswrapper[4706]: I1125 11:54:31.038680 4706 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-0708-account-create-vmb99"] Nov 25 11:54:31 crc kubenswrapper[4706]: I1125 11:54:31.039969 4706 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-0708-account-create-vmb99" Nov 25 11:54:31 crc kubenswrapper[4706]: I1125 11:54:31.045628 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-db-secret" Nov 25 11:54:31 crc kubenswrapper[4706]: I1125 11:54:31.052076 4706 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-0708-account-create-vmb99"] Nov 25 11:54:31 crc kubenswrapper[4706]: I1125 11:54:31.071441 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qww2s\" (UniqueName: \"kubernetes.io/projected/244a9875-4efd-40a6-8f29-745b385b516d-kube-api-access-qww2s\") pod \"placement-0708-account-create-vmb99\" (UID: \"244a9875-4efd-40a6-8f29-745b385b516d\") " pod="openstack/placement-0708-account-create-vmb99" Nov 25 11:54:31 crc kubenswrapper[4706]: I1125 11:54:31.071480 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-trqc2\" (UniqueName: \"kubernetes.io/projected/a4f78f8e-f722-4335-8421-35d52edc3181-kube-api-access-trqc2\") pod \"placement-db-create-fvcgj\" (UID: \"a4f78f8e-f722-4335-8421-35d52edc3181\") " pod="openstack/placement-db-create-fvcgj" Nov 25 11:54:31 crc kubenswrapper[4706]: I1125 11:54:31.071504 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/244a9875-4efd-40a6-8f29-745b385b516d-operator-scripts\") pod \"placement-0708-account-create-vmb99\" (UID: \"244a9875-4efd-40a6-8f29-745b385b516d\") " pod="openstack/placement-0708-account-create-vmb99" Nov 25 11:54:31 crc kubenswrapper[4706]: I1125 11:54:31.071870 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a4f78f8e-f722-4335-8421-35d52edc3181-operator-scripts\") pod \"placement-db-create-fvcgj\" (UID: 
\"a4f78f8e-f722-4335-8421-35d52edc3181\") " pod="openstack/placement-db-create-fvcgj" Nov 25 11:54:31 crc kubenswrapper[4706]: I1125 11:54:31.073011 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a4f78f8e-f722-4335-8421-35d52edc3181-operator-scripts\") pod \"placement-db-create-fvcgj\" (UID: \"a4f78f8e-f722-4335-8421-35d52edc3181\") " pod="openstack/placement-db-create-fvcgj" Nov 25 11:54:31 crc kubenswrapper[4706]: I1125 11:54:31.081343 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-ww65d" event={"ID":"687ee889-8ec7-4754-b45f-b0f087368a37","Type":"ContainerStarted","Data":"07428deb95abcd8ccbdb9fc568b237d8733354cf947a5e7717114e4f92a3b411"} Nov 25 11:54:31 crc kubenswrapper[4706]: I1125 11:54:31.083293 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-96a5-account-create-54vg5" event={"ID":"e1d37c10-6fec-486b-9c0f-f28772cdd96a","Type":"ContainerDied","Data":"3e536282ec8018965dc091b8b8c6048b5cc96c0350e7b4831e1f0f26f941e69e"} Nov 25 11:54:31 crc kubenswrapper[4706]: I1125 11:54:31.083343 4706 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3e536282ec8018965dc091b8b8c6048b5cc96c0350e7b4831e1f0f26f941e69e" Nov 25 11:54:31 crc kubenswrapper[4706]: I1125 11:54:31.083357 4706 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-96a5-account-create-54vg5" Nov 25 11:54:31 crc kubenswrapper[4706]: I1125 11:54:31.085797 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-bnr25" event={"ID":"8bb8bf03-9489-462f-a011-ce81bd934976","Type":"ContainerDied","Data":"f7f430968ee5c0537234e9c4462fd21e44e8e6874831e908898edf23a351d272"} Nov 25 11:54:31 crc kubenswrapper[4706]: I1125 11:54:31.085841 4706 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f7f430968ee5c0537234e9c4462fd21e44e8e6874831e908898edf23a351d272" Nov 25 11:54:31 crc kubenswrapper[4706]: I1125 11:54:31.085899 4706 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-bnr25" Nov 25 11:54:31 crc kubenswrapper[4706]: I1125 11:54:31.101775 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-trqc2\" (UniqueName: \"kubernetes.io/projected/a4f78f8e-f722-4335-8421-35d52edc3181-kube-api-access-trqc2\") pod \"placement-db-create-fvcgj\" (UID: \"a4f78f8e-f722-4335-8421-35d52edc3181\") " pod="openstack/placement-db-create-fvcgj" Nov 25 11:54:31 crc kubenswrapper[4706]: I1125 11:54:31.114388 4706 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-ring-rebalance-ww65d" podStartSLOduration=2.441508304 podStartE2EDuration="7.114371085s" podCreationTimestamp="2025-11-25 11:54:24 +0000 UTC" firstStartedPulling="2025-11-25 11:54:25.503253262 +0000 UTC m=+1074.417810643" lastFinishedPulling="2025-11-25 11:54:30.176116043 +0000 UTC m=+1079.090673424" observedRunningTime="2025-11-25 11:54:31.113454152 +0000 UTC m=+1080.028011533" watchObservedRunningTime="2025-11-25 11:54:31.114371085 +0000 UTC m=+1080.028928456" Nov 25 11:54:31 crc kubenswrapper[4706]: I1125 11:54:31.173628 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qww2s\" (UniqueName: 
\"kubernetes.io/projected/244a9875-4efd-40a6-8f29-745b385b516d-kube-api-access-qww2s\") pod \"placement-0708-account-create-vmb99\" (UID: \"244a9875-4efd-40a6-8f29-745b385b516d\") " pod="openstack/placement-0708-account-create-vmb99" Nov 25 11:54:31 crc kubenswrapper[4706]: I1125 11:54:31.173690 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/244a9875-4efd-40a6-8f29-745b385b516d-operator-scripts\") pod \"placement-0708-account-create-vmb99\" (UID: \"244a9875-4efd-40a6-8f29-745b385b516d\") " pod="openstack/placement-0708-account-create-vmb99" Nov 25 11:54:31 crc kubenswrapper[4706]: I1125 11:54:31.174725 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/244a9875-4efd-40a6-8f29-745b385b516d-operator-scripts\") pod \"placement-0708-account-create-vmb99\" (UID: \"244a9875-4efd-40a6-8f29-745b385b516d\") " pod="openstack/placement-0708-account-create-vmb99" Nov 25 11:54:31 crc kubenswrapper[4706]: I1125 11:54:31.194123 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qww2s\" (UniqueName: \"kubernetes.io/projected/244a9875-4efd-40a6-8f29-745b385b516d-kube-api-access-qww2s\") pod \"placement-0708-account-create-vmb99\" (UID: \"244a9875-4efd-40a6-8f29-745b385b516d\") " pod="openstack/placement-0708-account-create-vmb99" Nov 25 11:54:31 crc kubenswrapper[4706]: I1125 11:54:31.263666 4706 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-create-fvcgj" Nov 25 11:54:31 crc kubenswrapper[4706]: I1125 11:54:31.353964 4706 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-mjpth"] Nov 25 11:54:31 crc kubenswrapper[4706]: W1125 11:54:31.360662 4706 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod696b1c53_9d80_42b1_bc7d_4699620c019a.slice/crio-7a876852781bdaff07d06a4b7e32ba1f16d1ca894e61b11ba9947d79d19bdeb8 WatchSource:0}: Error finding container 7a876852781bdaff07d06a4b7e32ba1f16d1ca894e61b11ba9947d79d19bdeb8: Status 404 returned error can't find the container with id 7a876852781bdaff07d06a4b7e32ba1f16d1ca894e61b11ba9947d79d19bdeb8 Nov 25 11:54:31 crc kubenswrapper[4706]: I1125 11:54:31.360870 4706 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-0708-account-create-vmb99" Nov 25 11:54:31 crc kubenswrapper[4706]: I1125 11:54:31.578109 4706 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-745c-account-create-khc42"] Nov 25 11:54:31 crc kubenswrapper[4706]: W1125 11:54:31.590710 4706 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda6560bf6_0b62_465f_b3ef_f762b5eac76a.slice/crio-72cd86714a319f00fadfcf80fbb0563b909ef628d58027f2c07f144150a2d6a8 WatchSource:0}: Error finding container 72cd86714a319f00fadfcf80fbb0563b909ef628d58027f2c07f144150a2d6a8: Status 404 returned error can't find the container with id 72cd86714a319f00fadfcf80fbb0563b909ef628d58027f2c07f144150a2d6a8 Nov 25 11:54:31 crc kubenswrapper[4706]: I1125 11:54:31.723540 4706 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-fvcgj"] Nov 25 11:54:31 crc kubenswrapper[4706]: W1125 11:54:31.741368 4706 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda4f78f8e_f722_4335_8421_35d52edc3181.slice/crio-96f2e0e13def230b0d673ded472858f6c3e5a990d9c17ecc4b648a35475d99f2 WatchSource:0}: Error finding container 96f2e0e13def230b0d673ded472858f6c3e5a990d9c17ecc4b648a35475d99f2: Status 404 returned error can't find the container with id 96f2e0e13def230b0d673ded472858f6c3e5a990d9c17ecc4b648a35475d99f2 Nov 25 11:54:31 crc kubenswrapper[4706]: I1125 11:54:31.877402 4706 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-0708-account-create-vmb99"] Nov 25 11:54:31 crc kubenswrapper[4706]: W1125 11:54:31.884822 4706 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod244a9875_4efd_40a6_8f29_745b385b516d.slice/crio-6e299c53727f779993b5713d17548b49b7413f5631eb8e85e66693784d910923 WatchSource:0}: Error finding container 6e299c53727f779993b5713d17548b49b7413f5631eb8e85e66693784d910923: Status 404 returned error can't find the container with id 6e299c53727f779993b5713d17548b49b7413f5631eb8e85e66693784d910923 Nov 25 11:54:31 crc kubenswrapper[4706]: I1125 11:54:31.890170 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-db-secret" Nov 25 11:54:31 crc kubenswrapper[4706]: I1125 11:54:31.986610 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/9225b01e-1067-47de-812a-d9be36adf9d0-etc-swift\") pod \"swift-storage-0\" (UID: \"9225b01e-1067-47de-812a-d9be36adf9d0\") " pod="openstack/swift-storage-0" Nov 25 11:54:31 crc kubenswrapper[4706]: E1125 11:54:31.986851 4706 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Nov 25 11:54:31 crc kubenswrapper[4706]: E1125 11:54:31.986892 4706 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: 
configmap "swift-ring-files" not found Nov 25 11:54:31 crc kubenswrapper[4706]: E1125 11:54:31.986961 4706 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9225b01e-1067-47de-812a-d9be36adf9d0-etc-swift podName:9225b01e-1067-47de-812a-d9be36adf9d0 nodeName:}" failed. No retries permitted until 2025-11-25 11:54:39.986942025 +0000 UTC m=+1088.901499406 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/9225b01e-1067-47de-812a-d9be36adf9d0-etc-swift") pod "swift-storage-0" (UID: "9225b01e-1067-47de-812a-d9be36adf9d0") : configmap "swift-ring-files" not found Nov 25 11:54:32 crc kubenswrapper[4706]: I1125 11:54:32.094712 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-745c-account-create-khc42" event={"ID":"a6560bf6-0b62-465f-b3ef-f762b5eac76a","Type":"ContainerStarted","Data":"371f657ce63d0845cb468e81e285d773fc879c04e084353cb247f4bd6451f9e0"} Nov 25 11:54:32 crc kubenswrapper[4706]: I1125 11:54:32.095486 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-745c-account-create-khc42" event={"ID":"a6560bf6-0b62-465f-b3ef-f762b5eac76a","Type":"ContainerStarted","Data":"72cd86714a319f00fadfcf80fbb0563b909ef628d58027f2c07f144150a2d6a8"} Nov 25 11:54:32 crc kubenswrapper[4706]: I1125 11:54:32.096556 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-mjpth" event={"ID":"696b1c53-9d80-42b1-bc7d-4699620c019a","Type":"ContainerStarted","Data":"b629c01e730bfb5919089131041fb4c64e0ce2e075ff2dbd6f5e5c35d450ba7f"} Nov 25 11:54:32 crc kubenswrapper[4706]: I1125 11:54:32.096590 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-mjpth" event={"ID":"696b1c53-9d80-42b1-bc7d-4699620c019a","Type":"ContainerStarted","Data":"7a876852781bdaff07d06a4b7e32ba1f16d1ca894e61b11ba9947d79d19bdeb8"} Nov 25 11:54:32 crc kubenswrapper[4706]: I1125 11:54:32.099212 4706 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-0708-account-create-vmb99" event={"ID":"244a9875-4efd-40a6-8f29-745b385b516d","Type":"ContainerStarted","Data":"6e299c53727f779993b5713d17548b49b7413f5631eb8e85e66693784d910923"} Nov 25 11:54:32 crc kubenswrapper[4706]: I1125 11:54:32.100640 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-fvcgj" event={"ID":"a4f78f8e-f722-4335-8421-35d52edc3181","Type":"ContainerStarted","Data":"96f2e0e13def230b0d673ded472858f6c3e5a990d9c17ecc4b648a35475d99f2"} Nov 25 11:54:32 crc kubenswrapper[4706]: I1125 11:54:32.116231 4706 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-745c-account-create-khc42" podStartSLOduration=2.116205188 podStartE2EDuration="2.116205188s" podCreationTimestamp="2025-11-25 11:54:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 11:54:32.109054828 +0000 UTC m=+1081.023612219" watchObservedRunningTime="2025-11-25 11:54:32.116205188 +0000 UTC m=+1081.030762569" Nov 25 11:54:32 crc kubenswrapper[4706]: I1125 11:54:32.135147 4706 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-db-create-mjpth" podStartSLOduration=2.135125614 podStartE2EDuration="2.135125614s" podCreationTimestamp="2025-11-25 11:54:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 11:54:32.130741394 +0000 UTC m=+1081.045298775" watchObservedRunningTime="2025-11-25 11:54:32.135125614 +0000 UTC m=+1081.049682995" Nov 25 11:54:32 crc kubenswrapper[4706]: I1125 11:54:32.555523 4706 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-northd-0" Nov 25 11:54:33 crc kubenswrapper[4706]: I1125 11:54:33.110463 4706 generic.go:334] "Generic (PLEG): container finished" 
podID="a6560bf6-0b62-465f-b3ef-f762b5eac76a" containerID="371f657ce63d0845cb468e81e285d773fc879c04e084353cb247f4bd6451f9e0" exitCode=0 Nov 25 11:54:33 crc kubenswrapper[4706]: I1125 11:54:33.110543 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-745c-account-create-khc42" event={"ID":"a6560bf6-0b62-465f-b3ef-f762b5eac76a","Type":"ContainerDied","Data":"371f657ce63d0845cb468e81e285d773fc879c04e084353cb247f4bd6451f9e0"} Nov 25 11:54:33 crc kubenswrapper[4706]: I1125 11:54:33.112692 4706 generic.go:334] "Generic (PLEG): container finished" podID="696b1c53-9d80-42b1-bc7d-4699620c019a" containerID="b629c01e730bfb5919089131041fb4c64e0ce2e075ff2dbd6f5e5c35d450ba7f" exitCode=0 Nov 25 11:54:33 crc kubenswrapper[4706]: I1125 11:54:33.112746 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-mjpth" event={"ID":"696b1c53-9d80-42b1-bc7d-4699620c019a","Type":"ContainerDied","Data":"b629c01e730bfb5919089131041fb4c64e0ce2e075ff2dbd6f5e5c35d450ba7f"} Nov 25 11:54:33 crc kubenswrapper[4706]: I1125 11:54:33.114925 4706 generic.go:334] "Generic (PLEG): container finished" podID="244a9875-4efd-40a6-8f29-745b385b516d" containerID="4b05b750bd5e156e6419d06cfe9cccb45d24544adbd9d61912c0314da0e76c0a" exitCode=0 Nov 25 11:54:33 crc kubenswrapper[4706]: I1125 11:54:33.114982 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-0708-account-create-vmb99" event={"ID":"244a9875-4efd-40a6-8f29-745b385b516d","Type":"ContainerDied","Data":"4b05b750bd5e156e6419d06cfe9cccb45d24544adbd9d61912c0314da0e76c0a"} Nov 25 11:54:33 crc kubenswrapper[4706]: I1125 11:54:33.116479 4706 generic.go:334] "Generic (PLEG): container finished" podID="a4f78f8e-f722-4335-8421-35d52edc3181" containerID="f8f41500e05a3bb352954658e334fa9564af44a52176845951cf369e98ab2dfc" exitCode=0 Nov 25 11:54:33 crc kubenswrapper[4706]: I1125 11:54:33.116528 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/placement-db-create-fvcgj" event={"ID":"a4f78f8e-f722-4335-8421-35d52edc3181","Type":"ContainerDied","Data":"f8f41500e05a3bb352954658e334fa9564af44a52176845951cf369e98ab2dfc"} Nov 25 11:54:33 crc kubenswrapper[4706]: I1125 11:54:33.310554 4706 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-698758b865-vjh52" Nov 25 11:54:33 crc kubenswrapper[4706]: I1125 11:54:33.399572 4706 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-zzvxf"] Nov 25 11:54:33 crc kubenswrapper[4706]: I1125 11:54:33.399839 4706 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-86db49b7ff-zzvxf" podUID="1ed007e8-82f1-4ff7-9f34-ce6656e77cfb" containerName="dnsmasq-dns" containerID="cri-o://5b5db715ec28bff8f551a921a98c6c811d7b69b01abba5fe68fa9717d1e20bb2" gracePeriod=10 Nov 25 11:54:33 crc kubenswrapper[4706]: I1125 11:54:33.917920 4706 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-86db49b7ff-zzvxf" Nov 25 11:54:34 crc kubenswrapper[4706]: I1125 11:54:34.022947 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rj4lp\" (UniqueName: \"kubernetes.io/projected/1ed007e8-82f1-4ff7-9f34-ce6656e77cfb-kube-api-access-rj4lp\") pod \"1ed007e8-82f1-4ff7-9f34-ce6656e77cfb\" (UID: \"1ed007e8-82f1-4ff7-9f34-ce6656e77cfb\") " Nov 25 11:54:34 crc kubenswrapper[4706]: I1125 11:54:34.023034 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1ed007e8-82f1-4ff7-9f34-ce6656e77cfb-ovsdbserver-sb\") pod \"1ed007e8-82f1-4ff7-9f34-ce6656e77cfb\" (UID: \"1ed007e8-82f1-4ff7-9f34-ce6656e77cfb\") " Nov 25 11:54:34 crc kubenswrapper[4706]: I1125 11:54:34.023077 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1ed007e8-82f1-4ff7-9f34-ce6656e77cfb-dns-svc\") pod \"1ed007e8-82f1-4ff7-9f34-ce6656e77cfb\" (UID: \"1ed007e8-82f1-4ff7-9f34-ce6656e77cfb\") " Nov 25 11:54:34 crc kubenswrapper[4706]: I1125 11:54:34.023158 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1ed007e8-82f1-4ff7-9f34-ce6656e77cfb-ovsdbserver-nb\") pod \"1ed007e8-82f1-4ff7-9f34-ce6656e77cfb\" (UID: \"1ed007e8-82f1-4ff7-9f34-ce6656e77cfb\") " Nov 25 11:54:34 crc kubenswrapper[4706]: I1125 11:54:34.023181 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1ed007e8-82f1-4ff7-9f34-ce6656e77cfb-config\") pod \"1ed007e8-82f1-4ff7-9f34-ce6656e77cfb\" (UID: \"1ed007e8-82f1-4ff7-9f34-ce6656e77cfb\") " Nov 25 11:54:34 crc kubenswrapper[4706]: I1125 11:54:34.028856 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/1ed007e8-82f1-4ff7-9f34-ce6656e77cfb-kube-api-access-rj4lp" (OuterVolumeSpecName: "kube-api-access-rj4lp") pod "1ed007e8-82f1-4ff7-9f34-ce6656e77cfb" (UID: "1ed007e8-82f1-4ff7-9f34-ce6656e77cfb"). InnerVolumeSpecName "kube-api-access-rj4lp". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 11:54:34 crc kubenswrapper[4706]: I1125 11:54:34.067754 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1ed007e8-82f1-4ff7-9f34-ce6656e77cfb-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "1ed007e8-82f1-4ff7-9f34-ce6656e77cfb" (UID: "1ed007e8-82f1-4ff7-9f34-ce6656e77cfb"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 11:54:34 crc kubenswrapper[4706]: I1125 11:54:34.076365 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1ed007e8-82f1-4ff7-9f34-ce6656e77cfb-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "1ed007e8-82f1-4ff7-9f34-ce6656e77cfb" (UID: "1ed007e8-82f1-4ff7-9f34-ce6656e77cfb"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 11:54:34 crc kubenswrapper[4706]: I1125 11:54:34.079634 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1ed007e8-82f1-4ff7-9f34-ce6656e77cfb-config" (OuterVolumeSpecName: "config") pod "1ed007e8-82f1-4ff7-9f34-ce6656e77cfb" (UID: "1ed007e8-82f1-4ff7-9f34-ce6656e77cfb"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 11:54:34 crc kubenswrapper[4706]: I1125 11:54:34.081370 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1ed007e8-82f1-4ff7-9f34-ce6656e77cfb-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "1ed007e8-82f1-4ff7-9f34-ce6656e77cfb" (UID: "1ed007e8-82f1-4ff7-9f34-ce6656e77cfb"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 11:54:34 crc kubenswrapper[4706]: I1125 11:54:34.124648 4706 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1ed007e8-82f1-4ff7-9f34-ce6656e77cfb-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 25 11:54:34 crc kubenswrapper[4706]: I1125 11:54:34.124673 4706 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1ed007e8-82f1-4ff7-9f34-ce6656e77cfb-config\") on node \"crc\" DevicePath \"\"" Nov 25 11:54:34 crc kubenswrapper[4706]: I1125 11:54:34.124682 4706 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rj4lp\" (UniqueName: \"kubernetes.io/projected/1ed007e8-82f1-4ff7-9f34-ce6656e77cfb-kube-api-access-rj4lp\") on node \"crc\" DevicePath \"\"" Nov 25 11:54:34 crc kubenswrapper[4706]: I1125 11:54:34.124691 4706 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1ed007e8-82f1-4ff7-9f34-ce6656e77cfb-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 25 11:54:34 crc kubenswrapper[4706]: I1125 11:54:34.124701 4706 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1ed007e8-82f1-4ff7-9f34-ce6656e77cfb-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 25 11:54:34 crc kubenswrapper[4706]: I1125 11:54:34.131594 4706 generic.go:334] "Generic (PLEG): container finished" podID="1ed007e8-82f1-4ff7-9f34-ce6656e77cfb" containerID="5b5db715ec28bff8f551a921a98c6c811d7b69b01abba5fe68fa9717d1e20bb2" exitCode=0 Nov 25 11:54:34 crc kubenswrapper[4706]: I1125 11:54:34.131657 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-zzvxf" event={"ID":"1ed007e8-82f1-4ff7-9f34-ce6656e77cfb","Type":"ContainerDied","Data":"5b5db715ec28bff8f551a921a98c6c811d7b69b01abba5fe68fa9717d1e20bb2"} Nov 25 11:54:34 crc kubenswrapper[4706]: I1125 
11:54:34.131699 4706 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-86db49b7ff-zzvxf" Nov 25 11:54:34 crc kubenswrapper[4706]: I1125 11:54:34.131727 4706 scope.go:117] "RemoveContainer" containerID="5b5db715ec28bff8f551a921a98c6c811d7b69b01abba5fe68fa9717d1e20bb2" Nov 25 11:54:34 crc kubenswrapper[4706]: I1125 11:54:34.131713 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-zzvxf" event={"ID":"1ed007e8-82f1-4ff7-9f34-ce6656e77cfb","Type":"ContainerDied","Data":"7b367b2497056a6d1abb97e124931f5296f0dc47309265c76a40d056483f56c4"} Nov 25 11:54:34 crc kubenswrapper[4706]: I1125 11:54:34.182476 4706 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-zzvxf"] Nov 25 11:54:34 crc kubenswrapper[4706]: I1125 11:54:34.189501 4706 scope.go:117] "RemoveContainer" containerID="223b93109e8f853c659aabf71fd41a099f0f1663fbf920d5663a699b50dd8ae9" Nov 25 11:54:34 crc kubenswrapper[4706]: I1125 11:54:34.190138 4706 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-zzvxf"] Nov 25 11:54:34 crc kubenswrapper[4706]: I1125 11:54:34.221679 4706 scope.go:117] "RemoveContainer" containerID="5b5db715ec28bff8f551a921a98c6c811d7b69b01abba5fe68fa9717d1e20bb2" Nov 25 11:54:34 crc kubenswrapper[4706]: E1125 11:54:34.235050 4706 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5b5db715ec28bff8f551a921a98c6c811d7b69b01abba5fe68fa9717d1e20bb2\": container with ID starting with 5b5db715ec28bff8f551a921a98c6c811d7b69b01abba5fe68fa9717d1e20bb2 not found: ID does not exist" containerID="5b5db715ec28bff8f551a921a98c6c811d7b69b01abba5fe68fa9717d1e20bb2" Nov 25 11:54:34 crc kubenswrapper[4706]: I1125 11:54:34.235091 4706 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"5b5db715ec28bff8f551a921a98c6c811d7b69b01abba5fe68fa9717d1e20bb2"} err="failed to get container status \"5b5db715ec28bff8f551a921a98c6c811d7b69b01abba5fe68fa9717d1e20bb2\": rpc error: code = NotFound desc = could not find container \"5b5db715ec28bff8f551a921a98c6c811d7b69b01abba5fe68fa9717d1e20bb2\": container with ID starting with 5b5db715ec28bff8f551a921a98c6c811d7b69b01abba5fe68fa9717d1e20bb2 not found: ID does not exist" Nov 25 11:54:34 crc kubenswrapper[4706]: I1125 11:54:34.235123 4706 scope.go:117] "RemoveContainer" containerID="223b93109e8f853c659aabf71fd41a099f0f1663fbf920d5663a699b50dd8ae9" Nov 25 11:54:34 crc kubenswrapper[4706]: E1125 11:54:34.235473 4706 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"223b93109e8f853c659aabf71fd41a099f0f1663fbf920d5663a699b50dd8ae9\": container with ID starting with 223b93109e8f853c659aabf71fd41a099f0f1663fbf920d5663a699b50dd8ae9 not found: ID does not exist" containerID="223b93109e8f853c659aabf71fd41a099f0f1663fbf920d5663a699b50dd8ae9" Nov 25 11:54:34 crc kubenswrapper[4706]: I1125 11:54:34.235509 4706 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"223b93109e8f853c659aabf71fd41a099f0f1663fbf920d5663a699b50dd8ae9"} err="failed to get container status \"223b93109e8f853c659aabf71fd41a099f0f1663fbf920d5663a699b50dd8ae9\": rpc error: code = NotFound desc = could not find container \"223b93109e8f853c659aabf71fd41a099f0f1663fbf920d5663a699b50dd8ae9\": container with ID starting with 223b93109e8f853c659aabf71fd41a099f0f1663fbf920d5663a699b50dd8ae9 not found: ID does not exist" Nov 25 11:54:34 crc kubenswrapper[4706]: I1125 11:54:34.458106 4706 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-0708-account-create-vmb99" Nov 25 11:54:34 crc kubenswrapper[4706]: I1125 11:54:34.529939 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/244a9875-4efd-40a6-8f29-745b385b516d-operator-scripts\") pod \"244a9875-4efd-40a6-8f29-745b385b516d\" (UID: \"244a9875-4efd-40a6-8f29-745b385b516d\") " Nov 25 11:54:34 crc kubenswrapper[4706]: I1125 11:54:34.530010 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qww2s\" (UniqueName: \"kubernetes.io/projected/244a9875-4efd-40a6-8f29-745b385b516d-kube-api-access-qww2s\") pod \"244a9875-4efd-40a6-8f29-745b385b516d\" (UID: \"244a9875-4efd-40a6-8f29-745b385b516d\") " Nov 25 11:54:34 crc kubenswrapper[4706]: I1125 11:54:34.532090 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/244a9875-4efd-40a6-8f29-745b385b516d-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "244a9875-4efd-40a6-8f29-745b385b516d" (UID: "244a9875-4efd-40a6-8f29-745b385b516d"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 11:54:34 crc kubenswrapper[4706]: I1125 11:54:34.535351 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/244a9875-4efd-40a6-8f29-745b385b516d-kube-api-access-qww2s" (OuterVolumeSpecName: "kube-api-access-qww2s") pod "244a9875-4efd-40a6-8f29-745b385b516d" (UID: "244a9875-4efd-40a6-8f29-745b385b516d"). InnerVolumeSpecName "kube-api-access-qww2s". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 11:54:34 crc kubenswrapper[4706]: I1125 11:54:34.632529 4706 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/244a9875-4efd-40a6-8f29-745b385b516d-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 25 11:54:34 crc kubenswrapper[4706]: I1125 11:54:34.632566 4706 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qww2s\" (UniqueName: \"kubernetes.io/projected/244a9875-4efd-40a6-8f29-745b385b516d-kube-api-access-qww2s\") on node \"crc\" DevicePath \"\"" Nov 25 11:54:34 crc kubenswrapper[4706]: I1125 11:54:34.634353 4706 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-fvcgj" Nov 25 11:54:34 crc kubenswrapper[4706]: I1125 11:54:34.710844 4706 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-mjpth" Nov 25 11:54:34 crc kubenswrapper[4706]: I1125 11:54:34.716997 4706 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-745c-account-create-khc42" Nov 25 11:54:34 crc kubenswrapper[4706]: I1125 11:54:34.733851 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a4f78f8e-f722-4335-8421-35d52edc3181-operator-scripts\") pod \"a4f78f8e-f722-4335-8421-35d52edc3181\" (UID: \"a4f78f8e-f722-4335-8421-35d52edc3181\") " Nov 25 11:54:34 crc kubenswrapper[4706]: I1125 11:54:34.733891 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s9l8q\" (UniqueName: \"kubernetes.io/projected/696b1c53-9d80-42b1-bc7d-4699620c019a-kube-api-access-s9l8q\") pod \"696b1c53-9d80-42b1-bc7d-4699620c019a\" (UID: \"696b1c53-9d80-42b1-bc7d-4699620c019a\") " Nov 25 11:54:34 crc kubenswrapper[4706]: I1125 11:54:34.734058 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/696b1c53-9d80-42b1-bc7d-4699620c019a-operator-scripts\") pod \"696b1c53-9d80-42b1-bc7d-4699620c019a\" (UID: \"696b1c53-9d80-42b1-bc7d-4699620c019a\") " Nov 25 11:54:34 crc kubenswrapper[4706]: I1125 11:54:34.734123 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mnb2s\" (UniqueName: \"kubernetes.io/projected/a6560bf6-0b62-465f-b3ef-f762b5eac76a-kube-api-access-mnb2s\") pod \"a6560bf6-0b62-465f-b3ef-f762b5eac76a\" (UID: \"a6560bf6-0b62-465f-b3ef-f762b5eac76a\") " Nov 25 11:54:34 crc kubenswrapper[4706]: I1125 11:54:34.734160 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a6560bf6-0b62-465f-b3ef-f762b5eac76a-operator-scripts\") pod \"a6560bf6-0b62-465f-b3ef-f762b5eac76a\" (UID: \"a6560bf6-0b62-465f-b3ef-f762b5eac76a\") " Nov 25 11:54:34 crc kubenswrapper[4706]: I1125 11:54:34.734178 4706 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access-trqc2\" (UniqueName: \"kubernetes.io/projected/a4f78f8e-f722-4335-8421-35d52edc3181-kube-api-access-trqc2\") pod \"a4f78f8e-f722-4335-8421-35d52edc3181\" (UID: \"a4f78f8e-f722-4335-8421-35d52edc3181\") " Nov 25 11:54:34 crc kubenswrapper[4706]: I1125 11:54:34.734760 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/696b1c53-9d80-42b1-bc7d-4699620c019a-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "696b1c53-9d80-42b1-bc7d-4699620c019a" (UID: "696b1c53-9d80-42b1-bc7d-4699620c019a"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 11:54:34 crc kubenswrapper[4706]: I1125 11:54:34.734808 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a6560bf6-0b62-465f-b3ef-f762b5eac76a-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "a6560bf6-0b62-465f-b3ef-f762b5eac76a" (UID: "a6560bf6-0b62-465f-b3ef-f762b5eac76a"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 11:54:34 crc kubenswrapper[4706]: I1125 11:54:34.735490 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a4f78f8e-f722-4335-8421-35d52edc3181-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "a4f78f8e-f722-4335-8421-35d52edc3181" (UID: "a4f78f8e-f722-4335-8421-35d52edc3181"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 11:54:34 crc kubenswrapper[4706]: I1125 11:54:34.737672 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/696b1c53-9d80-42b1-bc7d-4699620c019a-kube-api-access-s9l8q" (OuterVolumeSpecName: "kube-api-access-s9l8q") pod "696b1c53-9d80-42b1-bc7d-4699620c019a" (UID: "696b1c53-9d80-42b1-bc7d-4699620c019a"). 
InnerVolumeSpecName "kube-api-access-s9l8q". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 11:54:34 crc kubenswrapper[4706]: I1125 11:54:34.737742 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a4f78f8e-f722-4335-8421-35d52edc3181-kube-api-access-trqc2" (OuterVolumeSpecName: "kube-api-access-trqc2") pod "a4f78f8e-f722-4335-8421-35d52edc3181" (UID: "a4f78f8e-f722-4335-8421-35d52edc3181"). InnerVolumeSpecName "kube-api-access-trqc2". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 11:54:34 crc kubenswrapper[4706]: I1125 11:54:34.739852 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a6560bf6-0b62-465f-b3ef-f762b5eac76a-kube-api-access-mnb2s" (OuterVolumeSpecName: "kube-api-access-mnb2s") pod "a6560bf6-0b62-465f-b3ef-f762b5eac76a" (UID: "a6560bf6-0b62-465f-b3ef-f762b5eac76a"). InnerVolumeSpecName "kube-api-access-mnb2s". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 11:54:34 crc kubenswrapper[4706]: I1125 11:54:34.835703 4706 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/696b1c53-9d80-42b1-bc7d-4699620c019a-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 25 11:54:34 crc kubenswrapper[4706]: I1125 11:54:34.835741 4706 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mnb2s\" (UniqueName: \"kubernetes.io/projected/a6560bf6-0b62-465f-b3ef-f762b5eac76a-kube-api-access-mnb2s\") on node \"crc\" DevicePath \"\"" Nov 25 11:54:34 crc kubenswrapper[4706]: I1125 11:54:34.835753 4706 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a6560bf6-0b62-465f-b3ef-f762b5eac76a-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 25 11:54:34 crc kubenswrapper[4706]: I1125 11:54:34.835765 4706 reconciler_common.go:293] "Volume detached for volume 
\"kube-api-access-trqc2\" (UniqueName: \"kubernetes.io/projected/a4f78f8e-f722-4335-8421-35d52edc3181-kube-api-access-trqc2\") on node \"crc\" DevicePath \"\"" Nov 25 11:54:34 crc kubenswrapper[4706]: I1125 11:54:34.835774 4706 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a4f78f8e-f722-4335-8421-35d52edc3181-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 25 11:54:34 crc kubenswrapper[4706]: I1125 11:54:34.835782 4706 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s9l8q\" (UniqueName: \"kubernetes.io/projected/696b1c53-9d80-42b1-bc7d-4699620c019a-kube-api-access-s9l8q\") on node \"crc\" DevicePath \"\"" Nov 25 11:54:35 crc kubenswrapper[4706]: I1125 11:54:35.150812 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-745c-account-create-khc42" event={"ID":"a6560bf6-0b62-465f-b3ef-f762b5eac76a","Type":"ContainerDied","Data":"72cd86714a319f00fadfcf80fbb0563b909ef628d58027f2c07f144150a2d6a8"} Nov 25 11:54:35 crc kubenswrapper[4706]: I1125 11:54:35.150852 4706 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-745c-account-create-khc42" Nov 25 11:54:35 crc kubenswrapper[4706]: I1125 11:54:35.150873 4706 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="72cd86714a319f00fadfcf80fbb0563b909ef628d58027f2c07f144150a2d6a8" Nov 25 11:54:35 crc kubenswrapper[4706]: I1125 11:54:35.153385 4706 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-create-mjpth" Nov 25 11:54:35 crc kubenswrapper[4706]: I1125 11:54:35.153431 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-mjpth" event={"ID":"696b1c53-9d80-42b1-bc7d-4699620c019a","Type":"ContainerDied","Data":"7a876852781bdaff07d06a4b7e32ba1f16d1ca894e61b11ba9947d79d19bdeb8"} Nov 25 11:54:35 crc kubenswrapper[4706]: I1125 11:54:35.153490 4706 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7a876852781bdaff07d06a4b7e32ba1f16d1ca894e61b11ba9947d79d19bdeb8" Nov 25 11:54:35 crc kubenswrapper[4706]: I1125 11:54:35.154782 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-0708-account-create-vmb99" event={"ID":"244a9875-4efd-40a6-8f29-745b385b516d","Type":"ContainerDied","Data":"6e299c53727f779993b5713d17548b49b7413f5631eb8e85e66693784d910923"} Nov 25 11:54:35 crc kubenswrapper[4706]: I1125 11:54:35.154841 4706 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6e299c53727f779993b5713d17548b49b7413f5631eb8e85e66693784d910923" Nov 25 11:54:35 crc kubenswrapper[4706]: I1125 11:54:35.154954 4706 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-0708-account-create-vmb99" Nov 25 11:54:35 crc kubenswrapper[4706]: I1125 11:54:35.156625 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-fvcgj" event={"ID":"a4f78f8e-f722-4335-8421-35d52edc3181","Type":"ContainerDied","Data":"96f2e0e13def230b0d673ded472858f6c3e5a990d9c17ecc4b648a35475d99f2"} Nov 25 11:54:35 crc kubenswrapper[4706]: I1125 11:54:35.156676 4706 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="96f2e0e13def230b0d673ded472858f6c3e5a990d9c17ecc4b648a35475d99f2" Nov 25 11:54:35 crc kubenswrapper[4706]: I1125 11:54:35.156688 4706 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-create-fvcgj" Nov 25 11:54:35 crc kubenswrapper[4706]: I1125 11:54:35.933744 4706 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1ed007e8-82f1-4ff7-9f34-ce6656e77cfb" path="/var/lib/kubelet/pods/1ed007e8-82f1-4ff7-9f34-ce6656e77cfb/volumes" Nov 25 11:54:36 crc kubenswrapper[4706]: I1125 11:54:36.378497 4706 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-sync-v7ftf"] Nov 25 11:54:36 crc kubenswrapper[4706]: E1125 11:54:36.378944 4706 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="696b1c53-9d80-42b1-bc7d-4699620c019a" containerName="mariadb-database-create" Nov 25 11:54:36 crc kubenswrapper[4706]: I1125 11:54:36.378963 4706 state_mem.go:107] "Deleted CPUSet assignment" podUID="696b1c53-9d80-42b1-bc7d-4699620c019a" containerName="mariadb-database-create" Nov 25 11:54:36 crc kubenswrapper[4706]: E1125 11:54:36.378974 4706 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="244a9875-4efd-40a6-8f29-745b385b516d" containerName="mariadb-account-create" Nov 25 11:54:36 crc kubenswrapper[4706]: I1125 11:54:36.378981 4706 state_mem.go:107] "Deleted CPUSet assignment" podUID="244a9875-4efd-40a6-8f29-745b385b516d" containerName="mariadb-account-create" Nov 25 11:54:36 crc kubenswrapper[4706]: E1125 11:54:36.379000 4706 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1ed007e8-82f1-4ff7-9f34-ce6656e77cfb" containerName="init" Nov 25 11:54:36 crc kubenswrapper[4706]: I1125 11:54:36.379009 4706 state_mem.go:107] "Deleted CPUSet assignment" podUID="1ed007e8-82f1-4ff7-9f34-ce6656e77cfb" containerName="init" Nov 25 11:54:36 crc kubenswrapper[4706]: E1125 11:54:36.379033 4706 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a4f78f8e-f722-4335-8421-35d52edc3181" containerName="mariadb-database-create" Nov 25 11:54:36 crc kubenswrapper[4706]: I1125 11:54:36.379041 4706 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="a4f78f8e-f722-4335-8421-35d52edc3181" containerName="mariadb-database-create" Nov 25 11:54:36 crc kubenswrapper[4706]: E1125 11:54:36.379067 4706 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1ed007e8-82f1-4ff7-9f34-ce6656e77cfb" containerName="dnsmasq-dns" Nov 25 11:54:36 crc kubenswrapper[4706]: I1125 11:54:36.379076 4706 state_mem.go:107] "Deleted CPUSet assignment" podUID="1ed007e8-82f1-4ff7-9f34-ce6656e77cfb" containerName="dnsmasq-dns" Nov 25 11:54:36 crc kubenswrapper[4706]: E1125 11:54:36.379087 4706 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a6560bf6-0b62-465f-b3ef-f762b5eac76a" containerName="mariadb-account-create" Nov 25 11:54:36 crc kubenswrapper[4706]: I1125 11:54:36.379094 4706 state_mem.go:107] "Deleted CPUSet assignment" podUID="a6560bf6-0b62-465f-b3ef-f762b5eac76a" containerName="mariadb-account-create" Nov 25 11:54:36 crc kubenswrapper[4706]: I1125 11:54:36.379323 4706 memory_manager.go:354] "RemoveStaleState removing state" podUID="244a9875-4efd-40a6-8f29-745b385b516d" containerName="mariadb-account-create" Nov 25 11:54:36 crc kubenswrapper[4706]: I1125 11:54:36.379345 4706 memory_manager.go:354] "RemoveStaleState removing state" podUID="a4f78f8e-f722-4335-8421-35d52edc3181" containerName="mariadb-database-create" Nov 25 11:54:36 crc kubenswrapper[4706]: I1125 11:54:36.379362 4706 memory_manager.go:354] "RemoveStaleState removing state" podUID="1ed007e8-82f1-4ff7-9f34-ce6656e77cfb" containerName="dnsmasq-dns" Nov 25 11:54:36 crc kubenswrapper[4706]: I1125 11:54:36.379370 4706 memory_manager.go:354] "RemoveStaleState removing state" podUID="a6560bf6-0b62-465f-b3ef-f762b5eac76a" containerName="mariadb-account-create" Nov 25 11:54:36 crc kubenswrapper[4706]: I1125 11:54:36.379385 4706 memory_manager.go:354] "RemoveStaleState removing state" podUID="696b1c53-9d80-42b1-bc7d-4699620c019a" containerName="mariadb-database-create" Nov 25 11:54:36 crc kubenswrapper[4706]: I1125 11:54:36.380062 4706 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-v7ftf" Nov 25 11:54:36 crc kubenswrapper[4706]: I1125 11:54:36.381993 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-config-data" Nov 25 11:54:36 crc kubenswrapper[4706]: I1125 11:54:36.382074 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-lblxg" Nov 25 11:54:36 crc kubenswrapper[4706]: I1125 11:54:36.391685 4706 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-v7ftf"] Nov 25 11:54:36 crc kubenswrapper[4706]: I1125 11:54:36.563932 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tftqf\" (UniqueName: \"kubernetes.io/projected/a3c43e2c-68e2-4f5d-8c64-c9028a967f7f-kube-api-access-tftqf\") pod \"glance-db-sync-v7ftf\" (UID: \"a3c43e2c-68e2-4f5d-8c64-c9028a967f7f\") " pod="openstack/glance-db-sync-v7ftf" Nov 25 11:54:36 crc kubenswrapper[4706]: I1125 11:54:36.564012 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a3c43e2c-68e2-4f5d-8c64-c9028a967f7f-combined-ca-bundle\") pod \"glance-db-sync-v7ftf\" (UID: \"a3c43e2c-68e2-4f5d-8c64-c9028a967f7f\") " pod="openstack/glance-db-sync-v7ftf" Nov 25 11:54:36 crc kubenswrapper[4706]: I1125 11:54:36.564043 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a3c43e2c-68e2-4f5d-8c64-c9028a967f7f-config-data\") pod \"glance-db-sync-v7ftf\" (UID: \"a3c43e2c-68e2-4f5d-8c64-c9028a967f7f\") " pod="openstack/glance-db-sync-v7ftf" Nov 25 11:54:36 crc kubenswrapper[4706]: I1125 11:54:36.564145 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: 
\"kubernetes.io/secret/a3c43e2c-68e2-4f5d-8c64-c9028a967f7f-db-sync-config-data\") pod \"glance-db-sync-v7ftf\" (UID: \"a3c43e2c-68e2-4f5d-8c64-c9028a967f7f\") " pod="openstack/glance-db-sync-v7ftf" Nov 25 11:54:36 crc kubenswrapper[4706]: I1125 11:54:36.665600 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a3c43e2c-68e2-4f5d-8c64-c9028a967f7f-combined-ca-bundle\") pod \"glance-db-sync-v7ftf\" (UID: \"a3c43e2c-68e2-4f5d-8c64-c9028a967f7f\") " pod="openstack/glance-db-sync-v7ftf" Nov 25 11:54:36 crc kubenswrapper[4706]: I1125 11:54:36.665668 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a3c43e2c-68e2-4f5d-8c64-c9028a967f7f-config-data\") pod \"glance-db-sync-v7ftf\" (UID: \"a3c43e2c-68e2-4f5d-8c64-c9028a967f7f\") " pod="openstack/glance-db-sync-v7ftf" Nov 25 11:54:36 crc kubenswrapper[4706]: I1125 11:54:36.665757 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/a3c43e2c-68e2-4f5d-8c64-c9028a967f7f-db-sync-config-data\") pod \"glance-db-sync-v7ftf\" (UID: \"a3c43e2c-68e2-4f5d-8c64-c9028a967f7f\") " pod="openstack/glance-db-sync-v7ftf" Nov 25 11:54:36 crc kubenswrapper[4706]: I1125 11:54:36.665860 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tftqf\" (UniqueName: \"kubernetes.io/projected/a3c43e2c-68e2-4f5d-8c64-c9028a967f7f-kube-api-access-tftqf\") pod \"glance-db-sync-v7ftf\" (UID: \"a3c43e2c-68e2-4f5d-8c64-c9028a967f7f\") " pod="openstack/glance-db-sync-v7ftf" Nov 25 11:54:36 crc kubenswrapper[4706]: I1125 11:54:36.671432 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a3c43e2c-68e2-4f5d-8c64-c9028a967f7f-config-data\") pod \"glance-db-sync-v7ftf\" (UID: 
\"a3c43e2c-68e2-4f5d-8c64-c9028a967f7f\") " pod="openstack/glance-db-sync-v7ftf" Nov 25 11:54:36 crc kubenswrapper[4706]: I1125 11:54:36.671435 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/a3c43e2c-68e2-4f5d-8c64-c9028a967f7f-db-sync-config-data\") pod \"glance-db-sync-v7ftf\" (UID: \"a3c43e2c-68e2-4f5d-8c64-c9028a967f7f\") " pod="openstack/glance-db-sync-v7ftf" Nov 25 11:54:36 crc kubenswrapper[4706]: I1125 11:54:36.671744 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a3c43e2c-68e2-4f5d-8c64-c9028a967f7f-combined-ca-bundle\") pod \"glance-db-sync-v7ftf\" (UID: \"a3c43e2c-68e2-4f5d-8c64-c9028a967f7f\") " pod="openstack/glance-db-sync-v7ftf" Nov 25 11:54:36 crc kubenswrapper[4706]: I1125 11:54:36.701876 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tftqf\" (UniqueName: \"kubernetes.io/projected/a3c43e2c-68e2-4f5d-8c64-c9028a967f7f-kube-api-access-tftqf\") pod \"glance-db-sync-v7ftf\" (UID: \"a3c43e2c-68e2-4f5d-8c64-c9028a967f7f\") " pod="openstack/glance-db-sync-v7ftf" Nov 25 11:54:36 crc kubenswrapper[4706]: I1125 11:54:36.996135 4706 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-v7ftf" Nov 25 11:54:37 crc kubenswrapper[4706]: I1125 11:54:37.593072 4706 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-v7ftf"] Nov 25 11:54:38 crc kubenswrapper[4706]: I1125 11:54:38.179218 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-v7ftf" event={"ID":"a3c43e2c-68e2-4f5d-8c64-c9028a967f7f","Type":"ContainerStarted","Data":"d017ddd9caf33980c86f7f4640fd06612d1134e0f4dd84137a826defdc248b44"} Nov 25 11:54:38 crc kubenswrapper[4706]: I1125 11:54:38.181381 4706 generic.go:334] "Generic (PLEG): container finished" podID="687ee889-8ec7-4754-b45f-b0f087368a37" containerID="07428deb95abcd8ccbdb9fc568b237d8733354cf947a5e7717114e4f92a3b411" exitCode=0 Nov 25 11:54:38 crc kubenswrapper[4706]: I1125 11:54:38.181438 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-ww65d" event={"ID":"687ee889-8ec7-4754-b45f-b0f087368a37","Type":"ContainerDied","Data":"07428deb95abcd8ccbdb9fc568b237d8733354cf947a5e7717114e4f92a3b411"} Nov 25 11:54:38 crc kubenswrapper[4706]: I1125 11:54:38.183243 4706 generic.go:334] "Generic (PLEG): container finished" podID="ed6df424-6b86-44a1-8157-ca1f33167065" containerID="472e1a1470dd4c66501e097ee3e8181de9d16ed619b7ecc940dc21ed60c2dd09" exitCode=0 Nov 25 11:54:38 crc kubenswrapper[4706]: I1125 11:54:38.183321 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"ed6df424-6b86-44a1-8157-ca1f33167065","Type":"ContainerDied","Data":"472e1a1470dd4c66501e097ee3e8181de9d16ed619b7ecc940dc21ed60c2dd09"} Nov 25 11:54:39 crc kubenswrapper[4706]: I1125 11:54:39.193184 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"ed6df424-6b86-44a1-8157-ca1f33167065","Type":"ContainerStarted","Data":"83e7f28c12712a2bc4fe90ff43fdbec3e960bfd4432704e6835a237988fcf7c0"} Nov 25 11:54:39 crc kubenswrapper[4706]: I1125 
11:54:39.193664 4706 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Nov 25 11:54:39 crc kubenswrapper[4706]: I1125 11:54:39.194822 4706 generic.go:334] "Generic (PLEG): container finished" podID="557c84e6-ab5c-40c1-a3e1-68b513874f9b" containerID="e103b920c3e3166a3cec4818cbdc4804339d57762b5c16546942f4fc4d6c3c61" exitCode=0 Nov 25 11:54:39 crc kubenswrapper[4706]: I1125 11:54:39.194874 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"557c84e6-ab5c-40c1-a3e1-68b513874f9b","Type":"ContainerDied","Data":"e103b920c3e3166a3cec4818cbdc4804339d57762b5c16546942f4fc4d6c3c61"} Nov 25 11:54:39 crc kubenswrapper[4706]: I1125 11:54:39.238893 4706 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=53.874847991 podStartE2EDuration="1m3.238869383s" podCreationTimestamp="2025-11-25 11:53:36 +0000 UTC" firstStartedPulling="2025-11-25 11:53:53.018187007 +0000 UTC m=+1041.932744388" lastFinishedPulling="2025-11-25 11:54:02.382208399 +0000 UTC m=+1051.296765780" observedRunningTime="2025-11-25 11:54:39.220053339 +0000 UTC m=+1088.134610740" watchObservedRunningTime="2025-11-25 11:54:39.238869383 +0000 UTC m=+1088.153426764" Nov 25 11:54:39 crc kubenswrapper[4706]: I1125 11:54:39.585939 4706 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-ww65d" Nov 25 11:54:39 crc kubenswrapper[4706]: I1125 11:54:39.721720 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/687ee889-8ec7-4754-b45f-b0f087368a37-combined-ca-bundle\") pod \"687ee889-8ec7-4754-b45f-b0f087368a37\" (UID: \"687ee889-8ec7-4754-b45f-b0f087368a37\") " Nov 25 11:54:39 crc kubenswrapper[4706]: I1125 11:54:39.722016 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/687ee889-8ec7-4754-b45f-b0f087368a37-ring-data-devices\") pod \"687ee889-8ec7-4754-b45f-b0f087368a37\" (UID: \"687ee889-8ec7-4754-b45f-b0f087368a37\") " Nov 25 11:54:39 crc kubenswrapper[4706]: I1125 11:54:39.722051 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/687ee889-8ec7-4754-b45f-b0f087368a37-scripts\") pod \"687ee889-8ec7-4754-b45f-b0f087368a37\" (UID: \"687ee889-8ec7-4754-b45f-b0f087368a37\") " Nov 25 11:54:39 crc kubenswrapper[4706]: I1125 11:54:39.722069 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lk5hz\" (UniqueName: \"kubernetes.io/projected/687ee889-8ec7-4754-b45f-b0f087368a37-kube-api-access-lk5hz\") pod \"687ee889-8ec7-4754-b45f-b0f087368a37\" (UID: \"687ee889-8ec7-4754-b45f-b0f087368a37\") " Nov 25 11:54:39 crc kubenswrapper[4706]: I1125 11:54:39.722115 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/687ee889-8ec7-4754-b45f-b0f087368a37-etc-swift\") pod \"687ee889-8ec7-4754-b45f-b0f087368a37\" (UID: \"687ee889-8ec7-4754-b45f-b0f087368a37\") " Nov 25 11:54:39 crc kubenswrapper[4706]: I1125 11:54:39.722723 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/687ee889-8ec7-4754-b45f-b0f087368a37-ring-data-devices" (OuterVolumeSpecName: "ring-data-devices") pod "687ee889-8ec7-4754-b45f-b0f087368a37" (UID: "687ee889-8ec7-4754-b45f-b0f087368a37"). InnerVolumeSpecName "ring-data-devices". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 11:54:39 crc kubenswrapper[4706]: I1125 11:54:39.722775 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/687ee889-8ec7-4754-b45f-b0f087368a37-dispersionconf\") pod \"687ee889-8ec7-4754-b45f-b0f087368a37\" (UID: \"687ee889-8ec7-4754-b45f-b0f087368a37\") " Nov 25 11:54:39 crc kubenswrapper[4706]: I1125 11:54:39.722880 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/687ee889-8ec7-4754-b45f-b0f087368a37-swiftconf\") pod \"687ee889-8ec7-4754-b45f-b0f087368a37\" (UID: \"687ee889-8ec7-4754-b45f-b0f087368a37\") " Nov 25 11:54:39 crc kubenswrapper[4706]: I1125 11:54:39.723194 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/687ee889-8ec7-4754-b45f-b0f087368a37-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "687ee889-8ec7-4754-b45f-b0f087368a37" (UID: "687ee889-8ec7-4754-b45f-b0f087368a37"). InnerVolumeSpecName "etc-swift". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 11:54:39 crc kubenswrapper[4706]: I1125 11:54:39.723426 4706 reconciler_common.go:293] "Volume detached for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/687ee889-8ec7-4754-b45f-b0f087368a37-ring-data-devices\") on node \"crc\" DevicePath \"\"" Nov 25 11:54:39 crc kubenswrapper[4706]: I1125 11:54:39.727261 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/687ee889-8ec7-4754-b45f-b0f087368a37-kube-api-access-lk5hz" (OuterVolumeSpecName: "kube-api-access-lk5hz") pod "687ee889-8ec7-4754-b45f-b0f087368a37" (UID: "687ee889-8ec7-4754-b45f-b0f087368a37"). InnerVolumeSpecName "kube-api-access-lk5hz". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 11:54:39 crc kubenswrapper[4706]: I1125 11:54:39.732531 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/687ee889-8ec7-4754-b45f-b0f087368a37-dispersionconf" (OuterVolumeSpecName: "dispersionconf") pod "687ee889-8ec7-4754-b45f-b0f087368a37" (UID: "687ee889-8ec7-4754-b45f-b0f087368a37"). InnerVolumeSpecName "dispersionconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 11:54:39 crc kubenswrapper[4706]: I1125 11:54:39.768765 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/687ee889-8ec7-4754-b45f-b0f087368a37-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "687ee889-8ec7-4754-b45f-b0f087368a37" (UID: "687ee889-8ec7-4754-b45f-b0f087368a37"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 25 11:54:39 crc kubenswrapper[4706]: I1125 11:54:39.771588 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/687ee889-8ec7-4754-b45f-b0f087368a37-scripts" (OuterVolumeSpecName: "scripts") pod "687ee889-8ec7-4754-b45f-b0f087368a37" (UID: "687ee889-8ec7-4754-b45f-b0f087368a37"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 25 11:54:39 crc kubenswrapper[4706]: I1125 11:54:39.774118 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/687ee889-8ec7-4754-b45f-b0f087368a37-swiftconf" (OuterVolumeSpecName: "swiftconf") pod "687ee889-8ec7-4754-b45f-b0f087368a37" (UID: "687ee889-8ec7-4754-b45f-b0f087368a37"). InnerVolumeSpecName "swiftconf". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 25 11:54:39 crc kubenswrapper[4706]: I1125 11:54:39.825067 4706 reconciler_common.go:293] "Volume detached for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/687ee889-8ec7-4754-b45f-b0f087368a37-swiftconf\") on node \"crc\" DevicePath \"\""
Nov 25 11:54:39 crc kubenswrapper[4706]: I1125 11:54:39.825099 4706 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/687ee889-8ec7-4754-b45f-b0f087368a37-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Nov 25 11:54:39 crc kubenswrapper[4706]: I1125 11:54:39.825113 4706 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/687ee889-8ec7-4754-b45f-b0f087368a37-scripts\") on node \"crc\" DevicePath \"\""
Nov 25 11:54:39 crc kubenswrapper[4706]: I1125 11:54:39.825122 4706 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lk5hz\" (UniqueName: \"kubernetes.io/projected/687ee889-8ec7-4754-b45f-b0f087368a37-kube-api-access-lk5hz\") on node \"crc\" DevicePath \"\""
Nov 25 11:54:39 crc kubenswrapper[4706]: I1125 11:54:39.825130 4706 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/687ee889-8ec7-4754-b45f-b0f087368a37-etc-swift\") on node \"crc\" DevicePath \"\""
Nov 25 11:54:39 crc kubenswrapper[4706]: I1125 11:54:39.825140 4706 reconciler_common.go:293] "Volume detached for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/687ee889-8ec7-4754-b45f-b0f087368a37-dispersionconf\") on node \"crc\" DevicePath \"\""
Nov 25 11:54:40 crc kubenswrapper[4706]: I1125 11:54:40.028276 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/9225b01e-1067-47de-812a-d9be36adf9d0-etc-swift\") pod \"swift-storage-0\" (UID: \"9225b01e-1067-47de-812a-d9be36adf9d0\") " pod="openstack/swift-storage-0"
Nov 25 11:54:40 crc kubenswrapper[4706]: I1125 11:54:40.034874 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/9225b01e-1067-47de-812a-d9be36adf9d0-etc-swift\") pod \"swift-storage-0\" (UID: \"9225b01e-1067-47de-812a-d9be36adf9d0\") " pod="openstack/swift-storage-0"
Nov 25 11:54:40 crc kubenswrapper[4706]: I1125 11:54:40.149942 4706 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-storage-0"
Nov 25 11:54:40 crc kubenswrapper[4706]: I1125 11:54:40.206896 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"557c84e6-ab5c-40c1-a3e1-68b513874f9b","Type":"ContainerStarted","Data":"a0ce08dbe233b30e509c7b81643703135a7c2e986bc72e2ff04292a28c7dbbaf"}
Nov 25 11:54:40 crc kubenswrapper[4706]: I1125 11:54:40.209862 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-ww65d" event={"ID":"687ee889-8ec7-4754-b45f-b0f087368a37","Type":"ContainerDied","Data":"cc7dc98fe0784e2c44472ca815af81c209095d2ede683615e8556167536da016"}
Nov 25 11:54:40 crc kubenswrapper[4706]: I1125 11:54:40.209900 4706 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-ww65d"
Nov 25 11:54:40 crc kubenswrapper[4706]: I1125 11:54:40.209930 4706 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cc7dc98fe0784e2c44472ca815af81c209095d2ede683615e8556167536da016"
Nov 25 11:54:40 crc kubenswrapper[4706]: I1125 11:54:40.491336 4706 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"]
Nov 25 11:54:40 crc kubenswrapper[4706]: W1125 11:54:40.502661 4706 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9225b01e_1067_47de_812a_d9be36adf9d0.slice/crio-a0979632534b3afd5a45ad8aad7776aa2436192daa48d913b9e0f41f8f50595f WatchSource:0}: Error finding container a0979632534b3afd5a45ad8aad7776aa2436192daa48d913b9e0f41f8f50595f: Status 404 returned error can't find the container with id a0979632534b3afd5a45ad8aad7776aa2436192daa48d913b9e0f41f8f50595f
Nov 25 11:54:41 crc kubenswrapper[4706]: I1125 11:54:41.220452 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"9225b01e-1067-47de-812a-d9be36adf9d0","Type":"ContainerStarted","Data":"a0979632534b3afd5a45ad8aad7776aa2436192daa48d913b9e0f41f8f50595f"}
Nov 25 11:54:41 crc kubenswrapper[4706]: I1125 11:54:41.220522 4706 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0"
Nov 25 11:54:41 crc kubenswrapper[4706]: I1125 11:54:41.243389 4706 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=54.580199042 podStartE2EDuration="1m5.24336783s" podCreationTimestamp="2025-11-25 11:53:36 +0000 UTC" firstStartedPulling="2025-11-25 11:53:53.076504444 +0000 UTC m=+1041.991061825" lastFinishedPulling="2025-11-25 11:54:03.739673232 +0000 UTC m=+1052.654230613" observedRunningTime="2025-11-25 11:54:41.23898295 +0000 UTC m=+1090.153540351" watchObservedRunningTime="2025-11-25 11:54:41.24336783 +0000 UTC m=+1090.157925211"
Nov 25 11:54:41 crc kubenswrapper[4706]: I1125 11:54:41.475685 4706 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-kd65v" podUID="23b72526-ef77-4128-a880-6df46f5db440" containerName="ovn-controller" probeResult="failure" output=<
Nov 25 11:54:41 crc kubenswrapper[4706]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status
Nov 25 11:54:41 crc kubenswrapper[4706]: >
Nov 25 11:54:42 crc kubenswrapper[4706]: I1125 11:54:42.235466 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"9225b01e-1067-47de-812a-d9be36adf9d0","Type":"ContainerStarted","Data":"9330e0cf761d191899fa9d3c2cc2e1bfb76996fb1ccfd0ee9123914e7a7e86d4"}
Nov 25 11:54:42 crc kubenswrapper[4706]: I1125 11:54:42.235533 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"9225b01e-1067-47de-812a-d9be36adf9d0","Type":"ContainerStarted","Data":"4ddcdf5ed7b4879eb39768146f89b51a2e87428517b1a93cc037fbaa1ca6d7ef"}
Nov 25 11:54:43 crc kubenswrapper[4706]: I1125 11:54:43.265120 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"9225b01e-1067-47de-812a-d9be36adf9d0","Type":"ContainerStarted","Data":"8755073112d92023404b8e5a4fc6361272b36c1dae29075e36bd0e246354fa0d"}
Nov 25 11:54:43 crc kubenswrapper[4706]: I1125 11:54:43.265565 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"9225b01e-1067-47de-812a-d9be36adf9d0","Type":"ContainerStarted","Data":"f7da971cbe5ae4ca862b3d3cd8c9bf82ab2996d93f876bfd15e876ec7b1f1b30"}
Nov 25 11:54:46 crc kubenswrapper[4706]: I1125 11:54:46.457963 4706 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-kd65v" podUID="23b72526-ef77-4128-a880-6df46f5db440" containerName="ovn-controller" probeResult="failure" output=<
Nov 25 11:54:46 crc kubenswrapper[4706]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status
Nov 25 11:54:46 crc kubenswrapper[4706]: >
Nov 25 11:54:46 crc kubenswrapper[4706]: I1125 11:54:46.515764 4706 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-q8rmg"
Nov 25 11:54:46 crc kubenswrapper[4706]: I1125 11:54:46.517355 4706 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-q8rmg"
Nov 25 11:54:46 crc kubenswrapper[4706]: I1125 11:54:46.749857 4706 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-kd65v-config-kh2cp"]
Nov 25 11:54:46 crc kubenswrapper[4706]: E1125 11:54:46.750267 4706 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="687ee889-8ec7-4754-b45f-b0f087368a37" containerName="swift-ring-rebalance"
Nov 25 11:54:46 crc kubenswrapper[4706]: I1125 11:54:46.750290 4706 state_mem.go:107] "Deleted CPUSet assignment" podUID="687ee889-8ec7-4754-b45f-b0f087368a37" containerName="swift-ring-rebalance"
Nov 25 11:54:46 crc kubenswrapper[4706]: I1125 11:54:46.750509 4706 memory_manager.go:354] "RemoveStaleState removing state" podUID="687ee889-8ec7-4754-b45f-b0f087368a37" containerName="swift-ring-rebalance"
Nov 25 11:54:46 crc kubenswrapper[4706]: I1125 11:54:46.751276 4706 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-kd65v-config-kh2cp"
Nov 25 11:54:46 crc kubenswrapper[4706]: I1125 11:54:46.754884 4706 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts"
Nov 25 11:54:46 crc kubenswrapper[4706]: I1125 11:54:46.807637 4706 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-kd65v-config-kh2cp"]
Nov 25 11:54:46 crc kubenswrapper[4706]: I1125 11:54:46.850814 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/1144fe38-7b82-4670-b725-ddd132d03b53-var-run-ovn\") pod \"ovn-controller-kd65v-config-kh2cp\" (UID: \"1144fe38-7b82-4670-b725-ddd132d03b53\") " pod="openstack/ovn-controller-kd65v-config-kh2cp"
Nov 25 11:54:46 crc kubenswrapper[4706]: I1125 11:54:46.850864 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/1144fe38-7b82-4670-b725-ddd132d03b53-additional-scripts\") pod \"ovn-controller-kd65v-config-kh2cp\" (UID: \"1144fe38-7b82-4670-b725-ddd132d03b53\") " pod="openstack/ovn-controller-kd65v-config-kh2cp"
Nov 25 11:54:46 crc kubenswrapper[4706]: I1125 11:54:46.850893 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/1144fe38-7b82-4670-b725-ddd132d03b53-scripts\") pod \"ovn-controller-kd65v-config-kh2cp\" (UID: \"1144fe38-7b82-4670-b725-ddd132d03b53\") " pod="openstack/ovn-controller-kd65v-config-kh2cp"
Nov 25 11:54:46 crc kubenswrapper[4706]: I1125 11:54:46.850919 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/1144fe38-7b82-4670-b725-ddd132d03b53-var-log-ovn\") pod \"ovn-controller-kd65v-config-kh2cp\" (UID: \"1144fe38-7b82-4670-b725-ddd132d03b53\") " pod="openstack/ovn-controller-kd65v-config-kh2cp"
Nov 25 11:54:46 crc kubenswrapper[4706]: I1125 11:54:46.850939 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vd6b6\" (UniqueName: \"kubernetes.io/projected/1144fe38-7b82-4670-b725-ddd132d03b53-kube-api-access-vd6b6\") pod \"ovn-controller-kd65v-config-kh2cp\" (UID: \"1144fe38-7b82-4670-b725-ddd132d03b53\") " pod="openstack/ovn-controller-kd65v-config-kh2cp"
Nov 25 11:54:46 crc kubenswrapper[4706]: I1125 11:54:46.850955 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/1144fe38-7b82-4670-b725-ddd132d03b53-var-run\") pod \"ovn-controller-kd65v-config-kh2cp\" (UID: \"1144fe38-7b82-4670-b725-ddd132d03b53\") " pod="openstack/ovn-controller-kd65v-config-kh2cp"
Nov 25 11:54:46 crc kubenswrapper[4706]: I1125 11:54:46.955004 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/1144fe38-7b82-4670-b725-ddd132d03b53-var-log-ovn\") pod \"ovn-controller-kd65v-config-kh2cp\" (UID: \"1144fe38-7b82-4670-b725-ddd132d03b53\") " pod="openstack/ovn-controller-kd65v-config-kh2cp"
Nov 25 11:54:46 crc kubenswrapper[4706]: I1125 11:54:46.955082 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vd6b6\" (UniqueName: \"kubernetes.io/projected/1144fe38-7b82-4670-b725-ddd132d03b53-kube-api-access-vd6b6\") pod \"ovn-controller-kd65v-config-kh2cp\" (UID: \"1144fe38-7b82-4670-b725-ddd132d03b53\") " pod="openstack/ovn-controller-kd65v-config-kh2cp"
Nov 25 11:54:46 crc kubenswrapper[4706]: I1125 11:54:46.955100 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/1144fe38-7b82-4670-b725-ddd132d03b53-var-run\") pod \"ovn-controller-kd65v-config-kh2cp\" (UID: \"1144fe38-7b82-4670-b725-ddd132d03b53\") " pod="openstack/ovn-controller-kd65v-config-kh2cp"
Nov 25 11:54:46 crc kubenswrapper[4706]: I1125 11:54:46.955244 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/1144fe38-7b82-4670-b725-ddd132d03b53-var-run-ovn\") pod \"ovn-controller-kd65v-config-kh2cp\" (UID: \"1144fe38-7b82-4670-b725-ddd132d03b53\") " pod="openstack/ovn-controller-kd65v-config-kh2cp"
Nov 25 11:54:46 crc kubenswrapper[4706]: I1125 11:54:46.955275 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/1144fe38-7b82-4670-b725-ddd132d03b53-additional-scripts\") pod \"ovn-controller-kd65v-config-kh2cp\" (UID: \"1144fe38-7b82-4670-b725-ddd132d03b53\") " pod="openstack/ovn-controller-kd65v-config-kh2cp"
Nov 25 11:54:46 crc kubenswrapper[4706]: I1125 11:54:46.955339 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/1144fe38-7b82-4670-b725-ddd132d03b53-scripts\") pod \"ovn-controller-kd65v-config-kh2cp\" (UID: \"1144fe38-7b82-4670-b725-ddd132d03b53\") " pod="openstack/ovn-controller-kd65v-config-kh2cp"
Nov 25 11:54:46 crc kubenswrapper[4706]: I1125 11:54:46.955423 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/1144fe38-7b82-4670-b725-ddd132d03b53-var-log-ovn\") pod \"ovn-controller-kd65v-config-kh2cp\" (UID: \"1144fe38-7b82-4670-b725-ddd132d03b53\") " pod="openstack/ovn-controller-kd65v-config-kh2cp"
Nov 25 11:54:46 crc kubenswrapper[4706]: I1125 11:54:46.955443 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/1144fe38-7b82-4670-b725-ddd132d03b53-var-run\") pod \"ovn-controller-kd65v-config-kh2cp\" (UID: \"1144fe38-7b82-4670-b725-ddd132d03b53\") " pod="openstack/ovn-controller-kd65v-config-kh2cp"
Nov 25 11:54:46 crc kubenswrapper[4706]: I1125 11:54:46.955497 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/1144fe38-7b82-4670-b725-ddd132d03b53-var-run-ovn\") pod \"ovn-controller-kd65v-config-kh2cp\" (UID: \"1144fe38-7b82-4670-b725-ddd132d03b53\") " pod="openstack/ovn-controller-kd65v-config-kh2cp"
Nov 25 11:54:46 crc kubenswrapper[4706]: I1125 11:54:46.956237 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/1144fe38-7b82-4670-b725-ddd132d03b53-additional-scripts\") pod \"ovn-controller-kd65v-config-kh2cp\" (UID: \"1144fe38-7b82-4670-b725-ddd132d03b53\") " pod="openstack/ovn-controller-kd65v-config-kh2cp"
Nov 25 11:54:46 crc kubenswrapper[4706]: I1125 11:54:46.957408 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/1144fe38-7b82-4670-b725-ddd132d03b53-scripts\") pod \"ovn-controller-kd65v-config-kh2cp\" (UID: \"1144fe38-7b82-4670-b725-ddd132d03b53\") " pod="openstack/ovn-controller-kd65v-config-kh2cp"
Nov 25 11:54:46 crc kubenswrapper[4706]: I1125 11:54:46.975983 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vd6b6\" (UniqueName: \"kubernetes.io/projected/1144fe38-7b82-4670-b725-ddd132d03b53-kube-api-access-vd6b6\") pod \"ovn-controller-kd65v-config-kh2cp\" (UID: \"1144fe38-7b82-4670-b725-ddd132d03b53\") " pod="openstack/ovn-controller-kd65v-config-kh2cp"
Nov 25 11:54:47 crc kubenswrapper[4706]: I1125 11:54:47.076015 4706 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-kd65v-config-kh2cp"
Nov 25 11:54:49 crc kubenswrapper[4706]: I1125 11:54:49.495838 4706 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-kd65v-config-kh2cp"]
Nov 25 11:54:49 crc kubenswrapper[4706]: W1125 11:54:49.595050 4706 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1144fe38_7b82_4670_b725_ddd132d03b53.slice/crio-47d3b98b76a09ac86a61bb9ce5fb0f619367ee930fe74b84fa50e99570d358db WatchSource:0}: Error finding container 47d3b98b76a09ac86a61bb9ce5fb0f619367ee930fe74b84fa50e99570d358db: Status 404 returned error can't find the container with id 47d3b98b76a09ac86a61bb9ce5fb0f619367ee930fe74b84fa50e99570d358db
Nov 25 11:54:50 crc kubenswrapper[4706]: I1125 11:54:50.335576 4706 generic.go:334] "Generic (PLEG): container finished" podID="1144fe38-7b82-4670-b725-ddd132d03b53" containerID="6919539afd65d7c98d0e26d0af5427f4ff6e292aa53c8a23caeadcb070322f0d" exitCode=0
Nov 25 11:54:50 crc kubenswrapper[4706]: I1125 11:54:50.335760 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-kd65v-config-kh2cp" event={"ID":"1144fe38-7b82-4670-b725-ddd132d03b53","Type":"ContainerDied","Data":"6919539afd65d7c98d0e26d0af5427f4ff6e292aa53c8a23caeadcb070322f0d"}
Nov 25 11:54:50 crc kubenswrapper[4706]: I1125 11:54:50.335992 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-kd65v-config-kh2cp" event={"ID":"1144fe38-7b82-4670-b725-ddd132d03b53","Type":"ContainerStarted","Data":"47d3b98b76a09ac86a61bb9ce5fb0f619367ee930fe74b84fa50e99570d358db"}
Nov 25 11:54:50 crc kubenswrapper[4706]: I1125 11:54:50.340532 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"9225b01e-1067-47de-812a-d9be36adf9d0","Type":"ContainerStarted","Data":"339e7ef01141c64c30b728bf1b42d0ebd57fbe2d976f6b78708432f279a28f02"}
Nov 25 11:54:50 crc kubenswrapper[4706]: I1125 11:54:50.340566 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"9225b01e-1067-47de-812a-d9be36adf9d0","Type":"ContainerStarted","Data":"e3989a7080609dfa5579b5aae36871c2ad82d40742ca7b691bf2cbc785642087"}
Nov 25 11:54:50 crc kubenswrapper[4706]: I1125 11:54:50.340576 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"9225b01e-1067-47de-812a-d9be36adf9d0","Type":"ContainerStarted","Data":"8af8ab28c5dd4bf64c4fff983c342c93176b55846f8463aa086685b1645e1c83"}
Nov 25 11:54:50 crc kubenswrapper[4706]: I1125 11:54:50.340603 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"9225b01e-1067-47de-812a-d9be36adf9d0","Type":"ContainerStarted","Data":"33cc24107e0eb39d37f727a382a05157b7a25b4f588fd655e4037e891a84be21"}
Nov 25 11:54:50 crc kubenswrapper[4706]: I1125 11:54:50.342514 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-v7ftf" event={"ID":"a3c43e2c-68e2-4f5d-8c64-c9028a967f7f","Type":"ContainerStarted","Data":"66e6568b2e32dd6e98388f8f63cd51ba450fc0656a9e433cd5c1306c071ae803"}
Nov 25 11:54:50 crc kubenswrapper[4706]: I1125 11:54:50.373823 4706 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-db-sync-v7ftf" podStartSLOduration=2.815431888 podStartE2EDuration="14.373806674s" podCreationTimestamp="2025-11-25 11:54:36 +0000 UTC" firstStartedPulling="2025-11-25 11:54:37.60722211 +0000 UTC m=+1086.521779491" lastFinishedPulling="2025-11-25 11:54:49.165596896 +0000 UTC m=+1098.080154277" observedRunningTime="2025-11-25 11:54:50.372657855 +0000 UTC m=+1099.287215236" watchObservedRunningTime="2025-11-25 11:54:50.373806674 +0000 UTC m=+1099.288364055"
Nov 25 11:54:51 crc kubenswrapper[4706]: I1125 11:54:51.471250 4706 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-kd65v"
Nov 25 11:54:51 crc kubenswrapper[4706]: I1125 11:54:51.712085 4706 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-kd65v-config-kh2cp"
Nov 25 11:54:51 crc kubenswrapper[4706]: I1125 11:54:51.830770 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/1144fe38-7b82-4670-b725-ddd132d03b53-scripts\") pod \"1144fe38-7b82-4670-b725-ddd132d03b53\" (UID: \"1144fe38-7b82-4670-b725-ddd132d03b53\") "
Nov 25 11:54:51 crc kubenswrapper[4706]: I1125 11:54:51.830826 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/1144fe38-7b82-4670-b725-ddd132d03b53-var-run-ovn\") pod \"1144fe38-7b82-4670-b725-ddd132d03b53\" (UID: \"1144fe38-7b82-4670-b725-ddd132d03b53\") "
Nov 25 11:54:51 crc kubenswrapper[4706]: I1125 11:54:51.830875 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/1144fe38-7b82-4670-b725-ddd132d03b53-var-run\") pod \"1144fe38-7b82-4670-b725-ddd132d03b53\" (UID: \"1144fe38-7b82-4670-b725-ddd132d03b53\") "
Nov 25 11:54:51 crc kubenswrapper[4706]: I1125 11:54:51.830959 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vd6b6\" (UniqueName: \"kubernetes.io/projected/1144fe38-7b82-4670-b725-ddd132d03b53-kube-api-access-vd6b6\") pod \"1144fe38-7b82-4670-b725-ddd132d03b53\" (UID: \"1144fe38-7b82-4670-b725-ddd132d03b53\") "
Nov 25 11:54:51 crc kubenswrapper[4706]: I1125 11:54:51.831003 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/1144fe38-7b82-4670-b725-ddd132d03b53-additional-scripts\") pod \"1144fe38-7b82-4670-b725-ddd132d03b53\" (UID: \"1144fe38-7b82-4670-b725-ddd132d03b53\") "
Nov 25 11:54:51 crc kubenswrapper[4706]: I1125 11:54:51.831017 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/1144fe38-7b82-4670-b725-ddd132d03b53-var-log-ovn\") pod \"1144fe38-7b82-4670-b725-ddd132d03b53\" (UID: \"1144fe38-7b82-4670-b725-ddd132d03b53\") "
Nov 25 11:54:51 crc kubenswrapper[4706]: I1125 11:54:51.831078 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1144fe38-7b82-4670-b725-ddd132d03b53-var-run" (OuterVolumeSpecName: "var-run") pod "1144fe38-7b82-4670-b725-ddd132d03b53" (UID: "1144fe38-7b82-4670-b725-ddd132d03b53"). InnerVolumeSpecName "var-run". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Nov 25 11:54:51 crc kubenswrapper[4706]: I1125 11:54:51.831240 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1144fe38-7b82-4670-b725-ddd132d03b53-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "1144fe38-7b82-4670-b725-ddd132d03b53" (UID: "1144fe38-7b82-4670-b725-ddd132d03b53"). InnerVolumeSpecName "var-run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Nov 25 11:54:51 crc kubenswrapper[4706]: I1125 11:54:51.831272 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1144fe38-7b82-4670-b725-ddd132d03b53-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "1144fe38-7b82-4670-b725-ddd132d03b53" (UID: "1144fe38-7b82-4670-b725-ddd132d03b53"). InnerVolumeSpecName "var-log-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Nov 25 11:54:51 crc kubenswrapper[4706]: I1125 11:54:51.831409 4706 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/1144fe38-7b82-4670-b725-ddd132d03b53-var-run-ovn\") on node \"crc\" DevicePath \"\""
Nov 25 11:54:51 crc kubenswrapper[4706]: I1125 11:54:51.831421 4706 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/1144fe38-7b82-4670-b725-ddd132d03b53-var-run\") on node \"crc\" DevicePath \"\""
Nov 25 11:54:51 crc kubenswrapper[4706]: I1125 11:54:51.831429 4706 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/1144fe38-7b82-4670-b725-ddd132d03b53-var-log-ovn\") on node \"crc\" DevicePath \"\""
Nov 25 11:54:51 crc kubenswrapper[4706]: I1125 11:54:51.831829 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1144fe38-7b82-4670-b725-ddd132d03b53-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "1144fe38-7b82-4670-b725-ddd132d03b53" (UID: "1144fe38-7b82-4670-b725-ddd132d03b53"). InnerVolumeSpecName "additional-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 25 11:54:51 crc kubenswrapper[4706]: I1125 11:54:51.832005 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1144fe38-7b82-4670-b725-ddd132d03b53-scripts" (OuterVolumeSpecName: "scripts") pod "1144fe38-7b82-4670-b725-ddd132d03b53" (UID: "1144fe38-7b82-4670-b725-ddd132d03b53"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 25 11:54:51 crc kubenswrapper[4706]: I1125 11:54:51.838038 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1144fe38-7b82-4670-b725-ddd132d03b53-kube-api-access-vd6b6" (OuterVolumeSpecName: "kube-api-access-vd6b6") pod "1144fe38-7b82-4670-b725-ddd132d03b53" (UID: "1144fe38-7b82-4670-b725-ddd132d03b53"). InnerVolumeSpecName "kube-api-access-vd6b6". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 25 11:54:51 crc kubenswrapper[4706]: I1125 11:54:51.932701 4706 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vd6b6\" (UniqueName: \"kubernetes.io/projected/1144fe38-7b82-4670-b725-ddd132d03b53-kube-api-access-vd6b6\") on node \"crc\" DevicePath \"\""
Nov 25 11:54:51 crc kubenswrapper[4706]: I1125 11:54:51.932999 4706 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/1144fe38-7b82-4670-b725-ddd132d03b53-additional-scripts\") on node \"crc\" DevicePath \"\""
Nov 25 11:54:51 crc kubenswrapper[4706]: I1125 11:54:51.933012 4706 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/1144fe38-7b82-4670-b725-ddd132d03b53-scripts\") on node \"crc\" DevicePath \"\""
Nov 25 11:54:52 crc kubenswrapper[4706]: I1125 11:54:52.369186 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-kd65v-config-kh2cp" event={"ID":"1144fe38-7b82-4670-b725-ddd132d03b53","Type":"ContainerDied","Data":"47d3b98b76a09ac86a61bb9ce5fb0f619367ee930fe74b84fa50e99570d358db"}
Nov 25 11:54:52 crc kubenswrapper[4706]: I1125 11:54:52.369232 4706 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="47d3b98b76a09ac86a61bb9ce5fb0f619367ee930fe74b84fa50e99570d358db"
Nov 25 11:54:52 crc kubenswrapper[4706]: I1125 11:54:52.369313 4706 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-kd65v-config-kh2cp"
Nov 25 11:54:52 crc kubenswrapper[4706]: I1125 11:54:52.375002 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"9225b01e-1067-47de-812a-d9be36adf9d0","Type":"ContainerStarted","Data":"c5ad595bece15f78e4f788783fde40744bf6f4471d005dd827c529f8d149bd09"}
Nov 25 11:54:52 crc kubenswrapper[4706]: I1125 11:54:52.375059 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"9225b01e-1067-47de-812a-d9be36adf9d0","Type":"ContainerStarted","Data":"7cf53ff4839a32e88fb0151dba2edaa44ae1428dda0e0ec41ee36febd7712773"}
Nov 25 11:54:52 crc kubenswrapper[4706]: I1125 11:54:52.375084 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"9225b01e-1067-47de-812a-d9be36adf9d0","Type":"ContainerStarted","Data":"2110222566e7e922191bf7f955334ecc05ff304631ec51fab645cbf1aef65e8e"}
Nov 25 11:54:52 crc kubenswrapper[4706]: I1125 11:54:52.375095 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"9225b01e-1067-47de-812a-d9be36adf9d0","Type":"ContainerStarted","Data":"ca8451227320ce0a0b44bad08f692b686208d1de636a892f0d5714688b685bef"}
Nov 25 11:54:52 crc kubenswrapper[4706]: I1125 11:54:52.823518 4706 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-kd65v-config-kh2cp"]
Nov 25 11:54:52 crc kubenswrapper[4706]: I1125 11:54:52.831006 4706 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-kd65v-config-kh2cp"]
Nov 25 11:54:52 crc kubenswrapper[4706]: I1125 11:54:52.938728 4706 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-kd65v-config-v9ksn"]
Nov 25 11:54:52 crc kubenswrapper[4706]: E1125 11:54:52.939101 4706 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1144fe38-7b82-4670-b725-ddd132d03b53" containerName="ovn-config"
Nov 25 11:54:52 crc kubenswrapper[4706]: I1125 11:54:52.939122 4706 state_mem.go:107] "Deleted CPUSet assignment" podUID="1144fe38-7b82-4670-b725-ddd132d03b53" containerName="ovn-config"
Nov 25 11:54:52 crc kubenswrapper[4706]: I1125 11:54:52.939334 4706 memory_manager.go:354] "RemoveStaleState removing state" podUID="1144fe38-7b82-4670-b725-ddd132d03b53" containerName="ovn-config"
Nov 25 11:54:52 crc kubenswrapper[4706]: I1125 11:54:52.939851 4706 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-kd65v-config-v9ksn"
Nov 25 11:54:52 crc kubenswrapper[4706]: I1125 11:54:52.941570 4706 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts"
Nov 25 11:54:52 crc kubenswrapper[4706]: I1125 11:54:52.947931 4706 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-kd65v-config-v9ksn"]
Nov 25 11:54:53 crc kubenswrapper[4706]: I1125 11:54:53.047818 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/bd5a3831-54c7-4f01-913b-4eb3c086aef6-var-run-ovn\") pod \"ovn-controller-kd65v-config-v9ksn\" (UID: \"bd5a3831-54c7-4f01-913b-4eb3c086aef6\") " pod="openstack/ovn-controller-kd65v-config-v9ksn"
Nov 25 11:54:53 crc kubenswrapper[4706]: I1125 11:54:53.047899 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/bd5a3831-54c7-4f01-913b-4eb3c086aef6-additional-scripts\") pod \"ovn-controller-kd65v-config-v9ksn\" (UID: \"bd5a3831-54c7-4f01-913b-4eb3c086aef6\") " pod="openstack/ovn-controller-kd65v-config-v9ksn"
Nov 25 11:54:53 crc kubenswrapper[4706]: I1125 11:54:53.047917 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-82rk6\" (UniqueName: \"kubernetes.io/projected/bd5a3831-54c7-4f01-913b-4eb3c086aef6-kube-api-access-82rk6\") pod \"ovn-controller-kd65v-config-v9ksn\" (UID: \"bd5a3831-54c7-4f01-913b-4eb3c086aef6\") " pod="openstack/ovn-controller-kd65v-config-v9ksn"
Nov 25 11:54:53 crc kubenswrapper[4706]: I1125 11:54:53.047939 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/bd5a3831-54c7-4f01-913b-4eb3c086aef6-var-log-ovn\") pod \"ovn-controller-kd65v-config-v9ksn\" (UID: \"bd5a3831-54c7-4f01-913b-4eb3c086aef6\") " pod="openstack/ovn-controller-kd65v-config-v9ksn"
Nov 25 11:54:53 crc kubenswrapper[4706]: I1125 11:54:53.047986 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/bd5a3831-54c7-4f01-913b-4eb3c086aef6-scripts\") pod \"ovn-controller-kd65v-config-v9ksn\" (UID: \"bd5a3831-54c7-4f01-913b-4eb3c086aef6\") " pod="openstack/ovn-controller-kd65v-config-v9ksn"
Nov 25 11:54:53 crc kubenswrapper[4706]: I1125 11:54:53.048096 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/bd5a3831-54c7-4f01-913b-4eb3c086aef6-var-run\") pod \"ovn-controller-kd65v-config-v9ksn\" (UID: \"bd5a3831-54c7-4f01-913b-4eb3c086aef6\") " pod="openstack/ovn-controller-kd65v-config-v9ksn"
Nov 25 11:54:53 crc kubenswrapper[4706]: I1125 11:54:53.149535 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/bd5a3831-54c7-4f01-913b-4eb3c086aef6-scripts\") pod \"ovn-controller-kd65v-config-v9ksn\" (UID: \"bd5a3831-54c7-4f01-913b-4eb3c086aef6\") " pod="openstack/ovn-controller-kd65v-config-v9ksn"
Nov 25 11:54:53 crc kubenswrapper[4706]: I1125 11:54:53.149647 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/bd5a3831-54c7-4f01-913b-4eb3c086aef6-var-run\") pod \"ovn-controller-kd65v-config-v9ksn\" (UID: \"bd5a3831-54c7-4f01-913b-4eb3c086aef6\") " pod="openstack/ovn-controller-kd65v-config-v9ksn"
Nov 25 11:54:53 crc kubenswrapper[4706]: I1125 11:54:53.149744 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/bd5a3831-54c7-4f01-913b-4eb3c086aef6-var-run-ovn\") pod \"ovn-controller-kd65v-config-v9ksn\" (UID: \"bd5a3831-54c7-4f01-913b-4eb3c086aef6\") " pod="openstack/ovn-controller-kd65v-config-v9ksn"
Nov 25 11:54:53 crc kubenswrapper[4706]: I1125 11:54:53.149784 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/bd5a3831-54c7-4f01-913b-4eb3c086aef6-additional-scripts\") pod \"ovn-controller-kd65v-config-v9ksn\" (UID: \"bd5a3831-54c7-4f01-913b-4eb3c086aef6\") " pod="openstack/ovn-controller-kd65v-config-v9ksn"
Nov 25 11:54:53 crc kubenswrapper[4706]: I1125 11:54:53.149809 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-82rk6\" (UniqueName: \"kubernetes.io/projected/bd5a3831-54c7-4f01-913b-4eb3c086aef6-kube-api-access-82rk6\") pod \"ovn-controller-kd65v-config-v9ksn\" (UID: \"bd5a3831-54c7-4f01-913b-4eb3c086aef6\") " pod="openstack/ovn-controller-kd65v-config-v9ksn"
Nov 25 11:54:53 crc kubenswrapper[4706]: I1125 11:54:53.149835 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/bd5a3831-54c7-4f01-913b-4eb3c086aef6-var-log-ovn\") pod \"ovn-controller-kd65v-config-v9ksn\" (UID: \"bd5a3831-54c7-4f01-913b-4eb3c086aef6\") " pod="openstack/ovn-controller-kd65v-config-v9ksn"
Nov 25 11:54:53 crc kubenswrapper[4706]: I1125 11:54:53.150190 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName:
\"kubernetes.io/host-path/bd5a3831-54c7-4f01-913b-4eb3c086aef6-var-log-ovn\") pod \"ovn-controller-kd65v-config-v9ksn\" (UID: \"bd5a3831-54c7-4f01-913b-4eb3c086aef6\") " pod="openstack/ovn-controller-kd65v-config-v9ksn" Nov 25 11:54:53 crc kubenswrapper[4706]: I1125 11:54:53.152688 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/bd5a3831-54c7-4f01-913b-4eb3c086aef6-scripts\") pod \"ovn-controller-kd65v-config-v9ksn\" (UID: \"bd5a3831-54c7-4f01-913b-4eb3c086aef6\") " pod="openstack/ovn-controller-kd65v-config-v9ksn" Nov 25 11:54:53 crc kubenswrapper[4706]: I1125 11:54:53.152765 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/bd5a3831-54c7-4f01-913b-4eb3c086aef6-var-run\") pod \"ovn-controller-kd65v-config-v9ksn\" (UID: \"bd5a3831-54c7-4f01-913b-4eb3c086aef6\") " pod="openstack/ovn-controller-kd65v-config-v9ksn" Nov 25 11:54:53 crc kubenswrapper[4706]: I1125 11:54:53.152811 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/bd5a3831-54c7-4f01-913b-4eb3c086aef6-var-run-ovn\") pod \"ovn-controller-kd65v-config-v9ksn\" (UID: \"bd5a3831-54c7-4f01-913b-4eb3c086aef6\") " pod="openstack/ovn-controller-kd65v-config-v9ksn" Nov 25 11:54:53 crc kubenswrapper[4706]: I1125 11:54:53.153289 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/bd5a3831-54c7-4f01-913b-4eb3c086aef6-additional-scripts\") pod \"ovn-controller-kd65v-config-v9ksn\" (UID: \"bd5a3831-54c7-4f01-913b-4eb3c086aef6\") " pod="openstack/ovn-controller-kd65v-config-v9ksn" Nov 25 11:54:53 crc kubenswrapper[4706]: I1125 11:54:53.179447 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-82rk6\" (UniqueName: 
\"kubernetes.io/projected/bd5a3831-54c7-4f01-913b-4eb3c086aef6-kube-api-access-82rk6\") pod \"ovn-controller-kd65v-config-v9ksn\" (UID: \"bd5a3831-54c7-4f01-913b-4eb3c086aef6\") " pod="openstack/ovn-controller-kd65v-config-v9ksn" Nov 25 11:54:53 crc kubenswrapper[4706]: I1125 11:54:53.307682 4706 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-kd65v-config-v9ksn" Nov 25 11:54:53 crc kubenswrapper[4706]: I1125 11:54:53.394999 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"9225b01e-1067-47de-812a-d9be36adf9d0","Type":"ContainerStarted","Data":"ef35be7d0cb33311b235050ca06f82956aad4e08529781919d2ac1e73fe49df2"} Nov 25 11:54:53 crc kubenswrapper[4706]: I1125 11:54:53.395243 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"9225b01e-1067-47de-812a-d9be36adf9d0","Type":"ContainerStarted","Data":"44655ebc25065e8e828c9bf62630a600c16595f5dc6ca93ab3792cb4013f9bfc"} Nov 25 11:54:53 crc kubenswrapper[4706]: I1125 11:54:53.395260 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"9225b01e-1067-47de-812a-d9be36adf9d0","Type":"ContainerStarted","Data":"078e7e6212d417d58c800300c5ecd8bf8a5ea9d477cb6f6d50f89b30dc270725"} Nov 25 11:54:53 crc kubenswrapper[4706]: I1125 11:54:53.432547 4706 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-storage-0" podStartSLOduration=19.649269402 podStartE2EDuration="30.432523801s" podCreationTimestamp="2025-11-25 11:54:23 +0000 UTC" firstStartedPulling="2025-11-25 11:54:40.506106126 +0000 UTC m=+1089.420663507" lastFinishedPulling="2025-11-25 11:54:51.289360515 +0000 UTC m=+1100.203917906" observedRunningTime="2025-11-25 11:54:53.43047592 +0000 UTC m=+1102.345033321" watchObservedRunningTime="2025-11-25 11:54:53.432523801 +0000 UTC m=+1102.347081182" Nov 25 11:54:53 crc kubenswrapper[4706]: I1125 
11:54:53.697125 4706 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-764c5664d7-l9qhn"] Nov 25 11:54:53 crc kubenswrapper[4706]: I1125 11:54:53.698875 4706 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-764c5664d7-l9qhn" Nov 25 11:54:53 crc kubenswrapper[4706]: I1125 11:54:53.701191 4706 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-swift-storage-0" Nov 25 11:54:53 crc kubenswrapper[4706]: I1125 11:54:53.712820 4706 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-764c5664d7-l9qhn"] Nov 25 11:54:53 crc kubenswrapper[4706]: I1125 11:54:53.762232 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/7857166d-6bfe-4740-a310-ce20dc486ab2-dns-swift-storage-0\") pod \"dnsmasq-dns-764c5664d7-l9qhn\" (UID: \"7857166d-6bfe-4740-a310-ce20dc486ab2\") " pod="openstack/dnsmasq-dns-764c5664d7-l9qhn" Nov 25 11:54:53 crc kubenswrapper[4706]: I1125 11:54:53.762374 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7857166d-6bfe-4740-a310-ce20dc486ab2-ovsdbserver-nb\") pod \"dnsmasq-dns-764c5664d7-l9qhn\" (UID: \"7857166d-6bfe-4740-a310-ce20dc486ab2\") " pod="openstack/dnsmasq-dns-764c5664d7-l9qhn" Nov 25 11:54:53 crc kubenswrapper[4706]: I1125 11:54:53.762519 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7857166d-6bfe-4740-a310-ce20dc486ab2-dns-svc\") pod \"dnsmasq-dns-764c5664d7-l9qhn\" (UID: \"7857166d-6bfe-4740-a310-ce20dc486ab2\") " pod="openstack/dnsmasq-dns-764c5664d7-l9qhn" Nov 25 11:54:53 crc kubenswrapper[4706]: I1125 11:54:53.762651 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7857166d-6bfe-4740-a310-ce20dc486ab2-ovsdbserver-sb\") pod \"dnsmasq-dns-764c5664d7-l9qhn\" (UID: \"7857166d-6bfe-4740-a310-ce20dc486ab2\") " pod="openstack/dnsmasq-dns-764c5664d7-l9qhn" Nov 25 11:54:53 crc kubenswrapper[4706]: I1125 11:54:53.762706 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8djjf\" (UniqueName: \"kubernetes.io/projected/7857166d-6bfe-4740-a310-ce20dc486ab2-kube-api-access-8djjf\") pod \"dnsmasq-dns-764c5664d7-l9qhn\" (UID: \"7857166d-6bfe-4740-a310-ce20dc486ab2\") " pod="openstack/dnsmasq-dns-764c5664d7-l9qhn" Nov 25 11:54:53 crc kubenswrapper[4706]: I1125 11:54:53.762744 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7857166d-6bfe-4740-a310-ce20dc486ab2-config\") pod \"dnsmasq-dns-764c5664d7-l9qhn\" (UID: \"7857166d-6bfe-4740-a310-ce20dc486ab2\") " pod="openstack/dnsmasq-dns-764c5664d7-l9qhn" Nov 25 11:54:53 crc kubenswrapper[4706]: I1125 11:54:53.771997 4706 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-kd65v-config-v9ksn"] Nov 25 11:54:53 crc kubenswrapper[4706]: W1125 11:54:53.780938 4706 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbd5a3831_54c7_4f01_913b_4eb3c086aef6.slice/crio-9c0fcd6920942e24d6ff7a7180b60a50d24431e7723c5b1afa16bc41f90601ee WatchSource:0}: Error finding container 9c0fcd6920942e24d6ff7a7180b60a50d24431e7723c5b1afa16bc41f90601ee: Status 404 returned error can't find the container with id 9c0fcd6920942e24d6ff7a7180b60a50d24431e7723c5b1afa16bc41f90601ee Nov 25 11:54:53 crc kubenswrapper[4706]: I1125 11:54:53.863863 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: 
\"kubernetes.io/configmap/7857166d-6bfe-4740-a310-ce20dc486ab2-dns-swift-storage-0\") pod \"dnsmasq-dns-764c5664d7-l9qhn\" (UID: \"7857166d-6bfe-4740-a310-ce20dc486ab2\") " pod="openstack/dnsmasq-dns-764c5664d7-l9qhn" Nov 25 11:54:53 crc kubenswrapper[4706]: I1125 11:54:53.864388 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7857166d-6bfe-4740-a310-ce20dc486ab2-ovsdbserver-nb\") pod \"dnsmasq-dns-764c5664d7-l9qhn\" (UID: \"7857166d-6bfe-4740-a310-ce20dc486ab2\") " pod="openstack/dnsmasq-dns-764c5664d7-l9qhn" Nov 25 11:54:53 crc kubenswrapper[4706]: I1125 11:54:53.864441 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7857166d-6bfe-4740-a310-ce20dc486ab2-dns-svc\") pod \"dnsmasq-dns-764c5664d7-l9qhn\" (UID: \"7857166d-6bfe-4740-a310-ce20dc486ab2\") " pod="openstack/dnsmasq-dns-764c5664d7-l9qhn" Nov 25 11:54:53 crc kubenswrapper[4706]: I1125 11:54:53.864484 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7857166d-6bfe-4740-a310-ce20dc486ab2-ovsdbserver-sb\") pod \"dnsmasq-dns-764c5664d7-l9qhn\" (UID: \"7857166d-6bfe-4740-a310-ce20dc486ab2\") " pod="openstack/dnsmasq-dns-764c5664d7-l9qhn" Nov 25 11:54:53 crc kubenswrapper[4706]: I1125 11:54:53.864511 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8djjf\" (UniqueName: \"kubernetes.io/projected/7857166d-6bfe-4740-a310-ce20dc486ab2-kube-api-access-8djjf\") pod \"dnsmasq-dns-764c5664d7-l9qhn\" (UID: \"7857166d-6bfe-4740-a310-ce20dc486ab2\") " pod="openstack/dnsmasq-dns-764c5664d7-l9qhn" Nov 25 11:54:53 crc kubenswrapper[4706]: I1125 11:54:53.864532 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/7857166d-6bfe-4740-a310-ce20dc486ab2-config\") pod \"dnsmasq-dns-764c5664d7-l9qhn\" (UID: \"7857166d-6bfe-4740-a310-ce20dc486ab2\") " pod="openstack/dnsmasq-dns-764c5664d7-l9qhn" Nov 25 11:54:53 crc kubenswrapper[4706]: I1125 11:54:53.865631 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/7857166d-6bfe-4740-a310-ce20dc486ab2-dns-swift-storage-0\") pod \"dnsmasq-dns-764c5664d7-l9qhn\" (UID: \"7857166d-6bfe-4740-a310-ce20dc486ab2\") " pod="openstack/dnsmasq-dns-764c5664d7-l9qhn" Nov 25 11:54:53 crc kubenswrapper[4706]: I1125 11:54:53.865934 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7857166d-6bfe-4740-a310-ce20dc486ab2-ovsdbserver-sb\") pod \"dnsmasq-dns-764c5664d7-l9qhn\" (UID: \"7857166d-6bfe-4740-a310-ce20dc486ab2\") " pod="openstack/dnsmasq-dns-764c5664d7-l9qhn" Nov 25 11:54:53 crc kubenswrapper[4706]: I1125 11:54:53.865959 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7857166d-6bfe-4740-a310-ce20dc486ab2-config\") pod \"dnsmasq-dns-764c5664d7-l9qhn\" (UID: \"7857166d-6bfe-4740-a310-ce20dc486ab2\") " pod="openstack/dnsmasq-dns-764c5664d7-l9qhn" Nov 25 11:54:53 crc kubenswrapper[4706]: I1125 11:54:53.866463 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7857166d-6bfe-4740-a310-ce20dc486ab2-dns-svc\") pod \"dnsmasq-dns-764c5664d7-l9qhn\" (UID: \"7857166d-6bfe-4740-a310-ce20dc486ab2\") " pod="openstack/dnsmasq-dns-764c5664d7-l9qhn" Nov 25 11:54:53 crc kubenswrapper[4706]: I1125 11:54:53.866722 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7857166d-6bfe-4740-a310-ce20dc486ab2-ovsdbserver-nb\") pod \"dnsmasq-dns-764c5664d7-l9qhn\" 
(UID: \"7857166d-6bfe-4740-a310-ce20dc486ab2\") " pod="openstack/dnsmasq-dns-764c5664d7-l9qhn" Nov 25 11:54:53 crc kubenswrapper[4706]: I1125 11:54:53.886930 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8djjf\" (UniqueName: \"kubernetes.io/projected/7857166d-6bfe-4740-a310-ce20dc486ab2-kube-api-access-8djjf\") pod \"dnsmasq-dns-764c5664d7-l9qhn\" (UID: \"7857166d-6bfe-4740-a310-ce20dc486ab2\") " pod="openstack/dnsmasq-dns-764c5664d7-l9qhn" Nov 25 11:54:53 crc kubenswrapper[4706]: I1125 11:54:53.934986 4706 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1144fe38-7b82-4670-b725-ddd132d03b53" path="/var/lib/kubelet/pods/1144fe38-7b82-4670-b725-ddd132d03b53/volumes" Nov 25 11:54:54 crc kubenswrapper[4706]: I1125 11:54:54.017280 4706 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-764c5664d7-l9qhn" Nov 25 11:54:54 crc kubenswrapper[4706]: I1125 11:54:54.405566 4706 generic.go:334] "Generic (PLEG): container finished" podID="bd5a3831-54c7-4f01-913b-4eb3c086aef6" containerID="31e31f09eca2ee808d40a58976f9568e28a0956920ef055ff3a9b21a43ef06a5" exitCode=0 Nov 25 11:54:54 crc kubenswrapper[4706]: I1125 11:54:54.405634 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-kd65v-config-v9ksn" event={"ID":"bd5a3831-54c7-4f01-913b-4eb3c086aef6","Type":"ContainerDied","Data":"31e31f09eca2ee808d40a58976f9568e28a0956920ef055ff3a9b21a43ef06a5"} Nov 25 11:54:54 crc kubenswrapper[4706]: I1125 11:54:54.405945 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-kd65v-config-v9ksn" event={"ID":"bd5a3831-54c7-4f01-913b-4eb3c086aef6","Type":"ContainerStarted","Data":"9c0fcd6920942e24d6ff7a7180b60a50d24431e7723c5b1afa16bc41f90601ee"} Nov 25 11:54:54 crc kubenswrapper[4706]: I1125 11:54:54.440665 4706 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-764c5664d7-l9qhn"] Nov 25 11:54:54 
crc kubenswrapper[4706]: W1125 11:54:54.453456 4706 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7857166d_6bfe_4740_a310_ce20dc486ab2.slice/crio-8d7645da1ea5649a4e668bb14135fdc1fca4a4a15edf026218e41a2f852e50e5 WatchSource:0}: Error finding container 8d7645da1ea5649a4e668bb14135fdc1fca4a4a15edf026218e41a2f852e50e5: Status 404 returned error can't find the container with id 8d7645da1ea5649a4e668bb14135fdc1fca4a4a15edf026218e41a2f852e50e5 Nov 25 11:54:55 crc kubenswrapper[4706]: I1125 11:54:55.415333 4706 generic.go:334] "Generic (PLEG): container finished" podID="7857166d-6bfe-4740-a310-ce20dc486ab2" containerID="07bc7dccd48883dd5459a6a81099785eec9ac893b94bbf213ba9e3ba9df81e02" exitCode=0 Nov 25 11:54:55 crc kubenswrapper[4706]: I1125 11:54:55.415397 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-764c5664d7-l9qhn" event={"ID":"7857166d-6bfe-4740-a310-ce20dc486ab2","Type":"ContainerDied","Data":"07bc7dccd48883dd5459a6a81099785eec9ac893b94bbf213ba9e3ba9df81e02"} Nov 25 11:54:55 crc kubenswrapper[4706]: I1125 11:54:55.415667 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-764c5664d7-l9qhn" event={"ID":"7857166d-6bfe-4740-a310-ce20dc486ab2","Type":"ContainerStarted","Data":"8d7645da1ea5649a4e668bb14135fdc1fca4a4a15edf026218e41a2f852e50e5"} Nov 25 11:54:55 crc kubenswrapper[4706]: I1125 11:54:55.732380 4706 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-kd65v-config-v9ksn" Nov 25 11:54:55 crc kubenswrapper[4706]: I1125 11:54:55.894157 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/bd5a3831-54c7-4f01-913b-4eb3c086aef6-var-run-ovn\") pod \"bd5a3831-54c7-4f01-913b-4eb3c086aef6\" (UID: \"bd5a3831-54c7-4f01-913b-4eb3c086aef6\") " Nov 25 11:54:55 crc kubenswrapper[4706]: I1125 11:54:55.894415 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bd5a3831-54c7-4f01-913b-4eb3c086aef6-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "bd5a3831-54c7-4f01-913b-4eb3c086aef6" (UID: "bd5a3831-54c7-4f01-913b-4eb3c086aef6"). InnerVolumeSpecName "var-run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 25 11:54:55 crc kubenswrapper[4706]: I1125 11:54:55.894607 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-82rk6\" (UniqueName: \"kubernetes.io/projected/bd5a3831-54c7-4f01-913b-4eb3c086aef6-kube-api-access-82rk6\") pod \"bd5a3831-54c7-4f01-913b-4eb3c086aef6\" (UID: \"bd5a3831-54c7-4f01-913b-4eb3c086aef6\") " Nov 25 11:54:55 crc kubenswrapper[4706]: I1125 11:54:55.894655 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/bd5a3831-54c7-4f01-913b-4eb3c086aef6-var-log-ovn\") pod \"bd5a3831-54c7-4f01-913b-4eb3c086aef6\" (UID: \"bd5a3831-54c7-4f01-913b-4eb3c086aef6\") " Nov 25 11:54:55 crc kubenswrapper[4706]: I1125 11:54:55.894704 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/bd5a3831-54c7-4f01-913b-4eb3c086aef6-scripts\") pod \"bd5a3831-54c7-4f01-913b-4eb3c086aef6\" (UID: \"bd5a3831-54c7-4f01-913b-4eb3c086aef6\") " Nov 25 11:54:55 crc kubenswrapper[4706]: I1125 11:54:55.894750 4706 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/bd5a3831-54c7-4f01-913b-4eb3c086aef6-var-run\") pod \"bd5a3831-54c7-4f01-913b-4eb3c086aef6\" (UID: \"bd5a3831-54c7-4f01-913b-4eb3c086aef6\") " Nov 25 11:54:55 crc kubenswrapper[4706]: I1125 11:54:55.894780 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bd5a3831-54c7-4f01-913b-4eb3c086aef6-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "bd5a3831-54c7-4f01-913b-4eb3c086aef6" (UID: "bd5a3831-54c7-4f01-913b-4eb3c086aef6"). InnerVolumeSpecName "var-log-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 25 11:54:55 crc kubenswrapper[4706]: I1125 11:54:55.894793 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/bd5a3831-54c7-4f01-913b-4eb3c086aef6-additional-scripts\") pod \"bd5a3831-54c7-4f01-913b-4eb3c086aef6\" (UID: \"bd5a3831-54c7-4f01-913b-4eb3c086aef6\") " Nov 25 11:54:55 crc kubenswrapper[4706]: I1125 11:54:55.894808 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bd5a3831-54c7-4f01-913b-4eb3c086aef6-var-run" (OuterVolumeSpecName: "var-run") pod "bd5a3831-54c7-4f01-913b-4eb3c086aef6" (UID: "bd5a3831-54c7-4f01-913b-4eb3c086aef6"). InnerVolumeSpecName "var-run". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 25 11:54:55 crc kubenswrapper[4706]: I1125 11:54:55.895206 4706 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/bd5a3831-54c7-4f01-913b-4eb3c086aef6-var-run\") on node \"crc\" DevicePath \"\"" Nov 25 11:54:55 crc kubenswrapper[4706]: I1125 11:54:55.895226 4706 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/bd5a3831-54c7-4f01-913b-4eb3c086aef6-var-run-ovn\") on node \"crc\" DevicePath \"\"" Nov 25 11:54:55 crc kubenswrapper[4706]: I1125 11:54:55.895236 4706 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/bd5a3831-54c7-4f01-913b-4eb3c086aef6-var-log-ovn\") on node \"crc\" DevicePath \"\"" Nov 25 11:54:55 crc kubenswrapper[4706]: I1125 11:54:55.895454 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bd5a3831-54c7-4f01-913b-4eb3c086aef6-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "bd5a3831-54c7-4f01-913b-4eb3c086aef6" (UID: "bd5a3831-54c7-4f01-913b-4eb3c086aef6"). InnerVolumeSpecName "additional-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 11:54:55 crc kubenswrapper[4706]: I1125 11:54:55.895848 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bd5a3831-54c7-4f01-913b-4eb3c086aef6-scripts" (OuterVolumeSpecName: "scripts") pod "bd5a3831-54c7-4f01-913b-4eb3c086aef6" (UID: "bd5a3831-54c7-4f01-913b-4eb3c086aef6"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 11:54:55 crc kubenswrapper[4706]: I1125 11:54:55.900716 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd5a3831-54c7-4f01-913b-4eb3c086aef6-kube-api-access-82rk6" (OuterVolumeSpecName: "kube-api-access-82rk6") pod "bd5a3831-54c7-4f01-913b-4eb3c086aef6" (UID: "bd5a3831-54c7-4f01-913b-4eb3c086aef6"). InnerVolumeSpecName "kube-api-access-82rk6". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 11:54:55 crc kubenswrapper[4706]: I1125 11:54:55.997474 4706 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-82rk6\" (UniqueName: \"kubernetes.io/projected/bd5a3831-54c7-4f01-913b-4eb3c086aef6-kube-api-access-82rk6\") on node \"crc\" DevicePath \"\"" Nov 25 11:54:55 crc kubenswrapper[4706]: I1125 11:54:55.997510 4706 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/bd5a3831-54c7-4f01-913b-4eb3c086aef6-scripts\") on node \"crc\" DevicePath \"\"" Nov 25 11:54:55 crc kubenswrapper[4706]: I1125 11:54:55.997520 4706 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/bd5a3831-54c7-4f01-913b-4eb3c086aef6-additional-scripts\") on node \"crc\" DevicePath \"\"" Nov 25 11:54:56 crc kubenswrapper[4706]: I1125 11:54:56.428060 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-764c5664d7-l9qhn" event={"ID":"7857166d-6bfe-4740-a310-ce20dc486ab2","Type":"ContainerStarted","Data":"6a4d713132e0cf289edb560b496bfea4f27dd015a04d73a413ec6d4a51f9726d"} Nov 25 11:54:56 crc kubenswrapper[4706]: I1125 11:54:56.429150 4706 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-764c5664d7-l9qhn" Nov 25 11:54:56 crc kubenswrapper[4706]: I1125 11:54:56.430982 4706 generic.go:334] "Generic (PLEG): container finished" podID="a3c43e2c-68e2-4f5d-8c64-c9028a967f7f" 
containerID="66e6568b2e32dd6e98388f8f63cd51ba450fc0656a9e433cd5c1306c071ae803" exitCode=0 Nov 25 11:54:56 crc kubenswrapper[4706]: I1125 11:54:56.431052 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-v7ftf" event={"ID":"a3c43e2c-68e2-4f5d-8c64-c9028a967f7f","Type":"ContainerDied","Data":"66e6568b2e32dd6e98388f8f63cd51ba450fc0656a9e433cd5c1306c071ae803"} Nov 25 11:54:56 crc kubenswrapper[4706]: I1125 11:54:56.435524 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-kd65v-config-v9ksn" event={"ID":"bd5a3831-54c7-4f01-913b-4eb3c086aef6","Type":"ContainerDied","Data":"9c0fcd6920942e24d6ff7a7180b60a50d24431e7723c5b1afa16bc41f90601ee"} Nov 25 11:54:56 crc kubenswrapper[4706]: I1125 11:54:56.435560 4706 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9c0fcd6920942e24d6ff7a7180b60a50d24431e7723c5b1afa16bc41f90601ee" Nov 25 11:54:56 crc kubenswrapper[4706]: I1125 11:54:56.435622 4706 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-kd65v-config-v9ksn" Nov 25 11:54:56 crc kubenswrapper[4706]: I1125 11:54:56.458245 4706 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-764c5664d7-l9qhn" podStartSLOduration=3.458226109 podStartE2EDuration="3.458226109s" podCreationTimestamp="2025-11-25 11:54:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 11:54:56.452656709 +0000 UTC m=+1105.367214170" watchObservedRunningTime="2025-11-25 11:54:56.458226109 +0000 UTC m=+1105.372783490" Nov 25 11:54:56 crc kubenswrapper[4706]: I1125 11:54:56.802927 4706 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-kd65v-config-v9ksn"] Nov 25 11:54:56 crc kubenswrapper[4706]: I1125 11:54:56.809715 4706 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-kd65v-config-v9ksn"] Nov 25 11:54:57 crc kubenswrapper[4706]: I1125 11:54:57.512483 4706 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Nov 25 11:54:57 crc kubenswrapper[4706]: I1125 11:54:57.818268 4706 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-create-rs7pp"] Nov 25 11:54:57 crc kubenswrapper[4706]: E1125 11:54:57.818697 4706 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bd5a3831-54c7-4f01-913b-4eb3c086aef6" containerName="ovn-config" Nov 25 11:54:57 crc kubenswrapper[4706]: I1125 11:54:57.818713 4706 state_mem.go:107] "Deleted CPUSet assignment" podUID="bd5a3831-54c7-4f01-913b-4eb3c086aef6" containerName="ovn-config" Nov 25 11:54:57 crc kubenswrapper[4706]: I1125 11:54:57.818940 4706 memory_manager.go:354] "RemoveStaleState removing state" podUID="bd5a3831-54c7-4f01-913b-4eb3c086aef6" containerName="ovn-config" Nov 25 11:54:57 crc kubenswrapper[4706]: I1125 11:54:57.819780 4706 util.go:30] "No sandbox for pod can 
be found. Need to start a new one" pod="openstack/barbican-db-create-rs7pp" Nov 25 11:54:57 crc kubenswrapper[4706]: I1125 11:54:57.838232 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pd88j\" (UniqueName: \"kubernetes.io/projected/4c2d1155-3724-4c94-a5fb-fcf88b53064e-kube-api-access-pd88j\") pod \"barbican-db-create-rs7pp\" (UID: \"4c2d1155-3724-4c94-a5fb-fcf88b53064e\") " pod="openstack/barbican-db-create-rs7pp" Nov 25 11:54:57 crc kubenswrapper[4706]: I1125 11:54:57.838328 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4c2d1155-3724-4c94-a5fb-fcf88b53064e-operator-scripts\") pod \"barbican-db-create-rs7pp\" (UID: \"4c2d1155-3724-4c94-a5fb-fcf88b53064e\") " pod="openstack/barbican-db-create-rs7pp" Nov 25 11:54:57 crc kubenswrapper[4706]: I1125 11:54:57.846332 4706 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-rs7pp"] Nov 25 11:54:57 crc kubenswrapper[4706]: I1125 11:54:57.866310 4706 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Nov 25 11:54:57 crc kubenswrapper[4706]: I1125 11:54:57.939885 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pd88j\" (UniqueName: \"kubernetes.io/projected/4c2d1155-3724-4c94-a5fb-fcf88b53064e-kube-api-access-pd88j\") pod \"barbican-db-create-rs7pp\" (UID: \"4c2d1155-3724-4c94-a5fb-fcf88b53064e\") " pod="openstack/barbican-db-create-rs7pp" Nov 25 11:54:57 crc kubenswrapper[4706]: I1125 11:54:57.940144 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4c2d1155-3724-4c94-a5fb-fcf88b53064e-operator-scripts\") pod \"barbican-db-create-rs7pp\" (UID: \"4c2d1155-3724-4c94-a5fb-fcf88b53064e\") " 
pod="openstack/barbican-db-create-rs7pp" Nov 25 11:54:57 crc kubenswrapper[4706]: I1125 11:54:57.941663 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4c2d1155-3724-4c94-a5fb-fcf88b53064e-operator-scripts\") pod \"barbican-db-create-rs7pp\" (UID: \"4c2d1155-3724-4c94-a5fb-fcf88b53064e\") " pod="openstack/barbican-db-create-rs7pp" Nov 25 11:54:57 crc kubenswrapper[4706]: I1125 11:54:57.957355 4706 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd5a3831-54c7-4f01-913b-4eb3c086aef6" path="/var/lib/kubelet/pods/bd5a3831-54c7-4f01-913b-4eb3c086aef6/volumes" Nov 25 11:54:57 crc kubenswrapper[4706]: I1125 11:54:57.977077 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pd88j\" (UniqueName: \"kubernetes.io/projected/4c2d1155-3724-4c94-a5fb-fcf88b53064e-kube-api-access-pd88j\") pod \"barbican-db-create-rs7pp\" (UID: \"4c2d1155-3724-4c94-a5fb-fcf88b53064e\") " pod="openstack/barbican-db-create-rs7pp" Nov 25 11:54:58 crc kubenswrapper[4706]: I1125 11:54:58.011327 4706 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-create-7lvvv"] Nov 25 11:54:58 crc kubenswrapper[4706]: I1125 11:54:58.012682 4706 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-7lvvv" Nov 25 11:54:58 crc kubenswrapper[4706]: I1125 11:54:58.034185 4706 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-7ad8-account-create-vg4bf"] Nov 25 11:54:58 crc kubenswrapper[4706]: I1125 11:54:58.035556 4706 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-7ad8-account-create-vg4bf" Nov 25 11:54:58 crc kubenswrapper[4706]: I1125 11:54:58.037276 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-db-secret" Nov 25 11:54:58 crc kubenswrapper[4706]: I1125 11:54:58.047547 4706 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-v7ftf" Nov 25 11:54:58 crc kubenswrapper[4706]: I1125 11:54:58.051004 4706 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-7lvvv"] Nov 25 11:54:58 crc kubenswrapper[4706]: I1125 11:54:58.069500 4706 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-7ad8-account-create-vg4bf"] Nov 25 11:54:58 crc kubenswrapper[4706]: I1125 11:54:58.144723 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tftqf\" (UniqueName: \"kubernetes.io/projected/a3c43e2c-68e2-4f5d-8c64-c9028a967f7f-kube-api-access-tftqf\") pod \"a3c43e2c-68e2-4f5d-8c64-c9028a967f7f\" (UID: \"a3c43e2c-68e2-4f5d-8c64-c9028a967f7f\") " Nov 25 11:54:58 crc kubenswrapper[4706]: I1125 11:54:58.144795 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a3c43e2c-68e2-4f5d-8c64-c9028a967f7f-combined-ca-bundle\") pod \"a3c43e2c-68e2-4f5d-8c64-c9028a967f7f\" (UID: \"a3c43e2c-68e2-4f5d-8c64-c9028a967f7f\") " Nov 25 11:54:58 crc kubenswrapper[4706]: I1125 11:54:58.144881 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a3c43e2c-68e2-4f5d-8c64-c9028a967f7f-config-data\") pod \"a3c43e2c-68e2-4f5d-8c64-c9028a967f7f\" (UID: \"a3c43e2c-68e2-4f5d-8c64-c9028a967f7f\") " Nov 25 11:54:58 crc kubenswrapper[4706]: I1125 11:54:58.144974 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" 
(UniqueName: \"kubernetes.io/secret/a3c43e2c-68e2-4f5d-8c64-c9028a967f7f-db-sync-config-data\") pod \"a3c43e2c-68e2-4f5d-8c64-c9028a967f7f\" (UID: \"a3c43e2c-68e2-4f5d-8c64-c9028a967f7f\") " Nov 25 11:54:58 crc kubenswrapper[4706]: I1125 11:54:58.145242 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hzg9s\" (UniqueName: \"kubernetes.io/projected/a3b54223-dba3-409f-a6dc-fc371e46ab31-kube-api-access-hzg9s\") pod \"barbican-7ad8-account-create-vg4bf\" (UID: \"a3b54223-dba3-409f-a6dc-fc371e46ab31\") " pod="openstack/barbican-7ad8-account-create-vg4bf" Nov 25 11:54:58 crc kubenswrapper[4706]: I1125 11:54:58.145328 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-drfrk\" (UniqueName: \"kubernetes.io/projected/562f2b9a-0768-4613-9711-8df28886eb32-kube-api-access-drfrk\") pod \"cinder-db-create-7lvvv\" (UID: \"562f2b9a-0768-4613-9711-8df28886eb32\") " pod="openstack/cinder-db-create-7lvvv" Nov 25 11:54:58 crc kubenswrapper[4706]: I1125 11:54:58.145347 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/562f2b9a-0768-4613-9711-8df28886eb32-operator-scripts\") pod \"cinder-db-create-7lvvv\" (UID: \"562f2b9a-0768-4613-9711-8df28886eb32\") " pod="openstack/cinder-db-create-7lvvv" Nov 25 11:54:58 crc kubenswrapper[4706]: I1125 11:54:58.145367 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a3b54223-dba3-409f-a6dc-fc371e46ab31-operator-scripts\") pod \"barbican-7ad8-account-create-vg4bf\" (UID: \"a3b54223-dba3-409f-a6dc-fc371e46ab31\") " pod="openstack/barbican-7ad8-account-create-vg4bf" Nov 25 11:54:58 crc kubenswrapper[4706]: I1125 11:54:58.147414 4706 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-create-rs7pp" Nov 25 11:54:58 crc kubenswrapper[4706]: I1125 11:54:58.166670 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a3c43e2c-68e2-4f5d-8c64-c9028a967f7f-kube-api-access-tftqf" (OuterVolumeSpecName: "kube-api-access-tftqf") pod "a3c43e2c-68e2-4f5d-8c64-c9028a967f7f" (UID: "a3c43e2c-68e2-4f5d-8c64-c9028a967f7f"). InnerVolumeSpecName "kube-api-access-tftqf". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 11:54:58 crc kubenswrapper[4706]: I1125 11:54:58.183477 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a3c43e2c-68e2-4f5d-8c64-c9028a967f7f-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "a3c43e2c-68e2-4f5d-8c64-c9028a967f7f" (UID: "a3c43e2c-68e2-4f5d-8c64-c9028a967f7f"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 11:54:58 crc kubenswrapper[4706]: I1125 11:54:58.200920 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a3c43e2c-68e2-4f5d-8c64-c9028a967f7f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a3c43e2c-68e2-4f5d-8c64-c9028a967f7f" (UID: "a3c43e2c-68e2-4f5d-8c64-c9028a967f7f"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 11:54:58 crc kubenswrapper[4706]: I1125 11:54:58.232872 4706 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-sync-r89ww"] Nov 25 11:54:58 crc kubenswrapper[4706]: E1125 11:54:58.233291 4706 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a3c43e2c-68e2-4f5d-8c64-c9028a967f7f" containerName="glance-db-sync" Nov 25 11:54:58 crc kubenswrapper[4706]: I1125 11:54:58.233394 4706 state_mem.go:107] "Deleted CPUSet assignment" podUID="a3c43e2c-68e2-4f5d-8c64-c9028a967f7f" containerName="glance-db-sync" Nov 25 11:54:58 crc kubenswrapper[4706]: I1125 11:54:58.233601 4706 memory_manager.go:354] "RemoveStaleState removing state" podUID="a3c43e2c-68e2-4f5d-8c64-c9028a967f7f" containerName="glance-db-sync" Nov 25 11:54:58 crc kubenswrapper[4706]: I1125 11:54:58.234313 4706 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-r89ww" Nov 25 11:54:58 crc kubenswrapper[4706]: I1125 11:54:58.238922 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-p74gc" Nov 25 11:54:58 crc kubenswrapper[4706]: I1125 11:54:58.239212 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Nov 25 11:54:58 crc kubenswrapper[4706]: I1125 11:54:58.239703 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Nov 25 11:54:58 crc kubenswrapper[4706]: I1125 11:54:58.239877 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Nov 25 11:54:58 crc kubenswrapper[4706]: I1125 11:54:58.241522 4706 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-create-hncd9"] Nov 25 11:54:58 crc kubenswrapper[4706]: I1125 11:54:58.243175 4706 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-create-hncd9" Nov 25 11:54:58 crc kubenswrapper[4706]: I1125 11:54:58.249333 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-drfrk\" (UniqueName: \"kubernetes.io/projected/562f2b9a-0768-4613-9711-8df28886eb32-kube-api-access-drfrk\") pod \"cinder-db-create-7lvvv\" (UID: \"562f2b9a-0768-4613-9711-8df28886eb32\") " pod="openstack/cinder-db-create-7lvvv" Nov 25 11:54:58 crc kubenswrapper[4706]: I1125 11:54:58.249378 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/562f2b9a-0768-4613-9711-8df28886eb32-operator-scripts\") pod \"cinder-db-create-7lvvv\" (UID: \"562f2b9a-0768-4613-9711-8df28886eb32\") " pod="openstack/cinder-db-create-7lvvv" Nov 25 11:54:58 crc kubenswrapper[4706]: I1125 11:54:58.249395 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a3b54223-dba3-409f-a6dc-fc371e46ab31-operator-scripts\") pod \"barbican-7ad8-account-create-vg4bf\" (UID: \"a3b54223-dba3-409f-a6dc-fc371e46ab31\") " pod="openstack/barbican-7ad8-account-create-vg4bf" Nov 25 11:54:58 crc kubenswrapper[4706]: I1125 11:54:58.249497 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hzg9s\" (UniqueName: \"kubernetes.io/projected/a3b54223-dba3-409f-a6dc-fc371e46ab31-kube-api-access-hzg9s\") pod \"barbican-7ad8-account-create-vg4bf\" (UID: \"a3b54223-dba3-409f-a6dc-fc371e46ab31\") " pod="openstack/barbican-7ad8-account-create-vg4bf" Nov 25 11:54:58 crc kubenswrapper[4706]: I1125 11:54:58.249542 4706 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tftqf\" (UniqueName: \"kubernetes.io/projected/a3c43e2c-68e2-4f5d-8c64-c9028a967f7f-kube-api-access-tftqf\") on node \"crc\" DevicePath \"\"" Nov 25 11:54:58 crc kubenswrapper[4706]: I1125 
11:54:58.249553 4706 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a3c43e2c-68e2-4f5d-8c64-c9028a967f7f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 11:54:58 crc kubenswrapper[4706]: I1125 11:54:58.249563 4706 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/a3c43e2c-68e2-4f5d-8c64-c9028a967f7f-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Nov 25 11:54:58 crc kubenswrapper[4706]: I1125 11:54:58.250706 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/562f2b9a-0768-4613-9711-8df28886eb32-operator-scripts\") pod \"cinder-db-create-7lvvv\" (UID: \"562f2b9a-0768-4613-9711-8df28886eb32\") " pod="openstack/cinder-db-create-7lvvv" Nov 25 11:54:58 crc kubenswrapper[4706]: I1125 11:54:58.250770 4706 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-r89ww"] Nov 25 11:54:58 crc kubenswrapper[4706]: I1125 11:54:58.250773 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a3b54223-dba3-409f-a6dc-fc371e46ab31-operator-scripts\") pod \"barbican-7ad8-account-create-vg4bf\" (UID: \"a3b54223-dba3-409f-a6dc-fc371e46ab31\") " pod="openstack/barbican-7ad8-account-create-vg4bf" Nov 25 11:54:58 crc kubenswrapper[4706]: I1125 11:54:58.275052 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hzg9s\" (UniqueName: \"kubernetes.io/projected/a3b54223-dba3-409f-a6dc-fc371e46ab31-kube-api-access-hzg9s\") pod \"barbican-7ad8-account-create-vg4bf\" (UID: \"a3b54223-dba3-409f-a6dc-fc371e46ab31\") " pod="openstack/barbican-7ad8-account-create-vg4bf" Nov 25 11:54:58 crc kubenswrapper[4706]: I1125 11:54:58.279399 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/secret/a3c43e2c-68e2-4f5d-8c64-c9028a967f7f-config-data" (OuterVolumeSpecName: "config-data") pod "a3c43e2c-68e2-4f5d-8c64-c9028a967f7f" (UID: "a3c43e2c-68e2-4f5d-8c64-c9028a967f7f"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 11:54:58 crc kubenswrapper[4706]: I1125 11:54:58.284842 4706 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-hncd9"] Nov 25 11:54:58 crc kubenswrapper[4706]: I1125 11:54:58.309798 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-drfrk\" (UniqueName: \"kubernetes.io/projected/562f2b9a-0768-4613-9711-8df28886eb32-kube-api-access-drfrk\") pod \"cinder-db-create-7lvvv\" (UID: \"562f2b9a-0768-4613-9711-8df28886eb32\") " pod="openstack/cinder-db-create-7lvvv" Nov 25 11:54:58 crc kubenswrapper[4706]: I1125 11:54:58.325385 4706 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-30a4-account-create-wpgb6"] Nov 25 11:54:58 crc kubenswrapper[4706]: I1125 11:54:58.327827 4706 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-30a4-account-create-wpgb6" Nov 25 11:54:58 crc kubenswrapper[4706]: I1125 11:54:58.335991 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-db-secret" Nov 25 11:54:58 crc kubenswrapper[4706]: I1125 11:54:58.351939 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/054fda50-c263-45c4-9bde-2fc9d81c57b1-operator-scripts\") pod \"neutron-db-create-hncd9\" (UID: \"054fda50-c263-45c4-9bde-2fc9d81c57b1\") " pod="openstack/neutron-db-create-hncd9" Nov 25 11:54:58 crc kubenswrapper[4706]: I1125 11:54:58.352048 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3ec71b1d-86a6-4028-959d-6097b0bc6ed2-combined-ca-bundle\") pod \"keystone-db-sync-r89ww\" (UID: \"3ec71b1d-86a6-4028-959d-6097b0bc6ed2\") " pod="openstack/keystone-db-sync-r89ww" Nov 25 11:54:58 crc kubenswrapper[4706]: I1125 11:54:58.352098 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mwc52\" (UniqueName: \"kubernetes.io/projected/054fda50-c263-45c4-9bde-2fc9d81c57b1-kube-api-access-mwc52\") pod \"neutron-db-create-hncd9\" (UID: \"054fda50-c263-45c4-9bde-2fc9d81c57b1\") " pod="openstack/neutron-db-create-hncd9" Nov 25 11:54:58 crc kubenswrapper[4706]: I1125 11:54:58.352159 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3ec71b1d-86a6-4028-959d-6097b0bc6ed2-config-data\") pod \"keystone-db-sync-r89ww\" (UID: \"3ec71b1d-86a6-4028-959d-6097b0bc6ed2\") " pod="openstack/keystone-db-sync-r89ww" Nov 25 11:54:58 crc kubenswrapper[4706]: I1125 11:54:58.352466 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-tcztr\" (UniqueName: \"kubernetes.io/projected/3ec71b1d-86a6-4028-959d-6097b0bc6ed2-kube-api-access-tcztr\") pod \"keystone-db-sync-r89ww\" (UID: \"3ec71b1d-86a6-4028-959d-6097b0bc6ed2\") " pod="openstack/keystone-db-sync-r89ww" Nov 25 11:54:58 crc kubenswrapper[4706]: I1125 11:54:58.352630 4706 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a3c43e2c-68e2-4f5d-8c64-c9028a967f7f-config-data\") on node \"crc\" DevicePath \"\"" Nov 25 11:54:58 crc kubenswrapper[4706]: I1125 11:54:58.367554 4706 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-7lvvv" Nov 25 11:54:58 crc kubenswrapper[4706]: I1125 11:54:58.375581 4706 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-30a4-account-create-wpgb6"] Nov 25 11:54:58 crc kubenswrapper[4706]: I1125 11:54:58.386668 4706 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-7ad8-account-create-vg4bf" Nov 25 11:54:58 crc kubenswrapper[4706]: I1125 11:54:58.453971 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wddnz\" (UniqueName: \"kubernetes.io/projected/2048b4c8-b4e2-4961-992e-4ab7104ca1d3-kube-api-access-wddnz\") pod \"cinder-30a4-account-create-wpgb6\" (UID: \"2048b4c8-b4e2-4961-992e-4ab7104ca1d3\") " pod="openstack/cinder-30a4-account-create-wpgb6" Nov 25 11:54:58 crc kubenswrapper[4706]: I1125 11:54:58.454049 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/054fda50-c263-45c4-9bde-2fc9d81c57b1-operator-scripts\") pod \"neutron-db-create-hncd9\" (UID: \"054fda50-c263-45c4-9bde-2fc9d81c57b1\") " pod="openstack/neutron-db-create-hncd9" Nov 25 11:54:58 crc kubenswrapper[4706]: I1125 11:54:58.454116 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2048b4c8-b4e2-4961-992e-4ab7104ca1d3-operator-scripts\") pod \"cinder-30a4-account-create-wpgb6\" (UID: \"2048b4c8-b4e2-4961-992e-4ab7104ca1d3\") " pod="openstack/cinder-30a4-account-create-wpgb6" Nov 25 11:54:58 crc kubenswrapper[4706]: I1125 11:54:58.454144 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3ec71b1d-86a6-4028-959d-6097b0bc6ed2-combined-ca-bundle\") pod \"keystone-db-sync-r89ww\" (UID: \"3ec71b1d-86a6-4028-959d-6097b0bc6ed2\") " pod="openstack/keystone-db-sync-r89ww" Nov 25 11:54:58 crc kubenswrapper[4706]: I1125 11:54:58.454183 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mwc52\" (UniqueName: \"kubernetes.io/projected/054fda50-c263-45c4-9bde-2fc9d81c57b1-kube-api-access-mwc52\") pod \"neutron-db-create-hncd9\" (UID: \"054fda50-c263-45c4-9bde-2fc9d81c57b1\") " pod="openstack/neutron-db-create-hncd9" Nov 25 11:54:58 crc kubenswrapper[4706]: I1125 11:54:58.454229 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3ec71b1d-86a6-4028-959d-6097b0bc6ed2-config-data\") pod \"keystone-db-sync-r89ww\" (UID: \"3ec71b1d-86a6-4028-959d-6097b0bc6ed2\") " pod="openstack/keystone-db-sync-r89ww" Nov 25 11:54:58 crc kubenswrapper[4706]: I1125 11:54:58.454277 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tcztr\" (UniqueName: \"kubernetes.io/projected/3ec71b1d-86a6-4028-959d-6097b0bc6ed2-kube-api-access-tcztr\") pod \"keystone-db-sync-r89ww\" (UID: \"3ec71b1d-86a6-4028-959d-6097b0bc6ed2\") " pod="openstack/keystone-db-sync-r89ww" Nov 25 11:54:58 crc kubenswrapper[4706]: I1125 11:54:58.455211 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/054fda50-c263-45c4-9bde-2fc9d81c57b1-operator-scripts\") pod \"neutron-db-create-hncd9\" (UID: \"054fda50-c263-45c4-9bde-2fc9d81c57b1\") " pod="openstack/neutron-db-create-hncd9" Nov 25 11:54:58 crc kubenswrapper[4706]: I1125 11:54:58.459657 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3ec71b1d-86a6-4028-959d-6097b0bc6ed2-config-data\") pod \"keystone-db-sync-r89ww\" (UID: \"3ec71b1d-86a6-4028-959d-6097b0bc6ed2\") " pod="openstack/keystone-db-sync-r89ww" Nov 25 11:54:58 crc kubenswrapper[4706]: I1125 11:54:58.460082 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3ec71b1d-86a6-4028-959d-6097b0bc6ed2-combined-ca-bundle\") pod \"keystone-db-sync-r89ww\" (UID: \"3ec71b1d-86a6-4028-959d-6097b0bc6ed2\") " pod="openstack/keystone-db-sync-r89ww" Nov 25 11:54:58 crc kubenswrapper[4706]: I1125 11:54:58.464947 4706 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-v7ftf" Nov 25 11:54:58 crc kubenswrapper[4706]: I1125 11:54:58.472131 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-v7ftf" event={"ID":"a3c43e2c-68e2-4f5d-8c64-c9028a967f7f","Type":"ContainerDied","Data":"d017ddd9caf33980c86f7f4640fd06612d1134e0f4dd84137a826defdc248b44"} Nov 25 11:54:58 crc kubenswrapper[4706]: I1125 11:54:58.472183 4706 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d017ddd9caf33980c86f7f4640fd06612d1134e0f4dd84137a826defdc248b44" Nov 25 11:54:58 crc kubenswrapper[4706]: I1125 11:54:58.472998 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tcztr\" (UniqueName: \"kubernetes.io/projected/3ec71b1d-86a6-4028-959d-6097b0bc6ed2-kube-api-access-tcztr\") pod \"keystone-db-sync-r89ww\" (UID: \"3ec71b1d-86a6-4028-959d-6097b0bc6ed2\") " pod="openstack/keystone-db-sync-r89ww" Nov 25 11:54:58 crc kubenswrapper[4706]: I1125 11:54:58.476810 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mwc52\" (UniqueName: \"kubernetes.io/projected/054fda50-c263-45c4-9bde-2fc9d81c57b1-kube-api-access-mwc52\") pod \"neutron-db-create-hncd9\" (UID: \"054fda50-c263-45c4-9bde-2fc9d81c57b1\") " pod="openstack/neutron-db-create-hncd9" Nov 25 11:54:58 crc kubenswrapper[4706]: I1125 11:54:58.528587 4706 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-d4d1-account-create-lphvh"] Nov 25 11:54:58 crc kubenswrapper[4706]: I1125 11:54:58.530360 4706 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-d4d1-account-create-lphvh" Nov 25 11:54:58 crc kubenswrapper[4706]: I1125 11:54:58.536387 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-db-secret" Nov 25 11:54:58 crc kubenswrapper[4706]: I1125 11:54:58.555529 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2048b4c8-b4e2-4961-992e-4ab7104ca1d3-operator-scripts\") pod \"cinder-30a4-account-create-wpgb6\" (UID: \"2048b4c8-b4e2-4961-992e-4ab7104ca1d3\") " pod="openstack/cinder-30a4-account-create-wpgb6" Nov 25 11:54:58 crc kubenswrapper[4706]: I1125 11:54:58.555692 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wddnz\" (UniqueName: \"kubernetes.io/projected/2048b4c8-b4e2-4961-992e-4ab7104ca1d3-kube-api-access-wddnz\") pod \"cinder-30a4-account-create-wpgb6\" (UID: \"2048b4c8-b4e2-4961-992e-4ab7104ca1d3\") " pod="openstack/cinder-30a4-account-create-wpgb6" Nov 25 11:54:58 crc kubenswrapper[4706]: I1125 11:54:58.559540 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2048b4c8-b4e2-4961-992e-4ab7104ca1d3-operator-scripts\") pod \"cinder-30a4-account-create-wpgb6\" (UID: \"2048b4c8-b4e2-4961-992e-4ab7104ca1d3\") " pod="openstack/cinder-30a4-account-create-wpgb6" Nov 25 11:54:58 crc kubenswrapper[4706]: I1125 11:54:58.563783 4706 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-d4d1-account-create-lphvh"] Nov 25 11:54:58 crc kubenswrapper[4706]: I1125 11:54:58.585554 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wddnz\" (UniqueName: \"kubernetes.io/projected/2048b4c8-b4e2-4961-992e-4ab7104ca1d3-kube-api-access-wddnz\") pod \"cinder-30a4-account-create-wpgb6\" (UID: \"2048b4c8-b4e2-4961-992e-4ab7104ca1d3\") " 
pod="openstack/cinder-30a4-account-create-wpgb6" Nov 25 11:54:58 crc kubenswrapper[4706]: I1125 11:54:58.607721 4706 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-r89ww" Nov 25 11:54:58 crc kubenswrapper[4706]: I1125 11:54:58.617824 4706 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-hncd9" Nov 25 11:54:58 crc kubenswrapper[4706]: I1125 11:54:58.657031 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b67dn\" (UniqueName: \"kubernetes.io/projected/001d7afd-ffff-43e2-8463-3ebe29200b80-kube-api-access-b67dn\") pod \"neutron-d4d1-account-create-lphvh\" (UID: \"001d7afd-ffff-43e2-8463-3ebe29200b80\") " pod="openstack/neutron-d4d1-account-create-lphvh" Nov 25 11:54:58 crc kubenswrapper[4706]: I1125 11:54:58.657157 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/001d7afd-ffff-43e2-8463-3ebe29200b80-operator-scripts\") pod \"neutron-d4d1-account-create-lphvh\" (UID: \"001d7afd-ffff-43e2-8463-3ebe29200b80\") " pod="openstack/neutron-d4d1-account-create-lphvh" Nov 25 11:54:58 crc kubenswrapper[4706]: I1125 11:54:58.659683 4706 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-30a4-account-create-wpgb6" Nov 25 11:54:58 crc kubenswrapper[4706]: I1125 11:54:58.773675 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b67dn\" (UniqueName: \"kubernetes.io/projected/001d7afd-ffff-43e2-8463-3ebe29200b80-kube-api-access-b67dn\") pod \"neutron-d4d1-account-create-lphvh\" (UID: \"001d7afd-ffff-43e2-8463-3ebe29200b80\") " pod="openstack/neutron-d4d1-account-create-lphvh" Nov 25 11:54:58 crc kubenswrapper[4706]: I1125 11:54:58.774722 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/001d7afd-ffff-43e2-8463-3ebe29200b80-operator-scripts\") pod \"neutron-d4d1-account-create-lphvh\" (UID: \"001d7afd-ffff-43e2-8463-3ebe29200b80\") " pod="openstack/neutron-d4d1-account-create-lphvh" Nov 25 11:54:58 crc kubenswrapper[4706]: I1125 11:54:58.775485 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/001d7afd-ffff-43e2-8463-3ebe29200b80-operator-scripts\") pod \"neutron-d4d1-account-create-lphvh\" (UID: \"001d7afd-ffff-43e2-8463-3ebe29200b80\") " pod="openstack/neutron-d4d1-account-create-lphvh" Nov 25 11:54:58 crc kubenswrapper[4706]: W1125 11:54:58.785281 4706 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4c2d1155_3724_4c94_a5fb_fcf88b53064e.slice/crio-d000c93d528e2e67802b8dd1dfb4d795d4adfce5e93ca794fe293bb41a322adf WatchSource:0}: Error finding container d000c93d528e2e67802b8dd1dfb4d795d4adfce5e93ca794fe293bb41a322adf: Status 404 returned error can't find the container with id d000c93d528e2e67802b8dd1dfb4d795d4adfce5e93ca794fe293bb41a322adf Nov 25 11:54:58 crc kubenswrapper[4706]: I1125 11:54:58.791873 4706 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-7ad8-account-create-vg4bf"] Nov 
25 11:54:58 crc kubenswrapper[4706]: I1125 11:54:58.802912 4706 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-rs7pp"] Nov 25 11:54:58 crc kubenswrapper[4706]: I1125 11:54:58.826411 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b67dn\" (UniqueName: \"kubernetes.io/projected/001d7afd-ffff-43e2-8463-3ebe29200b80-kube-api-access-b67dn\") pod \"neutron-d4d1-account-create-lphvh\" (UID: \"001d7afd-ffff-43e2-8463-3ebe29200b80\") " pod="openstack/neutron-d4d1-account-create-lphvh" Nov 25 11:54:58 crc kubenswrapper[4706]: I1125 11:54:58.863471 4706 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-d4d1-account-create-lphvh" Nov 25 11:54:58 crc kubenswrapper[4706]: I1125 11:54:58.944076 4706 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-764c5664d7-l9qhn"] Nov 25 11:54:58 crc kubenswrapper[4706]: I1125 11:54:58.975846 4706 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-74f6bcbc87-dqgdx"] Nov 25 11:54:58 crc kubenswrapper[4706]: I1125 11:54:58.977442 4706 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-74f6bcbc87-dqgdx" Nov 25 11:54:59 crc kubenswrapper[4706]: I1125 11:54:59.030452 4706 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-74f6bcbc87-dqgdx"] Nov 25 11:54:59 crc kubenswrapper[4706]: I1125 11:54:59.089525 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d377cf62-3246-4d83-86b8-f55d354a2d5c-config\") pod \"dnsmasq-dns-74f6bcbc87-dqgdx\" (UID: \"d377cf62-3246-4d83-86b8-f55d354a2d5c\") " pod="openstack/dnsmasq-dns-74f6bcbc87-dqgdx" Nov 25 11:54:59 crc kubenswrapper[4706]: I1125 11:54:59.090105 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d377cf62-3246-4d83-86b8-f55d354a2d5c-ovsdbserver-nb\") pod \"dnsmasq-dns-74f6bcbc87-dqgdx\" (UID: \"d377cf62-3246-4d83-86b8-f55d354a2d5c\") " pod="openstack/dnsmasq-dns-74f6bcbc87-dqgdx" Nov 25 11:54:59 crc kubenswrapper[4706]: I1125 11:54:59.090145 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d377cf62-3246-4d83-86b8-f55d354a2d5c-dns-svc\") pod \"dnsmasq-dns-74f6bcbc87-dqgdx\" (UID: \"d377cf62-3246-4d83-86b8-f55d354a2d5c\") " pod="openstack/dnsmasq-dns-74f6bcbc87-dqgdx" Nov 25 11:54:59 crc kubenswrapper[4706]: I1125 11:54:59.090175 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d377cf62-3246-4d83-86b8-f55d354a2d5c-ovsdbserver-sb\") pod \"dnsmasq-dns-74f6bcbc87-dqgdx\" (UID: \"d377cf62-3246-4d83-86b8-f55d354a2d5c\") " pod="openstack/dnsmasq-dns-74f6bcbc87-dqgdx" Nov 25 11:54:59 crc kubenswrapper[4706]: I1125 11:54:59.090203 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-ssvsk\" (UniqueName: \"kubernetes.io/projected/d377cf62-3246-4d83-86b8-f55d354a2d5c-kube-api-access-ssvsk\") pod \"dnsmasq-dns-74f6bcbc87-dqgdx\" (UID: \"d377cf62-3246-4d83-86b8-f55d354a2d5c\") " pod="openstack/dnsmasq-dns-74f6bcbc87-dqgdx" Nov 25 11:54:59 crc kubenswrapper[4706]: I1125 11:54:59.090267 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d377cf62-3246-4d83-86b8-f55d354a2d5c-dns-swift-storage-0\") pod \"dnsmasq-dns-74f6bcbc87-dqgdx\" (UID: \"d377cf62-3246-4d83-86b8-f55d354a2d5c\") " pod="openstack/dnsmasq-dns-74f6bcbc87-dqgdx" Nov 25 11:54:59 crc kubenswrapper[4706]: I1125 11:54:59.192440 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d377cf62-3246-4d83-86b8-f55d354a2d5c-dns-swift-storage-0\") pod \"dnsmasq-dns-74f6bcbc87-dqgdx\" (UID: \"d377cf62-3246-4d83-86b8-f55d354a2d5c\") " pod="openstack/dnsmasq-dns-74f6bcbc87-dqgdx" Nov 25 11:54:59 crc kubenswrapper[4706]: I1125 11:54:59.192553 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d377cf62-3246-4d83-86b8-f55d354a2d5c-config\") pod \"dnsmasq-dns-74f6bcbc87-dqgdx\" (UID: \"d377cf62-3246-4d83-86b8-f55d354a2d5c\") " pod="openstack/dnsmasq-dns-74f6bcbc87-dqgdx" Nov 25 11:54:59 crc kubenswrapper[4706]: I1125 11:54:59.192575 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d377cf62-3246-4d83-86b8-f55d354a2d5c-ovsdbserver-nb\") pod \"dnsmasq-dns-74f6bcbc87-dqgdx\" (UID: \"d377cf62-3246-4d83-86b8-f55d354a2d5c\") " pod="openstack/dnsmasq-dns-74f6bcbc87-dqgdx" Nov 25 11:54:59 crc kubenswrapper[4706]: I1125 11:54:59.192598 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" 
(UniqueName: \"kubernetes.io/configmap/d377cf62-3246-4d83-86b8-f55d354a2d5c-dns-svc\") pod \"dnsmasq-dns-74f6bcbc87-dqgdx\" (UID: \"d377cf62-3246-4d83-86b8-f55d354a2d5c\") " pod="openstack/dnsmasq-dns-74f6bcbc87-dqgdx" Nov 25 11:54:59 crc kubenswrapper[4706]: I1125 11:54:59.192623 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d377cf62-3246-4d83-86b8-f55d354a2d5c-ovsdbserver-sb\") pod \"dnsmasq-dns-74f6bcbc87-dqgdx\" (UID: \"d377cf62-3246-4d83-86b8-f55d354a2d5c\") " pod="openstack/dnsmasq-dns-74f6bcbc87-dqgdx" Nov 25 11:54:59 crc kubenswrapper[4706]: I1125 11:54:59.192643 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ssvsk\" (UniqueName: \"kubernetes.io/projected/d377cf62-3246-4d83-86b8-f55d354a2d5c-kube-api-access-ssvsk\") pod \"dnsmasq-dns-74f6bcbc87-dqgdx\" (UID: \"d377cf62-3246-4d83-86b8-f55d354a2d5c\") " pod="openstack/dnsmasq-dns-74f6bcbc87-dqgdx" Nov 25 11:54:59 crc kubenswrapper[4706]: I1125 11:54:59.193754 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d377cf62-3246-4d83-86b8-f55d354a2d5c-dns-swift-storage-0\") pod \"dnsmasq-dns-74f6bcbc87-dqgdx\" (UID: \"d377cf62-3246-4d83-86b8-f55d354a2d5c\") " pod="openstack/dnsmasq-dns-74f6bcbc87-dqgdx" Nov 25 11:54:59 crc kubenswrapper[4706]: I1125 11:54:59.194242 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d377cf62-3246-4d83-86b8-f55d354a2d5c-config\") pod \"dnsmasq-dns-74f6bcbc87-dqgdx\" (UID: \"d377cf62-3246-4d83-86b8-f55d354a2d5c\") " pod="openstack/dnsmasq-dns-74f6bcbc87-dqgdx" Nov 25 11:54:59 crc kubenswrapper[4706]: I1125 11:54:59.194754 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: 
\"kubernetes.io/configmap/d377cf62-3246-4d83-86b8-f55d354a2d5c-ovsdbserver-nb\") pod \"dnsmasq-dns-74f6bcbc87-dqgdx\" (UID: \"d377cf62-3246-4d83-86b8-f55d354a2d5c\") " pod="openstack/dnsmasq-dns-74f6bcbc87-dqgdx" Nov 25 11:54:59 crc kubenswrapper[4706]: I1125 11:54:59.195442 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d377cf62-3246-4d83-86b8-f55d354a2d5c-ovsdbserver-sb\") pod \"dnsmasq-dns-74f6bcbc87-dqgdx\" (UID: \"d377cf62-3246-4d83-86b8-f55d354a2d5c\") " pod="openstack/dnsmasq-dns-74f6bcbc87-dqgdx" Nov 25 11:54:59 crc kubenswrapper[4706]: I1125 11:54:59.198012 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d377cf62-3246-4d83-86b8-f55d354a2d5c-dns-svc\") pod \"dnsmasq-dns-74f6bcbc87-dqgdx\" (UID: \"d377cf62-3246-4d83-86b8-f55d354a2d5c\") " pod="openstack/dnsmasq-dns-74f6bcbc87-dqgdx" Nov 25 11:54:59 crc kubenswrapper[4706]: I1125 11:54:59.213535 4706 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-7lvvv"] Nov 25 11:54:59 crc kubenswrapper[4706]: I1125 11:54:59.233184 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ssvsk\" (UniqueName: \"kubernetes.io/projected/d377cf62-3246-4d83-86b8-f55d354a2d5c-kube-api-access-ssvsk\") pod \"dnsmasq-dns-74f6bcbc87-dqgdx\" (UID: \"d377cf62-3246-4d83-86b8-f55d354a2d5c\") " pod="openstack/dnsmasq-dns-74f6bcbc87-dqgdx" Nov 25 11:54:59 crc kubenswrapper[4706]: W1125 11:54:59.233830 4706 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod562f2b9a_0768_4613_9711_8df28886eb32.slice/crio-7755bd152bb07846d549ae9580b922eefa20b8485d3632609b1340c90a2dd5cc WatchSource:0}: Error finding container 7755bd152bb07846d549ae9580b922eefa20b8485d3632609b1340c90a2dd5cc: Status 404 returned error can't find the container with id 
7755bd152bb07846d549ae9580b922eefa20b8485d3632609b1340c90a2dd5cc Nov 25 11:54:59 crc kubenswrapper[4706]: I1125 11:54:59.415769 4706 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-74f6bcbc87-dqgdx" Nov 25 11:54:59 crc kubenswrapper[4706]: I1125 11:54:59.470907 4706 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-hncd9"] Nov 25 11:54:59 crc kubenswrapper[4706]: I1125 11:54:59.476727 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-7lvvv" event={"ID":"562f2b9a-0768-4613-9711-8df28886eb32","Type":"ContainerStarted","Data":"19caabc0e2660fdd5ec42d86887749bbd1c96c6b400d26e5fb5ae61ba61d0e35"} Nov 25 11:54:59 crc kubenswrapper[4706]: I1125 11:54:59.476771 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-7lvvv" event={"ID":"562f2b9a-0768-4613-9711-8df28886eb32","Type":"ContainerStarted","Data":"7755bd152bb07846d549ae9580b922eefa20b8485d3632609b1340c90a2dd5cc"} Nov 25 11:54:59 crc kubenswrapper[4706]: W1125 11:54:59.480423 4706 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3ec71b1d_86a6_4028_959d_6097b0bc6ed2.slice/crio-80de91d696807acd6d136d26a62cf4ec9aee8e8ac933297ee5fe2efef9f01369 WatchSource:0}: Error finding container 80de91d696807acd6d136d26a62cf4ec9aee8e8ac933297ee5fe2efef9f01369: Status 404 returned error can't find the container with id 80de91d696807acd6d136d26a62cf4ec9aee8e8ac933297ee5fe2efef9f01369 Nov 25 11:54:59 crc kubenswrapper[4706]: I1125 11:54:59.482487 4706 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-r89ww"] Nov 25 11:54:59 crc kubenswrapper[4706]: I1125 11:54:59.494396 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-rs7pp" 
event={"ID":"4c2d1155-3724-4c94-a5fb-fcf88b53064e","Type":"ContainerStarted","Data":"a9b22d077dc8d7251a770820974f5fca5e31586208ed1fd3433467b82d3ded33"} Nov 25 11:54:59 crc kubenswrapper[4706]: I1125 11:54:59.494442 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-rs7pp" event={"ID":"4c2d1155-3724-4c94-a5fb-fcf88b53064e","Type":"ContainerStarted","Data":"d000c93d528e2e67802b8dd1dfb4d795d4adfce5e93ca794fe293bb41a322adf"} Nov 25 11:54:59 crc kubenswrapper[4706]: I1125 11:54:59.499024 4706 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-764c5664d7-l9qhn" podUID="7857166d-6bfe-4740-a310-ce20dc486ab2" containerName="dnsmasq-dns" containerID="cri-o://6a4d713132e0cf289edb560b496bfea4f27dd015a04d73a413ec6d4a51f9726d" gracePeriod=10 Nov 25 11:54:59 crc kubenswrapper[4706]: I1125 11:54:59.500032 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-7ad8-account-create-vg4bf" event={"ID":"a3b54223-dba3-409f-a6dc-fc371e46ab31","Type":"ContainerStarted","Data":"9c58be95ca4b624911c56f14e8fc3aa990af582ea2f1f7f42502ceb6656e23da"} Nov 25 11:54:59 crc kubenswrapper[4706]: I1125 11:54:59.500092 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-7ad8-account-create-vg4bf" event={"ID":"a3b54223-dba3-409f-a6dc-fc371e46ab31","Type":"ContainerStarted","Data":"05692300aef0cdf83efb73fa486138cf796b482656384b068df92d84c612c02c"} Nov 25 11:54:59 crc kubenswrapper[4706]: I1125 11:54:59.504742 4706 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-db-create-7lvvv" podStartSLOduration=2.50472292 podStartE2EDuration="2.50472292s" podCreationTimestamp="2025-11-25 11:54:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 11:54:59.491117718 +0000 UTC m=+1108.405675099" watchObservedRunningTime="2025-11-25 11:54:59.50472292 
+0000 UTC m=+1108.419280301" Nov 25 11:54:59 crc kubenswrapper[4706]: I1125 11:54:59.519602 4706 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-db-create-rs7pp" podStartSLOduration=2.519579024 podStartE2EDuration="2.519579024s" podCreationTimestamp="2025-11-25 11:54:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 11:54:59.511128761 +0000 UTC m=+1108.425686142" watchObservedRunningTime="2025-11-25 11:54:59.519579024 +0000 UTC m=+1108.434136405" Nov 25 11:54:59 crc kubenswrapper[4706]: I1125 11:54:59.525788 4706 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-7ad8-account-create-vg4bf" podStartSLOduration=1.52575945 podStartE2EDuration="1.52575945s" podCreationTimestamp="2025-11-25 11:54:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 11:54:59.524348984 +0000 UTC m=+1108.438906365" watchObservedRunningTime="2025-11-25 11:54:59.52575945 +0000 UTC m=+1108.440316831" Nov 25 11:54:59 crc kubenswrapper[4706]: I1125 11:54:59.596308 4706 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-30a4-account-create-wpgb6"] Nov 25 11:54:59 crc kubenswrapper[4706]: I1125 11:54:59.836197 4706 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-d4d1-account-create-lphvh"] Nov 25 11:54:59 crc kubenswrapper[4706]: W1125 11:54:59.880522 4706 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod001d7afd_ffff_43e2_8463_3ebe29200b80.slice/crio-a15903d77eb33e05fa9a41753a57f4e896be2b271512ec7f8c3e7e8d334eca7f WatchSource:0}: Error finding container a15903d77eb33e05fa9a41753a57f4e896be2b271512ec7f8c3e7e8d334eca7f: Status 404 returned error can't find the container with id 
a15903d77eb33e05fa9a41753a57f4e896be2b271512ec7f8c3e7e8d334eca7f Nov 25 11:54:59 crc kubenswrapper[4706]: I1125 11:54:59.986408 4706 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-74f6bcbc87-dqgdx"] Nov 25 11:55:00 crc kubenswrapper[4706]: I1125 11:55:00.198221 4706 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-764c5664d7-l9qhn" Nov 25 11:55:00 crc kubenswrapper[4706]: I1125 11:55:00.326887 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7857166d-6bfe-4740-a310-ce20dc486ab2-dns-svc\") pod \"7857166d-6bfe-4740-a310-ce20dc486ab2\" (UID: \"7857166d-6bfe-4740-a310-ce20dc486ab2\") " Nov 25 11:55:00 crc kubenswrapper[4706]: I1125 11:55:00.327359 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7857166d-6bfe-4740-a310-ce20dc486ab2-config\") pod \"7857166d-6bfe-4740-a310-ce20dc486ab2\" (UID: \"7857166d-6bfe-4740-a310-ce20dc486ab2\") " Nov 25 11:55:00 crc kubenswrapper[4706]: I1125 11:55:00.327426 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/7857166d-6bfe-4740-a310-ce20dc486ab2-dns-swift-storage-0\") pod \"7857166d-6bfe-4740-a310-ce20dc486ab2\" (UID: \"7857166d-6bfe-4740-a310-ce20dc486ab2\") " Nov 25 11:55:00 crc kubenswrapper[4706]: I1125 11:55:00.327507 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7857166d-6bfe-4740-a310-ce20dc486ab2-ovsdbserver-nb\") pod \"7857166d-6bfe-4740-a310-ce20dc486ab2\" (UID: \"7857166d-6bfe-4740-a310-ce20dc486ab2\") " Nov 25 11:55:00 crc kubenswrapper[4706]: I1125 11:55:00.327532 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: 
\"kubernetes.io/configmap/7857166d-6bfe-4740-a310-ce20dc486ab2-ovsdbserver-sb\") pod \"7857166d-6bfe-4740-a310-ce20dc486ab2\" (UID: \"7857166d-6bfe-4740-a310-ce20dc486ab2\") " Nov 25 11:55:00 crc kubenswrapper[4706]: I1125 11:55:00.327564 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8djjf\" (UniqueName: \"kubernetes.io/projected/7857166d-6bfe-4740-a310-ce20dc486ab2-kube-api-access-8djjf\") pod \"7857166d-6bfe-4740-a310-ce20dc486ab2\" (UID: \"7857166d-6bfe-4740-a310-ce20dc486ab2\") " Nov 25 11:55:00 crc kubenswrapper[4706]: I1125 11:55:00.348422 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7857166d-6bfe-4740-a310-ce20dc486ab2-kube-api-access-8djjf" (OuterVolumeSpecName: "kube-api-access-8djjf") pod "7857166d-6bfe-4740-a310-ce20dc486ab2" (UID: "7857166d-6bfe-4740-a310-ce20dc486ab2"). InnerVolumeSpecName "kube-api-access-8djjf". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 11:55:00 crc kubenswrapper[4706]: I1125 11:55:00.382809 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7857166d-6bfe-4740-a310-ce20dc486ab2-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "7857166d-6bfe-4740-a310-ce20dc486ab2" (UID: "7857166d-6bfe-4740-a310-ce20dc486ab2"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 11:55:00 crc kubenswrapper[4706]: I1125 11:55:00.385708 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7857166d-6bfe-4740-a310-ce20dc486ab2-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "7857166d-6bfe-4740-a310-ce20dc486ab2" (UID: "7857166d-6bfe-4740-a310-ce20dc486ab2"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 11:55:00 crc kubenswrapper[4706]: I1125 11:55:00.394133 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7857166d-6bfe-4740-a310-ce20dc486ab2-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "7857166d-6bfe-4740-a310-ce20dc486ab2" (UID: "7857166d-6bfe-4740-a310-ce20dc486ab2"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 11:55:00 crc kubenswrapper[4706]: I1125 11:55:00.396405 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7857166d-6bfe-4740-a310-ce20dc486ab2-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "7857166d-6bfe-4740-a310-ce20dc486ab2" (UID: "7857166d-6bfe-4740-a310-ce20dc486ab2"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 11:55:00 crc kubenswrapper[4706]: I1125 11:55:00.402036 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7857166d-6bfe-4740-a310-ce20dc486ab2-config" (OuterVolumeSpecName: "config") pod "7857166d-6bfe-4740-a310-ce20dc486ab2" (UID: "7857166d-6bfe-4740-a310-ce20dc486ab2"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 11:55:00 crc kubenswrapper[4706]: I1125 11:55:00.429733 4706 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7857166d-6bfe-4740-a310-ce20dc486ab2-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 25 11:55:00 crc kubenswrapper[4706]: I1125 11:55:00.429766 4706 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7857166d-6bfe-4740-a310-ce20dc486ab2-config\") on node \"crc\" DevicePath \"\"" Nov 25 11:55:00 crc kubenswrapper[4706]: I1125 11:55:00.429776 4706 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/7857166d-6bfe-4740-a310-ce20dc486ab2-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Nov 25 11:55:00 crc kubenswrapper[4706]: I1125 11:55:00.429786 4706 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7857166d-6bfe-4740-a310-ce20dc486ab2-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 25 11:55:00 crc kubenswrapper[4706]: I1125 11:55:00.429795 4706 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7857166d-6bfe-4740-a310-ce20dc486ab2-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 25 11:55:00 crc kubenswrapper[4706]: I1125 11:55:00.429803 4706 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8djjf\" (UniqueName: \"kubernetes.io/projected/7857166d-6bfe-4740-a310-ce20dc486ab2-kube-api-access-8djjf\") on node \"crc\" DevicePath \"\"" Nov 25 11:55:00 crc kubenswrapper[4706]: I1125 11:55:00.508490 4706 generic.go:334] "Generic (PLEG): container finished" podID="562f2b9a-0768-4613-9711-8df28886eb32" containerID="19caabc0e2660fdd5ec42d86887749bbd1c96c6b400d26e5fb5ae61ba61d0e35" exitCode=0 Nov 25 11:55:00 crc kubenswrapper[4706]: I1125 11:55:00.508555 4706 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-7lvvv" event={"ID":"562f2b9a-0768-4613-9711-8df28886eb32","Type":"ContainerDied","Data":"19caabc0e2660fdd5ec42d86887749bbd1c96c6b400d26e5fb5ae61ba61d0e35"} Nov 25 11:55:00 crc kubenswrapper[4706]: I1125 11:55:00.509948 4706 generic.go:334] "Generic (PLEG): container finished" podID="054fda50-c263-45c4-9bde-2fc9d81c57b1" containerID="3d732de07d9f48d070985cccb3531cd141efc1c2c79f1004e80d44efc990f7ce" exitCode=0 Nov 25 11:55:00 crc kubenswrapper[4706]: I1125 11:55:00.510030 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-hncd9" event={"ID":"054fda50-c263-45c4-9bde-2fc9d81c57b1","Type":"ContainerDied","Data":"3d732de07d9f48d070985cccb3531cd141efc1c2c79f1004e80d44efc990f7ce"} Nov 25 11:55:00 crc kubenswrapper[4706]: I1125 11:55:00.510061 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-hncd9" event={"ID":"054fda50-c263-45c4-9bde-2fc9d81c57b1","Type":"ContainerStarted","Data":"7d8e0d8ff44969c7041a2b6cb42077e797bcae2a830cfaf98c08022f393e03ce"} Nov 25 11:55:00 crc kubenswrapper[4706]: I1125 11:55:00.512026 4706 generic.go:334] "Generic (PLEG): container finished" podID="4c2d1155-3724-4c94-a5fb-fcf88b53064e" containerID="a9b22d077dc8d7251a770820974f5fca5e31586208ed1fd3433467b82d3ded33" exitCode=0 Nov 25 11:55:00 crc kubenswrapper[4706]: I1125 11:55:00.512107 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-rs7pp" event={"ID":"4c2d1155-3724-4c94-a5fb-fcf88b53064e","Type":"ContainerDied","Data":"a9b22d077dc8d7251a770820974f5fca5e31586208ed1fd3433467b82d3ded33"} Nov 25 11:55:00 crc kubenswrapper[4706]: I1125 11:55:00.514018 4706 generic.go:334] "Generic (PLEG): container finished" podID="7857166d-6bfe-4740-a310-ce20dc486ab2" containerID="6a4d713132e0cf289edb560b496bfea4f27dd015a04d73a413ec6d4a51f9726d" exitCode=0 Nov 25 11:55:00 crc kubenswrapper[4706]: I1125 11:55:00.514079 4706 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-764c5664d7-l9qhn" event={"ID":"7857166d-6bfe-4740-a310-ce20dc486ab2","Type":"ContainerDied","Data":"6a4d713132e0cf289edb560b496bfea4f27dd015a04d73a413ec6d4a51f9726d"} Nov 25 11:55:00 crc kubenswrapper[4706]: I1125 11:55:00.514113 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-764c5664d7-l9qhn" event={"ID":"7857166d-6bfe-4740-a310-ce20dc486ab2","Type":"ContainerDied","Data":"8d7645da1ea5649a4e668bb14135fdc1fca4a4a15edf026218e41a2f852e50e5"} Nov 25 11:55:00 crc kubenswrapper[4706]: I1125 11:55:00.514113 4706 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-764c5664d7-l9qhn" Nov 25 11:55:00 crc kubenswrapper[4706]: I1125 11:55:00.514134 4706 scope.go:117] "RemoveContainer" containerID="6a4d713132e0cf289edb560b496bfea4f27dd015a04d73a413ec6d4a51f9726d" Nov 25 11:55:00 crc kubenswrapper[4706]: I1125 11:55:00.515752 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-r89ww" event={"ID":"3ec71b1d-86a6-4028-959d-6097b0bc6ed2","Type":"ContainerStarted","Data":"80de91d696807acd6d136d26a62cf4ec9aee8e8ac933297ee5fe2efef9f01369"} Nov 25 11:55:00 crc kubenswrapper[4706]: I1125 11:55:00.518054 4706 generic.go:334] "Generic (PLEG): container finished" podID="d377cf62-3246-4d83-86b8-f55d354a2d5c" containerID="c6335dfa87a6373df916c4dcc0ec12ad7ba930ded5450469edef5eb7c56e7345" exitCode=0 Nov 25 11:55:00 crc kubenswrapper[4706]: I1125 11:55:00.518152 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-74f6bcbc87-dqgdx" event={"ID":"d377cf62-3246-4d83-86b8-f55d354a2d5c","Type":"ContainerDied","Data":"c6335dfa87a6373df916c4dcc0ec12ad7ba930ded5450469edef5eb7c56e7345"} Nov 25 11:55:00 crc kubenswrapper[4706]: I1125 11:55:00.518208 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-74f6bcbc87-dqgdx" 
event={"ID":"d377cf62-3246-4d83-86b8-f55d354a2d5c","Type":"ContainerStarted","Data":"4982eba18b74496c77af6db9130a79de0795bbbcd90eac419c2d95d3b10f1919"} Nov 25 11:55:00 crc kubenswrapper[4706]: I1125 11:55:00.520269 4706 generic.go:334] "Generic (PLEG): container finished" podID="a3b54223-dba3-409f-a6dc-fc371e46ab31" containerID="9c58be95ca4b624911c56f14e8fc3aa990af582ea2f1f7f42502ceb6656e23da" exitCode=0 Nov 25 11:55:00 crc kubenswrapper[4706]: I1125 11:55:00.520332 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-7ad8-account-create-vg4bf" event={"ID":"a3b54223-dba3-409f-a6dc-fc371e46ab31","Type":"ContainerDied","Data":"9c58be95ca4b624911c56f14e8fc3aa990af582ea2f1f7f42502ceb6656e23da"} Nov 25 11:55:00 crc kubenswrapper[4706]: I1125 11:55:00.521540 4706 generic.go:334] "Generic (PLEG): container finished" podID="2048b4c8-b4e2-4961-992e-4ab7104ca1d3" containerID="708280f842bd81c3ef09736c2d734c9f5267b8d7e3526224830848e6d3aed37c" exitCode=0 Nov 25 11:55:00 crc kubenswrapper[4706]: I1125 11:55:00.521591 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-30a4-account-create-wpgb6" event={"ID":"2048b4c8-b4e2-4961-992e-4ab7104ca1d3","Type":"ContainerDied","Data":"708280f842bd81c3ef09736c2d734c9f5267b8d7e3526224830848e6d3aed37c"} Nov 25 11:55:00 crc kubenswrapper[4706]: I1125 11:55:00.521614 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-30a4-account-create-wpgb6" event={"ID":"2048b4c8-b4e2-4961-992e-4ab7104ca1d3","Type":"ContainerStarted","Data":"05ebe5d87fceeefcd93e4e331d3fafb0386db84bea61ca30124239b428ab0f09"} Nov 25 11:55:00 crc kubenswrapper[4706]: I1125 11:55:00.524514 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-d4d1-account-create-lphvh" event={"ID":"001d7afd-ffff-43e2-8463-3ebe29200b80","Type":"ContainerStarted","Data":"fd47cc12bff940b7738429622128cd1a4a7da6827de28a0cd21b35b4bc4a1a19"} Nov 25 11:55:00 crc kubenswrapper[4706]: I1125 11:55:00.524575 
4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-d4d1-account-create-lphvh" event={"ID":"001d7afd-ffff-43e2-8463-3ebe29200b80","Type":"ContainerStarted","Data":"a15903d77eb33e05fa9a41753a57f4e896be2b271512ec7f8c3e7e8d334eca7f"} Nov 25 11:55:00 crc kubenswrapper[4706]: I1125 11:55:00.543081 4706 scope.go:117] "RemoveContainer" containerID="07bc7dccd48883dd5459a6a81099785eec9ac893b94bbf213ba9e3ba9df81e02" Nov 25 11:55:00 crc kubenswrapper[4706]: I1125 11:55:00.584455 4706 scope.go:117] "RemoveContainer" containerID="6a4d713132e0cf289edb560b496bfea4f27dd015a04d73a413ec6d4a51f9726d" Nov 25 11:55:00 crc kubenswrapper[4706]: E1125 11:55:00.585018 4706 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6a4d713132e0cf289edb560b496bfea4f27dd015a04d73a413ec6d4a51f9726d\": container with ID starting with 6a4d713132e0cf289edb560b496bfea4f27dd015a04d73a413ec6d4a51f9726d not found: ID does not exist" containerID="6a4d713132e0cf289edb560b496bfea4f27dd015a04d73a413ec6d4a51f9726d" Nov 25 11:55:00 crc kubenswrapper[4706]: I1125 11:55:00.585087 4706 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6a4d713132e0cf289edb560b496bfea4f27dd015a04d73a413ec6d4a51f9726d"} err="failed to get container status \"6a4d713132e0cf289edb560b496bfea4f27dd015a04d73a413ec6d4a51f9726d\": rpc error: code = NotFound desc = could not find container \"6a4d713132e0cf289edb560b496bfea4f27dd015a04d73a413ec6d4a51f9726d\": container with ID starting with 6a4d713132e0cf289edb560b496bfea4f27dd015a04d73a413ec6d4a51f9726d not found: ID does not exist" Nov 25 11:55:00 crc kubenswrapper[4706]: I1125 11:55:00.585129 4706 scope.go:117] "RemoveContainer" containerID="07bc7dccd48883dd5459a6a81099785eec9ac893b94bbf213ba9e3ba9df81e02" Nov 25 11:55:00 crc kubenswrapper[4706]: E1125 11:55:00.585478 4706 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = 
NotFound desc = could not find container \"07bc7dccd48883dd5459a6a81099785eec9ac893b94bbf213ba9e3ba9df81e02\": container with ID starting with 07bc7dccd48883dd5459a6a81099785eec9ac893b94bbf213ba9e3ba9df81e02 not found: ID does not exist" containerID="07bc7dccd48883dd5459a6a81099785eec9ac893b94bbf213ba9e3ba9df81e02" Nov 25 11:55:00 crc kubenswrapper[4706]: I1125 11:55:00.585547 4706 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"07bc7dccd48883dd5459a6a81099785eec9ac893b94bbf213ba9e3ba9df81e02"} err="failed to get container status \"07bc7dccd48883dd5459a6a81099785eec9ac893b94bbf213ba9e3ba9df81e02\": rpc error: code = NotFound desc = could not find container \"07bc7dccd48883dd5459a6a81099785eec9ac893b94bbf213ba9e3ba9df81e02\": container with ID starting with 07bc7dccd48883dd5459a6a81099785eec9ac893b94bbf213ba9e3ba9df81e02 not found: ID does not exist" Nov 25 11:55:00 crc kubenswrapper[4706]: I1125 11:55:00.646217 4706 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-d4d1-account-create-lphvh" podStartSLOduration=2.646187738 podStartE2EDuration="2.646187738s" podCreationTimestamp="2025-11-25 11:54:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 11:55:00.622313277 +0000 UTC m=+1109.536870658" watchObservedRunningTime="2025-11-25 11:55:00.646187738 +0000 UTC m=+1109.560745109" Nov 25 11:55:00 crc kubenswrapper[4706]: I1125 11:55:00.671270 4706 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-764c5664d7-l9qhn"] Nov 25 11:55:00 crc kubenswrapper[4706]: I1125 11:55:00.678226 4706 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-764c5664d7-l9qhn"] Nov 25 11:55:01 crc kubenswrapper[4706]: I1125 11:55:01.535614 4706 generic.go:334] "Generic (PLEG): container finished" podID="001d7afd-ffff-43e2-8463-3ebe29200b80" 
containerID="fd47cc12bff940b7738429622128cd1a4a7da6827de28a0cd21b35b4bc4a1a19" exitCode=0 Nov 25 11:55:01 crc kubenswrapper[4706]: I1125 11:55:01.535685 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-d4d1-account-create-lphvh" event={"ID":"001d7afd-ffff-43e2-8463-3ebe29200b80","Type":"ContainerDied","Data":"fd47cc12bff940b7738429622128cd1a4a7da6827de28a0cd21b35b4bc4a1a19"} Nov 25 11:55:01 crc kubenswrapper[4706]: I1125 11:55:01.541576 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-74f6bcbc87-dqgdx" event={"ID":"d377cf62-3246-4d83-86b8-f55d354a2d5c","Type":"ContainerStarted","Data":"f1b3b630b5578d49173f9161e395731350d90063332754fe96cefc07384bf022"} Nov 25 11:55:01 crc kubenswrapper[4706]: I1125 11:55:01.541623 4706 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-74f6bcbc87-dqgdx" Nov 25 11:55:01 crc kubenswrapper[4706]: I1125 11:55:01.582878 4706 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-74f6bcbc87-dqgdx" podStartSLOduration=3.5828338410000002 podStartE2EDuration="3.582833841s" podCreationTimestamp="2025-11-25 11:54:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 11:55:01.577336993 +0000 UTC m=+1110.491894374" watchObservedRunningTime="2025-11-25 11:55:01.582833841 +0000 UTC m=+1110.497391222" Nov 25 11:55:01 crc kubenswrapper[4706]: I1125 11:55:01.952563 4706 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7857166d-6bfe-4740-a310-ce20dc486ab2" path="/var/lib/kubelet/pods/7857166d-6bfe-4740-a310-ce20dc486ab2/volumes" Nov 25 11:55:01 crc kubenswrapper[4706]: I1125 11:55:01.969894 4706 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-7ad8-account-create-vg4bf" Nov 25 11:55:02 crc kubenswrapper[4706]: I1125 11:55:02.113973 4706 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-30a4-account-create-wpgb6" Nov 25 11:55:02 crc kubenswrapper[4706]: I1125 11:55:02.119483 4706 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-hncd9" Nov 25 11:55:02 crc kubenswrapper[4706]: I1125 11:55:02.123986 4706 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-7lvvv" Nov 25 11:55:02 crc kubenswrapper[4706]: I1125 11:55:02.174896 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a3b54223-dba3-409f-a6dc-fc371e46ab31-operator-scripts\") pod \"a3b54223-dba3-409f-a6dc-fc371e46ab31\" (UID: \"a3b54223-dba3-409f-a6dc-fc371e46ab31\") " Nov 25 11:55:02 crc kubenswrapper[4706]: I1125 11:55:02.174962 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hzg9s\" (UniqueName: \"kubernetes.io/projected/a3b54223-dba3-409f-a6dc-fc371e46ab31-kube-api-access-hzg9s\") pod \"a3b54223-dba3-409f-a6dc-fc371e46ab31\" (UID: \"a3b54223-dba3-409f-a6dc-fc371e46ab31\") " Nov 25 11:55:02 crc kubenswrapper[4706]: I1125 11:55:02.179604 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a3b54223-dba3-409f-a6dc-fc371e46ab31-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "a3b54223-dba3-409f-a6dc-fc371e46ab31" (UID: "a3b54223-dba3-409f-a6dc-fc371e46ab31"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 11:55:02 crc kubenswrapper[4706]: I1125 11:55:02.186272 4706 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-create-rs7pp" Nov 25 11:55:02 crc kubenswrapper[4706]: I1125 11:55:02.202141 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a3b54223-dba3-409f-a6dc-fc371e46ab31-kube-api-access-hzg9s" (OuterVolumeSpecName: "kube-api-access-hzg9s") pod "a3b54223-dba3-409f-a6dc-fc371e46ab31" (UID: "a3b54223-dba3-409f-a6dc-fc371e46ab31"). InnerVolumeSpecName "kube-api-access-hzg9s". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 11:55:02 crc kubenswrapper[4706]: I1125 11:55:02.276771 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mwc52\" (UniqueName: \"kubernetes.io/projected/054fda50-c263-45c4-9bde-2fc9d81c57b1-kube-api-access-mwc52\") pod \"054fda50-c263-45c4-9bde-2fc9d81c57b1\" (UID: \"054fda50-c263-45c4-9bde-2fc9d81c57b1\") " Nov 25 11:55:02 crc kubenswrapper[4706]: I1125 11:55:02.276827 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/562f2b9a-0768-4613-9711-8df28886eb32-operator-scripts\") pod \"562f2b9a-0768-4613-9711-8df28886eb32\" (UID: \"562f2b9a-0768-4613-9711-8df28886eb32\") " Nov 25 11:55:02 crc kubenswrapper[4706]: I1125 11:55:02.276851 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/054fda50-c263-45c4-9bde-2fc9d81c57b1-operator-scripts\") pod \"054fda50-c263-45c4-9bde-2fc9d81c57b1\" (UID: \"054fda50-c263-45c4-9bde-2fc9d81c57b1\") " Nov 25 11:55:02 crc kubenswrapper[4706]: I1125 11:55:02.276871 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-drfrk\" (UniqueName: \"kubernetes.io/projected/562f2b9a-0768-4613-9711-8df28886eb32-kube-api-access-drfrk\") pod \"562f2b9a-0768-4613-9711-8df28886eb32\" (UID: \"562f2b9a-0768-4613-9711-8df28886eb32\") " Nov 25 11:55:02 
crc kubenswrapper[4706]: I1125 11:55:02.276941 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wddnz\" (UniqueName: \"kubernetes.io/projected/2048b4c8-b4e2-4961-992e-4ab7104ca1d3-kube-api-access-wddnz\") pod \"2048b4c8-b4e2-4961-992e-4ab7104ca1d3\" (UID: \"2048b4c8-b4e2-4961-992e-4ab7104ca1d3\") " Nov 25 11:55:02 crc kubenswrapper[4706]: I1125 11:55:02.276966 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2048b4c8-b4e2-4961-992e-4ab7104ca1d3-operator-scripts\") pod \"2048b4c8-b4e2-4961-992e-4ab7104ca1d3\" (UID: \"2048b4c8-b4e2-4961-992e-4ab7104ca1d3\") " Nov 25 11:55:02 crc kubenswrapper[4706]: I1125 11:55:02.277354 4706 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a3b54223-dba3-409f-a6dc-fc371e46ab31-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 25 11:55:02 crc kubenswrapper[4706]: I1125 11:55:02.277367 4706 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hzg9s\" (UniqueName: \"kubernetes.io/projected/a3b54223-dba3-409f-a6dc-fc371e46ab31-kube-api-access-hzg9s\") on node \"crc\" DevicePath \"\"" Nov 25 11:55:02 crc kubenswrapper[4706]: I1125 11:55:02.277764 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2048b4c8-b4e2-4961-992e-4ab7104ca1d3-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "2048b4c8-b4e2-4961-992e-4ab7104ca1d3" (UID: "2048b4c8-b4e2-4961-992e-4ab7104ca1d3"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 11:55:02 crc kubenswrapper[4706]: I1125 11:55:02.277661 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/054fda50-c263-45c4-9bde-2fc9d81c57b1-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "054fda50-c263-45c4-9bde-2fc9d81c57b1" (UID: "054fda50-c263-45c4-9bde-2fc9d81c57b1"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 11:55:02 crc kubenswrapper[4706]: I1125 11:55:02.277837 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/562f2b9a-0768-4613-9711-8df28886eb32-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "562f2b9a-0768-4613-9711-8df28886eb32" (UID: "562f2b9a-0768-4613-9711-8df28886eb32"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 11:55:02 crc kubenswrapper[4706]: I1125 11:55:02.280750 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/054fda50-c263-45c4-9bde-2fc9d81c57b1-kube-api-access-mwc52" (OuterVolumeSpecName: "kube-api-access-mwc52") pod "054fda50-c263-45c4-9bde-2fc9d81c57b1" (UID: "054fda50-c263-45c4-9bde-2fc9d81c57b1"). InnerVolumeSpecName "kube-api-access-mwc52". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 11:55:02 crc kubenswrapper[4706]: I1125 11:55:02.281440 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/562f2b9a-0768-4613-9711-8df28886eb32-kube-api-access-drfrk" (OuterVolumeSpecName: "kube-api-access-drfrk") pod "562f2b9a-0768-4613-9711-8df28886eb32" (UID: "562f2b9a-0768-4613-9711-8df28886eb32"). InnerVolumeSpecName "kube-api-access-drfrk". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 11:55:02 crc kubenswrapper[4706]: I1125 11:55:02.283220 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2048b4c8-b4e2-4961-992e-4ab7104ca1d3-kube-api-access-wddnz" (OuterVolumeSpecName: "kube-api-access-wddnz") pod "2048b4c8-b4e2-4961-992e-4ab7104ca1d3" (UID: "2048b4c8-b4e2-4961-992e-4ab7104ca1d3"). InnerVolumeSpecName "kube-api-access-wddnz". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 11:55:02 crc kubenswrapper[4706]: I1125 11:55:02.378338 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pd88j\" (UniqueName: \"kubernetes.io/projected/4c2d1155-3724-4c94-a5fb-fcf88b53064e-kube-api-access-pd88j\") pod \"4c2d1155-3724-4c94-a5fb-fcf88b53064e\" (UID: \"4c2d1155-3724-4c94-a5fb-fcf88b53064e\") " Nov 25 11:55:02 crc kubenswrapper[4706]: I1125 11:55:02.378977 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4c2d1155-3724-4c94-a5fb-fcf88b53064e-operator-scripts\") pod \"4c2d1155-3724-4c94-a5fb-fcf88b53064e\" (UID: \"4c2d1155-3724-4c94-a5fb-fcf88b53064e\") " Nov 25 11:55:02 crc kubenswrapper[4706]: I1125 11:55:02.379391 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4c2d1155-3724-4c94-a5fb-fcf88b53064e-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "4c2d1155-3724-4c94-a5fb-fcf88b53064e" (UID: "4c2d1155-3724-4c94-a5fb-fcf88b53064e"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 11:55:02 crc kubenswrapper[4706]: I1125 11:55:02.379586 4706 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4c2d1155-3724-4c94-a5fb-fcf88b53064e-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 25 11:55:02 crc kubenswrapper[4706]: I1125 11:55:02.379656 4706 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mwc52\" (UniqueName: \"kubernetes.io/projected/054fda50-c263-45c4-9bde-2fc9d81c57b1-kube-api-access-mwc52\") on node \"crc\" DevicePath \"\"" Nov 25 11:55:02 crc kubenswrapper[4706]: I1125 11:55:02.379738 4706 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/562f2b9a-0768-4613-9711-8df28886eb32-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 25 11:55:02 crc kubenswrapper[4706]: I1125 11:55:02.379798 4706 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/054fda50-c263-45c4-9bde-2fc9d81c57b1-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 25 11:55:02 crc kubenswrapper[4706]: I1125 11:55:02.379853 4706 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-drfrk\" (UniqueName: \"kubernetes.io/projected/562f2b9a-0768-4613-9711-8df28886eb32-kube-api-access-drfrk\") on node \"crc\" DevicePath \"\"" Nov 25 11:55:02 crc kubenswrapper[4706]: I1125 11:55:02.379921 4706 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wddnz\" (UniqueName: \"kubernetes.io/projected/2048b4c8-b4e2-4961-992e-4ab7104ca1d3-kube-api-access-wddnz\") on node \"crc\" DevicePath \"\"" Nov 25 11:55:02 crc kubenswrapper[4706]: I1125 11:55:02.379980 4706 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2048b4c8-b4e2-4961-992e-4ab7104ca1d3-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 25 
11:55:02 crc kubenswrapper[4706]: I1125 11:55:02.382010 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4c2d1155-3724-4c94-a5fb-fcf88b53064e-kube-api-access-pd88j" (OuterVolumeSpecName: "kube-api-access-pd88j") pod "4c2d1155-3724-4c94-a5fb-fcf88b53064e" (UID: "4c2d1155-3724-4c94-a5fb-fcf88b53064e"). InnerVolumeSpecName "kube-api-access-pd88j". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 11:55:02 crc kubenswrapper[4706]: I1125 11:55:02.489265 4706 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pd88j\" (UniqueName: \"kubernetes.io/projected/4c2d1155-3724-4c94-a5fb-fcf88b53064e-kube-api-access-pd88j\") on node \"crc\" DevicePath \"\"" Nov 25 11:55:02 crc kubenswrapper[4706]: I1125 11:55:02.566846 4706 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-hncd9" Nov 25 11:55:02 crc kubenswrapper[4706]: I1125 11:55:02.569806 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-hncd9" event={"ID":"054fda50-c263-45c4-9bde-2fc9d81c57b1","Type":"ContainerDied","Data":"7d8e0d8ff44969c7041a2b6cb42077e797bcae2a830cfaf98c08022f393e03ce"} Nov 25 11:55:02 crc kubenswrapper[4706]: I1125 11:55:02.569853 4706 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7d8e0d8ff44969c7041a2b6cb42077e797bcae2a830cfaf98c08022f393e03ce" Nov 25 11:55:02 crc kubenswrapper[4706]: I1125 11:55:02.572183 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-rs7pp" event={"ID":"4c2d1155-3724-4c94-a5fb-fcf88b53064e","Type":"ContainerDied","Data":"d000c93d528e2e67802b8dd1dfb4d795d4adfce5e93ca794fe293bb41a322adf"} Nov 25 11:55:02 crc kubenswrapper[4706]: I1125 11:55:02.572233 4706 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d000c93d528e2e67802b8dd1dfb4d795d4adfce5e93ca794fe293bb41a322adf" Nov 25 11:55:02 crc 
kubenswrapper[4706]: I1125 11:55:02.572209 4706 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-rs7pp" Nov 25 11:55:02 crc kubenswrapper[4706]: I1125 11:55:02.573640 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-7ad8-account-create-vg4bf" event={"ID":"a3b54223-dba3-409f-a6dc-fc371e46ab31","Type":"ContainerDied","Data":"05692300aef0cdf83efb73fa486138cf796b482656384b068df92d84c612c02c"} Nov 25 11:55:02 crc kubenswrapper[4706]: I1125 11:55:02.573672 4706 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="05692300aef0cdf83efb73fa486138cf796b482656384b068df92d84c612c02c" Nov 25 11:55:02 crc kubenswrapper[4706]: I1125 11:55:02.573704 4706 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-7ad8-account-create-vg4bf" Nov 25 11:55:02 crc kubenswrapper[4706]: I1125 11:55:02.575442 4706 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-create-7lvvv" Nov 25 11:55:02 crc kubenswrapper[4706]: I1125 11:55:02.575502 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-7lvvv" event={"ID":"562f2b9a-0768-4613-9711-8df28886eb32","Type":"ContainerDied","Data":"7755bd152bb07846d549ae9580b922eefa20b8485d3632609b1340c90a2dd5cc"} Nov 25 11:55:02 crc kubenswrapper[4706]: I1125 11:55:02.575559 4706 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7755bd152bb07846d549ae9580b922eefa20b8485d3632609b1340c90a2dd5cc" Nov 25 11:55:02 crc kubenswrapper[4706]: I1125 11:55:02.583670 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-30a4-account-create-wpgb6" event={"ID":"2048b4c8-b4e2-4961-992e-4ab7104ca1d3","Type":"ContainerDied","Data":"05ebe5d87fceeefcd93e4e331d3fafb0386db84bea61ca30124239b428ab0f09"} Nov 25 11:55:02 crc kubenswrapper[4706]: I1125 11:55:02.583765 4706 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="05ebe5d87fceeefcd93e4e331d3fafb0386db84bea61ca30124239b428ab0f09" Nov 25 11:55:02 crc kubenswrapper[4706]: I1125 11:55:02.583803 4706 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-30a4-account-create-wpgb6" Nov 25 11:55:09 crc kubenswrapper[4706]: I1125 11:55:09.417529 4706 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-74f6bcbc87-dqgdx" Nov 25 11:55:09 crc kubenswrapper[4706]: I1125 11:55:09.494230 4706 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-698758b865-vjh52"] Nov 25 11:55:09 crc kubenswrapper[4706]: I1125 11:55:09.494616 4706 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-698758b865-vjh52" podUID="679831d3-04d7-4b95-8690-837698ce07f3" containerName="dnsmasq-dns" containerID="cri-o://2e6258e8f7c46131b8a759c0c9b3f24bd923e82de32b14ee29fc527c4524773f" gracePeriod=10 Nov 25 11:55:12 crc kubenswrapper[4706]: I1125 11:55:12.031366 4706 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-d4d1-account-create-lphvh" Nov 25 11:55:12 crc kubenswrapper[4706]: I1125 11:55:12.162850 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b67dn\" (UniqueName: \"kubernetes.io/projected/001d7afd-ffff-43e2-8463-3ebe29200b80-kube-api-access-b67dn\") pod \"001d7afd-ffff-43e2-8463-3ebe29200b80\" (UID: \"001d7afd-ffff-43e2-8463-3ebe29200b80\") " Nov 25 11:55:12 crc kubenswrapper[4706]: I1125 11:55:12.163083 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/001d7afd-ffff-43e2-8463-3ebe29200b80-operator-scripts\") pod \"001d7afd-ffff-43e2-8463-3ebe29200b80\" (UID: \"001d7afd-ffff-43e2-8463-3ebe29200b80\") " Nov 25 11:55:12 crc kubenswrapper[4706]: I1125 11:55:12.164216 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/001d7afd-ffff-43e2-8463-3ebe29200b80-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod 
"001d7afd-ffff-43e2-8463-3ebe29200b80" (UID: "001d7afd-ffff-43e2-8463-3ebe29200b80"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 11:55:12 crc kubenswrapper[4706]: I1125 11:55:12.169036 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/001d7afd-ffff-43e2-8463-3ebe29200b80-kube-api-access-b67dn" (OuterVolumeSpecName: "kube-api-access-b67dn") pod "001d7afd-ffff-43e2-8463-3ebe29200b80" (UID: "001d7afd-ffff-43e2-8463-3ebe29200b80"). InnerVolumeSpecName "kube-api-access-b67dn". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 11:55:12 crc kubenswrapper[4706]: I1125 11:55:12.264470 4706 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/001d7afd-ffff-43e2-8463-3ebe29200b80-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 25 11:55:12 crc kubenswrapper[4706]: I1125 11:55:12.264717 4706 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b67dn\" (UniqueName: \"kubernetes.io/projected/001d7afd-ffff-43e2-8463-3ebe29200b80-kube-api-access-b67dn\") on node \"crc\" DevicePath \"\"" Nov 25 11:55:12 crc kubenswrapper[4706]: E1125 11:55:12.272588 4706 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-keystone:current-podified" Nov 25 11:55:12 crc kubenswrapper[4706]: E1125 11:55:12.272728 4706 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:keystone-db-sync,Image:quay.io/podified-antelope-centos9/openstack-keystone:current-podified,Command:[/bin/bash],Args:[-c keystone-manage 
db_sync],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/keystone/keystone.conf,SubPath:keystone.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-tcztr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42425,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*42425,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod keystone-db-sync-r89ww_openstack(3ec71b1d-86a6-4028-959d-6097b0bc6ed2): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 25 11:55:12 crc kubenswrapper[4706]: E1125 11:55:12.273901 4706 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"keystone-db-sync\" with 
ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/keystone-db-sync-r89ww" podUID="3ec71b1d-86a6-4028-959d-6097b0bc6ed2" Nov 25 11:55:12 crc kubenswrapper[4706]: I1125 11:55:12.399409 4706 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-698758b865-vjh52" Nov 25 11:55:12 crc kubenswrapper[4706]: I1125 11:55:12.569323 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dxpz2\" (UniqueName: \"kubernetes.io/projected/679831d3-04d7-4b95-8690-837698ce07f3-kube-api-access-dxpz2\") pod \"679831d3-04d7-4b95-8690-837698ce07f3\" (UID: \"679831d3-04d7-4b95-8690-837698ce07f3\") " Nov 25 11:55:12 crc kubenswrapper[4706]: I1125 11:55:12.569733 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/679831d3-04d7-4b95-8690-837698ce07f3-ovsdbserver-sb\") pod \"679831d3-04d7-4b95-8690-837698ce07f3\" (UID: \"679831d3-04d7-4b95-8690-837698ce07f3\") " Nov 25 11:55:12 crc kubenswrapper[4706]: I1125 11:55:12.570058 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/679831d3-04d7-4b95-8690-837698ce07f3-dns-svc\") pod \"679831d3-04d7-4b95-8690-837698ce07f3\" (UID: \"679831d3-04d7-4b95-8690-837698ce07f3\") " Nov 25 11:55:12 crc kubenswrapper[4706]: I1125 11:55:12.570244 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/679831d3-04d7-4b95-8690-837698ce07f3-ovsdbserver-nb\") pod \"679831d3-04d7-4b95-8690-837698ce07f3\" (UID: \"679831d3-04d7-4b95-8690-837698ce07f3\") " Nov 25 11:55:12 crc kubenswrapper[4706]: I1125 11:55:12.570397 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/679831d3-04d7-4b95-8690-837698ce07f3-config\") pod \"679831d3-04d7-4b95-8690-837698ce07f3\" (UID: \"679831d3-04d7-4b95-8690-837698ce07f3\") " Nov 25 11:55:12 crc kubenswrapper[4706]: I1125 11:55:12.575516 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/679831d3-04d7-4b95-8690-837698ce07f3-kube-api-access-dxpz2" (OuterVolumeSpecName: "kube-api-access-dxpz2") pod "679831d3-04d7-4b95-8690-837698ce07f3" (UID: "679831d3-04d7-4b95-8690-837698ce07f3"). InnerVolumeSpecName "kube-api-access-dxpz2". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 11:55:12 crc kubenswrapper[4706]: I1125 11:55:12.612910 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/679831d3-04d7-4b95-8690-837698ce07f3-config" (OuterVolumeSpecName: "config") pod "679831d3-04d7-4b95-8690-837698ce07f3" (UID: "679831d3-04d7-4b95-8690-837698ce07f3"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 11:55:12 crc kubenswrapper[4706]: I1125 11:55:12.614046 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/679831d3-04d7-4b95-8690-837698ce07f3-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "679831d3-04d7-4b95-8690-837698ce07f3" (UID: "679831d3-04d7-4b95-8690-837698ce07f3"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 11:55:12 crc kubenswrapper[4706]: I1125 11:55:12.614722 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/679831d3-04d7-4b95-8690-837698ce07f3-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "679831d3-04d7-4b95-8690-837698ce07f3" (UID: "679831d3-04d7-4b95-8690-837698ce07f3"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 11:55:12 crc kubenswrapper[4706]: I1125 11:55:12.614810 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/679831d3-04d7-4b95-8690-837698ce07f3-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "679831d3-04d7-4b95-8690-837698ce07f3" (UID: "679831d3-04d7-4b95-8690-837698ce07f3"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 11:55:12 crc kubenswrapper[4706]: I1125 11:55:12.671770 4706 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dxpz2\" (UniqueName: \"kubernetes.io/projected/679831d3-04d7-4b95-8690-837698ce07f3-kube-api-access-dxpz2\") on node \"crc\" DevicePath \"\"" Nov 25 11:55:12 crc kubenswrapper[4706]: I1125 11:55:12.671814 4706 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/679831d3-04d7-4b95-8690-837698ce07f3-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 25 11:55:12 crc kubenswrapper[4706]: I1125 11:55:12.671826 4706 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/679831d3-04d7-4b95-8690-837698ce07f3-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 25 11:55:12 crc kubenswrapper[4706]: I1125 11:55:12.671836 4706 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/679831d3-04d7-4b95-8690-837698ce07f3-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 25 11:55:12 crc kubenswrapper[4706]: I1125 11:55:12.671845 4706 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/679831d3-04d7-4b95-8690-837698ce07f3-config\") on node \"crc\" DevicePath \"\"" Nov 25 11:55:12 crc kubenswrapper[4706]: I1125 11:55:12.693782 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-d4d1-account-create-lphvh" 
event={"ID":"001d7afd-ffff-43e2-8463-3ebe29200b80","Type":"ContainerDied","Data":"a15903d77eb33e05fa9a41753a57f4e896be2b271512ec7f8c3e7e8d334eca7f"} Nov 25 11:55:12 crc kubenswrapper[4706]: I1125 11:55:12.693837 4706 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a15903d77eb33e05fa9a41753a57f4e896be2b271512ec7f8c3e7e8d334eca7f" Nov 25 11:55:12 crc kubenswrapper[4706]: I1125 11:55:12.693798 4706 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-d4d1-account-create-lphvh" Nov 25 11:55:12 crc kubenswrapper[4706]: I1125 11:55:12.695434 4706 generic.go:334] "Generic (PLEG): container finished" podID="679831d3-04d7-4b95-8690-837698ce07f3" containerID="2e6258e8f7c46131b8a759c0c9b3f24bd923e82de32b14ee29fc527c4524773f" exitCode=0 Nov 25 11:55:12 crc kubenswrapper[4706]: I1125 11:55:12.695512 4706 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-698758b865-vjh52" Nov 25 11:55:12 crc kubenswrapper[4706]: I1125 11:55:12.695531 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-vjh52" event={"ID":"679831d3-04d7-4b95-8690-837698ce07f3","Type":"ContainerDied","Data":"2e6258e8f7c46131b8a759c0c9b3f24bd923e82de32b14ee29fc527c4524773f"} Nov 25 11:55:12 crc kubenswrapper[4706]: I1125 11:55:12.695570 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-vjh52" event={"ID":"679831d3-04d7-4b95-8690-837698ce07f3","Type":"ContainerDied","Data":"c3f6a5f679463ef34538b9dab611b7a615c6fdd9f040ca1556fec027f2e42735"} Nov 25 11:55:12 crc kubenswrapper[4706]: I1125 11:55:12.695589 4706 scope.go:117] "RemoveContainer" containerID="2e6258e8f7c46131b8a759c0c9b3f24bd923e82de32b14ee29fc527c4524773f" Nov 25 11:55:12 crc kubenswrapper[4706]: E1125 11:55:12.697517 4706 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"keystone-db-sync\" with 
ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-keystone:current-podified\\\"\"" pod="openstack/keystone-db-sync-r89ww" podUID="3ec71b1d-86a6-4028-959d-6097b0bc6ed2" Nov 25 11:55:12 crc kubenswrapper[4706]: I1125 11:55:12.715190 4706 scope.go:117] "RemoveContainer" containerID="83dad321de8f13a6f3ba95b0c99abee0113e3a4da07314955a6416398af6f575" Nov 25 11:55:12 crc kubenswrapper[4706]: I1125 11:55:12.749484 4706 scope.go:117] "RemoveContainer" containerID="2e6258e8f7c46131b8a759c0c9b3f24bd923e82de32b14ee29fc527c4524773f" Nov 25 11:55:12 crc kubenswrapper[4706]: E1125 11:55:12.753607 4706 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2e6258e8f7c46131b8a759c0c9b3f24bd923e82de32b14ee29fc527c4524773f\": container with ID starting with 2e6258e8f7c46131b8a759c0c9b3f24bd923e82de32b14ee29fc527c4524773f not found: ID does not exist" containerID="2e6258e8f7c46131b8a759c0c9b3f24bd923e82de32b14ee29fc527c4524773f" Nov 25 11:55:12 crc kubenswrapper[4706]: I1125 11:55:12.753653 4706 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2e6258e8f7c46131b8a759c0c9b3f24bd923e82de32b14ee29fc527c4524773f"} err="failed to get container status \"2e6258e8f7c46131b8a759c0c9b3f24bd923e82de32b14ee29fc527c4524773f\": rpc error: code = NotFound desc = could not find container \"2e6258e8f7c46131b8a759c0c9b3f24bd923e82de32b14ee29fc527c4524773f\": container with ID starting with 2e6258e8f7c46131b8a759c0c9b3f24bd923e82de32b14ee29fc527c4524773f not found: ID does not exist" Nov 25 11:55:12 crc kubenswrapper[4706]: I1125 11:55:12.753683 4706 scope.go:117] "RemoveContainer" containerID="83dad321de8f13a6f3ba95b0c99abee0113e3a4da07314955a6416398af6f575" Nov 25 11:55:12 crc kubenswrapper[4706]: E1125 11:55:12.754212 4706 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"83dad321de8f13a6f3ba95b0c99abee0113e3a4da07314955a6416398af6f575\": container with ID starting with 83dad321de8f13a6f3ba95b0c99abee0113e3a4da07314955a6416398af6f575 not found: ID does not exist" containerID="83dad321de8f13a6f3ba95b0c99abee0113e3a4da07314955a6416398af6f575" Nov 25 11:55:12 crc kubenswrapper[4706]: I1125 11:55:12.754237 4706 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"83dad321de8f13a6f3ba95b0c99abee0113e3a4da07314955a6416398af6f575"} err="failed to get container status \"83dad321de8f13a6f3ba95b0c99abee0113e3a4da07314955a6416398af6f575\": rpc error: code = NotFound desc = could not find container \"83dad321de8f13a6f3ba95b0c99abee0113e3a4da07314955a6416398af6f575\": container with ID starting with 83dad321de8f13a6f3ba95b0c99abee0113e3a4da07314955a6416398af6f575 not found: ID does not exist" Nov 25 11:55:12 crc kubenswrapper[4706]: I1125 11:55:12.761906 4706 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-698758b865-vjh52"] Nov 25 11:55:12 crc kubenswrapper[4706]: I1125 11:55:12.766100 4706 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-698758b865-vjh52"] Nov 25 11:55:13 crc kubenswrapper[4706]: I1125 11:55:13.935204 4706 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="679831d3-04d7-4b95-8690-837698ce07f3" path="/var/lib/kubelet/pods/679831d3-04d7-4b95-8690-837698ce07f3/volumes" Nov 25 11:55:29 crc kubenswrapper[4706]: I1125 11:55:29.858171 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-r89ww" event={"ID":"3ec71b1d-86a6-4028-959d-6097b0bc6ed2","Type":"ContainerStarted","Data":"b8cd4f92181148c7007b306dbbc97580d58c985b6efadc9a9ba7e404965311ab"} Nov 25 11:55:31 crc kubenswrapper[4706]: I1125 11:55:31.125190 4706 patch_prober.go:28] interesting pod/machine-config-daemon-dhfpm container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure 
output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 25 11:55:31 crc kubenswrapper[4706]: I1125 11:55:31.125277 4706 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dhfpm" podUID="0930887a-320c-4506-8c9c-f94d6d64516a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 25 11:55:32 crc kubenswrapper[4706]: I1125 11:55:32.887840 4706 generic.go:334] "Generic (PLEG): container finished" podID="3ec71b1d-86a6-4028-959d-6097b0bc6ed2" containerID="b8cd4f92181148c7007b306dbbc97580d58c985b6efadc9a9ba7e404965311ab" exitCode=0 Nov 25 11:55:32 crc kubenswrapper[4706]: I1125 11:55:32.887932 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-r89ww" event={"ID":"3ec71b1d-86a6-4028-959d-6097b0bc6ed2","Type":"ContainerDied","Data":"b8cd4f92181148c7007b306dbbc97580d58c985b6efadc9a9ba7e404965311ab"} Nov 25 11:55:34 crc kubenswrapper[4706]: I1125 11:55:34.217599 4706 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-r89ww" Nov 25 11:55:34 crc kubenswrapper[4706]: I1125 11:55:34.354389 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3ec71b1d-86a6-4028-959d-6097b0bc6ed2-combined-ca-bundle\") pod \"3ec71b1d-86a6-4028-959d-6097b0bc6ed2\" (UID: \"3ec71b1d-86a6-4028-959d-6097b0bc6ed2\") " Nov 25 11:55:34 crc kubenswrapper[4706]: I1125 11:55:34.354527 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3ec71b1d-86a6-4028-959d-6097b0bc6ed2-config-data\") pod \"3ec71b1d-86a6-4028-959d-6097b0bc6ed2\" (UID: \"3ec71b1d-86a6-4028-959d-6097b0bc6ed2\") " Nov 25 11:55:34 crc kubenswrapper[4706]: I1125 11:55:34.354676 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tcztr\" (UniqueName: \"kubernetes.io/projected/3ec71b1d-86a6-4028-959d-6097b0bc6ed2-kube-api-access-tcztr\") pod \"3ec71b1d-86a6-4028-959d-6097b0bc6ed2\" (UID: \"3ec71b1d-86a6-4028-959d-6097b0bc6ed2\") " Nov 25 11:55:34 crc kubenswrapper[4706]: I1125 11:55:34.364426 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ec71b1d-86a6-4028-959d-6097b0bc6ed2-kube-api-access-tcztr" (OuterVolumeSpecName: "kube-api-access-tcztr") pod "3ec71b1d-86a6-4028-959d-6097b0bc6ed2" (UID: "3ec71b1d-86a6-4028-959d-6097b0bc6ed2"). InnerVolumeSpecName "kube-api-access-tcztr". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 11:55:34 crc kubenswrapper[4706]: I1125 11:55:34.385526 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ec71b1d-86a6-4028-959d-6097b0bc6ed2-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3ec71b1d-86a6-4028-959d-6097b0bc6ed2" (UID: "3ec71b1d-86a6-4028-959d-6097b0bc6ed2"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 11:55:34 crc kubenswrapper[4706]: I1125 11:55:34.409280 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ec71b1d-86a6-4028-959d-6097b0bc6ed2-config-data" (OuterVolumeSpecName: "config-data") pod "3ec71b1d-86a6-4028-959d-6097b0bc6ed2" (UID: "3ec71b1d-86a6-4028-959d-6097b0bc6ed2"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 11:55:34 crc kubenswrapper[4706]: I1125 11:55:34.457197 4706 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3ec71b1d-86a6-4028-959d-6097b0bc6ed2-config-data\") on node \"crc\" DevicePath \"\"" Nov 25 11:55:34 crc kubenswrapper[4706]: I1125 11:55:34.457256 4706 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tcztr\" (UniqueName: \"kubernetes.io/projected/3ec71b1d-86a6-4028-959d-6097b0bc6ed2-kube-api-access-tcztr\") on node \"crc\" DevicePath \"\"" Nov 25 11:55:34 crc kubenswrapper[4706]: I1125 11:55:34.457270 4706 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3ec71b1d-86a6-4028-959d-6097b0bc6ed2-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 11:55:34 crc kubenswrapper[4706]: I1125 11:55:34.908606 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-r89ww" event={"ID":"3ec71b1d-86a6-4028-959d-6097b0bc6ed2","Type":"ContainerDied","Data":"80de91d696807acd6d136d26a62cf4ec9aee8e8ac933297ee5fe2efef9f01369"} Nov 25 11:55:34 crc kubenswrapper[4706]: I1125 11:55:34.908643 4706 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="80de91d696807acd6d136d26a62cf4ec9aee8e8ac933297ee5fe2efef9f01369" Nov 25 11:55:34 crc kubenswrapper[4706]: I1125 11:55:34.908659 4706 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-r89ww" Nov 25 11:55:35 crc kubenswrapper[4706]: I1125 11:55:35.175871 4706 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-847c4cc679-m2vpm"] Nov 25 11:55:35 crc kubenswrapper[4706]: E1125 11:55:35.176332 4706 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="679831d3-04d7-4b95-8690-837698ce07f3" containerName="dnsmasq-dns" Nov 25 11:55:35 crc kubenswrapper[4706]: I1125 11:55:35.176354 4706 state_mem.go:107] "Deleted CPUSet assignment" podUID="679831d3-04d7-4b95-8690-837698ce07f3" containerName="dnsmasq-dns" Nov 25 11:55:35 crc kubenswrapper[4706]: E1125 11:55:35.176372 4706 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4c2d1155-3724-4c94-a5fb-fcf88b53064e" containerName="mariadb-database-create" Nov 25 11:55:35 crc kubenswrapper[4706]: I1125 11:55:35.176383 4706 state_mem.go:107] "Deleted CPUSet assignment" podUID="4c2d1155-3724-4c94-a5fb-fcf88b53064e" containerName="mariadb-database-create" Nov 25 11:55:35 crc kubenswrapper[4706]: E1125 11:55:35.176398 4706 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7857166d-6bfe-4740-a310-ce20dc486ab2" containerName="init" Nov 25 11:55:35 crc kubenswrapper[4706]: I1125 11:55:35.176406 4706 state_mem.go:107] "Deleted CPUSet assignment" podUID="7857166d-6bfe-4740-a310-ce20dc486ab2" containerName="init" Nov 25 11:55:35 crc kubenswrapper[4706]: E1125 11:55:35.176423 4706 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="562f2b9a-0768-4613-9711-8df28886eb32" containerName="mariadb-database-create" Nov 25 11:55:35 crc kubenswrapper[4706]: I1125 11:55:35.176431 4706 state_mem.go:107] "Deleted CPUSet assignment" podUID="562f2b9a-0768-4613-9711-8df28886eb32" containerName="mariadb-database-create" Nov 25 11:55:35 crc kubenswrapper[4706]: E1125 11:55:35.176451 4706 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="001d7afd-ffff-43e2-8463-3ebe29200b80" 
containerName="mariadb-account-create" Nov 25 11:55:35 crc kubenswrapper[4706]: I1125 11:55:35.176460 4706 state_mem.go:107] "Deleted CPUSet assignment" podUID="001d7afd-ffff-43e2-8463-3ebe29200b80" containerName="mariadb-account-create" Nov 25 11:55:35 crc kubenswrapper[4706]: E1125 11:55:35.176475 4706 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="054fda50-c263-45c4-9bde-2fc9d81c57b1" containerName="mariadb-database-create" Nov 25 11:55:35 crc kubenswrapper[4706]: I1125 11:55:35.176482 4706 state_mem.go:107] "Deleted CPUSet assignment" podUID="054fda50-c263-45c4-9bde-2fc9d81c57b1" containerName="mariadb-database-create" Nov 25 11:55:35 crc kubenswrapper[4706]: E1125 11:55:35.176500 4706 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7857166d-6bfe-4740-a310-ce20dc486ab2" containerName="dnsmasq-dns" Nov 25 11:55:35 crc kubenswrapper[4706]: I1125 11:55:35.176507 4706 state_mem.go:107] "Deleted CPUSet assignment" podUID="7857166d-6bfe-4740-a310-ce20dc486ab2" containerName="dnsmasq-dns" Nov 25 11:55:35 crc kubenswrapper[4706]: E1125 11:55:35.176515 4706 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a3b54223-dba3-409f-a6dc-fc371e46ab31" containerName="mariadb-account-create" Nov 25 11:55:35 crc kubenswrapper[4706]: I1125 11:55:35.176523 4706 state_mem.go:107] "Deleted CPUSet assignment" podUID="a3b54223-dba3-409f-a6dc-fc371e46ab31" containerName="mariadb-account-create" Nov 25 11:55:35 crc kubenswrapper[4706]: E1125 11:55:35.176531 4706 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2048b4c8-b4e2-4961-992e-4ab7104ca1d3" containerName="mariadb-account-create" Nov 25 11:55:35 crc kubenswrapper[4706]: I1125 11:55:35.176538 4706 state_mem.go:107] "Deleted CPUSet assignment" podUID="2048b4c8-b4e2-4961-992e-4ab7104ca1d3" containerName="mariadb-account-create" Nov 25 11:55:35 crc kubenswrapper[4706]: E1125 11:55:35.176546 4706 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="679831d3-04d7-4b95-8690-837698ce07f3" containerName="init" Nov 25 11:55:35 crc kubenswrapper[4706]: I1125 11:55:35.176552 4706 state_mem.go:107] "Deleted CPUSet assignment" podUID="679831d3-04d7-4b95-8690-837698ce07f3" containerName="init" Nov 25 11:55:35 crc kubenswrapper[4706]: E1125 11:55:35.176565 4706 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3ec71b1d-86a6-4028-959d-6097b0bc6ed2" containerName="keystone-db-sync" Nov 25 11:55:35 crc kubenswrapper[4706]: I1125 11:55:35.176573 4706 state_mem.go:107] "Deleted CPUSet assignment" podUID="3ec71b1d-86a6-4028-959d-6097b0bc6ed2" containerName="keystone-db-sync" Nov 25 11:55:35 crc kubenswrapper[4706]: I1125 11:55:35.176775 4706 memory_manager.go:354] "RemoveStaleState removing state" podUID="001d7afd-ffff-43e2-8463-3ebe29200b80" containerName="mariadb-account-create" Nov 25 11:55:35 crc kubenswrapper[4706]: I1125 11:55:35.176795 4706 memory_manager.go:354] "RemoveStaleState removing state" podUID="679831d3-04d7-4b95-8690-837698ce07f3" containerName="dnsmasq-dns" Nov 25 11:55:35 crc kubenswrapper[4706]: I1125 11:55:35.176808 4706 memory_manager.go:354] "RemoveStaleState removing state" podUID="054fda50-c263-45c4-9bde-2fc9d81c57b1" containerName="mariadb-database-create" Nov 25 11:55:35 crc kubenswrapper[4706]: I1125 11:55:35.176819 4706 memory_manager.go:354] "RemoveStaleState removing state" podUID="a3b54223-dba3-409f-a6dc-fc371e46ab31" containerName="mariadb-account-create" Nov 25 11:55:35 crc kubenswrapper[4706]: I1125 11:55:35.176828 4706 memory_manager.go:354] "RemoveStaleState removing state" podUID="4c2d1155-3724-4c94-a5fb-fcf88b53064e" containerName="mariadb-database-create" Nov 25 11:55:35 crc kubenswrapper[4706]: I1125 11:55:35.176841 4706 memory_manager.go:354] "RemoveStaleState removing state" podUID="7857166d-6bfe-4740-a310-ce20dc486ab2" containerName="dnsmasq-dns" Nov 25 11:55:35 crc kubenswrapper[4706]: I1125 11:55:35.176852 4706 memory_manager.go:354] "RemoveStaleState 
removing state" podUID="2048b4c8-b4e2-4961-992e-4ab7104ca1d3" containerName="mariadb-account-create" Nov 25 11:55:35 crc kubenswrapper[4706]: I1125 11:55:35.176861 4706 memory_manager.go:354] "RemoveStaleState removing state" podUID="562f2b9a-0768-4613-9711-8df28886eb32" containerName="mariadb-database-create" Nov 25 11:55:35 crc kubenswrapper[4706]: I1125 11:55:35.176870 4706 memory_manager.go:354] "RemoveStaleState removing state" podUID="3ec71b1d-86a6-4028-959d-6097b0bc6ed2" containerName="keystone-db-sync" Nov 25 11:55:35 crc kubenswrapper[4706]: I1125 11:55:35.182589 4706 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-847c4cc679-m2vpm" Nov 25 11:55:35 crc kubenswrapper[4706]: I1125 11:55:35.188981 4706 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-lslv5"] Nov 25 11:55:35 crc kubenswrapper[4706]: I1125 11:55:35.190677 4706 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-lslv5" Nov 25 11:55:35 crc kubenswrapper[4706]: I1125 11:55:35.192650 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Nov 25 11:55:35 crc kubenswrapper[4706]: I1125 11:55:35.193020 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Nov 25 11:55:35 crc kubenswrapper[4706]: I1125 11:55:35.193412 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Nov 25 11:55:35 crc kubenswrapper[4706]: I1125 11:55:35.193601 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Nov 25 11:55:35 crc kubenswrapper[4706]: I1125 11:55:35.193668 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-p74gc" Nov 25 11:55:35 crc kubenswrapper[4706]: I1125 11:55:35.195756 4706 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack/dnsmasq-dns-847c4cc679-m2vpm"] Nov 25 11:55:35 crc kubenswrapper[4706]: I1125 11:55:35.246546 4706 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-lslv5"] Nov 25 11:55:35 crc kubenswrapper[4706]: I1125 11:55:35.373847 4706 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-78549bf5d5-rtlzb"] Nov 25 11:55:35 crc kubenswrapper[4706]: I1125 11:55:35.375611 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fb5e4015-f047-4386-b88d-b7b0c2a0878b-config-data\") pod \"keystone-bootstrap-lslv5\" (UID: \"fb5e4015-f047-4386-b88d-b7b0c2a0878b\") " pod="openstack/keystone-bootstrap-lslv5" Nov 25 11:55:35 crc kubenswrapper[4706]: I1125 11:55:35.375804 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/3c5619c3-04a0-486b-9c75-201492f3a322-dns-swift-storage-0\") pod \"dnsmasq-dns-847c4cc679-m2vpm\" (UID: \"3c5619c3-04a0-486b-9c75-201492f3a322\") " pod="openstack/dnsmasq-dns-847c4cc679-m2vpm" Nov 25 11:55:35 crc kubenswrapper[4706]: I1125 11:55:35.375905 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-45q5t\" (UniqueName: \"kubernetes.io/projected/fb5e4015-f047-4386-b88d-b7b0c2a0878b-kube-api-access-45q5t\") pod \"keystone-bootstrap-lslv5\" (UID: \"fb5e4015-f047-4386-b88d-b7b0c2a0878b\") " pod="openstack/keystone-bootstrap-lslv5" Nov 25 11:55:35 crc kubenswrapper[4706]: I1125 11:55:35.376018 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fb5e4015-f047-4386-b88d-b7b0c2a0878b-combined-ca-bundle\") pod \"keystone-bootstrap-lslv5\" (UID: \"fb5e4015-f047-4386-b88d-b7b0c2a0878b\") " pod="openstack/keystone-bootstrap-lslv5" Nov 25 
11:55:35 crc kubenswrapper[4706]: I1125 11:55:35.376142 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fb5e4015-f047-4386-b88d-b7b0c2a0878b-scripts\") pod \"keystone-bootstrap-lslv5\" (UID: \"fb5e4015-f047-4386-b88d-b7b0c2a0878b\") " pod="openstack/keystone-bootstrap-lslv5" Nov 25 11:55:35 crc kubenswrapper[4706]: I1125 11:55:35.376226 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3c5619c3-04a0-486b-9c75-201492f3a322-dns-svc\") pod \"dnsmasq-dns-847c4cc679-m2vpm\" (UID: \"3c5619c3-04a0-486b-9c75-201492f3a322\") " pod="openstack/dnsmasq-dns-847c4cc679-m2vpm" Nov 25 11:55:35 crc kubenswrapper[4706]: I1125 11:55:35.376326 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3c5619c3-04a0-486b-9c75-201492f3a322-ovsdbserver-sb\") pod \"dnsmasq-dns-847c4cc679-m2vpm\" (UID: \"3c5619c3-04a0-486b-9c75-201492f3a322\") " pod="openstack/dnsmasq-dns-847c4cc679-m2vpm" Nov 25 11:55:35 crc kubenswrapper[4706]: I1125 11:55:35.376470 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3c5619c3-04a0-486b-9c75-201492f3a322-config\") pod \"dnsmasq-dns-847c4cc679-m2vpm\" (UID: \"3c5619c3-04a0-486b-9c75-201492f3a322\") " pod="openstack/dnsmasq-dns-847c4cc679-m2vpm" Nov 25 11:55:35 crc kubenswrapper[4706]: I1125 11:55:35.376581 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9864b\" (UniqueName: \"kubernetes.io/projected/3c5619c3-04a0-486b-9c75-201492f3a322-kube-api-access-9864b\") pod \"dnsmasq-dns-847c4cc679-m2vpm\" (UID: \"3c5619c3-04a0-486b-9c75-201492f3a322\") " pod="openstack/dnsmasq-dns-847c4cc679-m2vpm" Nov 25 
11:55:35 crc kubenswrapper[4706]: I1125 11:55:35.376665 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/fb5e4015-f047-4386-b88d-b7b0c2a0878b-fernet-keys\") pod \"keystone-bootstrap-lslv5\" (UID: \"fb5e4015-f047-4386-b88d-b7b0c2a0878b\") " pod="openstack/keystone-bootstrap-lslv5" Nov 25 11:55:35 crc kubenswrapper[4706]: I1125 11:55:35.376745 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/fb5e4015-f047-4386-b88d-b7b0c2a0878b-credential-keys\") pod \"keystone-bootstrap-lslv5\" (UID: \"fb5e4015-f047-4386-b88d-b7b0c2a0878b\") " pod="openstack/keystone-bootstrap-lslv5" Nov 25 11:55:35 crc kubenswrapper[4706]: I1125 11:55:35.376893 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3c5619c3-04a0-486b-9c75-201492f3a322-ovsdbserver-nb\") pod \"dnsmasq-dns-847c4cc679-m2vpm\" (UID: \"3c5619c3-04a0-486b-9c75-201492f3a322\") " pod="openstack/dnsmasq-dns-847c4cc679-m2vpm" Nov 25 11:55:35 crc kubenswrapper[4706]: I1125 11:55:35.378198 4706 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-78549bf5d5-rtlzb" Nov 25 11:55:35 crc kubenswrapper[4706]: I1125 11:55:35.381596 4706 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"horizon-scripts" Nov 25 11:55:35 crc kubenswrapper[4706]: I1125 11:55:35.383118 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"horizon" Nov 25 11:55:35 crc kubenswrapper[4706]: I1125 11:55:35.383352 4706 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"horizon-config-data" Nov 25 11:55:35 crc kubenswrapper[4706]: I1125 11:55:35.383607 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"horizon-horizon-dockercfg-hcfgv" Nov 25 11:55:35 crc kubenswrapper[4706]: I1125 11:55:35.422776 4706 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-78549bf5d5-rtlzb"] Nov 25 11:55:35 crc kubenswrapper[4706]: I1125 11:55:35.477989 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/cba2657d-39a9-4556-abec-412b63df6c94-scripts\") pod \"horizon-78549bf5d5-rtlzb\" (UID: \"cba2657d-39a9-4556-abec-412b63df6c94\") " pod="openstack/horizon-78549bf5d5-rtlzb" Nov 25 11:55:35 crc kubenswrapper[4706]: I1125 11:55:35.478037 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3c5619c3-04a0-486b-9c75-201492f3a322-config\") pod \"dnsmasq-dns-847c4cc679-m2vpm\" (UID: \"3c5619c3-04a0-486b-9c75-201492f3a322\") " pod="openstack/dnsmasq-dns-847c4cc679-m2vpm" Nov 25 11:55:35 crc kubenswrapper[4706]: I1125 11:55:35.478064 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/cba2657d-39a9-4556-abec-412b63df6c94-config-data\") pod \"horizon-78549bf5d5-rtlzb\" (UID: \"cba2657d-39a9-4556-abec-412b63df6c94\") " 
pod="openstack/horizon-78549bf5d5-rtlzb" Nov 25 11:55:35 crc kubenswrapper[4706]: I1125 11:55:35.478096 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p75q7\" (UniqueName: \"kubernetes.io/projected/cba2657d-39a9-4556-abec-412b63df6c94-kube-api-access-p75q7\") pod \"horizon-78549bf5d5-rtlzb\" (UID: \"cba2657d-39a9-4556-abec-412b63df6c94\") " pod="openstack/horizon-78549bf5d5-rtlzb" Nov 25 11:55:35 crc kubenswrapper[4706]: I1125 11:55:35.478120 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9864b\" (UniqueName: \"kubernetes.io/projected/3c5619c3-04a0-486b-9c75-201492f3a322-kube-api-access-9864b\") pod \"dnsmasq-dns-847c4cc679-m2vpm\" (UID: \"3c5619c3-04a0-486b-9c75-201492f3a322\") " pod="openstack/dnsmasq-dns-847c4cc679-m2vpm" Nov 25 11:55:35 crc kubenswrapper[4706]: I1125 11:55:35.478139 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/fb5e4015-f047-4386-b88d-b7b0c2a0878b-fernet-keys\") pod \"keystone-bootstrap-lslv5\" (UID: \"fb5e4015-f047-4386-b88d-b7b0c2a0878b\") " pod="openstack/keystone-bootstrap-lslv5" Nov 25 11:55:35 crc kubenswrapper[4706]: I1125 11:55:35.478165 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/fb5e4015-f047-4386-b88d-b7b0c2a0878b-credential-keys\") pod \"keystone-bootstrap-lslv5\" (UID: \"fb5e4015-f047-4386-b88d-b7b0c2a0878b\") " pod="openstack/keystone-bootstrap-lslv5" Nov 25 11:55:35 crc kubenswrapper[4706]: I1125 11:55:35.478186 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3c5619c3-04a0-486b-9c75-201492f3a322-ovsdbserver-nb\") pod \"dnsmasq-dns-847c4cc679-m2vpm\" (UID: \"3c5619c3-04a0-486b-9c75-201492f3a322\") " pod="openstack/dnsmasq-dns-847c4cc679-m2vpm" 
Nov 25 11:55:35 crc kubenswrapper[4706]: I1125 11:55:35.478220 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/cba2657d-39a9-4556-abec-412b63df6c94-horizon-secret-key\") pod \"horizon-78549bf5d5-rtlzb\" (UID: \"cba2657d-39a9-4556-abec-412b63df6c94\") " pod="openstack/horizon-78549bf5d5-rtlzb" Nov 25 11:55:35 crc kubenswrapper[4706]: I1125 11:55:35.478260 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fb5e4015-f047-4386-b88d-b7b0c2a0878b-config-data\") pod \"keystone-bootstrap-lslv5\" (UID: \"fb5e4015-f047-4386-b88d-b7b0c2a0878b\") " pod="openstack/keystone-bootstrap-lslv5" Nov 25 11:55:35 crc kubenswrapper[4706]: I1125 11:55:35.478326 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/3c5619c3-04a0-486b-9c75-201492f3a322-dns-swift-storage-0\") pod \"dnsmasq-dns-847c4cc679-m2vpm\" (UID: \"3c5619c3-04a0-486b-9c75-201492f3a322\") " pod="openstack/dnsmasq-dns-847c4cc679-m2vpm" Nov 25 11:55:35 crc kubenswrapper[4706]: I1125 11:55:35.479175 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/3c5619c3-04a0-486b-9c75-201492f3a322-dns-swift-storage-0\") pod \"dnsmasq-dns-847c4cc679-m2vpm\" (UID: \"3c5619c3-04a0-486b-9c75-201492f3a322\") " pod="openstack/dnsmasq-dns-847c4cc679-m2vpm" Nov 25 11:55:35 crc kubenswrapper[4706]: I1125 11:55:35.479751 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3c5619c3-04a0-486b-9c75-201492f3a322-config\") pod \"dnsmasq-dns-847c4cc679-m2vpm\" (UID: \"3c5619c3-04a0-486b-9c75-201492f3a322\") " pod="openstack/dnsmasq-dns-847c4cc679-m2vpm" Nov 25 11:55:35 crc kubenswrapper[4706]: I1125 11:55:35.480509 4706 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-45q5t\" (UniqueName: \"kubernetes.io/projected/fb5e4015-f047-4386-b88d-b7b0c2a0878b-kube-api-access-45q5t\") pod \"keystone-bootstrap-lslv5\" (UID: \"fb5e4015-f047-4386-b88d-b7b0c2a0878b\") " pod="openstack/keystone-bootstrap-lslv5" Nov 25 11:55:35 crc kubenswrapper[4706]: I1125 11:55:35.480607 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fb5e4015-f047-4386-b88d-b7b0c2a0878b-combined-ca-bundle\") pod \"keystone-bootstrap-lslv5\" (UID: \"fb5e4015-f047-4386-b88d-b7b0c2a0878b\") " pod="openstack/keystone-bootstrap-lslv5" Nov 25 11:55:35 crc kubenswrapper[4706]: I1125 11:55:35.480646 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fb5e4015-f047-4386-b88d-b7b0c2a0878b-scripts\") pod \"keystone-bootstrap-lslv5\" (UID: \"fb5e4015-f047-4386-b88d-b7b0c2a0878b\") " pod="openstack/keystone-bootstrap-lslv5" Nov 25 11:55:35 crc kubenswrapper[4706]: I1125 11:55:35.480680 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cba2657d-39a9-4556-abec-412b63df6c94-logs\") pod \"horizon-78549bf5d5-rtlzb\" (UID: \"cba2657d-39a9-4556-abec-412b63df6c94\") " pod="openstack/horizon-78549bf5d5-rtlzb" Nov 25 11:55:35 crc kubenswrapper[4706]: I1125 11:55:35.480707 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3c5619c3-04a0-486b-9c75-201492f3a322-dns-svc\") pod \"dnsmasq-dns-847c4cc679-m2vpm\" (UID: \"3c5619c3-04a0-486b-9c75-201492f3a322\") " pod="openstack/dnsmasq-dns-847c4cc679-m2vpm" Nov 25 11:55:35 crc kubenswrapper[4706]: I1125 11:55:35.480742 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" 
(UniqueName: \"kubernetes.io/configmap/3c5619c3-04a0-486b-9c75-201492f3a322-ovsdbserver-sb\") pod \"dnsmasq-dns-847c4cc679-m2vpm\" (UID: \"3c5619c3-04a0-486b-9c75-201492f3a322\") " pod="openstack/dnsmasq-dns-847c4cc679-m2vpm" Nov 25 11:55:35 crc kubenswrapper[4706]: I1125 11:55:35.481389 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3c5619c3-04a0-486b-9c75-201492f3a322-ovsdbserver-nb\") pod \"dnsmasq-dns-847c4cc679-m2vpm\" (UID: \"3c5619c3-04a0-486b-9c75-201492f3a322\") " pod="openstack/dnsmasq-dns-847c4cc679-m2vpm" Nov 25 11:55:35 crc kubenswrapper[4706]: I1125 11:55:35.481524 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3c5619c3-04a0-486b-9c75-201492f3a322-ovsdbserver-sb\") pod \"dnsmasq-dns-847c4cc679-m2vpm\" (UID: \"3c5619c3-04a0-486b-9c75-201492f3a322\") " pod="openstack/dnsmasq-dns-847c4cc679-m2vpm" Nov 25 11:55:35 crc kubenswrapper[4706]: I1125 11:55:35.483333 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3c5619c3-04a0-486b-9c75-201492f3a322-dns-svc\") pod \"dnsmasq-dns-847c4cc679-m2vpm\" (UID: \"3c5619c3-04a0-486b-9c75-201492f3a322\") " pod="openstack/dnsmasq-dns-847c4cc679-m2vpm" Nov 25 11:55:35 crc kubenswrapper[4706]: I1125 11:55:35.505226 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fb5e4015-f047-4386-b88d-b7b0c2a0878b-scripts\") pod \"keystone-bootstrap-lslv5\" (UID: \"fb5e4015-f047-4386-b88d-b7b0c2a0878b\") " pod="openstack/keystone-bootstrap-lslv5" Nov 25 11:55:35 crc kubenswrapper[4706]: I1125 11:55:35.527191 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fb5e4015-f047-4386-b88d-b7b0c2a0878b-combined-ca-bundle\") pod \"keystone-bootstrap-lslv5\" 
(UID: \"fb5e4015-f047-4386-b88d-b7b0c2a0878b\") " pod="openstack/keystone-bootstrap-lslv5" Nov 25 11:55:35 crc kubenswrapper[4706]: I1125 11:55:35.527714 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/fb5e4015-f047-4386-b88d-b7b0c2a0878b-fernet-keys\") pod \"keystone-bootstrap-lslv5\" (UID: \"fb5e4015-f047-4386-b88d-b7b0c2a0878b\") " pod="openstack/keystone-bootstrap-lslv5" Nov 25 11:55:35 crc kubenswrapper[4706]: I1125 11:55:35.528291 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fb5e4015-f047-4386-b88d-b7b0c2a0878b-config-data\") pod \"keystone-bootstrap-lslv5\" (UID: \"fb5e4015-f047-4386-b88d-b7b0c2a0878b\") " pod="openstack/keystone-bootstrap-lslv5" Nov 25 11:55:35 crc kubenswrapper[4706]: I1125 11:55:35.528650 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/fb5e4015-f047-4386-b88d-b7b0c2a0878b-credential-keys\") pod \"keystone-bootstrap-lslv5\" (UID: \"fb5e4015-f047-4386-b88d-b7b0c2a0878b\") " pod="openstack/keystone-bootstrap-lslv5" Nov 25 11:55:35 crc kubenswrapper[4706]: I1125 11:55:35.535366 4706 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-sync-fd7sf"] Nov 25 11:55:35 crc kubenswrapper[4706]: I1125 11:55:35.591766 4706 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-sync-hdbbw"] Nov 25 11:55:35 crc kubenswrapper[4706]: I1125 11:55:35.596579 4706 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-sync-fd7sf" Nov 25 11:55:35 crc kubenswrapper[4706]: I1125 11:55:35.600457 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9864b\" (UniqueName: \"kubernetes.io/projected/3c5619c3-04a0-486b-9c75-201492f3a322-kube-api-access-9864b\") pod \"dnsmasq-dns-847c4cc679-m2vpm\" (UID: \"3c5619c3-04a0-486b-9c75-201492f3a322\") " pod="openstack/dnsmasq-dns-847c4cc679-m2vpm" Nov 25 11:55:35 crc kubenswrapper[4706]: I1125 11:55:35.603341 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Nov 25 11:55:35 crc kubenswrapper[4706]: I1125 11:55:35.606650 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Nov 25 11:55:35 crc kubenswrapper[4706]: I1125 11:55:35.615659 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-n4npr" Nov 25 11:55:35 crc kubenswrapper[4706]: I1125 11:55:35.616230 4706 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-hdbbw" Nov 25 11:55:35 crc kubenswrapper[4706]: I1125 11:55:35.667813 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/cba2657d-39a9-4556-abec-412b63df6c94-config-data\") pod \"horizon-78549bf5d5-rtlzb\" (UID: \"cba2657d-39a9-4556-abec-412b63df6c94\") " pod="openstack/horizon-78549bf5d5-rtlzb" Nov 25 11:55:35 crc kubenswrapper[4706]: I1125 11:55:35.668037 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p75q7\" (UniqueName: \"kubernetes.io/projected/cba2657d-39a9-4556-abec-412b63df6c94-kube-api-access-p75q7\") pod \"horizon-78549bf5d5-rtlzb\" (UID: \"cba2657d-39a9-4556-abec-412b63df6c94\") " pod="openstack/horizon-78549bf5d5-rtlzb" Nov 25 11:55:35 crc kubenswrapper[4706]: I1125 11:55:35.668223 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/cba2657d-39a9-4556-abec-412b63df6c94-horizon-secret-key\") pod \"horizon-78549bf5d5-rtlzb\" (UID: \"cba2657d-39a9-4556-abec-412b63df6c94\") " pod="openstack/horizon-78549bf5d5-rtlzb" Nov 25 11:55:35 crc kubenswrapper[4706]: I1125 11:55:35.675179 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/cba2657d-39a9-4556-abec-412b63df6c94-horizon-secret-key\") pod \"horizon-78549bf5d5-rtlzb\" (UID: \"cba2657d-39a9-4556-abec-412b63df6c94\") " pod="openstack/horizon-78549bf5d5-rtlzb" Nov 25 11:55:35 crc kubenswrapper[4706]: I1125 11:55:35.675279 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-45q5t\" (UniqueName: \"kubernetes.io/projected/fb5e4015-f047-4386-b88d-b7b0c2a0878b-kube-api-access-45q5t\") pod \"keystone-bootstrap-lslv5\" (UID: \"fb5e4015-f047-4386-b88d-b7b0c2a0878b\") " pod="openstack/keystone-bootstrap-lslv5" Nov 25 11:55:35 
crc kubenswrapper[4706]: I1125 11:55:35.676487 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Nov 25 11:55:35 crc kubenswrapper[4706]: I1125 11:55:35.676637 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-5bbq6" Nov 25 11:55:35 crc kubenswrapper[4706]: I1125 11:55:35.676756 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Nov 25 11:55:35 crc kubenswrapper[4706]: I1125 11:55:35.678978 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/cba2657d-39a9-4556-abec-412b63df6c94-config-data\") pod \"horizon-78549bf5d5-rtlzb\" (UID: \"cba2657d-39a9-4556-abec-412b63df6c94\") " pod="openstack/horizon-78549bf5d5-rtlzb" Nov 25 11:55:35 crc kubenswrapper[4706]: I1125 11:55:35.679216 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cba2657d-39a9-4556-abec-412b63df6c94-logs\") pod \"horizon-78549bf5d5-rtlzb\" (UID: \"cba2657d-39a9-4556-abec-412b63df6c94\") " pod="openstack/horizon-78549bf5d5-rtlzb" Nov 25 11:55:35 crc kubenswrapper[4706]: I1125 11:55:35.679347 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/cba2657d-39a9-4556-abec-412b63df6c94-scripts\") pod \"horizon-78549bf5d5-rtlzb\" (UID: \"cba2657d-39a9-4556-abec-412b63df6c94\") " pod="openstack/horizon-78549bf5d5-rtlzb" Nov 25 11:55:35 crc kubenswrapper[4706]: I1125 11:55:35.679755 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cba2657d-39a9-4556-abec-412b63df6c94-logs\") pod \"horizon-78549bf5d5-rtlzb\" (UID: \"cba2657d-39a9-4556-abec-412b63df6c94\") " pod="openstack/horizon-78549bf5d5-rtlzb" Nov 25 11:55:35 crc kubenswrapper[4706]: I1125 11:55:35.680085 4706 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/cba2657d-39a9-4556-abec-412b63df6c94-scripts\") pod \"horizon-78549bf5d5-rtlzb\" (UID: \"cba2657d-39a9-4556-abec-412b63df6c94\") " pod="openstack/horizon-78549bf5d5-rtlzb" Nov 25 11:55:35 crc kubenswrapper[4706]: I1125 11:55:35.692882 4706 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-fd7sf"] Nov 25 11:55:35 crc kubenswrapper[4706]: I1125 11:55:35.701581 4706 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Nov 25 11:55:35 crc kubenswrapper[4706]: I1125 11:55:35.728494 4706 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 25 11:55:35 crc kubenswrapper[4706]: I1125 11:55:35.733816 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Nov 25 11:55:35 crc kubenswrapper[4706]: I1125 11:55:35.734075 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Nov 25 11:55:35 crc kubenswrapper[4706]: I1125 11:55:35.734187 4706 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-hdbbw"] Nov 25 11:55:35 crc kubenswrapper[4706]: I1125 11:55:35.737414 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p75q7\" (UniqueName: \"kubernetes.io/projected/cba2657d-39a9-4556-abec-412b63df6c94-kube-api-access-p75q7\") pod \"horizon-78549bf5d5-rtlzb\" (UID: \"cba2657d-39a9-4556-abec-412b63df6c94\") " pod="openstack/horizon-78549bf5d5-rtlzb" Nov 25 11:55:35 crc kubenswrapper[4706]: I1125 11:55:35.747039 4706 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 25 11:55:35 crc kubenswrapper[4706]: I1125 11:55:35.763707 4706 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-6899b4bd6f-vwrfh"] Nov 25 11:55:35 crc kubenswrapper[4706]: I1125 
11:55:35.765134 4706 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-6899b4bd6f-vwrfh" Nov 25 11:55:35 crc kubenswrapper[4706]: I1125 11:55:35.778362 4706 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-6899b4bd6f-vwrfh"] Nov 25 11:55:35 crc kubenswrapper[4706]: I1125 11:55:35.780363 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/27e5b2d0-6fcf-4fb5-8bc4-e086370f5eaf-config\") pod \"neutron-db-sync-hdbbw\" (UID: \"27e5b2d0-6fcf-4fb5-8bc4-e086370f5eaf\") " pod="openstack/neutron-db-sync-hdbbw" Nov 25 11:55:35 crc kubenswrapper[4706]: I1125 11:55:35.780437 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2dkcz\" (UniqueName: \"kubernetes.io/projected/424f303d-41b7-4fd6-be4a-017148ed95da-kube-api-access-2dkcz\") pod \"cinder-db-sync-fd7sf\" (UID: \"424f303d-41b7-4fd6-be4a-017148ed95da\") " pod="openstack/cinder-db-sync-fd7sf" Nov 25 11:55:35 crc kubenswrapper[4706]: I1125 11:55:35.780467 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/424f303d-41b7-4fd6-be4a-017148ed95da-etc-machine-id\") pod \"cinder-db-sync-fd7sf\" (UID: \"424f303d-41b7-4fd6-be4a-017148ed95da\") " pod="openstack/cinder-db-sync-fd7sf" Nov 25 11:55:35 crc kubenswrapper[4706]: I1125 11:55:35.780504 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/424f303d-41b7-4fd6-be4a-017148ed95da-config-data\") pod \"cinder-db-sync-fd7sf\" (UID: \"424f303d-41b7-4fd6-be4a-017148ed95da\") " pod="openstack/cinder-db-sync-fd7sf" Nov 25 11:55:35 crc kubenswrapper[4706]: I1125 11:55:35.780530 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/424f303d-41b7-4fd6-be4a-017148ed95da-combined-ca-bundle\") pod \"cinder-db-sync-fd7sf\" (UID: \"424f303d-41b7-4fd6-be4a-017148ed95da\") " pod="openstack/cinder-db-sync-fd7sf" Nov 25 11:55:35 crc kubenswrapper[4706]: I1125 11:55:35.780590 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/27e5b2d0-6fcf-4fb5-8bc4-e086370f5eaf-combined-ca-bundle\") pod \"neutron-db-sync-hdbbw\" (UID: \"27e5b2d0-6fcf-4fb5-8bc4-e086370f5eaf\") " pod="openstack/neutron-db-sync-hdbbw" Nov 25 11:55:35 crc kubenswrapper[4706]: I1125 11:55:35.780633 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/424f303d-41b7-4fd6-be4a-017148ed95da-scripts\") pod \"cinder-db-sync-fd7sf\" (UID: \"424f303d-41b7-4fd6-be4a-017148ed95da\") " pod="openstack/cinder-db-sync-fd7sf" Nov 25 11:55:35 crc kubenswrapper[4706]: I1125 11:55:35.780668 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6brcv\" (UniqueName: \"kubernetes.io/projected/27e5b2d0-6fcf-4fb5-8bc4-e086370f5eaf-kube-api-access-6brcv\") pod \"neutron-db-sync-hdbbw\" (UID: \"27e5b2d0-6fcf-4fb5-8bc4-e086370f5eaf\") " pod="openstack/neutron-db-sync-hdbbw" Nov 25 11:55:35 crc kubenswrapper[4706]: I1125 11:55:35.780766 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/424f303d-41b7-4fd6-be4a-017148ed95da-db-sync-config-data\") pod \"cinder-db-sync-fd7sf\" (UID: \"424f303d-41b7-4fd6-be4a-017148ed95da\") " pod="openstack/cinder-db-sync-fd7sf" Nov 25 11:55:35 crc kubenswrapper[4706]: I1125 11:55:35.802600 4706 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Nov 
25 11:55:35 crc kubenswrapper[4706]: I1125 11:55:35.804820 4706 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Nov 25 11:55:35 crc kubenswrapper[4706]: I1125 11:55:35.805547 4706 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-847c4cc679-m2vpm" Nov 25 11:55:35 crc kubenswrapper[4706]: I1125 11:55:35.821940 4706 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-sync-ntkr9"] Nov 25 11:55:35 crc kubenswrapper[4706]: I1125 11:55:35.823389 4706 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-ntkr9" Nov 25 11:55:35 crc kubenswrapper[4706]: I1125 11:55:35.828361 4706 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 25 11:55:35 crc kubenswrapper[4706]: I1125 11:55:35.828633 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts" Nov 25 11:55:35 crc kubenswrapper[4706]: I1125 11:55:35.828676 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Nov 25 11:55:35 crc kubenswrapper[4706]: I1125 11:55:35.828873 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Nov 25 11:55:35 crc kubenswrapper[4706]: I1125 11:55:35.829004 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-lblxg" Nov 25 11:55:35 crc kubenswrapper[4706]: I1125 11:55:35.829202 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-wfhgp" Nov 25 11:55:35 crc kubenswrapper[4706]: I1125 11:55:35.829398 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Nov 25 11:55:35 crc kubenswrapper[4706]: I1125 11:55:35.829528 4706 reflector.go:368] Caches populated for 
*v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Nov 25 11:55:35 crc kubenswrapper[4706]: I1125 11:55:35.829774 4706 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-lslv5" Nov 25 11:55:35 crc kubenswrapper[4706]: I1125 11:55:35.855826 4706 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-847c4cc679-m2vpm"] Nov 25 11:55:35 crc kubenswrapper[4706]: I1125 11:55:35.884583 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/424f303d-41b7-4fd6-be4a-017148ed95da-config-data\") pod \"cinder-db-sync-fd7sf\" (UID: \"424f303d-41b7-4fd6-be4a-017148ed95da\") " pod="openstack/cinder-db-sync-fd7sf" Nov 25 11:55:35 crc kubenswrapper[4706]: I1125 11:55:35.884624 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/424f303d-41b7-4fd6-be4a-017148ed95da-combined-ca-bundle\") pod \"cinder-db-sync-fd7sf\" (UID: \"424f303d-41b7-4fd6-be4a-017148ed95da\") " pod="openstack/cinder-db-sync-fd7sf" Nov 25 11:55:35 crc kubenswrapper[4706]: I1125 11:55:35.884675 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c785321d-b637-4f3a-9e69-bc237eb1e9c2-scripts\") pod \"horizon-6899b4bd6f-vwrfh\" (UID: \"c785321d-b637-4f3a-9e69-bc237eb1e9c2\") " pod="openstack/horizon-6899b4bd6f-vwrfh" Nov 25 11:55:35 crc kubenswrapper[4706]: I1125 11:55:35.884713 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/27e5b2d0-6fcf-4fb5-8bc4-e086370f5eaf-combined-ca-bundle\") pod \"neutron-db-sync-hdbbw\" (UID: \"27e5b2d0-6fcf-4fb5-8bc4-e086370f5eaf\") " pod="openstack/neutron-db-sync-hdbbw" Nov 25 11:55:35 crc kubenswrapper[4706]: I1125 11:55:35.884735 4706 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/424f303d-41b7-4fd6-be4a-017148ed95da-scripts\") pod \"cinder-db-sync-fd7sf\" (UID: \"424f303d-41b7-4fd6-be4a-017148ed95da\") " pod="openstack/cinder-db-sync-fd7sf" Nov 25 11:55:35 crc kubenswrapper[4706]: I1125 11:55:35.884767 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6brcv\" (UniqueName: \"kubernetes.io/projected/27e5b2d0-6fcf-4fb5-8bc4-e086370f5eaf-kube-api-access-6brcv\") pod \"neutron-db-sync-hdbbw\" (UID: \"27e5b2d0-6fcf-4fb5-8bc4-e086370f5eaf\") " pod="openstack/neutron-db-sync-hdbbw" Nov 25 11:55:35 crc kubenswrapper[4706]: I1125 11:55:35.884811 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/db4e7aed-28ec-49cd-8f0b-e01df112bf54-log-httpd\") pod \"ceilometer-0\" (UID: \"db4e7aed-28ec-49cd-8f0b-e01df112bf54\") " pod="openstack/ceilometer-0" Nov 25 11:55:35 crc kubenswrapper[4706]: I1125 11:55:35.884842 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/db4e7aed-28ec-49cd-8f0b-e01df112bf54-config-data\") pod \"ceilometer-0\" (UID: \"db4e7aed-28ec-49cd-8f0b-e01df112bf54\") " pod="openstack/ceilometer-0" Nov 25 11:55:35 crc kubenswrapper[4706]: I1125 11:55:35.884865 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c785321d-b637-4f3a-9e69-bc237eb1e9c2-logs\") pod \"horizon-6899b4bd6f-vwrfh\" (UID: \"c785321d-b637-4f3a-9e69-bc237eb1e9c2\") " pod="openstack/horizon-6899b4bd6f-vwrfh" Nov 25 11:55:35 crc kubenswrapper[4706]: I1125 11:55:35.884888 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/db4e7aed-28ec-49cd-8f0b-e01df112bf54-scripts\") pod \"ceilometer-0\" (UID: \"db4e7aed-28ec-49cd-8f0b-e01df112bf54\") " pod="openstack/ceilometer-0" Nov 25 11:55:35 crc kubenswrapper[4706]: I1125 11:55:35.884918 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/db4e7aed-28ec-49cd-8f0b-e01df112bf54-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"db4e7aed-28ec-49cd-8f0b-e01df112bf54\") " pod="openstack/ceilometer-0" Nov 25 11:55:35 crc kubenswrapper[4706]: I1125 11:55:35.884951 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/db4e7aed-28ec-49cd-8f0b-e01df112bf54-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"db4e7aed-28ec-49cd-8f0b-e01df112bf54\") " pod="openstack/ceilometer-0" Nov 25 11:55:35 crc kubenswrapper[4706]: I1125 11:55:35.884985 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/424f303d-41b7-4fd6-be4a-017148ed95da-db-sync-config-data\") pod \"cinder-db-sync-fd7sf\" (UID: \"424f303d-41b7-4fd6-be4a-017148ed95da\") " pod="openstack/cinder-db-sync-fd7sf" Nov 25 11:55:35 crc kubenswrapper[4706]: I1125 11:55:35.885033 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/27e5b2d0-6fcf-4fb5-8bc4-e086370f5eaf-config\") pod \"neutron-db-sync-hdbbw\" (UID: \"27e5b2d0-6fcf-4fb5-8bc4-e086370f5eaf\") " pod="openstack/neutron-db-sync-hdbbw" Nov 25 11:55:35 crc kubenswrapper[4706]: I1125 11:55:35.885065 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/c785321d-b637-4f3a-9e69-bc237eb1e9c2-config-data\") pod \"horizon-6899b4bd6f-vwrfh\" (UID: 
\"c785321d-b637-4f3a-9e69-bc237eb1e9c2\") " pod="openstack/horizon-6899b4bd6f-vwrfh" Nov 25 11:55:35 crc kubenswrapper[4706]: I1125 11:55:35.885093 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bn5w7\" (UniqueName: \"kubernetes.io/projected/c785321d-b637-4f3a-9e69-bc237eb1e9c2-kube-api-access-bn5w7\") pod \"horizon-6899b4bd6f-vwrfh\" (UID: \"c785321d-b637-4f3a-9e69-bc237eb1e9c2\") " pod="openstack/horizon-6899b4bd6f-vwrfh" Nov 25 11:55:35 crc kubenswrapper[4706]: I1125 11:55:35.885123 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2dkcz\" (UniqueName: \"kubernetes.io/projected/424f303d-41b7-4fd6-be4a-017148ed95da-kube-api-access-2dkcz\") pod \"cinder-db-sync-fd7sf\" (UID: \"424f303d-41b7-4fd6-be4a-017148ed95da\") " pod="openstack/cinder-db-sync-fd7sf" Nov 25 11:55:35 crc kubenswrapper[4706]: I1125 11:55:35.885147 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/424f303d-41b7-4fd6-be4a-017148ed95da-etc-machine-id\") pod \"cinder-db-sync-fd7sf\" (UID: \"424f303d-41b7-4fd6-be4a-017148ed95da\") " pod="openstack/cinder-db-sync-fd7sf" Nov 25 11:55:35 crc kubenswrapper[4706]: I1125 11:55:35.885169 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fsmhl\" (UniqueName: \"kubernetes.io/projected/db4e7aed-28ec-49cd-8f0b-e01df112bf54-kube-api-access-fsmhl\") pod \"ceilometer-0\" (UID: \"db4e7aed-28ec-49cd-8f0b-e01df112bf54\") " pod="openstack/ceilometer-0" Nov 25 11:55:35 crc kubenswrapper[4706]: I1125 11:55:35.885194 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/db4e7aed-28ec-49cd-8f0b-e01df112bf54-run-httpd\") pod \"ceilometer-0\" (UID: \"db4e7aed-28ec-49cd-8f0b-e01df112bf54\") " 
pod="openstack/ceilometer-0" Nov 25 11:55:35 crc kubenswrapper[4706]: I1125 11:55:35.885553 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/c785321d-b637-4f3a-9e69-bc237eb1e9c2-horizon-secret-key\") pod \"horizon-6899b4bd6f-vwrfh\" (UID: \"c785321d-b637-4f3a-9e69-bc237eb1e9c2\") " pod="openstack/horizon-6899b4bd6f-vwrfh" Nov 25 11:55:35 crc kubenswrapper[4706]: I1125 11:55:35.886354 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/424f303d-41b7-4fd6-be4a-017148ed95da-etc-machine-id\") pod \"cinder-db-sync-fd7sf\" (UID: \"424f303d-41b7-4fd6-be4a-017148ed95da\") " pod="openstack/cinder-db-sync-fd7sf" Nov 25 11:55:35 crc kubenswrapper[4706]: I1125 11:55:35.900800 4706 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-ntkr9"] Nov 25 11:55:35 crc kubenswrapper[4706]: I1125 11:55:35.901621 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/27e5b2d0-6fcf-4fb5-8bc4-e086370f5eaf-combined-ca-bundle\") pod \"neutron-db-sync-hdbbw\" (UID: \"27e5b2d0-6fcf-4fb5-8bc4-e086370f5eaf\") " pod="openstack/neutron-db-sync-hdbbw" Nov 25 11:55:35 crc kubenswrapper[4706]: I1125 11:55:35.901803 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/424f303d-41b7-4fd6-be4a-017148ed95da-db-sync-config-data\") pod \"cinder-db-sync-fd7sf\" (UID: \"424f303d-41b7-4fd6-be4a-017148ed95da\") " pod="openstack/cinder-db-sync-fd7sf" Nov 25 11:55:35 crc kubenswrapper[4706]: I1125 11:55:35.902420 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/424f303d-41b7-4fd6-be4a-017148ed95da-config-data\") pod \"cinder-db-sync-fd7sf\" (UID: 
\"424f303d-41b7-4fd6-be4a-017148ed95da\") " pod="openstack/cinder-db-sync-fd7sf" Nov 25 11:55:35 crc kubenswrapper[4706]: I1125 11:55:35.902696 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/424f303d-41b7-4fd6-be4a-017148ed95da-combined-ca-bundle\") pod \"cinder-db-sync-fd7sf\" (UID: \"424f303d-41b7-4fd6-be4a-017148ed95da\") " pod="openstack/cinder-db-sync-fd7sf" Nov 25 11:55:35 crc kubenswrapper[4706]: I1125 11:55:35.903985 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/27e5b2d0-6fcf-4fb5-8bc4-e086370f5eaf-config\") pod \"neutron-db-sync-hdbbw\" (UID: \"27e5b2d0-6fcf-4fb5-8bc4-e086370f5eaf\") " pod="openstack/neutron-db-sync-hdbbw" Nov 25 11:55:35 crc kubenswrapper[4706]: I1125 11:55:35.904129 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/424f303d-41b7-4fd6-be4a-017148ed95da-scripts\") pod \"cinder-db-sync-fd7sf\" (UID: \"424f303d-41b7-4fd6-be4a-017148ed95da\") " pod="openstack/cinder-db-sync-fd7sf" Nov 25 11:55:35 crc kubenswrapper[4706]: I1125 11:55:35.906511 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6brcv\" (UniqueName: \"kubernetes.io/projected/27e5b2d0-6fcf-4fb5-8bc4-e086370f5eaf-kube-api-access-6brcv\") pod \"neutron-db-sync-hdbbw\" (UID: \"27e5b2d0-6fcf-4fb5-8bc4-e086370f5eaf\") " pod="openstack/neutron-db-sync-hdbbw" Nov 25 11:55:35 crc kubenswrapper[4706]: I1125 11:55:35.907175 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2dkcz\" (UniqueName: \"kubernetes.io/projected/424f303d-41b7-4fd6-be4a-017148ed95da-kube-api-access-2dkcz\") pod \"cinder-db-sync-fd7sf\" (UID: \"424f303d-41b7-4fd6-be4a-017148ed95da\") " pod="openstack/cinder-db-sync-fd7sf" Nov 25 11:55:35 crc kubenswrapper[4706]: I1125 11:55:35.917206 4706 kubelet.go:2421] 
"SyncLoop ADD" source="api" pods=["openstack/barbican-db-sync-v6lvb"] Nov 25 11:55:35 crc kubenswrapper[4706]: I1125 11:55:35.918711 4706 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-v6lvb" Nov 25 11:55:35 crc kubenswrapper[4706]: I1125 11:55:35.930000 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-whr6h" Nov 25 11:55:35 crc kubenswrapper[4706]: I1125 11:55:35.930548 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Nov 25 11:55:35 crc kubenswrapper[4706]: I1125 11:55:35.934796 4706 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-v6lvb"] Nov 25 11:55:35 crc kubenswrapper[4706]: I1125 11:55:35.947442 4706 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-785d8bcb8c-vhqcg"] Nov 25 11:55:35 crc kubenswrapper[4706]: I1125 11:55:35.953223 4706 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-785d8bcb8c-vhqcg" Nov 25 11:55:35 crc kubenswrapper[4706]: I1125 11:55:35.966425 4706 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 25 11:55:35 crc kubenswrapper[4706]: I1125 11:55:35.968085 4706 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Nov 25 11:55:35 crc kubenswrapper[4706]: I1125 11:55:35.975074 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Nov 25 11:55:35 crc kubenswrapper[4706]: I1125 11:55:35.975309 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Nov 25 11:55:35 crc kubenswrapper[4706]: I1125 11:55:35.987378 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fff3e0d5-0608-4e15-9a92-376b6a2b7d17-scripts\") pod \"placement-db-sync-ntkr9\" (UID: \"fff3e0d5-0608-4e15-9a92-376b6a2b7d17\") " pod="openstack/placement-db-sync-ntkr9" Nov 25 11:55:35 crc kubenswrapper[4706]: I1125 11:55:35.987420 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c785321d-b637-4f3a-9e69-bc237eb1e9c2-scripts\") pod \"horizon-6899b4bd6f-vwrfh\" (UID: \"c785321d-b637-4f3a-9e69-bc237eb1e9c2\") " pod="openstack/horizon-6899b4bd6f-vwrfh" Nov 25 11:55:35 crc kubenswrapper[4706]: I1125 11:55:35.987453 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"glance-default-external-api-0\" (UID: \"e0c3d5f1-1ac9-4f5f-bef2-232cf6055061\") " pod="openstack/glance-default-external-api-0" Nov 25 11:55:35 crc kubenswrapper[4706]: I1125 11:55:35.987476 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e0c3d5f1-1ac9-4f5f-bef2-232cf6055061-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"e0c3d5f1-1ac9-4f5f-bef2-232cf6055061\") " pod="openstack/glance-default-external-api-0" Nov 25 11:55:35 crc 
kubenswrapper[4706]: I1125 11:55:35.988173 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e0c3d5f1-1ac9-4f5f-bef2-232cf6055061-config-data\") pod \"glance-default-external-api-0\" (UID: \"e0c3d5f1-1ac9-4f5f-bef2-232cf6055061\") " pod="openstack/glance-default-external-api-0" Nov 25 11:55:35 crc kubenswrapper[4706]: I1125 11:55:35.988226 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e0c3d5f1-1ac9-4f5f-bef2-232cf6055061-logs\") pod \"glance-default-external-api-0\" (UID: \"e0c3d5f1-1ac9-4f5f-bef2-232cf6055061\") " pod="openstack/glance-default-external-api-0" Nov 25 11:55:35 crc kubenswrapper[4706]: I1125 11:55:35.988263 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/db4e7aed-28ec-49cd-8f0b-e01df112bf54-log-httpd\") pod \"ceilometer-0\" (UID: \"db4e7aed-28ec-49cd-8f0b-e01df112bf54\") " pod="openstack/ceilometer-0" Nov 25 11:55:35 crc kubenswrapper[4706]: I1125 11:55:35.988293 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/e0c3d5f1-1ac9-4f5f-bef2-232cf6055061-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"e0c3d5f1-1ac9-4f5f-bef2-232cf6055061\") " pod="openstack/glance-default-external-api-0" Nov 25 11:55:35 crc kubenswrapper[4706]: I1125 11:55:35.988337 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/db4e7aed-28ec-49cd-8f0b-e01df112bf54-config-data\") pod \"ceilometer-0\" (UID: \"db4e7aed-28ec-49cd-8f0b-e01df112bf54\") " pod="openstack/ceilometer-0" Nov 25 11:55:35 crc kubenswrapper[4706]: I1125 11:55:35.988358 4706 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c785321d-b637-4f3a-9e69-bc237eb1e9c2-logs\") pod \"horizon-6899b4bd6f-vwrfh\" (UID: \"c785321d-b637-4f3a-9e69-bc237eb1e9c2\") " pod="openstack/horizon-6899b4bd6f-vwrfh" Nov 25 11:55:35 crc kubenswrapper[4706]: I1125 11:55:35.988387 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fff3e0d5-0608-4e15-9a92-376b6a2b7d17-combined-ca-bundle\") pod \"placement-db-sync-ntkr9\" (UID: \"fff3e0d5-0608-4e15-9a92-376b6a2b7d17\") " pod="openstack/placement-db-sync-ntkr9" Nov 25 11:55:35 crc kubenswrapper[4706]: I1125 11:55:35.988411 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/db4e7aed-28ec-49cd-8f0b-e01df112bf54-scripts\") pod \"ceilometer-0\" (UID: \"db4e7aed-28ec-49cd-8f0b-e01df112bf54\") " pod="openstack/ceilometer-0" Nov 25 11:55:35 crc kubenswrapper[4706]: I1125 11:55:35.988430 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c25zl\" (UniqueName: \"kubernetes.io/projected/fff3e0d5-0608-4e15-9a92-376b6a2b7d17-kube-api-access-c25zl\") pod \"placement-db-sync-ntkr9\" (UID: \"fff3e0d5-0608-4e15-9a92-376b6a2b7d17\") " pod="openstack/placement-db-sync-ntkr9" Nov 25 11:55:35 crc kubenswrapper[4706]: I1125 11:55:35.988450 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/db4e7aed-28ec-49cd-8f0b-e01df112bf54-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"db4e7aed-28ec-49cd-8f0b-e01df112bf54\") " pod="openstack/ceilometer-0" Nov 25 11:55:35 crc kubenswrapper[4706]: I1125 11:55:35.988476 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: 
\"kubernetes.io/secret/db4e7aed-28ec-49cd-8f0b-e01df112bf54-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"db4e7aed-28ec-49cd-8f0b-e01df112bf54\") " pod="openstack/ceilometer-0" Nov 25 11:55:35 crc kubenswrapper[4706]: I1125 11:55:35.988498 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e0c3d5f1-1ac9-4f5f-bef2-232cf6055061-scripts\") pod \"glance-default-external-api-0\" (UID: \"e0c3d5f1-1ac9-4f5f-bef2-232cf6055061\") " pod="openstack/glance-default-external-api-0" Nov 25 11:55:35 crc kubenswrapper[4706]: I1125 11:55:35.988537 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/c785321d-b637-4f3a-9e69-bc237eb1e9c2-config-data\") pod \"horizon-6899b4bd6f-vwrfh\" (UID: \"c785321d-b637-4f3a-9e69-bc237eb1e9c2\") " pod="openstack/horizon-6899b4bd6f-vwrfh" Nov 25 11:55:35 crc kubenswrapper[4706]: I1125 11:55:35.988556 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bn5w7\" (UniqueName: \"kubernetes.io/projected/c785321d-b637-4f3a-9e69-bc237eb1e9c2-kube-api-access-bn5w7\") pod \"horizon-6899b4bd6f-vwrfh\" (UID: \"c785321d-b637-4f3a-9e69-bc237eb1e9c2\") " pod="openstack/horizon-6899b4bd6f-vwrfh" Nov 25 11:55:35 crc kubenswrapper[4706]: I1125 11:55:35.988576 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fff3e0d5-0608-4e15-9a92-376b6a2b7d17-logs\") pod \"placement-db-sync-ntkr9\" (UID: \"fff3e0d5-0608-4e15-9a92-376b6a2b7d17\") " pod="openstack/placement-db-sync-ntkr9" Nov 25 11:55:35 crc kubenswrapper[4706]: I1125 11:55:35.988592 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e0c3d5f1-1ac9-4f5f-bef2-232cf6055061-public-tls-certs\") 
pod \"glance-default-external-api-0\" (UID: \"e0c3d5f1-1ac9-4f5f-bef2-232cf6055061\") " pod="openstack/glance-default-external-api-0" Nov 25 11:55:35 crc kubenswrapper[4706]: I1125 11:55:35.988614 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fsmhl\" (UniqueName: \"kubernetes.io/projected/db4e7aed-28ec-49cd-8f0b-e01df112bf54-kube-api-access-fsmhl\") pod \"ceilometer-0\" (UID: \"db4e7aed-28ec-49cd-8f0b-e01df112bf54\") " pod="openstack/ceilometer-0" Nov 25 11:55:35 crc kubenswrapper[4706]: I1125 11:55:35.988630 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fff3e0d5-0608-4e15-9a92-376b6a2b7d17-config-data\") pod \"placement-db-sync-ntkr9\" (UID: \"fff3e0d5-0608-4e15-9a92-376b6a2b7d17\") " pod="openstack/placement-db-sync-ntkr9" Nov 25 11:55:35 crc kubenswrapper[4706]: I1125 11:55:35.988646 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/db4e7aed-28ec-49cd-8f0b-e01df112bf54-run-httpd\") pod \"ceilometer-0\" (UID: \"db4e7aed-28ec-49cd-8f0b-e01df112bf54\") " pod="openstack/ceilometer-0" Nov 25 11:55:35 crc kubenswrapper[4706]: I1125 11:55:35.988660 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/c785321d-b637-4f3a-9e69-bc237eb1e9c2-horizon-secret-key\") pod \"horizon-6899b4bd6f-vwrfh\" (UID: \"c785321d-b637-4f3a-9e69-bc237eb1e9c2\") " pod="openstack/horizon-6899b4bd6f-vwrfh" Nov 25 11:55:35 crc kubenswrapper[4706]: I1125 11:55:35.988678 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g2cpz\" (UniqueName: \"kubernetes.io/projected/e0c3d5f1-1ac9-4f5f-bef2-232cf6055061-kube-api-access-g2cpz\") pod \"glance-default-external-api-0\" (UID: 
\"e0c3d5f1-1ac9-4f5f-bef2-232cf6055061\") " pod="openstack/glance-default-external-api-0" Nov 25 11:55:35 crc kubenswrapper[4706]: I1125 11:55:35.990423 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c785321d-b637-4f3a-9e69-bc237eb1e9c2-scripts\") pod \"horizon-6899b4bd6f-vwrfh\" (UID: \"c785321d-b637-4f3a-9e69-bc237eb1e9c2\") " pod="openstack/horizon-6899b4bd6f-vwrfh" Nov 25 11:55:35 crc kubenswrapper[4706]: I1125 11:55:35.990929 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/db4e7aed-28ec-49cd-8f0b-e01df112bf54-log-httpd\") pod \"ceilometer-0\" (UID: \"db4e7aed-28ec-49cd-8f0b-e01df112bf54\") " pod="openstack/ceilometer-0" Nov 25 11:55:35 crc kubenswrapper[4706]: I1125 11:55:35.999590 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/c785321d-b637-4f3a-9e69-bc237eb1e9c2-config-data\") pod \"horizon-6899b4bd6f-vwrfh\" (UID: \"c785321d-b637-4f3a-9e69-bc237eb1e9c2\") " pod="openstack/horizon-6899b4bd6f-vwrfh" Nov 25 11:55:36 crc kubenswrapper[4706]: I1125 11:55:36.000159 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c785321d-b637-4f3a-9e69-bc237eb1e9c2-logs\") pod \"horizon-6899b4bd6f-vwrfh\" (UID: \"c785321d-b637-4f3a-9e69-bc237eb1e9c2\") " pod="openstack/horizon-6899b4bd6f-vwrfh" Nov 25 11:55:36 crc kubenswrapper[4706]: I1125 11:55:36.000208 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/db4e7aed-28ec-49cd-8f0b-e01df112bf54-run-httpd\") pod \"ceilometer-0\" (UID: \"db4e7aed-28ec-49cd-8f0b-e01df112bf54\") " pod="openstack/ceilometer-0" Nov 25 11:55:36 crc kubenswrapper[4706]: I1125 11:55:36.003403 4706 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack/dnsmasq-dns-785d8bcb8c-vhqcg"] Nov 25 11:55:36 crc kubenswrapper[4706]: I1125 11:55:36.007179 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/db4e7aed-28ec-49cd-8f0b-e01df112bf54-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"db4e7aed-28ec-49cd-8f0b-e01df112bf54\") " pod="openstack/ceilometer-0" Nov 25 11:55:36 crc kubenswrapper[4706]: I1125 11:55:36.008105 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/db4e7aed-28ec-49cd-8f0b-e01df112bf54-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"db4e7aed-28ec-49cd-8f0b-e01df112bf54\") " pod="openstack/ceilometer-0" Nov 25 11:55:36 crc kubenswrapper[4706]: I1125 11:55:36.009733 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/db4e7aed-28ec-49cd-8f0b-e01df112bf54-scripts\") pod \"ceilometer-0\" (UID: \"db4e7aed-28ec-49cd-8f0b-e01df112bf54\") " pod="openstack/ceilometer-0" Nov 25 11:55:36 crc kubenswrapper[4706]: I1125 11:55:36.012988 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/db4e7aed-28ec-49cd-8f0b-e01df112bf54-config-data\") pod \"ceilometer-0\" (UID: \"db4e7aed-28ec-49cd-8f0b-e01df112bf54\") " pod="openstack/ceilometer-0" Nov 25 11:55:36 crc kubenswrapper[4706]: I1125 11:55:36.015482 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/c785321d-b637-4f3a-9e69-bc237eb1e9c2-horizon-secret-key\") pod \"horizon-6899b4bd6f-vwrfh\" (UID: \"c785321d-b637-4f3a-9e69-bc237eb1e9c2\") " pod="openstack/horizon-6899b4bd6f-vwrfh" Nov 25 11:55:36 crc kubenswrapper[4706]: I1125 11:55:36.021737 4706 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-78549bf5d5-rtlzb" Nov 25 11:55:36 crc kubenswrapper[4706]: I1125 11:55:36.033161 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fsmhl\" (UniqueName: \"kubernetes.io/projected/db4e7aed-28ec-49cd-8f0b-e01df112bf54-kube-api-access-fsmhl\") pod \"ceilometer-0\" (UID: \"db4e7aed-28ec-49cd-8f0b-e01df112bf54\") " pod="openstack/ceilometer-0" Nov 25 11:55:36 crc kubenswrapper[4706]: I1125 11:55:36.033687 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bn5w7\" (UniqueName: \"kubernetes.io/projected/c785321d-b637-4f3a-9e69-bc237eb1e9c2-kube-api-access-bn5w7\") pod \"horizon-6899b4bd6f-vwrfh\" (UID: \"c785321d-b637-4f3a-9e69-bc237eb1e9c2\") " pod="openstack/horizon-6899b4bd6f-vwrfh" Nov 25 11:55:36 crc kubenswrapper[4706]: I1125 11:55:36.042113 4706 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 25 11:55:36 crc kubenswrapper[4706]: I1125 11:55:36.082984 4706 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-sync-fd7sf" Nov 25 11:55:36 crc kubenswrapper[4706]: I1125 11:55:36.092016 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/dc6d1720-c37f-4501-bbb1-16f507bc1126-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"dc6d1720-c37f-4501-bbb1-16f507bc1126\") " pod="openstack/glance-default-internal-api-0" Nov 25 11:55:36 crc kubenswrapper[4706]: I1125 11:55:36.092067 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dc6d1720-c37f-4501-bbb1-16f507bc1126-config-data\") pod \"glance-default-internal-api-0\" (UID: \"dc6d1720-c37f-4501-bbb1-16f507bc1126\") " pod="openstack/glance-default-internal-api-0" Nov 25 11:55:36 crc kubenswrapper[4706]: I1125 11:55:36.092109 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e0c3d5f1-1ac9-4f5f-bef2-232cf6055061-logs\") pod \"glance-default-external-api-0\" (UID: \"e0c3d5f1-1ac9-4f5f-bef2-232cf6055061\") " pod="openstack/glance-default-external-api-0" Nov 25 11:55:36 crc kubenswrapper[4706]: I1125 11:55:36.092136 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/08ef6ec0-ba09-40a2-94d0-a1ddbba8644a-combined-ca-bundle\") pod \"barbican-db-sync-v6lvb\" (UID: \"08ef6ec0-ba09-40a2-94d0-a1ddbba8644a\") " pod="openstack/barbican-db-sync-v6lvb" Nov 25 11:55:36 crc kubenswrapper[4706]: I1125 11:55:36.092179 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/e0c3d5f1-1ac9-4f5f-bef2-232cf6055061-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"e0c3d5f1-1ac9-4f5f-bef2-232cf6055061\") " 
pod="openstack/glance-default-external-api-0" Nov 25 11:55:36 crc kubenswrapper[4706]: I1125 11:55:36.092199 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dc6d1720-c37f-4501-bbb1-16f507bc1126-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"dc6d1720-c37f-4501-bbb1-16f507bc1126\") " pod="openstack/glance-default-internal-api-0" Nov 25 11:55:36 crc kubenswrapper[4706]: I1125 11:55:36.092216 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fff3e0d5-0608-4e15-9a92-376b6a2b7d17-combined-ca-bundle\") pod \"placement-db-sync-ntkr9\" (UID: \"fff3e0d5-0608-4e15-9a92-376b6a2b7d17\") " pod="openstack/placement-db-sync-ntkr9" Nov 25 11:55:36 crc kubenswrapper[4706]: I1125 11:55:36.092240 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c25zl\" (UniqueName: \"kubernetes.io/projected/fff3e0d5-0608-4e15-9a92-376b6a2b7d17-kube-api-access-c25zl\") pod \"placement-db-sync-ntkr9\" (UID: \"fff3e0d5-0608-4e15-9a92-376b6a2b7d17\") " pod="openstack/placement-db-sync-ntkr9" Nov 25 11:55:36 crc kubenswrapper[4706]: I1125 11:55:36.092272 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zgwhj\" (UniqueName: \"kubernetes.io/projected/3e3d141e-c4bd-479f-998d-a3ecfcf87156-kube-api-access-zgwhj\") pod \"dnsmasq-dns-785d8bcb8c-vhqcg\" (UID: \"3e3d141e-c4bd-479f-998d-a3ecfcf87156\") " pod="openstack/dnsmasq-dns-785d8bcb8c-vhqcg" Nov 25 11:55:36 crc kubenswrapper[4706]: I1125 11:55:36.092341 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9bczs\" (UniqueName: \"kubernetes.io/projected/dc6d1720-c37f-4501-bbb1-16f507bc1126-kube-api-access-9bczs\") pod \"glance-default-internal-api-0\" (UID: 
\"dc6d1720-c37f-4501-bbb1-16f507bc1126\") " pod="openstack/glance-default-internal-api-0" Nov 25 11:55:36 crc kubenswrapper[4706]: I1125 11:55:36.092369 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e0c3d5f1-1ac9-4f5f-bef2-232cf6055061-scripts\") pod \"glance-default-external-api-0\" (UID: \"e0c3d5f1-1ac9-4f5f-bef2-232cf6055061\") " pod="openstack/glance-default-external-api-0" Nov 25 11:55:36 crc kubenswrapper[4706]: I1125 11:55:36.092414 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3e3d141e-c4bd-479f-998d-a3ecfcf87156-dns-svc\") pod \"dnsmasq-dns-785d8bcb8c-vhqcg\" (UID: \"3e3d141e-c4bd-479f-998d-a3ecfcf87156\") " pod="openstack/dnsmasq-dns-785d8bcb8c-vhqcg" Nov 25 11:55:36 crc kubenswrapper[4706]: I1125 11:55:36.092452 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dc6d1720-c37f-4501-bbb1-16f507bc1126-scripts\") pod \"glance-default-internal-api-0\" (UID: \"dc6d1720-c37f-4501-bbb1-16f507bc1126\") " pod="openstack/glance-default-internal-api-0" Nov 25 11:55:36 crc kubenswrapper[4706]: I1125 11:55:36.092603 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e0c3d5f1-1ac9-4f5f-bef2-232cf6055061-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"e0c3d5f1-1ac9-4f5f-bef2-232cf6055061\") " pod="openstack/glance-default-external-api-0" Nov 25 11:55:36 crc kubenswrapper[4706]: I1125 11:55:36.092625 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fff3e0d5-0608-4e15-9a92-376b6a2b7d17-logs\") pod \"placement-db-sync-ntkr9\" (UID: \"fff3e0d5-0608-4e15-9a92-376b6a2b7d17\") " 
pod="openstack/placement-db-sync-ntkr9" Nov 25 11:55:36 crc kubenswrapper[4706]: I1125 11:55:36.092645 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dc6d1720-c37f-4501-bbb1-16f507bc1126-logs\") pod \"glance-default-internal-api-0\" (UID: \"dc6d1720-c37f-4501-bbb1-16f507bc1126\") " pod="openstack/glance-default-internal-api-0" Nov 25 11:55:36 crc kubenswrapper[4706]: I1125 11:55:36.092669 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/3e3d141e-c4bd-479f-998d-a3ecfcf87156-dns-swift-storage-0\") pod \"dnsmasq-dns-785d8bcb8c-vhqcg\" (UID: \"3e3d141e-c4bd-479f-998d-a3ecfcf87156\") " pod="openstack/dnsmasq-dns-785d8bcb8c-vhqcg" Nov 25 11:55:36 crc kubenswrapper[4706]: I1125 11:55:36.092686 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fff3e0d5-0608-4e15-9a92-376b6a2b7d17-config-data\") pod \"placement-db-sync-ntkr9\" (UID: \"fff3e0d5-0608-4e15-9a92-376b6a2b7d17\") " pod="openstack/placement-db-sync-ntkr9" Nov 25 11:55:36 crc kubenswrapper[4706]: I1125 11:55:36.092706 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/dc6d1720-c37f-4501-bbb1-16f507bc1126-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"dc6d1720-c37f-4501-bbb1-16f507bc1126\") " pod="openstack/glance-default-internal-api-0" Nov 25 11:55:36 crc kubenswrapper[4706]: I1125 11:55:36.092728 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g2cpz\" (UniqueName: \"kubernetes.io/projected/e0c3d5f1-1ac9-4f5f-bef2-232cf6055061-kube-api-access-g2cpz\") pod \"glance-default-external-api-0\" (UID: \"e0c3d5f1-1ac9-4f5f-bef2-232cf6055061\") " 
pod="openstack/glance-default-external-api-0" Nov 25 11:55:36 crc kubenswrapper[4706]: I1125 11:55:36.092749 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"glance-default-internal-api-0\" (UID: \"dc6d1720-c37f-4501-bbb1-16f507bc1126\") " pod="openstack/glance-default-internal-api-0" Nov 25 11:55:36 crc kubenswrapper[4706]: I1125 11:55:36.092793 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3e3d141e-c4bd-479f-998d-a3ecfcf87156-ovsdbserver-nb\") pod \"dnsmasq-dns-785d8bcb8c-vhqcg\" (UID: \"3e3d141e-c4bd-479f-998d-a3ecfcf87156\") " pod="openstack/dnsmasq-dns-785d8bcb8c-vhqcg" Nov 25 11:55:36 crc kubenswrapper[4706]: I1125 11:55:36.092811 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fff3e0d5-0608-4e15-9a92-376b6a2b7d17-scripts\") pod \"placement-db-sync-ntkr9\" (UID: \"fff3e0d5-0608-4e15-9a92-376b6a2b7d17\") " pod="openstack/placement-db-sync-ntkr9" Nov 25 11:55:36 crc kubenswrapper[4706]: I1125 11:55:36.092843 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/08ef6ec0-ba09-40a2-94d0-a1ddbba8644a-db-sync-config-data\") pod \"barbican-db-sync-v6lvb\" (UID: \"08ef6ec0-ba09-40a2-94d0-a1ddbba8644a\") " pod="openstack/barbican-db-sync-v6lvb" Nov 25 11:55:36 crc kubenswrapper[4706]: I1125 11:55:36.092867 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7zgll\" (UniqueName: \"kubernetes.io/projected/08ef6ec0-ba09-40a2-94d0-a1ddbba8644a-kube-api-access-7zgll\") pod \"barbican-db-sync-v6lvb\" (UID: \"08ef6ec0-ba09-40a2-94d0-a1ddbba8644a\") " 
pod="openstack/barbican-db-sync-v6lvb" Nov 25 11:55:36 crc kubenswrapper[4706]: I1125 11:55:36.092883 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3e3d141e-c4bd-479f-998d-a3ecfcf87156-config\") pod \"dnsmasq-dns-785d8bcb8c-vhqcg\" (UID: \"3e3d141e-c4bd-479f-998d-a3ecfcf87156\") " pod="openstack/dnsmasq-dns-785d8bcb8c-vhqcg" Nov 25 11:55:36 crc kubenswrapper[4706]: I1125 11:55:36.092955 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"glance-default-external-api-0\" (UID: \"e0c3d5f1-1ac9-4f5f-bef2-232cf6055061\") " pod="openstack/glance-default-external-api-0" Nov 25 11:55:36 crc kubenswrapper[4706]: I1125 11:55:36.092978 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3e3d141e-c4bd-479f-998d-a3ecfcf87156-ovsdbserver-sb\") pod \"dnsmasq-dns-785d8bcb8c-vhqcg\" (UID: \"3e3d141e-c4bd-479f-998d-a3ecfcf87156\") " pod="openstack/dnsmasq-dns-785d8bcb8c-vhqcg" Nov 25 11:55:36 crc kubenswrapper[4706]: I1125 11:55:36.093005 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e0c3d5f1-1ac9-4f5f-bef2-232cf6055061-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"e0c3d5f1-1ac9-4f5f-bef2-232cf6055061\") " pod="openstack/glance-default-external-api-0" Nov 25 11:55:36 crc kubenswrapper[4706]: I1125 11:55:36.093031 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e0c3d5f1-1ac9-4f5f-bef2-232cf6055061-config-data\") pod \"glance-default-external-api-0\" (UID: \"e0c3d5f1-1ac9-4f5f-bef2-232cf6055061\") " pod="openstack/glance-default-external-api-0" Nov 25 11:55:36 
crc kubenswrapper[4706]: I1125 11:55:36.094272 4706 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-hdbbw" Nov 25 11:55:36 crc kubenswrapper[4706]: I1125 11:55:36.097460 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e0c3d5f1-1ac9-4f5f-bef2-232cf6055061-logs\") pod \"glance-default-external-api-0\" (UID: \"e0c3d5f1-1ac9-4f5f-bef2-232cf6055061\") " pod="openstack/glance-default-external-api-0" Nov 25 11:55:36 crc kubenswrapper[4706]: I1125 11:55:36.097700 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/e0c3d5f1-1ac9-4f5f-bef2-232cf6055061-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"e0c3d5f1-1ac9-4f5f-bef2-232cf6055061\") " pod="openstack/glance-default-external-api-0" Nov 25 11:55:36 crc kubenswrapper[4706]: I1125 11:55:36.102431 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fff3e0d5-0608-4e15-9a92-376b6a2b7d17-logs\") pod \"placement-db-sync-ntkr9\" (UID: \"fff3e0d5-0608-4e15-9a92-376b6a2b7d17\") " pod="openstack/placement-db-sync-ntkr9" Nov 25 11:55:36 crc kubenswrapper[4706]: I1125 11:55:36.104099 4706 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"glance-default-external-api-0\" (UID: \"e0c3d5f1-1ac9-4f5f-bef2-232cf6055061\") device mount path \"/mnt/openstack/pv01\"" pod="openstack/glance-default-external-api-0" Nov 25 11:55:36 crc kubenswrapper[4706]: I1125 11:55:36.105131 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e0c3d5f1-1ac9-4f5f-bef2-232cf6055061-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"e0c3d5f1-1ac9-4f5f-bef2-232cf6055061\") " 
pod="openstack/glance-default-external-api-0" Nov 25 11:55:36 crc kubenswrapper[4706]: I1125 11:55:36.105693 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fff3e0d5-0608-4e15-9a92-376b6a2b7d17-combined-ca-bundle\") pod \"placement-db-sync-ntkr9\" (UID: \"fff3e0d5-0608-4e15-9a92-376b6a2b7d17\") " pod="openstack/placement-db-sync-ntkr9" Nov 25 11:55:36 crc kubenswrapper[4706]: I1125 11:55:36.105776 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e0c3d5f1-1ac9-4f5f-bef2-232cf6055061-config-data\") pod \"glance-default-external-api-0\" (UID: \"e0c3d5f1-1ac9-4f5f-bef2-232cf6055061\") " pod="openstack/glance-default-external-api-0" Nov 25 11:55:36 crc kubenswrapper[4706]: I1125 11:55:36.113117 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fff3e0d5-0608-4e15-9a92-376b6a2b7d17-config-data\") pod \"placement-db-sync-ntkr9\" (UID: \"fff3e0d5-0608-4e15-9a92-376b6a2b7d17\") " pod="openstack/placement-db-sync-ntkr9" Nov 25 11:55:36 crc kubenswrapper[4706]: I1125 11:55:36.117680 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e0c3d5f1-1ac9-4f5f-bef2-232cf6055061-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"e0c3d5f1-1ac9-4f5f-bef2-232cf6055061\") " pod="openstack/glance-default-external-api-0" Nov 25 11:55:36 crc kubenswrapper[4706]: I1125 11:55:36.122428 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e0c3d5f1-1ac9-4f5f-bef2-232cf6055061-scripts\") pod \"glance-default-external-api-0\" (UID: \"e0c3d5f1-1ac9-4f5f-bef2-232cf6055061\") " pod="openstack/glance-default-external-api-0" Nov 25 11:55:36 crc kubenswrapper[4706]: I1125 11:55:36.123205 4706 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fff3e0d5-0608-4e15-9a92-376b6a2b7d17-scripts\") pod \"placement-db-sync-ntkr9\" (UID: \"fff3e0d5-0608-4e15-9a92-376b6a2b7d17\") " pod="openstack/placement-db-sync-ntkr9" Nov 25 11:55:36 crc kubenswrapper[4706]: I1125 11:55:36.126013 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c25zl\" (UniqueName: \"kubernetes.io/projected/fff3e0d5-0608-4e15-9a92-376b6a2b7d17-kube-api-access-c25zl\") pod \"placement-db-sync-ntkr9\" (UID: \"fff3e0d5-0608-4e15-9a92-376b6a2b7d17\") " pod="openstack/placement-db-sync-ntkr9" Nov 25 11:55:36 crc kubenswrapper[4706]: I1125 11:55:36.126165 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g2cpz\" (UniqueName: \"kubernetes.io/projected/e0c3d5f1-1ac9-4f5f-bef2-232cf6055061-kube-api-access-g2cpz\") pod \"glance-default-external-api-0\" (UID: \"e0c3d5f1-1ac9-4f5f-bef2-232cf6055061\") " pod="openstack/glance-default-external-api-0" Nov 25 11:55:36 crc kubenswrapper[4706]: I1125 11:55:36.130867 4706 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 25 11:55:36 crc kubenswrapper[4706]: I1125 11:55:36.166947 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"glance-default-external-api-0\" (UID: \"e0c3d5f1-1ac9-4f5f-bef2-232cf6055061\") " pod="openstack/glance-default-external-api-0" Nov 25 11:55:36 crc kubenswrapper[4706]: I1125 11:55:36.172089 4706 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-6899b4bd6f-vwrfh" Nov 25 11:55:36 crc kubenswrapper[4706]: I1125 11:55:36.172229 4706 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Nov 25 11:55:36 crc kubenswrapper[4706]: I1125 11:55:36.193088 4706 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-ntkr9" Nov 25 11:55:36 crc kubenswrapper[4706]: I1125 11:55:36.195592 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9bczs\" (UniqueName: \"kubernetes.io/projected/dc6d1720-c37f-4501-bbb1-16f507bc1126-kube-api-access-9bczs\") pod \"glance-default-internal-api-0\" (UID: \"dc6d1720-c37f-4501-bbb1-16f507bc1126\") " pod="openstack/glance-default-internal-api-0" Nov 25 11:55:36 crc kubenswrapper[4706]: I1125 11:55:36.195658 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3e3d141e-c4bd-479f-998d-a3ecfcf87156-dns-svc\") pod \"dnsmasq-dns-785d8bcb8c-vhqcg\" (UID: \"3e3d141e-c4bd-479f-998d-a3ecfcf87156\") " pod="openstack/dnsmasq-dns-785d8bcb8c-vhqcg" Nov 25 11:55:36 crc kubenswrapper[4706]: I1125 11:55:36.195696 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dc6d1720-c37f-4501-bbb1-16f507bc1126-scripts\") pod \"glance-default-internal-api-0\" (UID: \"dc6d1720-c37f-4501-bbb1-16f507bc1126\") " pod="openstack/glance-default-internal-api-0" Nov 25 11:55:36 crc kubenswrapper[4706]: I1125 11:55:36.195746 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dc6d1720-c37f-4501-bbb1-16f507bc1126-logs\") pod \"glance-default-internal-api-0\" (UID: \"dc6d1720-c37f-4501-bbb1-16f507bc1126\") " pod="openstack/glance-default-internal-api-0" Nov 25 11:55:36 crc kubenswrapper[4706]: I1125 11:55:36.195780 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: 
\"kubernetes.io/configmap/3e3d141e-c4bd-479f-998d-a3ecfcf87156-dns-swift-storage-0\") pod \"dnsmasq-dns-785d8bcb8c-vhqcg\" (UID: \"3e3d141e-c4bd-479f-998d-a3ecfcf87156\") " pod="openstack/dnsmasq-dns-785d8bcb8c-vhqcg" Nov 25 11:55:36 crc kubenswrapper[4706]: I1125 11:55:36.195811 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/dc6d1720-c37f-4501-bbb1-16f507bc1126-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"dc6d1720-c37f-4501-bbb1-16f507bc1126\") " pod="openstack/glance-default-internal-api-0" Nov 25 11:55:36 crc kubenswrapper[4706]: I1125 11:55:36.195840 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"glance-default-internal-api-0\" (UID: \"dc6d1720-c37f-4501-bbb1-16f507bc1126\") " pod="openstack/glance-default-internal-api-0" Nov 25 11:55:36 crc kubenswrapper[4706]: I1125 11:55:36.195880 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3e3d141e-c4bd-479f-998d-a3ecfcf87156-ovsdbserver-nb\") pod \"dnsmasq-dns-785d8bcb8c-vhqcg\" (UID: \"3e3d141e-c4bd-479f-998d-a3ecfcf87156\") " pod="openstack/dnsmasq-dns-785d8bcb8c-vhqcg" Nov 25 11:55:36 crc kubenswrapper[4706]: I1125 11:55:36.195921 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/08ef6ec0-ba09-40a2-94d0-a1ddbba8644a-db-sync-config-data\") pod \"barbican-db-sync-v6lvb\" (UID: \"08ef6ec0-ba09-40a2-94d0-a1ddbba8644a\") " pod="openstack/barbican-db-sync-v6lvb" Nov 25 11:55:36 crc kubenswrapper[4706]: I1125 11:55:36.195944 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7zgll\" (UniqueName: 
\"kubernetes.io/projected/08ef6ec0-ba09-40a2-94d0-a1ddbba8644a-kube-api-access-7zgll\") pod \"barbican-db-sync-v6lvb\" (UID: \"08ef6ec0-ba09-40a2-94d0-a1ddbba8644a\") " pod="openstack/barbican-db-sync-v6lvb" Nov 25 11:55:36 crc kubenswrapper[4706]: I1125 11:55:36.195968 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3e3d141e-c4bd-479f-998d-a3ecfcf87156-config\") pod \"dnsmasq-dns-785d8bcb8c-vhqcg\" (UID: \"3e3d141e-c4bd-479f-998d-a3ecfcf87156\") " pod="openstack/dnsmasq-dns-785d8bcb8c-vhqcg" Nov 25 11:55:36 crc kubenswrapper[4706]: I1125 11:55:36.195999 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3e3d141e-c4bd-479f-998d-a3ecfcf87156-ovsdbserver-sb\") pod \"dnsmasq-dns-785d8bcb8c-vhqcg\" (UID: \"3e3d141e-c4bd-479f-998d-a3ecfcf87156\") " pod="openstack/dnsmasq-dns-785d8bcb8c-vhqcg" Nov 25 11:55:36 crc kubenswrapper[4706]: I1125 11:55:36.196048 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/dc6d1720-c37f-4501-bbb1-16f507bc1126-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"dc6d1720-c37f-4501-bbb1-16f507bc1126\") " pod="openstack/glance-default-internal-api-0" Nov 25 11:55:36 crc kubenswrapper[4706]: I1125 11:55:36.196073 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dc6d1720-c37f-4501-bbb1-16f507bc1126-config-data\") pod \"glance-default-internal-api-0\" (UID: \"dc6d1720-c37f-4501-bbb1-16f507bc1126\") " pod="openstack/glance-default-internal-api-0" Nov 25 11:55:36 crc kubenswrapper[4706]: I1125 11:55:36.196118 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/08ef6ec0-ba09-40a2-94d0-a1ddbba8644a-combined-ca-bundle\") pod \"barbican-db-sync-v6lvb\" (UID: \"08ef6ec0-ba09-40a2-94d0-a1ddbba8644a\") " pod="openstack/barbican-db-sync-v6lvb" Nov 25 11:55:36 crc kubenswrapper[4706]: I1125 11:55:36.196155 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dc6d1720-c37f-4501-bbb1-16f507bc1126-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"dc6d1720-c37f-4501-bbb1-16f507bc1126\") " pod="openstack/glance-default-internal-api-0" Nov 25 11:55:36 crc kubenswrapper[4706]: I1125 11:55:36.196191 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zgwhj\" (UniqueName: \"kubernetes.io/projected/3e3d141e-c4bd-479f-998d-a3ecfcf87156-kube-api-access-zgwhj\") pod \"dnsmasq-dns-785d8bcb8c-vhqcg\" (UID: \"3e3d141e-c4bd-479f-998d-a3ecfcf87156\") " pod="openstack/dnsmasq-dns-785d8bcb8c-vhqcg" Nov 25 11:55:36 crc kubenswrapper[4706]: I1125 11:55:36.197987 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3e3d141e-c4bd-479f-998d-a3ecfcf87156-config\") pod \"dnsmasq-dns-785d8bcb8c-vhqcg\" (UID: \"3e3d141e-c4bd-479f-998d-a3ecfcf87156\") " pod="openstack/dnsmasq-dns-785d8bcb8c-vhqcg" Nov 25 11:55:36 crc kubenswrapper[4706]: I1125 11:55:36.198729 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3e3d141e-c4bd-479f-998d-a3ecfcf87156-ovsdbserver-sb\") pod \"dnsmasq-dns-785d8bcb8c-vhqcg\" (UID: \"3e3d141e-c4bd-479f-998d-a3ecfcf87156\") " pod="openstack/dnsmasq-dns-785d8bcb8c-vhqcg" Nov 25 11:55:36 crc kubenswrapper[4706]: I1125 11:55:36.200579 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: 
\"kubernetes.io/secret/08ef6ec0-ba09-40a2-94d0-a1ddbba8644a-db-sync-config-data\") pod \"barbican-db-sync-v6lvb\" (UID: \"08ef6ec0-ba09-40a2-94d0-a1ddbba8644a\") " pod="openstack/barbican-db-sync-v6lvb" Nov 25 11:55:36 crc kubenswrapper[4706]: I1125 11:55:36.201769 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/3e3d141e-c4bd-479f-998d-a3ecfcf87156-dns-swift-storage-0\") pod \"dnsmasq-dns-785d8bcb8c-vhqcg\" (UID: \"3e3d141e-c4bd-479f-998d-a3ecfcf87156\") " pod="openstack/dnsmasq-dns-785d8bcb8c-vhqcg" Nov 25 11:55:36 crc kubenswrapper[4706]: I1125 11:55:36.202147 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dc6d1720-c37f-4501-bbb1-16f507bc1126-logs\") pod \"glance-default-internal-api-0\" (UID: \"dc6d1720-c37f-4501-bbb1-16f507bc1126\") " pod="openstack/glance-default-internal-api-0" Nov 25 11:55:36 crc kubenswrapper[4706]: I1125 11:55:36.202256 4706 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"glance-default-internal-api-0\" (UID: \"dc6d1720-c37f-4501-bbb1-16f507bc1126\") device mount path \"/mnt/openstack/pv02\"" pod="openstack/glance-default-internal-api-0" Nov 25 11:55:36 crc kubenswrapper[4706]: I1125 11:55:36.204106 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3e3d141e-c4bd-479f-998d-a3ecfcf87156-ovsdbserver-nb\") pod \"dnsmasq-dns-785d8bcb8c-vhqcg\" (UID: \"3e3d141e-c4bd-479f-998d-a3ecfcf87156\") " pod="openstack/dnsmasq-dns-785d8bcb8c-vhqcg" Nov 25 11:55:36 crc kubenswrapper[4706]: I1125 11:55:36.204375 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/dc6d1720-c37f-4501-bbb1-16f507bc1126-httpd-run\") pod 
\"glance-default-internal-api-0\" (UID: \"dc6d1720-c37f-4501-bbb1-16f507bc1126\") " pod="openstack/glance-default-internal-api-0" Nov 25 11:55:36 crc kubenswrapper[4706]: I1125 11:55:36.205167 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/dc6d1720-c37f-4501-bbb1-16f507bc1126-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"dc6d1720-c37f-4501-bbb1-16f507bc1126\") " pod="openstack/glance-default-internal-api-0" Nov 25 11:55:36 crc kubenswrapper[4706]: I1125 11:55:36.206171 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/08ef6ec0-ba09-40a2-94d0-a1ddbba8644a-combined-ca-bundle\") pod \"barbican-db-sync-v6lvb\" (UID: \"08ef6ec0-ba09-40a2-94d0-a1ddbba8644a\") " pod="openstack/barbican-db-sync-v6lvb" Nov 25 11:55:36 crc kubenswrapper[4706]: I1125 11:55:36.207711 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dc6d1720-c37f-4501-bbb1-16f507bc1126-config-data\") pod \"glance-default-internal-api-0\" (UID: \"dc6d1720-c37f-4501-bbb1-16f507bc1126\") " pod="openstack/glance-default-internal-api-0" Nov 25 11:55:36 crc kubenswrapper[4706]: I1125 11:55:36.210022 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dc6d1720-c37f-4501-bbb1-16f507bc1126-scripts\") pod \"glance-default-internal-api-0\" (UID: \"dc6d1720-c37f-4501-bbb1-16f507bc1126\") " pod="openstack/glance-default-internal-api-0" Nov 25 11:55:36 crc kubenswrapper[4706]: I1125 11:55:36.211238 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dc6d1720-c37f-4501-bbb1-16f507bc1126-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"dc6d1720-c37f-4501-bbb1-16f507bc1126\") " 
pod="openstack/glance-default-internal-api-0" Nov 25 11:55:36 crc kubenswrapper[4706]: I1125 11:55:36.216648 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3e3d141e-c4bd-479f-998d-a3ecfcf87156-dns-svc\") pod \"dnsmasq-dns-785d8bcb8c-vhqcg\" (UID: \"3e3d141e-c4bd-479f-998d-a3ecfcf87156\") " pod="openstack/dnsmasq-dns-785d8bcb8c-vhqcg" Nov 25 11:55:36 crc kubenswrapper[4706]: I1125 11:55:36.219038 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7zgll\" (UniqueName: \"kubernetes.io/projected/08ef6ec0-ba09-40a2-94d0-a1ddbba8644a-kube-api-access-7zgll\") pod \"barbican-db-sync-v6lvb\" (UID: \"08ef6ec0-ba09-40a2-94d0-a1ddbba8644a\") " pod="openstack/barbican-db-sync-v6lvb" Nov 25 11:55:36 crc kubenswrapper[4706]: I1125 11:55:36.224047 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9bczs\" (UniqueName: \"kubernetes.io/projected/dc6d1720-c37f-4501-bbb1-16f507bc1126-kube-api-access-9bczs\") pod \"glance-default-internal-api-0\" (UID: \"dc6d1720-c37f-4501-bbb1-16f507bc1126\") " pod="openstack/glance-default-internal-api-0" Nov 25 11:55:36 crc kubenswrapper[4706]: I1125 11:55:36.236019 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zgwhj\" (UniqueName: \"kubernetes.io/projected/3e3d141e-c4bd-479f-998d-a3ecfcf87156-kube-api-access-zgwhj\") pod \"dnsmasq-dns-785d8bcb8c-vhqcg\" (UID: \"3e3d141e-c4bd-479f-998d-a3ecfcf87156\") " pod="openstack/dnsmasq-dns-785d8bcb8c-vhqcg" Nov 25 11:55:36 crc kubenswrapper[4706]: I1125 11:55:36.260468 4706 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-v6lvb" Nov 25 11:55:36 crc kubenswrapper[4706]: I1125 11:55:36.303856 4706 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-785d8bcb8c-vhqcg" Nov 25 11:55:36 crc kubenswrapper[4706]: I1125 11:55:36.334748 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"glance-default-internal-api-0\" (UID: \"dc6d1720-c37f-4501-bbb1-16f507bc1126\") " pod="openstack/glance-default-internal-api-0" Nov 25 11:55:38 crc kubenswrapper[4706]: I1125 11:55:36.423409 4706 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-847c4cc679-m2vpm"] Nov 25 11:55:38 crc kubenswrapper[4706]: I1125 11:55:36.552523 4706 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-lslv5"] Nov 25 11:55:38 crc kubenswrapper[4706]: W1125 11:55:36.573793 4706 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfb5e4015_f047_4386_b88d_b7b0c2a0878b.slice/crio-820e27ba7dbcc0cabd2ea6ee2f57e63debedbcd24a32b1f182025628f2991753 WatchSource:0}: Error finding container 820e27ba7dbcc0cabd2ea6ee2f57e63debedbcd24a32b1f182025628f2991753: Status 404 returned error can't find the container with id 820e27ba7dbcc0cabd2ea6ee2f57e63debedbcd24a32b1f182025628f2991753 Nov 25 11:55:38 crc kubenswrapper[4706]: I1125 11:55:36.633173 4706 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Nov 25 11:55:38 crc kubenswrapper[4706]: I1125 11:55:36.932612 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-847c4cc679-m2vpm" event={"ID":"3c5619c3-04a0-486b-9c75-201492f3a322","Type":"ContainerStarted","Data":"a35286a858d56ccb7b6dfed6ae0ed7c03aa3a51d746c2037f4a60b588b13ffef"} Nov 25 11:55:38 crc kubenswrapper[4706]: I1125 11:55:36.940766 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-lslv5" event={"ID":"fb5e4015-f047-4386-b88d-b7b0c2a0878b","Type":"ContainerStarted","Data":"820e27ba7dbcc0cabd2ea6ee2f57e63debedbcd24a32b1f182025628f2991753"} Nov 25 11:55:38 crc kubenswrapper[4706]: I1125 11:55:37.713959 4706 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 25 11:55:38 crc kubenswrapper[4706]: I1125 11:55:37.739566 4706 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-6899b4bd6f-vwrfh"] Nov 25 11:55:38 crc kubenswrapper[4706]: I1125 11:55:37.774022 4706 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-6f66ccf8d9-g7z69"] Nov 25 11:55:38 crc kubenswrapper[4706]: I1125 11:55:37.776010 4706 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-6f66ccf8d9-g7z69" Nov 25 11:55:38 crc kubenswrapper[4706]: I1125 11:55:37.787172 4706 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-6f66ccf8d9-g7z69"] Nov 25 11:55:38 crc kubenswrapper[4706]: I1125 11:55:37.813800 4706 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 25 11:55:38 crc kubenswrapper[4706]: I1125 11:55:37.858877 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x5vmm\" (UniqueName: \"kubernetes.io/projected/a2972ef2-0543-48bd-9982-4f1c88711e0d-kube-api-access-x5vmm\") pod \"horizon-6f66ccf8d9-g7z69\" (UID: \"a2972ef2-0543-48bd-9982-4f1c88711e0d\") " pod="openstack/horizon-6f66ccf8d9-g7z69" Nov 25 11:55:38 crc kubenswrapper[4706]: I1125 11:55:37.858920 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/a2972ef2-0543-48bd-9982-4f1c88711e0d-horizon-secret-key\") pod \"horizon-6f66ccf8d9-g7z69\" (UID: \"a2972ef2-0543-48bd-9982-4f1c88711e0d\") " pod="openstack/horizon-6f66ccf8d9-g7z69" Nov 25 11:55:38 crc kubenswrapper[4706]: I1125 11:55:37.858983 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a2972ef2-0543-48bd-9982-4f1c88711e0d-scripts\") pod \"horizon-6f66ccf8d9-g7z69\" (UID: \"a2972ef2-0543-48bd-9982-4f1c88711e0d\") " pod="openstack/horizon-6f66ccf8d9-g7z69" Nov 25 11:55:38 crc kubenswrapper[4706]: I1125 11:55:37.859068 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a2972ef2-0543-48bd-9982-4f1c88711e0d-logs\") pod \"horizon-6f66ccf8d9-g7z69\" (UID: \"a2972ef2-0543-48bd-9982-4f1c88711e0d\") " pod="openstack/horizon-6f66ccf8d9-g7z69" Nov 25 11:55:38 crc 
kubenswrapper[4706]: I1125 11:55:37.859204 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/a2972ef2-0543-48bd-9982-4f1c88711e0d-config-data\") pod \"horizon-6f66ccf8d9-g7z69\" (UID: \"a2972ef2-0543-48bd-9982-4f1c88711e0d\") " pod="openstack/horizon-6f66ccf8d9-g7z69" Nov 25 11:55:38 crc kubenswrapper[4706]: I1125 11:55:37.950186 4706 generic.go:334] "Generic (PLEG): container finished" podID="3c5619c3-04a0-486b-9c75-201492f3a322" containerID="73bfc5ccf4ae9c2f1182d75a1806e5fc1ff490492c4943b169bb2afebec9edf9" exitCode=0 Nov 25 11:55:38 crc kubenswrapper[4706]: I1125 11:55:37.950237 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-847c4cc679-m2vpm" event={"ID":"3c5619c3-04a0-486b-9c75-201492f3a322","Type":"ContainerDied","Data":"73bfc5ccf4ae9c2f1182d75a1806e5fc1ff490492c4943b169bb2afebec9edf9"} Nov 25 11:55:38 crc kubenswrapper[4706]: I1125 11:55:37.952189 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-lslv5" event={"ID":"fb5e4015-f047-4386-b88d-b7b0c2a0878b","Type":"ContainerStarted","Data":"1cd5443cc641ed5ad034f2ef8a5282a873c09693bb609a311ea6ea3f1ace6bcf"} Nov 25 11:55:38 crc kubenswrapper[4706]: I1125 11:55:37.961088 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a2972ef2-0543-48bd-9982-4f1c88711e0d-logs\") pod \"horizon-6f66ccf8d9-g7z69\" (UID: \"a2972ef2-0543-48bd-9982-4f1c88711e0d\") " pod="openstack/horizon-6f66ccf8d9-g7z69" Nov 25 11:55:38 crc kubenswrapper[4706]: I1125 11:55:37.961153 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/a2972ef2-0543-48bd-9982-4f1c88711e0d-config-data\") pod \"horizon-6f66ccf8d9-g7z69\" (UID: \"a2972ef2-0543-48bd-9982-4f1c88711e0d\") " pod="openstack/horizon-6f66ccf8d9-g7z69" Nov 25 
11:55:38 crc kubenswrapper[4706]: I1125 11:55:37.961278 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x5vmm\" (UniqueName: \"kubernetes.io/projected/a2972ef2-0543-48bd-9982-4f1c88711e0d-kube-api-access-x5vmm\") pod \"horizon-6f66ccf8d9-g7z69\" (UID: \"a2972ef2-0543-48bd-9982-4f1c88711e0d\") " pod="openstack/horizon-6f66ccf8d9-g7z69" Nov 25 11:55:38 crc kubenswrapper[4706]: I1125 11:55:37.961329 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/a2972ef2-0543-48bd-9982-4f1c88711e0d-horizon-secret-key\") pod \"horizon-6f66ccf8d9-g7z69\" (UID: \"a2972ef2-0543-48bd-9982-4f1c88711e0d\") " pod="openstack/horizon-6f66ccf8d9-g7z69" Nov 25 11:55:38 crc kubenswrapper[4706]: I1125 11:55:37.961358 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a2972ef2-0543-48bd-9982-4f1c88711e0d-scripts\") pod \"horizon-6f66ccf8d9-g7z69\" (UID: \"a2972ef2-0543-48bd-9982-4f1c88711e0d\") " pod="openstack/horizon-6f66ccf8d9-g7z69" Nov 25 11:55:38 crc kubenswrapper[4706]: I1125 11:55:37.962156 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a2972ef2-0543-48bd-9982-4f1c88711e0d-scripts\") pod \"horizon-6f66ccf8d9-g7z69\" (UID: \"a2972ef2-0543-48bd-9982-4f1c88711e0d\") " pod="openstack/horizon-6f66ccf8d9-g7z69" Nov 25 11:55:38 crc kubenswrapper[4706]: I1125 11:55:37.962517 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a2972ef2-0543-48bd-9982-4f1c88711e0d-logs\") pod \"horizon-6f66ccf8d9-g7z69\" (UID: \"a2972ef2-0543-48bd-9982-4f1c88711e0d\") " pod="openstack/horizon-6f66ccf8d9-g7z69" Nov 25 11:55:38 crc kubenswrapper[4706]: I1125 11:55:37.964294 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"config-data\" (UniqueName: \"kubernetes.io/configmap/a2972ef2-0543-48bd-9982-4f1c88711e0d-config-data\") pod \"horizon-6f66ccf8d9-g7z69\" (UID: \"a2972ef2-0543-48bd-9982-4f1c88711e0d\") " pod="openstack/horizon-6f66ccf8d9-g7z69" Nov 25 11:55:38 crc kubenswrapper[4706]: I1125 11:55:37.996357 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/a2972ef2-0543-48bd-9982-4f1c88711e0d-horizon-secret-key\") pod \"horizon-6f66ccf8d9-g7z69\" (UID: \"a2972ef2-0543-48bd-9982-4f1c88711e0d\") " pod="openstack/horizon-6f66ccf8d9-g7z69" Nov 25 11:55:38 crc kubenswrapper[4706]: I1125 11:55:38.002108 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x5vmm\" (UniqueName: \"kubernetes.io/projected/a2972ef2-0543-48bd-9982-4f1c88711e0d-kube-api-access-x5vmm\") pod \"horizon-6f66ccf8d9-g7z69\" (UID: \"a2972ef2-0543-48bd-9982-4f1c88711e0d\") " pod="openstack/horizon-6f66ccf8d9-g7z69" Nov 25 11:55:38 crc kubenswrapper[4706]: I1125 11:55:38.095705 4706 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-6f66ccf8d9-g7z69" Nov 25 11:55:38 crc kubenswrapper[4706]: I1125 11:55:38.413249 4706 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-lslv5" podStartSLOduration=3.413227296 podStartE2EDuration="3.413227296s" podCreationTimestamp="2025-11-25 11:55:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 11:55:38.00254776 +0000 UTC m=+1146.917105141" watchObservedRunningTime="2025-11-25 11:55:38.413227296 +0000 UTC m=+1147.327784677" Nov 25 11:55:38 crc kubenswrapper[4706]: I1125 11:55:38.419017 4706 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 25 11:55:38 crc kubenswrapper[4706]: I1125 11:55:38.532349 4706 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-fd7sf"] Nov 25 11:55:38 crc kubenswrapper[4706]: I1125 11:55:38.547749 4706 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-78549bf5d5-rtlzb"] Nov 25 11:55:38 crc kubenswrapper[4706]: W1125 11:55:38.549604 4706 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podcba2657d_39a9_4556_abec_412b63df6c94.slice/crio-0e072bd87363cee7e314cea252164ba36abb583cb5b5e935658cf624e7bbe94f WatchSource:0}: Error finding container 0e072bd87363cee7e314cea252164ba36abb583cb5b5e935658cf624e7bbe94f: Status 404 returned error can't find the container with id 0e072bd87363cee7e314cea252164ba36abb583cb5b5e935658cf624e7bbe94f Nov 25 11:55:38 crc kubenswrapper[4706]: I1125 11:55:38.562853 4706 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-847c4cc679-m2vpm" Nov 25 11:55:38 crc kubenswrapper[4706]: I1125 11:55:38.566753 4706 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 25 11:55:38 crc kubenswrapper[4706]: I1125 11:55:38.685525 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3c5619c3-04a0-486b-9c75-201492f3a322-config\") pod \"3c5619c3-04a0-486b-9c75-201492f3a322\" (UID: \"3c5619c3-04a0-486b-9c75-201492f3a322\") " Nov 25 11:55:38 crc kubenswrapper[4706]: I1125 11:55:38.685619 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3c5619c3-04a0-486b-9c75-201492f3a322-ovsdbserver-nb\") pod \"3c5619c3-04a0-486b-9c75-201492f3a322\" (UID: \"3c5619c3-04a0-486b-9c75-201492f3a322\") " Nov 25 11:55:38 crc kubenswrapper[4706]: I1125 11:55:38.685642 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3c5619c3-04a0-486b-9c75-201492f3a322-ovsdbserver-sb\") pod \"3c5619c3-04a0-486b-9c75-201492f3a322\" (UID: \"3c5619c3-04a0-486b-9c75-201492f3a322\") " Nov 25 11:55:38 crc kubenswrapper[4706]: I1125 11:55:38.685725 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3c5619c3-04a0-486b-9c75-201492f3a322-dns-svc\") pod \"3c5619c3-04a0-486b-9c75-201492f3a322\" (UID: \"3c5619c3-04a0-486b-9c75-201492f3a322\") " Nov 25 11:55:38 crc kubenswrapper[4706]: I1125 11:55:38.685801 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9864b\" (UniqueName: \"kubernetes.io/projected/3c5619c3-04a0-486b-9c75-201492f3a322-kube-api-access-9864b\") pod \"3c5619c3-04a0-486b-9c75-201492f3a322\" (UID: \"3c5619c3-04a0-486b-9c75-201492f3a322\") " Nov 25 11:55:38 crc 
kubenswrapper[4706]: I1125 11:55:38.686030 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/3c5619c3-04a0-486b-9c75-201492f3a322-dns-swift-storage-0\") pod \"3c5619c3-04a0-486b-9c75-201492f3a322\" (UID: \"3c5619c3-04a0-486b-9c75-201492f3a322\") " Nov 25 11:55:38 crc kubenswrapper[4706]: I1125 11:55:38.705291 4706 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-hdbbw"] Nov 25 11:55:38 crc kubenswrapper[4706]: I1125 11:55:38.714544 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3c5619c3-04a0-486b-9c75-201492f3a322-kube-api-access-9864b" (OuterVolumeSpecName: "kube-api-access-9864b") pod "3c5619c3-04a0-486b-9c75-201492f3a322" (UID: "3c5619c3-04a0-486b-9c75-201492f3a322"). InnerVolumeSpecName "kube-api-access-9864b". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 11:55:38 crc kubenswrapper[4706]: I1125 11:55:38.724559 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3c5619c3-04a0-486b-9c75-201492f3a322-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "3c5619c3-04a0-486b-9c75-201492f3a322" (UID: "3c5619c3-04a0-486b-9c75-201492f3a322"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 11:55:38 crc kubenswrapper[4706]: I1125 11:55:38.724582 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3c5619c3-04a0-486b-9c75-201492f3a322-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "3c5619c3-04a0-486b-9c75-201492f3a322" (UID: "3c5619c3-04a0-486b-9c75-201492f3a322"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 11:55:38 crc kubenswrapper[4706]: I1125 11:55:38.724745 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3c5619c3-04a0-486b-9c75-201492f3a322-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "3c5619c3-04a0-486b-9c75-201492f3a322" (UID: "3c5619c3-04a0-486b-9c75-201492f3a322"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 11:55:38 crc kubenswrapper[4706]: I1125 11:55:38.728773 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3c5619c3-04a0-486b-9c75-201492f3a322-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "3c5619c3-04a0-486b-9c75-201492f3a322" (UID: "3c5619c3-04a0-486b-9c75-201492f3a322"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 11:55:38 crc kubenswrapper[4706]: I1125 11:55:38.737872 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3c5619c3-04a0-486b-9c75-201492f3a322-config" (OuterVolumeSpecName: "config") pod "3c5619c3-04a0-486b-9c75-201492f3a322" (UID: "3c5619c3-04a0-486b-9c75-201492f3a322"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 11:55:38 crc kubenswrapper[4706]: I1125 11:55:38.799743 4706 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9864b\" (UniqueName: \"kubernetes.io/projected/3c5619c3-04a0-486b-9c75-201492f3a322-kube-api-access-9864b\") on node \"crc\" DevicePath \"\"" Nov 25 11:55:38 crc kubenswrapper[4706]: I1125 11:55:38.800108 4706 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/3c5619c3-04a0-486b-9c75-201492f3a322-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Nov 25 11:55:38 crc kubenswrapper[4706]: I1125 11:55:38.800152 4706 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3c5619c3-04a0-486b-9c75-201492f3a322-config\") on node \"crc\" DevicePath \"\"" Nov 25 11:55:38 crc kubenswrapper[4706]: I1125 11:55:38.800166 4706 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3c5619c3-04a0-486b-9c75-201492f3a322-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 25 11:55:38 crc kubenswrapper[4706]: I1125 11:55:38.800179 4706 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3c5619c3-04a0-486b-9c75-201492f3a322-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 25 11:55:38 crc kubenswrapper[4706]: I1125 11:55:38.800190 4706 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3c5619c3-04a0-486b-9c75-201492f3a322-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 25 11:55:38 crc kubenswrapper[4706]: I1125 11:55:38.923046 4706 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-6899b4bd6f-vwrfh"] Nov 25 11:55:38 crc kubenswrapper[4706]: I1125 11:55:38.929989 4706 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-ntkr9"] Nov 25 11:55:38 crc 
kubenswrapper[4706]: I1125 11:55:38.950439 4706 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-785d8bcb8c-vhqcg"] Nov 25 11:55:38 crc kubenswrapper[4706]: I1125 11:55:38.966587 4706 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-v6lvb"] Nov 25 11:55:38 crc kubenswrapper[4706]: I1125 11:55:38.978618 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-ntkr9" event={"ID":"fff3e0d5-0608-4e15-9a92-376b6a2b7d17","Type":"ContainerStarted","Data":"c0f9fa42b710cbeabc270be3787e6cbb65cf5c657bbb33d07043233eb7c0be34"} Nov 25 11:55:38 crc kubenswrapper[4706]: I1125 11:55:38.983561 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-hdbbw" event={"ID":"27e5b2d0-6fcf-4fb5-8bc4-e086370f5eaf","Type":"ContainerStarted","Data":"69b75dc8ced52c1b496484cab28676106b2584ed034f5af05537be0814a73094"} Nov 25 11:55:38 crc kubenswrapper[4706]: I1125 11:55:38.983603 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-hdbbw" event={"ID":"27e5b2d0-6fcf-4fb5-8bc4-e086370f5eaf","Type":"ContainerStarted","Data":"24da31dada44e6f20e6e6f10fd7b5aa6a25b5647da33550051402225dcffd3bb"} Nov 25 11:55:38 crc kubenswrapper[4706]: I1125 11:55:38.998353 4706 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-6f66ccf8d9-g7z69"] Nov 25 11:55:38 crc kubenswrapper[4706]: W1125 11:55:38.998854 4706 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda2972ef2_0543_48bd_9982_4f1c88711e0d.slice/crio-f2eb1430ae89e9d9827ec74a43c3f19f436d90232cfcbecd3fda70e64a994340 WatchSource:0}: Error finding container f2eb1430ae89e9d9827ec74a43c3f19f436d90232cfcbecd3fda70e64a994340: Status 404 returned error can't find the container with id f2eb1430ae89e9d9827ec74a43c3f19f436d90232cfcbecd3fda70e64a994340 Nov 25 11:55:39 crc kubenswrapper[4706]: I1125 11:55:39.000812 4706 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"db4e7aed-28ec-49cd-8f0b-e01df112bf54","Type":"ContainerStarted","Data":"955cb3fc2e165c948c48331956713b1450de967f17f453c17e3c8ee3c435554a"} Nov 25 11:55:39 crc kubenswrapper[4706]: I1125 11:55:39.010321 4706 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-847c4cc679-m2vpm" Nov 25 11:55:39 crc kubenswrapper[4706]: I1125 11:55:39.010753 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-847c4cc679-m2vpm" event={"ID":"3c5619c3-04a0-486b-9c75-201492f3a322","Type":"ContainerDied","Data":"a35286a858d56ccb7b6dfed6ae0ed7c03aa3a51d746c2037f4a60b588b13ffef"} Nov 25 11:55:39 crc kubenswrapper[4706]: I1125 11:55:39.010833 4706 scope.go:117] "RemoveContainer" containerID="73bfc5ccf4ae9c2f1182d75a1806e5fc1ff490492c4943b169bb2afebec9edf9" Nov 25 11:55:39 crc kubenswrapper[4706]: I1125 11:55:39.017929 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-78549bf5d5-rtlzb" event={"ID":"cba2657d-39a9-4556-abec-412b63df6c94","Type":"ContainerStarted","Data":"0e072bd87363cee7e314cea252164ba36abb583cb5b5e935658cf624e7bbe94f"} Nov 25 11:55:39 crc kubenswrapper[4706]: I1125 11:55:39.019018 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6899b4bd6f-vwrfh" event={"ID":"c785321d-b637-4f3a-9e69-bc237eb1e9c2","Type":"ContainerStarted","Data":"5eaa56f42f6412675dc9c60f4529f3d1f87ca00e542a17c07d190c59afc633c3"} Nov 25 11:55:39 crc kubenswrapper[4706]: I1125 11:55:39.020074 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-fd7sf" event={"ID":"424f303d-41b7-4fd6-be4a-017148ed95da","Type":"ContainerStarted","Data":"15a1b4a846ce3378a6f418aa01a64b670ecf60b1f4848afd4675e03bcaad9ae8"} Nov 25 11:55:39 crc kubenswrapper[4706]: I1125 11:55:39.021285 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/dnsmasq-dns-785d8bcb8c-vhqcg" event={"ID":"3e3d141e-c4bd-479f-998d-a3ecfcf87156","Type":"ContainerStarted","Data":"679ecb1e74993b3f971e280018c9c610d1bf4e1b24eef64f5a75a637d1a9e1aa"} Nov 25 11:55:39 crc kubenswrapper[4706]: I1125 11:55:39.029515 4706 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-db-sync-hdbbw" podStartSLOduration=4.029496175 podStartE2EDuration="4.029496175s" podCreationTimestamp="2025-11-25 11:55:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 11:55:39.008489136 +0000 UTC m=+1147.923046517" watchObservedRunningTime="2025-11-25 11:55:39.029496175 +0000 UTC m=+1147.944053556" Nov 25 11:55:39 crc kubenswrapper[4706]: I1125 11:55:39.060971 4706 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 25 11:55:39 crc kubenswrapper[4706]: I1125 11:55:39.288837 4706 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-847c4cc679-m2vpm"] Nov 25 11:55:39 crc kubenswrapper[4706]: I1125 11:55:39.295653 4706 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-847c4cc679-m2vpm"] Nov 25 11:55:39 crc kubenswrapper[4706]: I1125 11:55:39.889446 4706 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 25 11:55:39 crc kubenswrapper[4706]: W1125 11:55:39.911513 4706 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode0c3d5f1_1ac9_4f5f_bef2_232cf6055061.slice/crio-26dcd8c8ed9f7cdb556012055cd1f066e185d065d3e7441226c346cc483a321a WatchSource:0}: Error finding container 26dcd8c8ed9f7cdb556012055cd1f066e185d065d3e7441226c346cc483a321a: Status 404 returned error can't find the container with id 26dcd8c8ed9f7cdb556012055cd1f066e185d065d3e7441226c346cc483a321a Nov 25 11:55:39 crc kubenswrapper[4706]: 
I1125 11:55:39.937765 4706 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3c5619c3-04a0-486b-9c75-201492f3a322" path="/var/lib/kubelet/pods/3c5619c3-04a0-486b-9c75-201492f3a322/volumes" Nov 25 11:55:40 crc kubenswrapper[4706]: I1125 11:55:40.036898 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6f66ccf8d9-g7z69" event={"ID":"a2972ef2-0543-48bd-9982-4f1c88711e0d","Type":"ContainerStarted","Data":"f2eb1430ae89e9d9827ec74a43c3f19f436d90232cfcbecd3fda70e64a994340"} Nov 25 11:55:40 crc kubenswrapper[4706]: I1125 11:55:40.042182 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"e0c3d5f1-1ac9-4f5f-bef2-232cf6055061","Type":"ContainerStarted","Data":"26dcd8c8ed9f7cdb556012055cd1f066e185d065d3e7441226c346cc483a321a"} Nov 25 11:55:40 crc kubenswrapper[4706]: I1125 11:55:40.044109 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"dc6d1720-c37f-4501-bbb1-16f507bc1126","Type":"ContainerStarted","Data":"c7ce584d8ee77b8e5b732e12afed33cfd07f39407d3ad1a3693457c0fa7f717e"} Nov 25 11:55:40 crc kubenswrapper[4706]: I1125 11:55:40.044154 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"dc6d1720-c37f-4501-bbb1-16f507bc1126","Type":"ContainerStarted","Data":"1d30d85110ff376a33d87db7563e5684aada2ba8c86fb21b726f8b4c86c10b00"} Nov 25 11:55:40 crc kubenswrapper[4706]: I1125 11:55:40.059256 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-v6lvb" event={"ID":"08ef6ec0-ba09-40a2-94d0-a1ddbba8644a","Type":"ContainerStarted","Data":"023f91948ac374bc83b0ff75394095462e4880da46dd64744048e7c8174c282e"} Nov 25 11:55:40 crc kubenswrapper[4706]: I1125 11:55:40.061442 4706 generic.go:334] "Generic (PLEG): container finished" podID="3e3d141e-c4bd-479f-998d-a3ecfcf87156" 
containerID="913d4321d424e69a6bdcfbd8200e69aa3977bf6954e3a6a96d637ecff3fcf51f" exitCode=0 Nov 25 11:55:40 crc kubenswrapper[4706]: I1125 11:55:40.061648 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-785d8bcb8c-vhqcg" event={"ID":"3e3d141e-c4bd-479f-998d-a3ecfcf87156","Type":"ContainerDied","Data":"913d4321d424e69a6bdcfbd8200e69aa3977bf6954e3a6a96d637ecff3fcf51f"} Nov 25 11:55:41 crc kubenswrapper[4706]: I1125 11:55:41.078511 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"e0c3d5f1-1ac9-4f5f-bef2-232cf6055061","Type":"ContainerStarted","Data":"8335a64cf57e1eae1c577f9abff5ccb7be9057b998f1067a70800eef4c087ea6"} Nov 25 11:55:41 crc kubenswrapper[4706]: I1125 11:55:41.087509 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-785d8bcb8c-vhqcg" event={"ID":"3e3d141e-c4bd-479f-998d-a3ecfcf87156","Type":"ContainerStarted","Data":"c646a9abef8d5cb12444aeaed4a6d33c4f4e34dd5b4a8eee3c936cc5f06db823"} Nov 25 11:55:41 crc kubenswrapper[4706]: I1125 11:55:41.088128 4706 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-785d8bcb8c-vhqcg" Nov 25 11:55:41 crc kubenswrapper[4706]: I1125 11:55:41.096162 4706 generic.go:334] "Generic (PLEG): container finished" podID="fb5e4015-f047-4386-b88d-b7b0c2a0878b" containerID="1cd5443cc641ed5ad034f2ef8a5282a873c09693bb609a311ea6ea3f1ace6bcf" exitCode=0 Nov 25 11:55:41 crc kubenswrapper[4706]: I1125 11:55:41.096215 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-lslv5" event={"ID":"fb5e4015-f047-4386-b88d-b7b0c2a0878b","Type":"ContainerDied","Data":"1cd5443cc641ed5ad034f2ef8a5282a873c09693bb609a311ea6ea3f1ace6bcf"} Nov 25 11:55:41 crc kubenswrapper[4706]: I1125 11:55:41.127166 4706 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-785d8bcb8c-vhqcg" podStartSLOduration=6.127143337 
podStartE2EDuration="6.127143337s" podCreationTimestamp="2025-11-25 11:55:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 11:55:41.115438993 +0000 UTC m=+1150.029996394" watchObservedRunningTime="2025-11-25 11:55:41.127143337 +0000 UTC m=+1150.041700718" Nov 25 11:55:42 crc kubenswrapper[4706]: I1125 11:55:42.111741 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"dc6d1720-c37f-4501-bbb1-16f507bc1126","Type":"ContainerStarted","Data":"50fc19dbc12030830b7f9abe1db59f12002a214f5583433dbe4de236c044a6f1"} Nov 25 11:55:42 crc kubenswrapper[4706]: I1125 11:55:42.111864 4706 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="dc6d1720-c37f-4501-bbb1-16f507bc1126" containerName="glance-log" containerID="cri-o://c7ce584d8ee77b8e5b732e12afed33cfd07f39407d3ad1a3693457c0fa7f717e" gracePeriod=30 Nov 25 11:55:42 crc kubenswrapper[4706]: I1125 11:55:42.111926 4706 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="dc6d1720-c37f-4501-bbb1-16f507bc1126" containerName="glance-httpd" containerID="cri-o://50fc19dbc12030830b7f9abe1db59f12002a214f5583433dbe4de236c044a6f1" gracePeriod=30 Nov 25 11:55:42 crc kubenswrapper[4706]: I1125 11:55:42.168888 4706 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=7.168864315 podStartE2EDuration="7.168864315s" podCreationTimestamp="2025-11-25 11:55:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 11:55:42.165818258 +0000 UTC m=+1151.080375649" watchObservedRunningTime="2025-11-25 11:55:42.168864315 +0000 UTC m=+1151.083421696" Nov 25 11:55:43 crc 
kubenswrapper[4706]: I1125 11:55:43.124580 4706 generic.go:334] "Generic (PLEG): container finished" podID="dc6d1720-c37f-4501-bbb1-16f507bc1126" containerID="50fc19dbc12030830b7f9abe1db59f12002a214f5583433dbe4de236c044a6f1" exitCode=0 Nov 25 11:55:43 crc kubenswrapper[4706]: I1125 11:55:43.125193 4706 generic.go:334] "Generic (PLEG): container finished" podID="dc6d1720-c37f-4501-bbb1-16f507bc1126" containerID="c7ce584d8ee77b8e5b732e12afed33cfd07f39407d3ad1a3693457c0fa7f717e" exitCode=143 Nov 25 11:55:43 crc kubenswrapper[4706]: I1125 11:55:43.124665 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"dc6d1720-c37f-4501-bbb1-16f507bc1126","Type":"ContainerDied","Data":"50fc19dbc12030830b7f9abe1db59f12002a214f5583433dbe4de236c044a6f1"} Nov 25 11:55:43 crc kubenswrapper[4706]: I1125 11:55:43.125272 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"dc6d1720-c37f-4501-bbb1-16f507bc1126","Type":"ContainerDied","Data":"c7ce584d8ee77b8e5b732e12afed33cfd07f39407d3ad1a3693457c0fa7f717e"} Nov 25 11:55:44 crc kubenswrapper[4706]: I1125 11:55:44.292214 4706 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-78549bf5d5-rtlzb"] Nov 25 11:55:44 crc kubenswrapper[4706]: I1125 11:55:44.342354 4706 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-5d6465f55b-zdrth"] Nov 25 11:55:44 crc kubenswrapper[4706]: E1125 11:55:44.342830 4706 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3c5619c3-04a0-486b-9c75-201492f3a322" containerName="init" Nov 25 11:55:44 crc kubenswrapper[4706]: I1125 11:55:44.342855 4706 state_mem.go:107] "Deleted CPUSet assignment" podUID="3c5619c3-04a0-486b-9c75-201492f3a322" containerName="init" Nov 25 11:55:44 crc kubenswrapper[4706]: I1125 11:55:44.343086 4706 memory_manager.go:354] "RemoveStaleState removing state" podUID="3c5619c3-04a0-486b-9c75-201492f3a322" 
containerName="init" Nov 25 11:55:44 crc kubenswrapper[4706]: I1125 11:55:44.344819 4706 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-5d6465f55b-zdrth" Nov 25 11:55:44 crc kubenswrapper[4706]: I1125 11:55:44.352458 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-horizon-svc" Nov 25 11:55:44 crc kubenswrapper[4706]: I1125 11:55:44.379317 4706 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-5d6465f55b-zdrth"] Nov 25 11:55:44 crc kubenswrapper[4706]: I1125 11:55:44.401293 4706 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-6f66ccf8d9-g7z69"] Nov 25 11:55:44 crc kubenswrapper[4706]: I1125 11:55:44.416579 4706 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-85664bf4f6-ws67w"] Nov 25 11:55:44 crc kubenswrapper[4706]: I1125 11:55:44.419416 4706 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-85664bf4f6-ws67w" Nov 25 11:55:44 crc kubenswrapper[4706]: I1125 11:55:44.426939 4706 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-85664bf4f6-ws67w"] Nov 25 11:55:44 crc kubenswrapper[4706]: I1125 11:55:44.438289 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/74b33eb1-0020-4037-918c-9e747dcfd61f-horizon-secret-key\") pod \"horizon-5d6465f55b-zdrth\" (UID: \"74b33eb1-0020-4037-918c-9e747dcfd61f\") " pod="openstack/horizon-5d6465f55b-zdrth" Nov 25 11:55:44 crc kubenswrapper[4706]: I1125 11:55:44.438352 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/74b33eb1-0020-4037-918c-9e747dcfd61f-horizon-tls-certs\") pod \"horizon-5d6465f55b-zdrth\" (UID: \"74b33eb1-0020-4037-918c-9e747dcfd61f\") " pod="openstack/horizon-5d6465f55b-zdrth" Nov 25 
11:55:44 crc kubenswrapper[4706]: I1125 11:55:44.438394 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/74b33eb1-0020-4037-918c-9e747dcfd61f-logs\") pod \"horizon-5d6465f55b-zdrth\" (UID: \"74b33eb1-0020-4037-918c-9e747dcfd61f\") " pod="openstack/horizon-5d6465f55b-zdrth" Nov 25 11:55:44 crc kubenswrapper[4706]: I1125 11:55:44.438427 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2gp4v\" (UniqueName: \"kubernetes.io/projected/74b33eb1-0020-4037-918c-9e747dcfd61f-kube-api-access-2gp4v\") pod \"horizon-5d6465f55b-zdrth\" (UID: \"74b33eb1-0020-4037-918c-9e747dcfd61f\") " pod="openstack/horizon-5d6465f55b-zdrth" Nov 25 11:55:44 crc kubenswrapper[4706]: I1125 11:55:44.438457 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/74b33eb1-0020-4037-918c-9e747dcfd61f-config-data\") pod \"horizon-5d6465f55b-zdrth\" (UID: \"74b33eb1-0020-4037-918c-9e747dcfd61f\") " pod="openstack/horizon-5d6465f55b-zdrth" Nov 25 11:55:44 crc kubenswrapper[4706]: I1125 11:55:44.438490 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/74b33eb1-0020-4037-918c-9e747dcfd61f-combined-ca-bundle\") pod \"horizon-5d6465f55b-zdrth\" (UID: \"74b33eb1-0020-4037-918c-9e747dcfd61f\") " pod="openstack/horizon-5d6465f55b-zdrth" Nov 25 11:55:44 crc kubenswrapper[4706]: I1125 11:55:44.438508 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/74b33eb1-0020-4037-918c-9e747dcfd61f-scripts\") pod \"horizon-5d6465f55b-zdrth\" (UID: \"74b33eb1-0020-4037-918c-9e747dcfd61f\") " pod="openstack/horizon-5d6465f55b-zdrth" Nov 25 11:55:44 crc 
kubenswrapper[4706]: I1125 11:55:44.540764 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/74b33eb1-0020-4037-918c-9e747dcfd61f-logs\") pod \"horizon-5d6465f55b-zdrth\" (UID: \"74b33eb1-0020-4037-918c-9e747dcfd61f\") " pod="openstack/horizon-5d6465f55b-zdrth" Nov 25 11:55:44 crc kubenswrapper[4706]: I1125 11:55:44.540852 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/66bfb4a4-e60d-4f75-ad0b-1ad3e8ff1bf5-config-data\") pod \"horizon-85664bf4f6-ws67w\" (UID: \"66bfb4a4-e60d-4f75-ad0b-1ad3e8ff1bf5\") " pod="openstack/horizon-85664bf4f6-ws67w" Nov 25 11:55:44 crc kubenswrapper[4706]: I1125 11:55:44.540881 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/66bfb4a4-e60d-4f75-ad0b-1ad3e8ff1bf5-horizon-secret-key\") pod \"horizon-85664bf4f6-ws67w\" (UID: \"66bfb4a4-e60d-4f75-ad0b-1ad3e8ff1bf5\") " pod="openstack/horizon-85664bf4f6-ws67w" Nov 25 11:55:44 crc kubenswrapper[4706]: I1125 11:55:44.540909 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2gp4v\" (UniqueName: \"kubernetes.io/projected/74b33eb1-0020-4037-918c-9e747dcfd61f-kube-api-access-2gp4v\") pod \"horizon-5d6465f55b-zdrth\" (UID: \"74b33eb1-0020-4037-918c-9e747dcfd61f\") " pod="openstack/horizon-5d6465f55b-zdrth" Nov 25 11:55:44 crc kubenswrapper[4706]: I1125 11:55:44.540954 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/74b33eb1-0020-4037-918c-9e747dcfd61f-config-data\") pod \"horizon-5d6465f55b-zdrth\" (UID: \"74b33eb1-0020-4037-918c-9e747dcfd61f\") " pod="openstack/horizon-5d6465f55b-zdrth" Nov 25 11:55:44 crc kubenswrapper[4706]: I1125 11:55:44.540997 4706 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/74b33eb1-0020-4037-918c-9e747dcfd61f-combined-ca-bundle\") pod \"horizon-5d6465f55b-zdrth\" (UID: \"74b33eb1-0020-4037-918c-9e747dcfd61f\") " pod="openstack/horizon-5d6465f55b-zdrth" Nov 25 11:55:44 crc kubenswrapper[4706]: I1125 11:55:44.541023 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/74b33eb1-0020-4037-918c-9e747dcfd61f-scripts\") pod \"horizon-5d6465f55b-zdrth\" (UID: \"74b33eb1-0020-4037-918c-9e747dcfd61f\") " pod="openstack/horizon-5d6465f55b-zdrth" Nov 25 11:55:44 crc kubenswrapper[4706]: I1125 11:55:44.541082 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/66bfb4a4-e60d-4f75-ad0b-1ad3e8ff1bf5-logs\") pod \"horizon-85664bf4f6-ws67w\" (UID: \"66bfb4a4-e60d-4f75-ad0b-1ad3e8ff1bf5\") " pod="openstack/horizon-85664bf4f6-ws67w" Nov 25 11:55:44 crc kubenswrapper[4706]: I1125 11:55:44.541118 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zc7hg\" (UniqueName: \"kubernetes.io/projected/66bfb4a4-e60d-4f75-ad0b-1ad3e8ff1bf5-kube-api-access-zc7hg\") pod \"horizon-85664bf4f6-ws67w\" (UID: \"66bfb4a4-e60d-4f75-ad0b-1ad3e8ff1bf5\") " pod="openstack/horizon-85664bf4f6-ws67w" Nov 25 11:55:44 crc kubenswrapper[4706]: I1125 11:55:44.541152 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/66bfb4a4-e60d-4f75-ad0b-1ad3e8ff1bf5-scripts\") pod \"horizon-85664bf4f6-ws67w\" (UID: \"66bfb4a4-e60d-4f75-ad0b-1ad3e8ff1bf5\") " pod="openstack/horizon-85664bf4f6-ws67w" Nov 25 11:55:44 crc kubenswrapper[4706]: I1125 11:55:44.541179 4706 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/66bfb4a4-e60d-4f75-ad0b-1ad3e8ff1bf5-combined-ca-bundle\") pod \"horizon-85664bf4f6-ws67w\" (UID: \"66bfb4a4-e60d-4f75-ad0b-1ad3e8ff1bf5\") " pod="openstack/horizon-85664bf4f6-ws67w" Nov 25 11:55:44 crc kubenswrapper[4706]: I1125 11:55:44.541206 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/74b33eb1-0020-4037-918c-9e747dcfd61f-horizon-secret-key\") pod \"horizon-5d6465f55b-zdrth\" (UID: \"74b33eb1-0020-4037-918c-9e747dcfd61f\") " pod="openstack/horizon-5d6465f55b-zdrth" Nov 25 11:55:44 crc kubenswrapper[4706]: I1125 11:55:44.541254 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/66bfb4a4-e60d-4f75-ad0b-1ad3e8ff1bf5-horizon-tls-certs\") pod \"horizon-85664bf4f6-ws67w\" (UID: \"66bfb4a4-e60d-4f75-ad0b-1ad3e8ff1bf5\") " pod="openstack/horizon-85664bf4f6-ws67w" Nov 25 11:55:44 crc kubenswrapper[4706]: I1125 11:55:44.541283 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/74b33eb1-0020-4037-918c-9e747dcfd61f-horizon-tls-certs\") pod \"horizon-5d6465f55b-zdrth\" (UID: \"74b33eb1-0020-4037-918c-9e747dcfd61f\") " pod="openstack/horizon-5d6465f55b-zdrth" Nov 25 11:55:44 crc kubenswrapper[4706]: I1125 11:55:44.541459 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/74b33eb1-0020-4037-918c-9e747dcfd61f-logs\") pod \"horizon-5d6465f55b-zdrth\" (UID: \"74b33eb1-0020-4037-918c-9e747dcfd61f\") " pod="openstack/horizon-5d6465f55b-zdrth" Nov 25 11:55:44 crc kubenswrapper[4706]: I1125 11:55:44.542496 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: 
\"kubernetes.io/configmap/74b33eb1-0020-4037-918c-9e747dcfd61f-scripts\") pod \"horizon-5d6465f55b-zdrth\" (UID: \"74b33eb1-0020-4037-918c-9e747dcfd61f\") " pod="openstack/horizon-5d6465f55b-zdrth" Nov 25 11:55:44 crc kubenswrapper[4706]: I1125 11:55:44.544859 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/74b33eb1-0020-4037-918c-9e747dcfd61f-config-data\") pod \"horizon-5d6465f55b-zdrth\" (UID: \"74b33eb1-0020-4037-918c-9e747dcfd61f\") " pod="openstack/horizon-5d6465f55b-zdrth" Nov 25 11:55:44 crc kubenswrapper[4706]: I1125 11:55:44.549296 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/74b33eb1-0020-4037-918c-9e747dcfd61f-horizon-tls-certs\") pod \"horizon-5d6465f55b-zdrth\" (UID: \"74b33eb1-0020-4037-918c-9e747dcfd61f\") " pod="openstack/horizon-5d6465f55b-zdrth" Nov 25 11:55:44 crc kubenswrapper[4706]: I1125 11:55:44.552760 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/74b33eb1-0020-4037-918c-9e747dcfd61f-combined-ca-bundle\") pod \"horizon-5d6465f55b-zdrth\" (UID: \"74b33eb1-0020-4037-918c-9e747dcfd61f\") " pod="openstack/horizon-5d6465f55b-zdrth" Nov 25 11:55:44 crc kubenswrapper[4706]: I1125 11:55:44.554858 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/74b33eb1-0020-4037-918c-9e747dcfd61f-horizon-secret-key\") pod \"horizon-5d6465f55b-zdrth\" (UID: \"74b33eb1-0020-4037-918c-9e747dcfd61f\") " pod="openstack/horizon-5d6465f55b-zdrth" Nov 25 11:55:44 crc kubenswrapper[4706]: I1125 11:55:44.565034 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2gp4v\" (UniqueName: \"kubernetes.io/projected/74b33eb1-0020-4037-918c-9e747dcfd61f-kube-api-access-2gp4v\") pod \"horizon-5d6465f55b-zdrth\" (UID: 
\"74b33eb1-0020-4037-918c-9e747dcfd61f\") " pod="openstack/horizon-5d6465f55b-zdrth" Nov 25 11:55:44 crc kubenswrapper[4706]: I1125 11:55:44.643377 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/66bfb4a4-e60d-4f75-ad0b-1ad3e8ff1bf5-logs\") pod \"horizon-85664bf4f6-ws67w\" (UID: \"66bfb4a4-e60d-4f75-ad0b-1ad3e8ff1bf5\") " pod="openstack/horizon-85664bf4f6-ws67w" Nov 25 11:55:44 crc kubenswrapper[4706]: I1125 11:55:44.643483 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zc7hg\" (UniqueName: \"kubernetes.io/projected/66bfb4a4-e60d-4f75-ad0b-1ad3e8ff1bf5-kube-api-access-zc7hg\") pod \"horizon-85664bf4f6-ws67w\" (UID: \"66bfb4a4-e60d-4f75-ad0b-1ad3e8ff1bf5\") " pod="openstack/horizon-85664bf4f6-ws67w" Nov 25 11:55:44 crc kubenswrapper[4706]: I1125 11:55:44.643538 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/66bfb4a4-e60d-4f75-ad0b-1ad3e8ff1bf5-scripts\") pod \"horizon-85664bf4f6-ws67w\" (UID: \"66bfb4a4-e60d-4f75-ad0b-1ad3e8ff1bf5\") " pod="openstack/horizon-85664bf4f6-ws67w" Nov 25 11:55:44 crc kubenswrapper[4706]: I1125 11:55:44.643584 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/66bfb4a4-e60d-4f75-ad0b-1ad3e8ff1bf5-combined-ca-bundle\") pod \"horizon-85664bf4f6-ws67w\" (UID: \"66bfb4a4-e60d-4f75-ad0b-1ad3e8ff1bf5\") " pod="openstack/horizon-85664bf4f6-ws67w" Nov 25 11:55:44 crc kubenswrapper[4706]: I1125 11:55:44.643633 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/66bfb4a4-e60d-4f75-ad0b-1ad3e8ff1bf5-horizon-tls-certs\") pod \"horizon-85664bf4f6-ws67w\" (UID: \"66bfb4a4-e60d-4f75-ad0b-1ad3e8ff1bf5\") " pod="openstack/horizon-85664bf4f6-ws67w" Nov 25 11:55:44 crc 
kubenswrapper[4706]: I1125 11:55:44.643719 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/66bfb4a4-e60d-4f75-ad0b-1ad3e8ff1bf5-config-data\") pod \"horizon-85664bf4f6-ws67w\" (UID: \"66bfb4a4-e60d-4f75-ad0b-1ad3e8ff1bf5\") " pod="openstack/horizon-85664bf4f6-ws67w" Nov 25 11:55:44 crc kubenswrapper[4706]: I1125 11:55:44.643747 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/66bfb4a4-e60d-4f75-ad0b-1ad3e8ff1bf5-horizon-secret-key\") pod \"horizon-85664bf4f6-ws67w\" (UID: \"66bfb4a4-e60d-4f75-ad0b-1ad3e8ff1bf5\") " pod="openstack/horizon-85664bf4f6-ws67w" Nov 25 11:55:44 crc kubenswrapper[4706]: I1125 11:55:44.644159 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/66bfb4a4-e60d-4f75-ad0b-1ad3e8ff1bf5-logs\") pod \"horizon-85664bf4f6-ws67w\" (UID: \"66bfb4a4-e60d-4f75-ad0b-1ad3e8ff1bf5\") " pod="openstack/horizon-85664bf4f6-ws67w" Nov 25 11:55:44 crc kubenswrapper[4706]: I1125 11:55:44.646122 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/66bfb4a4-e60d-4f75-ad0b-1ad3e8ff1bf5-scripts\") pod \"horizon-85664bf4f6-ws67w\" (UID: \"66bfb4a4-e60d-4f75-ad0b-1ad3e8ff1bf5\") " pod="openstack/horizon-85664bf4f6-ws67w" Nov 25 11:55:44 crc kubenswrapper[4706]: I1125 11:55:44.646608 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/66bfb4a4-e60d-4f75-ad0b-1ad3e8ff1bf5-config-data\") pod \"horizon-85664bf4f6-ws67w\" (UID: \"66bfb4a4-e60d-4f75-ad0b-1ad3e8ff1bf5\") " pod="openstack/horizon-85664bf4f6-ws67w" Nov 25 11:55:44 crc kubenswrapper[4706]: I1125 11:55:44.654411 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/66bfb4a4-e60d-4f75-ad0b-1ad3e8ff1bf5-combined-ca-bundle\") pod \"horizon-85664bf4f6-ws67w\" (UID: \"66bfb4a4-e60d-4f75-ad0b-1ad3e8ff1bf5\") " pod="openstack/horizon-85664bf4f6-ws67w" Nov 25 11:55:44 crc kubenswrapper[4706]: I1125 11:55:44.657598 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/66bfb4a4-e60d-4f75-ad0b-1ad3e8ff1bf5-horizon-secret-key\") pod \"horizon-85664bf4f6-ws67w\" (UID: \"66bfb4a4-e60d-4f75-ad0b-1ad3e8ff1bf5\") " pod="openstack/horizon-85664bf4f6-ws67w" Nov 25 11:55:44 crc kubenswrapper[4706]: I1125 11:55:44.658691 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/66bfb4a4-e60d-4f75-ad0b-1ad3e8ff1bf5-horizon-tls-certs\") pod \"horizon-85664bf4f6-ws67w\" (UID: \"66bfb4a4-e60d-4f75-ad0b-1ad3e8ff1bf5\") " pod="openstack/horizon-85664bf4f6-ws67w" Nov 25 11:55:44 crc kubenswrapper[4706]: I1125 11:55:44.663136 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zc7hg\" (UniqueName: \"kubernetes.io/projected/66bfb4a4-e60d-4f75-ad0b-1ad3e8ff1bf5-kube-api-access-zc7hg\") pod \"horizon-85664bf4f6-ws67w\" (UID: \"66bfb4a4-e60d-4f75-ad0b-1ad3e8ff1bf5\") " pod="openstack/horizon-85664bf4f6-ws67w" Nov 25 11:55:44 crc kubenswrapper[4706]: I1125 11:55:44.681241 4706 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-5d6465f55b-zdrth" Nov 25 11:55:44 crc kubenswrapper[4706]: I1125 11:55:44.749690 4706 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-85664bf4f6-ws67w" Nov 25 11:55:46 crc kubenswrapper[4706]: I1125 11:55:46.306527 4706 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-785d8bcb8c-vhqcg" Nov 25 11:55:46 crc kubenswrapper[4706]: I1125 11:55:46.371291 4706 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-74f6bcbc87-dqgdx"] Nov 25 11:55:46 crc kubenswrapper[4706]: I1125 11:55:46.371588 4706 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-74f6bcbc87-dqgdx" podUID="d377cf62-3246-4d83-86b8-f55d354a2d5c" containerName="dnsmasq-dns" containerID="cri-o://f1b3b630b5578d49173f9161e395731350d90063332754fe96cefc07384bf022" gracePeriod=10 Nov 25 11:55:47 crc kubenswrapper[4706]: I1125 11:55:47.159449 4706 generic.go:334] "Generic (PLEG): container finished" podID="d377cf62-3246-4d83-86b8-f55d354a2d5c" containerID="f1b3b630b5578d49173f9161e395731350d90063332754fe96cefc07384bf022" exitCode=0 Nov 25 11:55:47 crc kubenswrapper[4706]: I1125 11:55:47.159487 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-74f6bcbc87-dqgdx" event={"ID":"d377cf62-3246-4d83-86b8-f55d354a2d5c","Type":"ContainerDied","Data":"f1b3b630b5578d49173f9161e395731350d90063332754fe96cefc07384bf022"} Nov 25 11:55:49 crc kubenswrapper[4706]: I1125 11:55:49.417125 4706 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-74f6bcbc87-dqgdx" podUID="d377cf62-3246-4d83-86b8-f55d354a2d5c" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.133:5353: connect: connection refused" Nov 25 11:55:51 crc kubenswrapper[4706]: I1125 11:55:51.995881 4706 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-lslv5" Nov 25 11:55:52 crc kubenswrapper[4706]: I1125 11:55:52.073895 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fb5e4015-f047-4386-b88d-b7b0c2a0878b-scripts\") pod \"fb5e4015-f047-4386-b88d-b7b0c2a0878b\" (UID: \"fb5e4015-f047-4386-b88d-b7b0c2a0878b\") " Nov 25 11:55:52 crc kubenswrapper[4706]: I1125 11:55:52.074053 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/fb5e4015-f047-4386-b88d-b7b0c2a0878b-credential-keys\") pod \"fb5e4015-f047-4386-b88d-b7b0c2a0878b\" (UID: \"fb5e4015-f047-4386-b88d-b7b0c2a0878b\") " Nov 25 11:55:52 crc kubenswrapper[4706]: I1125 11:55:52.074152 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-45q5t\" (UniqueName: \"kubernetes.io/projected/fb5e4015-f047-4386-b88d-b7b0c2a0878b-kube-api-access-45q5t\") pod \"fb5e4015-f047-4386-b88d-b7b0c2a0878b\" (UID: \"fb5e4015-f047-4386-b88d-b7b0c2a0878b\") " Nov 25 11:55:52 crc kubenswrapper[4706]: I1125 11:55:52.074255 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fb5e4015-f047-4386-b88d-b7b0c2a0878b-combined-ca-bundle\") pod \"fb5e4015-f047-4386-b88d-b7b0c2a0878b\" (UID: \"fb5e4015-f047-4386-b88d-b7b0c2a0878b\") " Nov 25 11:55:52 crc kubenswrapper[4706]: I1125 11:55:52.074335 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fb5e4015-f047-4386-b88d-b7b0c2a0878b-config-data\") pod \"fb5e4015-f047-4386-b88d-b7b0c2a0878b\" (UID: \"fb5e4015-f047-4386-b88d-b7b0c2a0878b\") " Nov 25 11:55:52 crc kubenswrapper[4706]: I1125 11:55:52.074356 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" 
(UniqueName: \"kubernetes.io/secret/fb5e4015-f047-4386-b88d-b7b0c2a0878b-fernet-keys\") pod \"fb5e4015-f047-4386-b88d-b7b0c2a0878b\" (UID: \"fb5e4015-f047-4386-b88d-b7b0c2a0878b\") " Nov 25 11:55:52 crc kubenswrapper[4706]: I1125 11:55:52.082282 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fb5e4015-f047-4386-b88d-b7b0c2a0878b-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "fb5e4015-f047-4386-b88d-b7b0c2a0878b" (UID: "fb5e4015-f047-4386-b88d-b7b0c2a0878b"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 11:55:52 crc kubenswrapper[4706]: I1125 11:55:52.082422 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fb5e4015-f047-4386-b88d-b7b0c2a0878b-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "fb5e4015-f047-4386-b88d-b7b0c2a0878b" (UID: "fb5e4015-f047-4386-b88d-b7b0c2a0878b"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 11:55:52 crc kubenswrapper[4706]: I1125 11:55:52.085036 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fb5e4015-f047-4386-b88d-b7b0c2a0878b-scripts" (OuterVolumeSpecName: "scripts") pod "fb5e4015-f047-4386-b88d-b7b0c2a0878b" (UID: "fb5e4015-f047-4386-b88d-b7b0c2a0878b"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 11:55:52 crc kubenswrapper[4706]: I1125 11:55:52.087209 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fb5e4015-f047-4386-b88d-b7b0c2a0878b-kube-api-access-45q5t" (OuterVolumeSpecName: "kube-api-access-45q5t") pod "fb5e4015-f047-4386-b88d-b7b0c2a0878b" (UID: "fb5e4015-f047-4386-b88d-b7b0c2a0878b"). InnerVolumeSpecName "kube-api-access-45q5t". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 11:55:52 crc kubenswrapper[4706]: I1125 11:55:52.113207 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fb5e4015-f047-4386-b88d-b7b0c2a0878b-config-data" (OuterVolumeSpecName: "config-data") pod "fb5e4015-f047-4386-b88d-b7b0c2a0878b" (UID: "fb5e4015-f047-4386-b88d-b7b0c2a0878b"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 11:55:52 crc kubenswrapper[4706]: I1125 11:55:52.117410 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fb5e4015-f047-4386-b88d-b7b0c2a0878b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "fb5e4015-f047-4386-b88d-b7b0c2a0878b" (UID: "fb5e4015-f047-4386-b88d-b7b0c2a0878b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 11:55:52 crc kubenswrapper[4706]: I1125 11:55:52.176657 4706 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fb5e4015-f047-4386-b88d-b7b0c2a0878b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 11:55:52 crc kubenswrapper[4706]: I1125 11:55:52.176692 4706 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/fb5e4015-f047-4386-b88d-b7b0c2a0878b-fernet-keys\") on node \"crc\" DevicePath \"\"" Nov 25 11:55:52 crc kubenswrapper[4706]: I1125 11:55:52.176705 4706 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fb5e4015-f047-4386-b88d-b7b0c2a0878b-config-data\") on node \"crc\" DevicePath \"\"" Nov 25 11:55:52 crc kubenswrapper[4706]: I1125 11:55:52.176716 4706 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fb5e4015-f047-4386-b88d-b7b0c2a0878b-scripts\") on node \"crc\" DevicePath \"\"" Nov 25 11:55:52 crc 
kubenswrapper[4706]: I1125 11:55:52.176725 4706 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/fb5e4015-f047-4386-b88d-b7b0c2a0878b-credential-keys\") on node \"crc\" DevicePath \"\"" Nov 25 11:55:52 crc kubenswrapper[4706]: I1125 11:55:52.176735 4706 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-45q5t\" (UniqueName: \"kubernetes.io/projected/fb5e4015-f047-4386-b88d-b7b0c2a0878b-kube-api-access-45q5t\") on node \"crc\" DevicePath \"\"" Nov 25 11:55:52 crc kubenswrapper[4706]: I1125 11:55:52.203849 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"e0c3d5f1-1ac9-4f5f-bef2-232cf6055061","Type":"ContainerStarted","Data":"17bca5f4621e790c876f25b6b06e9d34c1d484ba94c24a5bea74a3ef46019532"} Nov 25 11:55:52 crc kubenswrapper[4706]: I1125 11:55:52.203935 4706 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="e0c3d5f1-1ac9-4f5f-bef2-232cf6055061" containerName="glance-log" containerID="cri-o://8335a64cf57e1eae1c577f9abff5ccb7be9057b998f1067a70800eef4c087ea6" gracePeriod=30 Nov 25 11:55:52 crc kubenswrapper[4706]: I1125 11:55:52.204107 4706 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="e0c3d5f1-1ac9-4f5f-bef2-232cf6055061" containerName="glance-httpd" containerID="cri-o://17bca5f4621e790c876f25b6b06e9d34c1d484ba94c24a5bea74a3ef46019532" gracePeriod=30 Nov 25 11:55:52 crc kubenswrapper[4706]: I1125 11:55:52.206853 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-lslv5" event={"ID":"fb5e4015-f047-4386-b88d-b7b0c2a0878b","Type":"ContainerDied","Data":"820e27ba7dbcc0cabd2ea6ee2f57e63debedbcd24a32b1f182025628f2991753"} Nov 25 11:55:52 crc kubenswrapper[4706]: I1125 11:55:52.206891 4706 pod_container_deletor.go:80] "Container not found in pod's 
containers" containerID="820e27ba7dbcc0cabd2ea6ee2f57e63debedbcd24a32b1f182025628f2991753" Nov 25 11:55:52 crc kubenswrapper[4706]: I1125 11:55:52.206940 4706 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-lslv5" Nov 25 11:55:52 crc kubenswrapper[4706]: I1125 11:55:52.232238 4706 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=17.232223071 podStartE2EDuration="17.232223071s" podCreationTimestamp="2025-11-25 11:55:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 11:55:52.226656331 +0000 UTC m=+1161.141213732" watchObservedRunningTime="2025-11-25 11:55:52.232223071 +0000 UTC m=+1161.146780452" Nov 25 11:55:53 crc kubenswrapper[4706]: I1125 11:55:53.108909 4706 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-lslv5"] Nov 25 11:55:53 crc kubenswrapper[4706]: I1125 11:55:53.118671 4706 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-lslv5"] Nov 25 11:55:53 crc kubenswrapper[4706]: I1125 11:55:53.213909 4706 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-xbn9h"] Nov 25 11:55:53 crc kubenswrapper[4706]: E1125 11:55:53.214581 4706 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fb5e4015-f047-4386-b88d-b7b0c2a0878b" containerName="keystone-bootstrap" Nov 25 11:55:53 crc kubenswrapper[4706]: I1125 11:55:53.214594 4706 state_mem.go:107] "Deleted CPUSet assignment" podUID="fb5e4015-f047-4386-b88d-b7b0c2a0878b" containerName="keystone-bootstrap" Nov 25 11:55:53 crc kubenswrapper[4706]: I1125 11:55:53.214791 4706 memory_manager.go:354] "RemoveStaleState removing state" podUID="fb5e4015-f047-4386-b88d-b7b0c2a0878b" containerName="keystone-bootstrap" Nov 25 11:55:53 crc kubenswrapper[4706]: I1125 11:55:53.215439 
4706 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-xbn9h" Nov 25 11:55:53 crc kubenswrapper[4706]: I1125 11:55:53.218657 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Nov 25 11:55:53 crc kubenswrapper[4706]: I1125 11:55:53.218940 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Nov 25 11:55:53 crc kubenswrapper[4706]: I1125 11:55:53.219117 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-p74gc" Nov 25 11:55:53 crc kubenswrapper[4706]: I1125 11:55:53.219230 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Nov 25 11:55:53 crc kubenswrapper[4706]: I1125 11:55:53.221446 4706 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-xbn9h"] Nov 25 11:55:53 crc kubenswrapper[4706]: I1125 11:55:53.223033 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Nov 25 11:55:53 crc kubenswrapper[4706]: I1125 11:55:53.223494 4706 generic.go:334] "Generic (PLEG): container finished" podID="e0c3d5f1-1ac9-4f5f-bef2-232cf6055061" containerID="17bca5f4621e790c876f25b6b06e9d34c1d484ba94c24a5bea74a3ef46019532" exitCode=143 Nov 25 11:55:53 crc kubenswrapper[4706]: I1125 11:55:53.223524 4706 generic.go:334] "Generic (PLEG): container finished" podID="e0c3d5f1-1ac9-4f5f-bef2-232cf6055061" containerID="8335a64cf57e1eae1c577f9abff5ccb7be9057b998f1067a70800eef4c087ea6" exitCode=143 Nov 25 11:55:53 crc kubenswrapper[4706]: I1125 11:55:53.223537 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"e0c3d5f1-1ac9-4f5f-bef2-232cf6055061","Type":"ContainerDied","Data":"17bca5f4621e790c876f25b6b06e9d34c1d484ba94c24a5bea74a3ef46019532"} Nov 25 11:55:53 crc kubenswrapper[4706]: I1125 11:55:53.223580 4706 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"e0c3d5f1-1ac9-4f5f-bef2-232cf6055061","Type":"ContainerDied","Data":"8335a64cf57e1eae1c577f9abff5ccb7be9057b998f1067a70800eef4c087ea6"} Nov 25 11:55:53 crc kubenswrapper[4706]: I1125 11:55:53.300862 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/4586fb7b-8269-4dca-87d4-f3c66518b999-fernet-keys\") pod \"keystone-bootstrap-xbn9h\" (UID: \"4586fb7b-8269-4dca-87d4-f3c66518b999\") " pod="openstack/keystone-bootstrap-xbn9h" Nov 25 11:55:53 crc kubenswrapper[4706]: I1125 11:55:53.300930 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4586fb7b-8269-4dca-87d4-f3c66518b999-config-data\") pod \"keystone-bootstrap-xbn9h\" (UID: \"4586fb7b-8269-4dca-87d4-f3c66518b999\") " pod="openstack/keystone-bootstrap-xbn9h" Nov 25 11:55:53 crc kubenswrapper[4706]: I1125 11:55:53.301018 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4586fb7b-8269-4dca-87d4-f3c66518b999-combined-ca-bundle\") pod \"keystone-bootstrap-xbn9h\" (UID: \"4586fb7b-8269-4dca-87d4-f3c66518b999\") " pod="openstack/keystone-bootstrap-xbn9h" Nov 25 11:55:53 crc kubenswrapper[4706]: I1125 11:55:53.301136 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-56zs6\" (UniqueName: \"kubernetes.io/projected/4586fb7b-8269-4dca-87d4-f3c66518b999-kube-api-access-56zs6\") pod \"keystone-bootstrap-xbn9h\" (UID: \"4586fb7b-8269-4dca-87d4-f3c66518b999\") " pod="openstack/keystone-bootstrap-xbn9h" Nov 25 11:55:53 crc kubenswrapper[4706]: I1125 11:55:53.301190 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4586fb7b-8269-4dca-87d4-f3c66518b999-scripts\") pod \"keystone-bootstrap-xbn9h\" (UID: \"4586fb7b-8269-4dca-87d4-f3c66518b999\") " pod="openstack/keystone-bootstrap-xbn9h" Nov 25 11:55:53 crc kubenswrapper[4706]: I1125 11:55:53.301330 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/4586fb7b-8269-4dca-87d4-f3c66518b999-credential-keys\") pod \"keystone-bootstrap-xbn9h\" (UID: \"4586fb7b-8269-4dca-87d4-f3c66518b999\") " pod="openstack/keystone-bootstrap-xbn9h" Nov 25 11:55:53 crc kubenswrapper[4706]: I1125 11:55:53.403423 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/4586fb7b-8269-4dca-87d4-f3c66518b999-fernet-keys\") pod \"keystone-bootstrap-xbn9h\" (UID: \"4586fb7b-8269-4dca-87d4-f3c66518b999\") " pod="openstack/keystone-bootstrap-xbn9h" Nov 25 11:55:53 crc kubenswrapper[4706]: I1125 11:55:53.403494 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4586fb7b-8269-4dca-87d4-f3c66518b999-config-data\") pod \"keystone-bootstrap-xbn9h\" (UID: \"4586fb7b-8269-4dca-87d4-f3c66518b999\") " pod="openstack/keystone-bootstrap-xbn9h" Nov 25 11:55:53 crc kubenswrapper[4706]: I1125 11:55:53.403543 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4586fb7b-8269-4dca-87d4-f3c66518b999-combined-ca-bundle\") pod \"keystone-bootstrap-xbn9h\" (UID: \"4586fb7b-8269-4dca-87d4-f3c66518b999\") " pod="openstack/keystone-bootstrap-xbn9h" Nov 25 11:55:53 crc kubenswrapper[4706]: I1125 11:55:53.403573 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-56zs6\" (UniqueName: 
\"kubernetes.io/projected/4586fb7b-8269-4dca-87d4-f3c66518b999-kube-api-access-56zs6\") pod \"keystone-bootstrap-xbn9h\" (UID: \"4586fb7b-8269-4dca-87d4-f3c66518b999\") " pod="openstack/keystone-bootstrap-xbn9h" Nov 25 11:55:53 crc kubenswrapper[4706]: I1125 11:55:53.403595 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4586fb7b-8269-4dca-87d4-f3c66518b999-scripts\") pod \"keystone-bootstrap-xbn9h\" (UID: \"4586fb7b-8269-4dca-87d4-f3c66518b999\") " pod="openstack/keystone-bootstrap-xbn9h" Nov 25 11:55:53 crc kubenswrapper[4706]: I1125 11:55:53.403632 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/4586fb7b-8269-4dca-87d4-f3c66518b999-credential-keys\") pod \"keystone-bootstrap-xbn9h\" (UID: \"4586fb7b-8269-4dca-87d4-f3c66518b999\") " pod="openstack/keystone-bootstrap-xbn9h" Nov 25 11:55:53 crc kubenswrapper[4706]: I1125 11:55:53.410143 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4586fb7b-8269-4dca-87d4-f3c66518b999-config-data\") pod \"keystone-bootstrap-xbn9h\" (UID: \"4586fb7b-8269-4dca-87d4-f3c66518b999\") " pod="openstack/keystone-bootstrap-xbn9h" Nov 25 11:55:53 crc kubenswrapper[4706]: I1125 11:55:53.410625 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4586fb7b-8269-4dca-87d4-f3c66518b999-scripts\") pod \"keystone-bootstrap-xbn9h\" (UID: \"4586fb7b-8269-4dca-87d4-f3c66518b999\") " pod="openstack/keystone-bootstrap-xbn9h" Nov 25 11:55:53 crc kubenswrapper[4706]: I1125 11:55:53.411569 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/4586fb7b-8269-4dca-87d4-f3c66518b999-fernet-keys\") pod \"keystone-bootstrap-xbn9h\" (UID: \"4586fb7b-8269-4dca-87d4-f3c66518b999\") " 
pod="openstack/keystone-bootstrap-xbn9h" Nov 25 11:55:53 crc kubenswrapper[4706]: I1125 11:55:53.412966 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4586fb7b-8269-4dca-87d4-f3c66518b999-combined-ca-bundle\") pod \"keystone-bootstrap-xbn9h\" (UID: \"4586fb7b-8269-4dca-87d4-f3c66518b999\") " pod="openstack/keystone-bootstrap-xbn9h" Nov 25 11:55:53 crc kubenswrapper[4706]: I1125 11:55:53.419129 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/4586fb7b-8269-4dca-87d4-f3c66518b999-credential-keys\") pod \"keystone-bootstrap-xbn9h\" (UID: \"4586fb7b-8269-4dca-87d4-f3c66518b999\") " pod="openstack/keystone-bootstrap-xbn9h" Nov 25 11:55:53 crc kubenswrapper[4706]: I1125 11:55:53.423723 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-56zs6\" (UniqueName: \"kubernetes.io/projected/4586fb7b-8269-4dca-87d4-f3c66518b999-kube-api-access-56zs6\") pod \"keystone-bootstrap-xbn9h\" (UID: \"4586fb7b-8269-4dca-87d4-f3c66518b999\") " pod="openstack/keystone-bootstrap-xbn9h" Nov 25 11:55:53 crc kubenswrapper[4706]: I1125 11:55:53.539974 4706 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-xbn9h" Nov 25 11:55:53 crc kubenswrapper[4706]: I1125 11:55:53.935913 4706 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fb5e4015-f047-4386-b88d-b7b0c2a0878b" path="/var/lib/kubelet/pods/fb5e4015-f047-4386-b88d-b7b0c2a0878b/volumes" Nov 25 11:55:54 crc kubenswrapper[4706]: I1125 11:55:54.416899 4706 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-74f6bcbc87-dqgdx" podUID="d377cf62-3246-4d83-86b8-f55d354a2d5c" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.133:5353: connect: connection refused" Nov 25 11:55:54 crc kubenswrapper[4706]: E1125 11:55:54.730256 4706 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-horizon:current-podified" Nov 25 11:55:54 crc kubenswrapper[4706]: E1125 11:55:54.730698 4706 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:horizon-log,Image:quay.io/podified-antelope-centos9/openstack-horizon:current-podified,Command:[/bin/bash],Args:[-c tail -n+1 -F 
/var/log/horizon/horizon.log],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n9hf8h97h595h5dfh655h68h598h57h55dhd6h5bh55fh655h588h64ch59bh55hf6h5b4h569h6bh64ch5d8hb5h56h54fh5c4h5bdhc6h548h5q,ValueFrom:nil,},EnvVar{Name:ENABLE_DESIGNATE,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_HEAT,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_IRONIC,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_MANILA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_OCTAVIA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_WATCHER,Value:no,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},EnvVar{Name:UNPACK_THEME,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:logs,ReadOnly:false,MountPath:/var/log/horizon,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-p75q7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*48,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*42400,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod horizon-78549bf5d5-rtlzb_openstack(cba2657d-39a9-4556-abec-412b63df6c94): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 25 11:55:54 crc kubenswrapper[4706]: E1125 
11:55:54.733656 4706 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"horizon-log\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for \"horizon\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-horizon:current-podified\\\"\"]" pod="openstack/horizon-78549bf5d5-rtlzb" podUID="cba2657d-39a9-4556-abec-412b63df6c94" Nov 25 11:56:01 crc kubenswrapper[4706]: I1125 11:56:01.124681 4706 patch_prober.go:28] interesting pod/machine-config-daemon-dhfpm container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 25 11:56:01 crc kubenswrapper[4706]: I1125 11:56:01.125224 4706 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dhfpm" podUID="0930887a-320c-4506-8c9c-f94d6d64516a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 25 11:56:03 crc kubenswrapper[4706]: I1125 11:56:03.323379 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"dc6d1720-c37f-4501-bbb1-16f507bc1126","Type":"ContainerDied","Data":"1d30d85110ff376a33d87db7563e5684aada2ba8c86fb21b726f8b4c86c10b00"} Nov 25 11:56:03 crc kubenswrapper[4706]: I1125 11:56:03.323689 4706 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1d30d85110ff376a33d87db7563e5684aada2ba8c86fb21b726f8b4c86c10b00" Nov 25 11:56:03 crc kubenswrapper[4706]: I1125 11:56:03.339081 4706 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Nov 25 11:56:03 crc kubenswrapper[4706]: I1125 11:56:03.397824 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/dc6d1720-c37f-4501-bbb1-16f507bc1126-httpd-run\") pod \"dc6d1720-c37f-4501-bbb1-16f507bc1126\" (UID: \"dc6d1720-c37f-4501-bbb1-16f507bc1126\") " Nov 25 11:56:03 crc kubenswrapper[4706]: I1125 11:56:03.397976 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dc6d1720-c37f-4501-bbb1-16f507bc1126-combined-ca-bundle\") pod \"dc6d1720-c37f-4501-bbb1-16f507bc1126\" (UID: \"dc6d1720-c37f-4501-bbb1-16f507bc1126\") " Nov 25 11:56:03 crc kubenswrapper[4706]: I1125 11:56:03.398028 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"dc6d1720-c37f-4501-bbb1-16f507bc1126\" (UID: \"dc6d1720-c37f-4501-bbb1-16f507bc1126\") " Nov 25 11:56:03 crc kubenswrapper[4706]: I1125 11:56:03.398053 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dc6d1720-c37f-4501-bbb1-16f507bc1126-config-data\") pod \"dc6d1720-c37f-4501-bbb1-16f507bc1126\" (UID: \"dc6d1720-c37f-4501-bbb1-16f507bc1126\") " Nov 25 11:56:03 crc kubenswrapper[4706]: I1125 11:56:03.398137 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9bczs\" (UniqueName: \"kubernetes.io/projected/dc6d1720-c37f-4501-bbb1-16f507bc1126-kube-api-access-9bczs\") pod \"dc6d1720-c37f-4501-bbb1-16f507bc1126\" (UID: \"dc6d1720-c37f-4501-bbb1-16f507bc1126\") " Nov 25 11:56:03 crc kubenswrapper[4706]: I1125 11:56:03.398164 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/dc6d1720-c37f-4501-bbb1-16f507bc1126-internal-tls-certs\") pod \"dc6d1720-c37f-4501-bbb1-16f507bc1126\" (UID: \"dc6d1720-c37f-4501-bbb1-16f507bc1126\") " Nov 25 11:56:03 crc kubenswrapper[4706]: I1125 11:56:03.398231 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dc6d1720-c37f-4501-bbb1-16f507bc1126-scripts\") pod \"dc6d1720-c37f-4501-bbb1-16f507bc1126\" (UID: \"dc6d1720-c37f-4501-bbb1-16f507bc1126\") " Nov 25 11:56:03 crc kubenswrapper[4706]: I1125 11:56:03.398266 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dc6d1720-c37f-4501-bbb1-16f507bc1126-logs\") pod \"dc6d1720-c37f-4501-bbb1-16f507bc1126\" (UID: \"dc6d1720-c37f-4501-bbb1-16f507bc1126\") " Nov 25 11:56:03 crc kubenswrapper[4706]: I1125 11:56:03.398840 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dc6d1720-c37f-4501-bbb1-16f507bc1126-logs" (OuterVolumeSpecName: "logs") pod "dc6d1720-c37f-4501-bbb1-16f507bc1126" (UID: "dc6d1720-c37f-4501-bbb1-16f507bc1126"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 11:56:03 crc kubenswrapper[4706]: I1125 11:56:03.399083 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dc6d1720-c37f-4501-bbb1-16f507bc1126-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "dc6d1720-c37f-4501-bbb1-16f507bc1126" (UID: "dc6d1720-c37f-4501-bbb1-16f507bc1126"). InnerVolumeSpecName "httpd-run". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 11:56:03 crc kubenswrapper[4706]: I1125 11:56:03.406817 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dc6d1720-c37f-4501-bbb1-16f507bc1126-kube-api-access-9bczs" (OuterVolumeSpecName: "kube-api-access-9bczs") pod "dc6d1720-c37f-4501-bbb1-16f507bc1126" (UID: "dc6d1720-c37f-4501-bbb1-16f507bc1126"). InnerVolumeSpecName "kube-api-access-9bczs". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 11:56:03 crc kubenswrapper[4706]: I1125 11:56:03.420761 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dc6d1720-c37f-4501-bbb1-16f507bc1126-scripts" (OuterVolumeSpecName: "scripts") pod "dc6d1720-c37f-4501-bbb1-16f507bc1126" (UID: "dc6d1720-c37f-4501-bbb1-16f507bc1126"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 11:56:03 crc kubenswrapper[4706]: I1125 11:56:03.424237 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dc6d1720-c37f-4501-bbb1-16f507bc1126-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "dc6d1720-c37f-4501-bbb1-16f507bc1126" (UID: "dc6d1720-c37f-4501-bbb1-16f507bc1126"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 11:56:03 crc kubenswrapper[4706]: I1125 11:56:03.429439 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage02-crc" (OuterVolumeSpecName: "glance") pod "dc6d1720-c37f-4501-bbb1-16f507bc1126" (UID: "dc6d1720-c37f-4501-bbb1-16f507bc1126"). InnerVolumeSpecName "local-storage02-crc". 
PluginName "kubernetes.io/local-volume", VolumeGidValue "" Nov 25 11:56:03 crc kubenswrapper[4706]: I1125 11:56:03.444582 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dc6d1720-c37f-4501-bbb1-16f507bc1126-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "dc6d1720-c37f-4501-bbb1-16f507bc1126" (UID: "dc6d1720-c37f-4501-bbb1-16f507bc1126"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 11:56:03 crc kubenswrapper[4706]: I1125 11:56:03.462597 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dc6d1720-c37f-4501-bbb1-16f507bc1126-config-data" (OuterVolumeSpecName: "config-data") pod "dc6d1720-c37f-4501-bbb1-16f507bc1126" (UID: "dc6d1720-c37f-4501-bbb1-16f507bc1126"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 11:56:03 crc kubenswrapper[4706]: I1125 11:56:03.500585 4706 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/dc6d1720-c37f-4501-bbb1-16f507bc1126-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 25 11:56:03 crc kubenswrapper[4706]: I1125 11:56:03.500619 4706 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9bczs\" (UniqueName: \"kubernetes.io/projected/dc6d1720-c37f-4501-bbb1-16f507bc1126-kube-api-access-9bczs\") on node \"crc\" DevicePath \"\"" Nov 25 11:56:03 crc kubenswrapper[4706]: I1125 11:56:03.500634 4706 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dc6d1720-c37f-4501-bbb1-16f507bc1126-scripts\") on node \"crc\" DevicePath \"\"" Nov 25 11:56:03 crc kubenswrapper[4706]: I1125 11:56:03.500645 4706 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dc6d1720-c37f-4501-bbb1-16f507bc1126-logs\") on node \"crc\" DevicePath \"\"" Nov 25 
11:56:03 crc kubenswrapper[4706]: I1125 11:56:03.500656 4706 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/dc6d1720-c37f-4501-bbb1-16f507bc1126-httpd-run\") on node \"crc\" DevicePath \"\"" Nov 25 11:56:03 crc kubenswrapper[4706]: I1125 11:56:03.500666 4706 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dc6d1720-c37f-4501-bbb1-16f507bc1126-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 11:56:03 crc kubenswrapper[4706]: I1125 11:56:03.500705 4706 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") on node \"crc\" " Nov 25 11:56:03 crc kubenswrapper[4706]: I1125 11:56:03.500716 4706 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dc6d1720-c37f-4501-bbb1-16f507bc1126-config-data\") on node \"crc\" DevicePath \"\"" Nov 25 11:56:03 crc kubenswrapper[4706]: I1125 11:56:03.548822 4706 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage02-crc" (UniqueName: "kubernetes.io/local-volume/local-storage02-crc") on node "crc" Nov 25 11:56:03 crc kubenswrapper[4706]: I1125 11:56:03.603331 4706 reconciler_common.go:293] "Volume detached for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") on node \"crc\" DevicePath \"\"" Nov 25 11:56:03 crc kubenswrapper[4706]: I1125 11:56:03.992995 4706 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-78549bf5d5-rtlzb" Nov 25 11:56:04 crc kubenswrapper[4706]: E1125 11:56:04.006711 4706 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified" Nov 25 11:56:04 crc kubenswrapper[4706]: E1125 11:56:04.007268 4706 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:barbican-db-sync,Image:quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified,Command:[/bin/bash],Args:[-c barbican-manage db upgrade],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/barbican/barbican.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7zgll,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42403,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*42403,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDev
ices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod barbican-db-sync-v6lvb_openstack(08ef6ec0-ba09-40a2-94d0-a1ddbba8644a): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 25 11:56:04 crc kubenswrapper[4706]: E1125 11:56:04.016532 4706 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/barbican-db-sync-v6lvb" podUID="08ef6ec0-ba09-40a2-94d0-a1ddbba8644a" Nov 25 11:56:04 crc kubenswrapper[4706]: I1125 11:56:04.112312 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p75q7\" (UniqueName: \"kubernetes.io/projected/cba2657d-39a9-4556-abec-412b63df6c94-kube-api-access-p75q7\") pod \"cba2657d-39a9-4556-abec-412b63df6c94\" (UID: \"cba2657d-39a9-4556-abec-412b63df6c94\") " Nov 25 11:56:04 crc kubenswrapper[4706]: I1125 11:56:04.112378 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/cba2657d-39a9-4556-abec-412b63df6c94-horizon-secret-key\") pod \"cba2657d-39a9-4556-abec-412b63df6c94\" (UID: \"cba2657d-39a9-4556-abec-412b63df6c94\") " Nov 25 11:56:04 crc kubenswrapper[4706]: I1125 11:56:04.112401 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/cba2657d-39a9-4556-abec-412b63df6c94-config-data\") pod \"cba2657d-39a9-4556-abec-412b63df6c94\" (UID: \"cba2657d-39a9-4556-abec-412b63df6c94\") " Nov 25 11:56:04 crc kubenswrapper[4706]: I1125 11:56:04.112481 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cba2657d-39a9-4556-abec-412b63df6c94-logs\") pod 
\"cba2657d-39a9-4556-abec-412b63df6c94\" (UID: \"cba2657d-39a9-4556-abec-412b63df6c94\") " Nov 25 11:56:04 crc kubenswrapper[4706]: I1125 11:56:04.112511 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/cba2657d-39a9-4556-abec-412b63df6c94-scripts\") pod \"cba2657d-39a9-4556-abec-412b63df6c94\" (UID: \"cba2657d-39a9-4556-abec-412b63df6c94\") " Nov 25 11:56:04 crc kubenswrapper[4706]: I1125 11:56:04.112863 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cba2657d-39a9-4556-abec-412b63df6c94-logs" (OuterVolumeSpecName: "logs") pod "cba2657d-39a9-4556-abec-412b63df6c94" (UID: "cba2657d-39a9-4556-abec-412b63df6c94"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 11:56:04 crc kubenswrapper[4706]: I1125 11:56:04.113285 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cba2657d-39a9-4556-abec-412b63df6c94-scripts" (OuterVolumeSpecName: "scripts") pod "cba2657d-39a9-4556-abec-412b63df6c94" (UID: "cba2657d-39a9-4556-abec-412b63df6c94"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 11:56:04 crc kubenswrapper[4706]: I1125 11:56:04.113390 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cba2657d-39a9-4556-abec-412b63df6c94-config-data" (OuterVolumeSpecName: "config-data") pod "cba2657d-39a9-4556-abec-412b63df6c94" (UID: "cba2657d-39a9-4556-abec-412b63df6c94"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 11:56:04 crc kubenswrapper[4706]: I1125 11:56:04.116579 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cba2657d-39a9-4556-abec-412b63df6c94-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "cba2657d-39a9-4556-abec-412b63df6c94" (UID: "cba2657d-39a9-4556-abec-412b63df6c94"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 11:56:04 crc kubenswrapper[4706]: I1125 11:56:04.117163 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cba2657d-39a9-4556-abec-412b63df6c94-kube-api-access-p75q7" (OuterVolumeSpecName: "kube-api-access-p75q7") pod "cba2657d-39a9-4556-abec-412b63df6c94" (UID: "cba2657d-39a9-4556-abec-412b63df6c94"). InnerVolumeSpecName "kube-api-access-p75q7". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 11:56:04 crc kubenswrapper[4706]: I1125 11:56:04.215568 4706 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cba2657d-39a9-4556-abec-412b63df6c94-logs\") on node \"crc\" DevicePath \"\"" Nov 25 11:56:04 crc kubenswrapper[4706]: I1125 11:56:04.215639 4706 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/cba2657d-39a9-4556-abec-412b63df6c94-scripts\") on node \"crc\" DevicePath \"\"" Nov 25 11:56:04 crc kubenswrapper[4706]: I1125 11:56:04.215660 4706 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p75q7\" (UniqueName: \"kubernetes.io/projected/cba2657d-39a9-4556-abec-412b63df6c94-kube-api-access-p75q7\") on node \"crc\" DevicePath \"\"" Nov 25 11:56:04 crc kubenswrapper[4706]: I1125 11:56:04.215681 4706 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/cba2657d-39a9-4556-abec-412b63df6c94-horizon-secret-key\") on node 
\"crc\" DevicePath \"\"" Nov 25 11:56:04 crc kubenswrapper[4706]: I1125 11:56:04.215701 4706 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/cba2657d-39a9-4556-abec-412b63df6c94-config-data\") on node \"crc\" DevicePath \"\"" Nov 25 11:56:04 crc kubenswrapper[4706]: I1125 11:56:04.333176 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-78549bf5d5-rtlzb" event={"ID":"cba2657d-39a9-4556-abec-412b63df6c94","Type":"ContainerDied","Data":"0e072bd87363cee7e314cea252164ba36abb583cb5b5e935658cf624e7bbe94f"} Nov 25 11:56:04 crc kubenswrapper[4706]: I1125 11:56:04.333239 4706 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Nov 25 11:56:04 crc kubenswrapper[4706]: I1125 11:56:04.333288 4706 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-78549bf5d5-rtlzb" Nov 25 11:56:04 crc kubenswrapper[4706]: E1125 11:56:04.336656 4706 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified\\\"\"" pod="openstack/barbican-db-sync-v6lvb" podUID="08ef6ec0-ba09-40a2-94d0-a1ddbba8644a" Nov 25 11:56:04 crc kubenswrapper[4706]: I1125 11:56:04.396280 4706 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-78549bf5d5-rtlzb"] Nov 25 11:56:04 crc kubenswrapper[4706]: I1125 11:56:04.408322 4706 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-78549bf5d5-rtlzb"] Nov 25 11:56:04 crc kubenswrapper[4706]: I1125 11:56:04.417612 4706 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-74f6bcbc87-dqgdx" podUID="d377cf62-3246-4d83-86b8-f55d354a2d5c" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.133:5353: i/o timeout" 
Nov 25 11:56:04 crc kubenswrapper[4706]: I1125 11:56:04.418276 4706 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-74f6bcbc87-dqgdx" Nov 25 11:56:04 crc kubenswrapper[4706]: I1125 11:56:04.425404 4706 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 25 11:56:04 crc kubenswrapper[4706]: I1125 11:56:04.435355 4706 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 25 11:56:04 crc kubenswrapper[4706]: I1125 11:56:04.447055 4706 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 25 11:56:04 crc kubenswrapper[4706]: E1125 11:56:04.447577 4706 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dc6d1720-c37f-4501-bbb1-16f507bc1126" containerName="glance-log" Nov 25 11:56:04 crc kubenswrapper[4706]: I1125 11:56:04.447596 4706 state_mem.go:107] "Deleted CPUSet assignment" podUID="dc6d1720-c37f-4501-bbb1-16f507bc1126" containerName="glance-log" Nov 25 11:56:04 crc kubenswrapper[4706]: E1125 11:56:04.447615 4706 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dc6d1720-c37f-4501-bbb1-16f507bc1126" containerName="glance-httpd" Nov 25 11:56:04 crc kubenswrapper[4706]: I1125 11:56:04.447627 4706 state_mem.go:107] "Deleted CPUSet assignment" podUID="dc6d1720-c37f-4501-bbb1-16f507bc1126" containerName="glance-httpd" Nov 25 11:56:04 crc kubenswrapper[4706]: I1125 11:56:04.447847 4706 memory_manager.go:354] "RemoveStaleState removing state" podUID="dc6d1720-c37f-4501-bbb1-16f507bc1126" containerName="glance-log" Nov 25 11:56:04 crc kubenswrapper[4706]: I1125 11:56:04.447877 4706 memory_manager.go:354] "RemoveStaleState removing state" podUID="dc6d1720-c37f-4501-bbb1-16f507bc1126" containerName="glance-httpd" Nov 25 11:56:04 crc kubenswrapper[4706]: I1125 11:56:04.449129 4706 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Nov 25 11:56:04 crc kubenswrapper[4706]: I1125 11:56:04.452735 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Nov 25 11:56:04 crc kubenswrapper[4706]: I1125 11:56:04.452956 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Nov 25 11:56:04 crc kubenswrapper[4706]: I1125 11:56:04.475516 4706 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 25 11:56:04 crc kubenswrapper[4706]: I1125 11:56:04.520864 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/9392449e-c392-4d77-b36a-67b6d8c716c7-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"9392449e-c392-4d77-b36a-67b6d8c716c7\") " pod="openstack/glance-default-internal-api-0" Nov 25 11:56:04 crc kubenswrapper[4706]: I1125 11:56:04.520913 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9392449e-c392-4d77-b36a-67b6d8c716c7-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"9392449e-c392-4d77-b36a-67b6d8c716c7\") " pod="openstack/glance-default-internal-api-0" Nov 25 11:56:04 crc kubenswrapper[4706]: I1125 11:56:04.521011 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9392449e-c392-4d77-b36a-67b6d8c716c7-config-data\") pod \"glance-default-internal-api-0\" (UID: \"9392449e-c392-4d77-b36a-67b6d8c716c7\") " pod="openstack/glance-default-internal-api-0" Nov 25 11:56:04 crc kubenswrapper[4706]: I1125 11:56:04.521040 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" 
(UniqueName: \"kubernetes.io/secret/9392449e-c392-4d77-b36a-67b6d8c716c7-scripts\") pod \"glance-default-internal-api-0\" (UID: \"9392449e-c392-4d77-b36a-67b6d8c716c7\") " pod="openstack/glance-default-internal-api-0" Nov 25 11:56:04 crc kubenswrapper[4706]: I1125 11:56:04.521070 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9392449e-c392-4d77-b36a-67b6d8c716c7-logs\") pod \"glance-default-internal-api-0\" (UID: \"9392449e-c392-4d77-b36a-67b6d8c716c7\") " pod="openstack/glance-default-internal-api-0" Nov 25 11:56:04 crc kubenswrapper[4706]: I1125 11:56:04.521135 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7rn8k\" (UniqueName: \"kubernetes.io/projected/9392449e-c392-4d77-b36a-67b6d8c716c7-kube-api-access-7rn8k\") pod \"glance-default-internal-api-0\" (UID: \"9392449e-c392-4d77-b36a-67b6d8c716c7\") " pod="openstack/glance-default-internal-api-0" Nov 25 11:56:04 crc kubenswrapper[4706]: I1125 11:56:04.521193 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"glance-default-internal-api-0\" (UID: \"9392449e-c392-4d77-b36a-67b6d8c716c7\") " pod="openstack/glance-default-internal-api-0" Nov 25 11:56:04 crc kubenswrapper[4706]: I1125 11:56:04.521266 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/9392449e-c392-4d77-b36a-67b6d8c716c7-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"9392449e-c392-4d77-b36a-67b6d8c716c7\") " pod="openstack/glance-default-internal-api-0" Nov 25 11:56:04 crc kubenswrapper[4706]: E1125 11:56:04.548727 4706 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context 
canceled" image="quay.io/podified-antelope-centos9/openstack-ceilometer-central:current-podified" Nov 25 11:56:04 crc kubenswrapper[4706]: E1125 11:56:04.548920 4706 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ceilometer-central-agent,Image:quay.io/podified-antelope-centos9/openstack-ceilometer-central:current-podified,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n64dh694h67h598hbh5d9h569h65hcbh67dh67dh588h5b4hc8h68h599hdbh8ch548h545h5b5hc5h674h5d4hf7h5f8h597h564h5f9h5bbh5b4h56q,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-central-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-fsmhl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 
/var/lib/openstack/bin/centralhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(db4e7aed-28ec-49cd-8f0b-e01df112bf54): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 25 11:56:04 crc kubenswrapper[4706]: I1125 11:56:04.630513 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/9392449e-c392-4d77-b36a-67b6d8c716c7-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"9392449e-c392-4d77-b36a-67b6d8c716c7\") " pod="openstack/glance-default-internal-api-0" Nov 25 11:56:04 crc kubenswrapper[4706]: I1125 11:56:04.630793 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/9392449e-c392-4d77-b36a-67b6d8c716c7-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"9392449e-c392-4d77-b36a-67b6d8c716c7\") " pod="openstack/glance-default-internal-api-0" Nov 25 11:56:04 crc kubenswrapper[4706]: I1125 11:56:04.630835 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/9392449e-c392-4d77-b36a-67b6d8c716c7-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"9392449e-c392-4d77-b36a-67b6d8c716c7\") " pod="openstack/glance-default-internal-api-0" Nov 25 11:56:04 crc kubenswrapper[4706]: I1125 11:56:04.630985 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9392449e-c392-4d77-b36a-67b6d8c716c7-config-data\") pod \"glance-default-internal-api-0\" (UID: \"9392449e-c392-4d77-b36a-67b6d8c716c7\") " pod="openstack/glance-default-internal-api-0" Nov 25 11:56:04 crc kubenswrapper[4706]: I1125 11:56:04.631002 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9392449e-c392-4d77-b36a-67b6d8c716c7-scripts\") pod \"glance-default-internal-api-0\" (UID: \"9392449e-c392-4d77-b36a-67b6d8c716c7\") " pod="openstack/glance-default-internal-api-0" Nov 25 11:56:04 crc kubenswrapper[4706]: I1125 11:56:04.631036 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9392449e-c392-4d77-b36a-67b6d8c716c7-logs\") pod \"glance-default-internal-api-0\" (UID: \"9392449e-c392-4d77-b36a-67b6d8c716c7\") " pod="openstack/glance-default-internal-api-0" Nov 25 11:56:04 crc kubenswrapper[4706]: I1125 11:56:04.631140 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7rn8k\" (UniqueName: \"kubernetes.io/projected/9392449e-c392-4d77-b36a-67b6d8c716c7-kube-api-access-7rn8k\") pod \"glance-default-internal-api-0\" (UID: \"9392449e-c392-4d77-b36a-67b6d8c716c7\") " pod="openstack/glance-default-internal-api-0" Nov 25 11:56:04 crc kubenswrapper[4706]: I1125 11:56:04.631191 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod 
\"glance-default-internal-api-0\" (UID: \"9392449e-c392-4d77-b36a-67b6d8c716c7\") " pod="openstack/glance-default-internal-api-0" Nov 25 11:56:04 crc kubenswrapper[4706]: I1125 11:56:04.631231 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/9392449e-c392-4d77-b36a-67b6d8c716c7-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"9392449e-c392-4d77-b36a-67b6d8c716c7\") " pod="openstack/glance-default-internal-api-0" Nov 25 11:56:04 crc kubenswrapper[4706]: I1125 11:56:04.633317 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9392449e-c392-4d77-b36a-67b6d8c716c7-logs\") pod \"glance-default-internal-api-0\" (UID: \"9392449e-c392-4d77-b36a-67b6d8c716c7\") " pod="openstack/glance-default-internal-api-0" Nov 25 11:56:04 crc kubenswrapper[4706]: I1125 11:56:04.633933 4706 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"glance-default-internal-api-0\" (UID: \"9392449e-c392-4d77-b36a-67b6d8c716c7\") device mount path \"/mnt/openstack/pv02\"" pod="openstack/glance-default-internal-api-0" Nov 25 11:56:04 crc kubenswrapper[4706]: I1125 11:56:04.640690 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9392449e-c392-4d77-b36a-67b6d8c716c7-scripts\") pod \"glance-default-internal-api-0\" (UID: \"9392449e-c392-4d77-b36a-67b6d8c716c7\") " pod="openstack/glance-default-internal-api-0" Nov 25 11:56:04 crc kubenswrapper[4706]: I1125 11:56:04.641534 4706 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-74f6bcbc87-dqgdx" Nov 25 11:56:04 crc kubenswrapper[4706]: I1125 11:56:04.644652 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/9392449e-c392-4d77-b36a-67b6d8c716c7-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"9392449e-c392-4d77-b36a-67b6d8c716c7\") " pod="openstack/glance-default-internal-api-0" Nov 25 11:56:04 crc kubenswrapper[4706]: I1125 11:56:04.645551 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9392449e-c392-4d77-b36a-67b6d8c716c7-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"9392449e-c392-4d77-b36a-67b6d8c716c7\") " pod="openstack/glance-default-internal-api-0" Nov 25 11:56:04 crc kubenswrapper[4706]: I1125 11:56:04.645572 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9392449e-c392-4d77-b36a-67b6d8c716c7-config-data\") pod \"glance-default-internal-api-0\" (UID: \"9392449e-c392-4d77-b36a-67b6d8c716c7\") " pod="openstack/glance-default-internal-api-0" Nov 25 11:56:04 crc kubenswrapper[4706]: I1125 11:56:04.658021 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7rn8k\" (UniqueName: \"kubernetes.io/projected/9392449e-c392-4d77-b36a-67b6d8c716c7-kube-api-access-7rn8k\") pod \"glance-default-internal-api-0\" (UID: \"9392449e-c392-4d77-b36a-67b6d8c716c7\") " pod="openstack/glance-default-internal-api-0" Nov 25 11:56:04 crc kubenswrapper[4706]: I1125 11:56:04.710594 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"glance-default-internal-api-0\" (UID: \"9392449e-c392-4d77-b36a-67b6d8c716c7\") " pod="openstack/glance-default-internal-api-0" Nov 25 11:56:04 crc 
kubenswrapper[4706]: I1125 11:56:04.731935 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d377cf62-3246-4d83-86b8-f55d354a2d5c-config\") pod \"d377cf62-3246-4d83-86b8-f55d354a2d5c\" (UID: \"d377cf62-3246-4d83-86b8-f55d354a2d5c\") " Nov 25 11:56:04 crc kubenswrapper[4706]: I1125 11:56:04.732059 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d377cf62-3246-4d83-86b8-f55d354a2d5c-dns-swift-storage-0\") pod \"d377cf62-3246-4d83-86b8-f55d354a2d5c\" (UID: \"d377cf62-3246-4d83-86b8-f55d354a2d5c\") " Nov 25 11:56:04 crc kubenswrapper[4706]: I1125 11:56:04.732145 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d377cf62-3246-4d83-86b8-f55d354a2d5c-ovsdbserver-nb\") pod \"d377cf62-3246-4d83-86b8-f55d354a2d5c\" (UID: \"d377cf62-3246-4d83-86b8-f55d354a2d5c\") " Nov 25 11:56:04 crc kubenswrapper[4706]: I1125 11:56:04.732273 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d377cf62-3246-4d83-86b8-f55d354a2d5c-dns-svc\") pod \"d377cf62-3246-4d83-86b8-f55d354a2d5c\" (UID: \"d377cf62-3246-4d83-86b8-f55d354a2d5c\") " Nov 25 11:56:04 crc kubenswrapper[4706]: I1125 11:56:04.732396 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d377cf62-3246-4d83-86b8-f55d354a2d5c-ovsdbserver-sb\") pod \"d377cf62-3246-4d83-86b8-f55d354a2d5c\" (UID: \"d377cf62-3246-4d83-86b8-f55d354a2d5c\") " Nov 25 11:56:04 crc kubenswrapper[4706]: I1125 11:56:04.732476 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ssvsk\" (UniqueName: 
\"kubernetes.io/projected/d377cf62-3246-4d83-86b8-f55d354a2d5c-kube-api-access-ssvsk\") pod \"d377cf62-3246-4d83-86b8-f55d354a2d5c\" (UID: \"d377cf62-3246-4d83-86b8-f55d354a2d5c\") " Nov 25 11:56:04 crc kubenswrapper[4706]: I1125 11:56:04.737794 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d377cf62-3246-4d83-86b8-f55d354a2d5c-kube-api-access-ssvsk" (OuterVolumeSpecName: "kube-api-access-ssvsk") pod "d377cf62-3246-4d83-86b8-f55d354a2d5c" (UID: "d377cf62-3246-4d83-86b8-f55d354a2d5c"). InnerVolumeSpecName "kube-api-access-ssvsk". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 11:56:04 crc kubenswrapper[4706]: I1125 11:56:04.765986 4706 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Nov 25 11:56:04 crc kubenswrapper[4706]: I1125 11:56:04.784513 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d377cf62-3246-4d83-86b8-f55d354a2d5c-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "d377cf62-3246-4d83-86b8-f55d354a2d5c" (UID: "d377cf62-3246-4d83-86b8-f55d354a2d5c"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 11:56:04 crc kubenswrapper[4706]: I1125 11:56:04.788401 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d377cf62-3246-4d83-86b8-f55d354a2d5c-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "d377cf62-3246-4d83-86b8-f55d354a2d5c" (UID: "d377cf62-3246-4d83-86b8-f55d354a2d5c"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 11:56:04 crc kubenswrapper[4706]: I1125 11:56:04.792656 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d377cf62-3246-4d83-86b8-f55d354a2d5c-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "d377cf62-3246-4d83-86b8-f55d354a2d5c" (UID: "d377cf62-3246-4d83-86b8-f55d354a2d5c"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 11:56:04 crc kubenswrapper[4706]: E1125 11:56:04.799805 4706 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/d377cf62-3246-4d83-86b8-f55d354a2d5c-config podName:d377cf62-3246-4d83-86b8-f55d354a2d5c nodeName:}" failed. No retries permitted until 2025-11-25 11:56:05.299774849 +0000 UTC m=+1174.214332230 (durationBeforeRetry 500ms). Error: error cleaning subPath mounts for volume "config" (UniqueName: "kubernetes.io/configmap/d377cf62-3246-4d83-86b8-f55d354a2d5c-config") pod "d377cf62-3246-4d83-86b8-f55d354a2d5c" (UID: "d377cf62-3246-4d83-86b8-f55d354a2d5c") : error deleting /var/lib/kubelet/pods/d377cf62-3246-4d83-86b8-f55d354a2d5c/volume-subpaths: remove /var/lib/kubelet/pods/d377cf62-3246-4d83-86b8-f55d354a2d5c/volume-subpaths: no such file or directory Nov 25 11:56:04 crc kubenswrapper[4706]: I1125 11:56:04.800098 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d377cf62-3246-4d83-86b8-f55d354a2d5c-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "d377cf62-3246-4d83-86b8-f55d354a2d5c" (UID: "d377cf62-3246-4d83-86b8-f55d354a2d5c"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 11:56:04 crc kubenswrapper[4706]: I1125 11:56:04.834673 4706 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d377cf62-3246-4d83-86b8-f55d354a2d5c-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 25 11:56:04 crc kubenswrapper[4706]: I1125 11:56:04.834700 4706 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d377cf62-3246-4d83-86b8-f55d354a2d5c-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 25 11:56:04 crc kubenswrapper[4706]: I1125 11:56:04.834709 4706 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d377cf62-3246-4d83-86b8-f55d354a2d5c-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 25 11:56:04 crc kubenswrapper[4706]: I1125 11:56:04.834720 4706 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ssvsk\" (UniqueName: \"kubernetes.io/projected/d377cf62-3246-4d83-86b8-f55d354a2d5c-kube-api-access-ssvsk\") on node \"crc\" DevicePath \"\"" Nov 25 11:56:04 crc kubenswrapper[4706]: I1125 11:56:04.834730 4706 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d377cf62-3246-4d83-86b8-f55d354a2d5c-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Nov 25 11:56:04 crc kubenswrapper[4706]: I1125 11:56:04.966471 4706 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-5d6465f55b-zdrth"] Nov 25 11:56:05 crc kubenswrapper[4706]: I1125 11:56:05.342462 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d377cf62-3246-4d83-86b8-f55d354a2d5c-config\") pod \"d377cf62-3246-4d83-86b8-f55d354a2d5c\" (UID: \"d377cf62-3246-4d83-86b8-f55d354a2d5c\") " Nov 25 11:56:05 crc kubenswrapper[4706]: I1125 11:56:05.342957 4706 operation_generator.go:803] 
UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d377cf62-3246-4d83-86b8-f55d354a2d5c-config" (OuterVolumeSpecName: "config") pod "d377cf62-3246-4d83-86b8-f55d354a2d5c" (UID: "d377cf62-3246-4d83-86b8-f55d354a2d5c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 11:56:05 crc kubenswrapper[4706]: I1125 11:56:05.343511 4706 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d377cf62-3246-4d83-86b8-f55d354a2d5c-config\") on node \"crc\" DevicePath \"\"" Nov 25 11:56:05 crc kubenswrapper[4706]: I1125 11:56:05.344630 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-74f6bcbc87-dqgdx" event={"ID":"d377cf62-3246-4d83-86b8-f55d354a2d5c","Type":"ContainerDied","Data":"4982eba18b74496c77af6db9130a79de0795bbbcd90eac419c2d95d3b10f1919"} Nov 25 11:56:05 crc kubenswrapper[4706]: I1125 11:56:05.344681 4706 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-74f6bcbc87-dqgdx" Nov 25 11:56:05 crc kubenswrapper[4706]: I1125 11:56:05.344689 4706 scope.go:117] "RemoveContainer" containerID="f1b3b630b5578d49173f9161e395731350d90063332754fe96cefc07384bf022" Nov 25 11:56:05 crc kubenswrapper[4706]: I1125 11:56:05.385354 4706 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-74f6bcbc87-dqgdx"] Nov 25 11:56:05 crc kubenswrapper[4706]: I1125 11:56:05.392230 4706 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-74f6bcbc87-dqgdx"] Nov 25 11:56:05 crc kubenswrapper[4706]: I1125 11:56:05.935718 4706 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cba2657d-39a9-4556-abec-412b63df6c94" path="/var/lib/kubelet/pods/cba2657d-39a9-4556-abec-412b63df6c94/volumes" Nov 25 11:56:05 crc kubenswrapper[4706]: I1125 11:56:05.936392 4706 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d377cf62-3246-4d83-86b8-f55d354a2d5c" path="/var/lib/kubelet/pods/d377cf62-3246-4d83-86b8-f55d354a2d5c/volumes" Nov 25 11:56:05 crc kubenswrapper[4706]: I1125 11:56:05.937477 4706 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dc6d1720-c37f-4501-bbb1-16f507bc1126" path="/var/lib/kubelet/pods/dc6d1720-c37f-4501-bbb1-16f507bc1126/volumes" Nov 25 11:56:06 crc kubenswrapper[4706]: I1125 11:56:06.040687 4706 scope.go:117] "RemoveContainer" containerID="c6335dfa87a6373df916c4dcc0ec12ad7ba930ded5450469edef5eb7c56e7345" Nov 25 11:56:06 crc kubenswrapper[4706]: I1125 11:56:06.176101 4706 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Nov 25 11:56:06 crc kubenswrapper[4706]: I1125 11:56:06.176454 4706 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Nov 25 11:56:06 crc kubenswrapper[4706]: E1125 11:56:06.187034 4706 log.go:32] "PullImage from image service failed" err="rpc error: code 
= Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified" Nov 25 11:56:06 crc kubenswrapper[4706]: E1125 11:56:06.187156 4706 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cinder-db-sync,Image:quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_set_configs && /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:etc-machine-id,ReadOnly:true,MountPath:/etc/machine-id,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/config-data/merged,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/cinder/cinder.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:db-sync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2dkcz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropaga
tion:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cinder-db-sync-fd7sf_openstack(424f303d-41b7-4fd6-be4a-017148ed95da): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 25 11:56:06 crc kubenswrapper[4706]: E1125 11:56:06.189055 4706 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/cinder-db-sync-fd7sf" podUID="424f303d-41b7-4fd6-be4a-017148ed95da" Nov 25 11:56:06 crc kubenswrapper[4706]: I1125 11:56:06.218534 4706 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Nov 25 11:56:06 crc kubenswrapper[4706]: I1125 11:56:06.260214 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e0c3d5f1-1ac9-4f5f-bef2-232cf6055061-combined-ca-bundle\") pod \"e0c3d5f1-1ac9-4f5f-bef2-232cf6055061\" (UID: \"e0c3d5f1-1ac9-4f5f-bef2-232cf6055061\") " Nov 25 11:56:06 crc kubenswrapper[4706]: I1125 11:56:06.260328 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e0c3d5f1-1ac9-4f5f-bef2-232cf6055061-logs\") pod \"e0c3d5f1-1ac9-4f5f-bef2-232cf6055061\" (UID: \"e0c3d5f1-1ac9-4f5f-bef2-232cf6055061\") " Nov 25 11:56:06 crc kubenswrapper[4706]: I1125 11:56:06.260423 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"e0c3d5f1-1ac9-4f5f-bef2-232cf6055061\" (UID: \"e0c3d5f1-1ac9-4f5f-bef2-232cf6055061\") " Nov 25 11:56:06 crc kubenswrapper[4706]: I1125 11:56:06.260454 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e0c3d5f1-1ac9-4f5f-bef2-232cf6055061-scripts\") pod \"e0c3d5f1-1ac9-4f5f-bef2-232cf6055061\" (UID: \"e0c3d5f1-1ac9-4f5f-bef2-232cf6055061\") " Nov 25 11:56:06 crc kubenswrapper[4706]: I1125 11:56:06.260533 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g2cpz\" (UniqueName: \"kubernetes.io/projected/e0c3d5f1-1ac9-4f5f-bef2-232cf6055061-kube-api-access-g2cpz\") pod \"e0c3d5f1-1ac9-4f5f-bef2-232cf6055061\" (UID: \"e0c3d5f1-1ac9-4f5f-bef2-232cf6055061\") " Nov 25 11:56:06 crc kubenswrapper[4706]: I1125 11:56:06.260562 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/e0c3d5f1-1ac9-4f5f-bef2-232cf6055061-config-data\") pod \"e0c3d5f1-1ac9-4f5f-bef2-232cf6055061\" (UID: \"e0c3d5f1-1ac9-4f5f-bef2-232cf6055061\") " Nov 25 11:56:06 crc kubenswrapper[4706]: I1125 11:56:06.260601 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/e0c3d5f1-1ac9-4f5f-bef2-232cf6055061-httpd-run\") pod \"e0c3d5f1-1ac9-4f5f-bef2-232cf6055061\" (UID: \"e0c3d5f1-1ac9-4f5f-bef2-232cf6055061\") " Nov 25 11:56:06 crc kubenswrapper[4706]: I1125 11:56:06.260678 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e0c3d5f1-1ac9-4f5f-bef2-232cf6055061-public-tls-certs\") pod \"e0c3d5f1-1ac9-4f5f-bef2-232cf6055061\" (UID: \"e0c3d5f1-1ac9-4f5f-bef2-232cf6055061\") " Nov 25 11:56:06 crc kubenswrapper[4706]: I1125 11:56:06.265638 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e0c3d5f1-1ac9-4f5f-bef2-232cf6055061-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "e0c3d5f1-1ac9-4f5f-bef2-232cf6055061" (UID: "e0c3d5f1-1ac9-4f5f-bef2-232cf6055061"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 11:56:06 crc kubenswrapper[4706]: I1125 11:56:06.265653 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e0c3d5f1-1ac9-4f5f-bef2-232cf6055061-logs" (OuterVolumeSpecName: "logs") pod "e0c3d5f1-1ac9-4f5f-bef2-232cf6055061" (UID: "e0c3d5f1-1ac9-4f5f-bef2-232cf6055061"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 11:56:06 crc kubenswrapper[4706]: I1125 11:56:06.269632 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e0c3d5f1-1ac9-4f5f-bef2-232cf6055061-scripts" (OuterVolumeSpecName: "scripts") pod "e0c3d5f1-1ac9-4f5f-bef2-232cf6055061" (UID: "e0c3d5f1-1ac9-4f5f-bef2-232cf6055061"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 11:56:06 crc kubenswrapper[4706]: I1125 11:56:06.270332 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage01-crc" (OuterVolumeSpecName: "glance") pod "e0c3d5f1-1ac9-4f5f-bef2-232cf6055061" (UID: "e0c3d5f1-1ac9-4f5f-bef2-232cf6055061"). InnerVolumeSpecName "local-storage01-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Nov 25 11:56:06 crc kubenswrapper[4706]: I1125 11:56:06.276770 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e0c3d5f1-1ac9-4f5f-bef2-232cf6055061-kube-api-access-g2cpz" (OuterVolumeSpecName: "kube-api-access-g2cpz") pod "e0c3d5f1-1ac9-4f5f-bef2-232cf6055061" (UID: "e0c3d5f1-1ac9-4f5f-bef2-232cf6055061"). InnerVolumeSpecName "kube-api-access-g2cpz". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 11:56:06 crc kubenswrapper[4706]: I1125 11:56:06.299686 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e0c3d5f1-1ac9-4f5f-bef2-232cf6055061-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e0c3d5f1-1ac9-4f5f-bef2-232cf6055061" (UID: "e0c3d5f1-1ac9-4f5f-bef2-232cf6055061"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 11:56:06 crc kubenswrapper[4706]: I1125 11:56:06.324720 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e0c3d5f1-1ac9-4f5f-bef2-232cf6055061-config-data" (OuterVolumeSpecName: "config-data") pod "e0c3d5f1-1ac9-4f5f-bef2-232cf6055061" (UID: "e0c3d5f1-1ac9-4f5f-bef2-232cf6055061"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 11:56:06 crc kubenswrapper[4706]: I1125 11:56:06.325567 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e0c3d5f1-1ac9-4f5f-bef2-232cf6055061-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "e0c3d5f1-1ac9-4f5f-bef2-232cf6055061" (UID: "e0c3d5f1-1ac9-4f5f-bef2-232cf6055061"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 11:56:06 crc kubenswrapper[4706]: I1125 11:56:06.354178 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"e0c3d5f1-1ac9-4f5f-bef2-232cf6055061","Type":"ContainerDied","Data":"26dcd8c8ed9f7cdb556012055cd1f066e185d065d3e7441226c346cc483a321a"} Nov 25 11:56:06 crc kubenswrapper[4706]: I1125 11:56:06.354232 4706 scope.go:117] "RemoveContainer" containerID="17bca5f4621e790c876f25b6b06e9d34c1d484ba94c24a5bea74a3ef46019532" Nov 25 11:56:06 crc kubenswrapper[4706]: I1125 11:56:06.354357 4706 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Nov 25 11:56:06 crc kubenswrapper[4706]: I1125 11:56:06.362557 4706 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") on node \"crc\" " Nov 25 11:56:06 crc kubenswrapper[4706]: I1125 11:56:06.362585 4706 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e0c3d5f1-1ac9-4f5f-bef2-232cf6055061-scripts\") on node \"crc\" DevicePath \"\"" Nov 25 11:56:06 crc kubenswrapper[4706]: I1125 11:56:06.362597 4706 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g2cpz\" (UniqueName: \"kubernetes.io/projected/e0c3d5f1-1ac9-4f5f-bef2-232cf6055061-kube-api-access-g2cpz\") on node \"crc\" DevicePath \"\"" Nov 25 11:56:06 crc kubenswrapper[4706]: I1125 11:56:06.362607 4706 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e0c3d5f1-1ac9-4f5f-bef2-232cf6055061-config-data\") on node \"crc\" DevicePath \"\"" Nov 25 11:56:06 crc kubenswrapper[4706]: I1125 11:56:06.362617 4706 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/e0c3d5f1-1ac9-4f5f-bef2-232cf6055061-httpd-run\") on node \"crc\" DevicePath \"\"" Nov 25 11:56:06 crc kubenswrapper[4706]: I1125 11:56:06.362626 4706 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e0c3d5f1-1ac9-4f5f-bef2-232cf6055061-public-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 25 11:56:06 crc kubenswrapper[4706]: I1125 11:56:06.362634 4706 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e0c3d5f1-1ac9-4f5f-bef2-232cf6055061-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 11:56:06 crc kubenswrapper[4706]: I1125 11:56:06.362644 4706 
reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e0c3d5f1-1ac9-4f5f-bef2-232cf6055061-logs\") on node \"crc\" DevicePath \"\"" Nov 25 11:56:06 crc kubenswrapper[4706]: I1125 11:56:06.367995 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5d6465f55b-zdrth" event={"ID":"74b33eb1-0020-4037-918c-9e747dcfd61f","Type":"ContainerStarted","Data":"a47feaa85e40c474876dd46428ea160b0a82ec7f94cc77f9a69dd0cfe0b98dcd"} Nov 25 11:56:06 crc kubenswrapper[4706]: E1125 11:56:06.369705 4706 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified\\\"\"" pod="openstack/cinder-db-sync-fd7sf" podUID="424f303d-41b7-4fd6-be4a-017148ed95da" Nov 25 11:56:06 crc kubenswrapper[4706]: I1125 11:56:06.386610 4706 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage01-crc" (UniqueName: "kubernetes.io/local-volume/local-storage01-crc") on node "crc" Nov 25 11:56:06 crc kubenswrapper[4706]: I1125 11:56:06.431879 4706 scope.go:117] "RemoveContainer" containerID="8335a64cf57e1eae1c577f9abff5ccb7be9057b998f1067a70800eef4c087ea6" Nov 25 11:56:06 crc kubenswrapper[4706]: I1125 11:56:06.465173 4706 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 25 11:56:06 crc kubenswrapper[4706]: I1125 11:56:06.466069 4706 reconciler_common.go:293] "Volume detached for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") on node \"crc\" DevicePath \"\"" Nov 25 11:56:06 crc kubenswrapper[4706]: I1125 11:56:06.490086 4706 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 25 11:56:06 crc kubenswrapper[4706]: I1125 11:56:06.510110 4706 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack/glance-default-external-api-0"] Nov 25 11:56:06 crc kubenswrapper[4706]: E1125 11:56:06.510496 4706 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d377cf62-3246-4d83-86b8-f55d354a2d5c" containerName="init" Nov 25 11:56:06 crc kubenswrapper[4706]: I1125 11:56:06.510508 4706 state_mem.go:107] "Deleted CPUSet assignment" podUID="d377cf62-3246-4d83-86b8-f55d354a2d5c" containerName="init" Nov 25 11:56:06 crc kubenswrapper[4706]: E1125 11:56:06.510532 4706 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e0c3d5f1-1ac9-4f5f-bef2-232cf6055061" containerName="glance-httpd" Nov 25 11:56:06 crc kubenswrapper[4706]: I1125 11:56:06.510539 4706 state_mem.go:107] "Deleted CPUSet assignment" podUID="e0c3d5f1-1ac9-4f5f-bef2-232cf6055061" containerName="glance-httpd" Nov 25 11:56:06 crc kubenswrapper[4706]: E1125 11:56:06.510558 4706 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e0c3d5f1-1ac9-4f5f-bef2-232cf6055061" containerName="glance-log" Nov 25 11:56:06 crc kubenswrapper[4706]: I1125 11:56:06.510565 4706 state_mem.go:107] "Deleted CPUSet assignment" podUID="e0c3d5f1-1ac9-4f5f-bef2-232cf6055061" containerName="glance-log" Nov 25 11:56:06 crc kubenswrapper[4706]: E1125 11:56:06.510580 4706 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d377cf62-3246-4d83-86b8-f55d354a2d5c" containerName="dnsmasq-dns" Nov 25 11:56:06 crc kubenswrapper[4706]: I1125 11:56:06.510586 4706 state_mem.go:107] "Deleted CPUSet assignment" podUID="d377cf62-3246-4d83-86b8-f55d354a2d5c" containerName="dnsmasq-dns" Nov 25 11:56:06 crc kubenswrapper[4706]: I1125 11:56:06.510730 4706 memory_manager.go:354] "RemoveStaleState removing state" podUID="e0c3d5f1-1ac9-4f5f-bef2-232cf6055061" containerName="glance-log" Nov 25 11:56:06 crc kubenswrapper[4706]: I1125 11:56:06.510746 4706 memory_manager.go:354] "RemoveStaleState removing state" podUID="d377cf62-3246-4d83-86b8-f55d354a2d5c" containerName="dnsmasq-dns" Nov 25 
11:56:06 crc kubenswrapper[4706]: I1125 11:56:06.510757 4706 memory_manager.go:354] "RemoveStaleState removing state" podUID="e0c3d5f1-1ac9-4f5f-bef2-232cf6055061" containerName="glance-httpd" Nov 25 11:56:06 crc kubenswrapper[4706]: I1125 11:56:06.520032 4706 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 25 11:56:06 crc kubenswrapper[4706]: I1125 11:56:06.520149 4706 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Nov 25 11:56:06 crc kubenswrapper[4706]: I1125 11:56:06.522838 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Nov 25 11:56:06 crc kubenswrapper[4706]: I1125 11:56:06.523053 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Nov 25 11:56:06 crc kubenswrapper[4706]: I1125 11:56:06.569497 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/6fb9e8f3-e03d-40bd-ba5c-8ce7715af21f-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"6fb9e8f3-e03d-40bd-ba5c-8ce7715af21f\") " pod="openstack/glance-default-external-api-0" Nov 25 11:56:06 crc kubenswrapper[4706]: I1125 11:56:06.569543 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"glance-default-external-api-0\" (UID: \"6fb9e8f3-e03d-40bd-ba5c-8ce7715af21f\") " pod="openstack/glance-default-external-api-0" Nov 25 11:56:06 crc kubenswrapper[4706]: I1125 11:56:06.569588 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6fb9e8f3-e03d-40bd-ba5c-8ce7715af21f-config-data\") pod \"glance-default-external-api-0\" (UID: 
\"6fb9e8f3-e03d-40bd-ba5c-8ce7715af21f\") " pod="openstack/glance-default-external-api-0" Nov 25 11:56:06 crc kubenswrapper[4706]: I1125 11:56:06.569616 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-snl6k\" (UniqueName: \"kubernetes.io/projected/6fb9e8f3-e03d-40bd-ba5c-8ce7715af21f-kube-api-access-snl6k\") pod \"glance-default-external-api-0\" (UID: \"6fb9e8f3-e03d-40bd-ba5c-8ce7715af21f\") " pod="openstack/glance-default-external-api-0" Nov 25 11:56:06 crc kubenswrapper[4706]: I1125 11:56:06.569653 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6fb9e8f3-e03d-40bd-ba5c-8ce7715af21f-scripts\") pod \"glance-default-external-api-0\" (UID: \"6fb9e8f3-e03d-40bd-ba5c-8ce7715af21f\") " pod="openstack/glance-default-external-api-0" Nov 25 11:56:06 crc kubenswrapper[4706]: I1125 11:56:06.569705 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6fb9e8f3-e03d-40bd-ba5c-8ce7715af21f-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"6fb9e8f3-e03d-40bd-ba5c-8ce7715af21f\") " pod="openstack/glance-default-external-api-0" Nov 25 11:56:06 crc kubenswrapper[4706]: I1125 11:56:06.569732 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6fb9e8f3-e03d-40bd-ba5c-8ce7715af21f-logs\") pod \"glance-default-external-api-0\" (UID: \"6fb9e8f3-e03d-40bd-ba5c-8ce7715af21f\") " pod="openstack/glance-default-external-api-0" Nov 25 11:56:06 crc kubenswrapper[4706]: I1125 11:56:06.569760 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/6fb9e8f3-e03d-40bd-ba5c-8ce7715af21f-public-tls-certs\") pod 
\"glance-default-external-api-0\" (UID: \"6fb9e8f3-e03d-40bd-ba5c-8ce7715af21f\") " pod="openstack/glance-default-external-api-0" Nov 25 11:56:06 crc kubenswrapper[4706]: I1125 11:56:06.672222 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6fb9e8f3-e03d-40bd-ba5c-8ce7715af21f-scripts\") pod \"glance-default-external-api-0\" (UID: \"6fb9e8f3-e03d-40bd-ba5c-8ce7715af21f\") " pod="openstack/glance-default-external-api-0" Nov 25 11:56:06 crc kubenswrapper[4706]: I1125 11:56:06.672311 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6fb9e8f3-e03d-40bd-ba5c-8ce7715af21f-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"6fb9e8f3-e03d-40bd-ba5c-8ce7715af21f\") " pod="openstack/glance-default-external-api-0" Nov 25 11:56:06 crc kubenswrapper[4706]: I1125 11:56:06.672336 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6fb9e8f3-e03d-40bd-ba5c-8ce7715af21f-logs\") pod \"glance-default-external-api-0\" (UID: \"6fb9e8f3-e03d-40bd-ba5c-8ce7715af21f\") " pod="openstack/glance-default-external-api-0" Nov 25 11:56:06 crc kubenswrapper[4706]: I1125 11:56:06.672358 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/6fb9e8f3-e03d-40bd-ba5c-8ce7715af21f-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"6fb9e8f3-e03d-40bd-ba5c-8ce7715af21f\") " pod="openstack/glance-default-external-api-0" Nov 25 11:56:06 crc kubenswrapper[4706]: I1125 11:56:06.672429 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/6fb9e8f3-e03d-40bd-ba5c-8ce7715af21f-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"6fb9e8f3-e03d-40bd-ba5c-8ce7715af21f\") " 
pod="openstack/glance-default-external-api-0" Nov 25 11:56:06 crc kubenswrapper[4706]: I1125 11:56:06.672449 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"glance-default-external-api-0\" (UID: \"6fb9e8f3-e03d-40bd-ba5c-8ce7715af21f\") " pod="openstack/glance-default-external-api-0" Nov 25 11:56:06 crc kubenswrapper[4706]: I1125 11:56:06.672477 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6fb9e8f3-e03d-40bd-ba5c-8ce7715af21f-config-data\") pod \"glance-default-external-api-0\" (UID: \"6fb9e8f3-e03d-40bd-ba5c-8ce7715af21f\") " pod="openstack/glance-default-external-api-0" Nov 25 11:56:06 crc kubenswrapper[4706]: I1125 11:56:06.672496 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-snl6k\" (UniqueName: \"kubernetes.io/projected/6fb9e8f3-e03d-40bd-ba5c-8ce7715af21f-kube-api-access-snl6k\") pod \"glance-default-external-api-0\" (UID: \"6fb9e8f3-e03d-40bd-ba5c-8ce7715af21f\") " pod="openstack/glance-default-external-api-0" Nov 25 11:56:06 crc kubenswrapper[4706]: I1125 11:56:06.673240 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/6fb9e8f3-e03d-40bd-ba5c-8ce7715af21f-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"6fb9e8f3-e03d-40bd-ba5c-8ce7715af21f\") " pod="openstack/glance-default-external-api-0" Nov 25 11:56:06 crc kubenswrapper[4706]: I1125 11:56:06.673341 4706 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"glance-default-external-api-0\" (UID: \"6fb9e8f3-e03d-40bd-ba5c-8ce7715af21f\") device mount path \"/mnt/openstack/pv01\"" pod="openstack/glance-default-external-api-0" Nov 25 11:56:06 crc 
kubenswrapper[4706]: I1125 11:56:06.673792 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6fb9e8f3-e03d-40bd-ba5c-8ce7715af21f-logs\") pod \"glance-default-external-api-0\" (UID: \"6fb9e8f3-e03d-40bd-ba5c-8ce7715af21f\") " pod="openstack/glance-default-external-api-0" Nov 25 11:56:06 crc kubenswrapper[4706]: I1125 11:56:06.687591 4706 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-85664bf4f6-ws67w"] Nov 25 11:56:06 crc kubenswrapper[4706]: I1125 11:56:06.691752 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6fb9e8f3-e03d-40bd-ba5c-8ce7715af21f-config-data\") pod \"glance-default-external-api-0\" (UID: \"6fb9e8f3-e03d-40bd-ba5c-8ce7715af21f\") " pod="openstack/glance-default-external-api-0" Nov 25 11:56:06 crc kubenswrapper[4706]: I1125 11:56:06.698036 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6fb9e8f3-e03d-40bd-ba5c-8ce7715af21f-scripts\") pod \"glance-default-external-api-0\" (UID: \"6fb9e8f3-e03d-40bd-ba5c-8ce7715af21f\") " pod="openstack/glance-default-external-api-0" Nov 25 11:56:06 crc kubenswrapper[4706]: I1125 11:56:06.698389 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/6fb9e8f3-e03d-40bd-ba5c-8ce7715af21f-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"6fb9e8f3-e03d-40bd-ba5c-8ce7715af21f\") " pod="openstack/glance-default-external-api-0" Nov 25 11:56:06 crc kubenswrapper[4706]: I1125 11:56:06.699495 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6fb9e8f3-e03d-40bd-ba5c-8ce7715af21f-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"6fb9e8f3-e03d-40bd-ba5c-8ce7715af21f\") " 
pod="openstack/glance-default-external-api-0" Nov 25 11:56:06 crc kubenswrapper[4706]: I1125 11:56:06.702963 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-snl6k\" (UniqueName: \"kubernetes.io/projected/6fb9e8f3-e03d-40bd-ba5c-8ce7715af21f-kube-api-access-snl6k\") pod \"glance-default-external-api-0\" (UID: \"6fb9e8f3-e03d-40bd-ba5c-8ce7715af21f\") " pod="openstack/glance-default-external-api-0" Nov 25 11:56:06 crc kubenswrapper[4706]: W1125 11:56:06.706179 4706 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod66bfb4a4_e60d_4f75_ad0b_1ad3e8ff1bf5.slice/crio-12fc4f1a9cbdbc56fa3a00572a7e7a59c41e44eb9f807e3c3bdf084d1c47ed26 WatchSource:0}: Error finding container 12fc4f1a9cbdbc56fa3a00572a7e7a59c41e44eb9f807e3c3bdf084d1c47ed26: Status 404 returned error can't find the container with id 12fc4f1a9cbdbc56fa3a00572a7e7a59c41e44eb9f807e3c3bdf084d1c47ed26 Nov 25 11:56:06 crc kubenswrapper[4706]: I1125 11:56:06.720805 4706 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-xbn9h"] Nov 25 11:56:06 crc kubenswrapper[4706]: I1125 11:56:06.760381 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"glance-default-external-api-0\" (UID: \"6fb9e8f3-e03d-40bd-ba5c-8ce7715af21f\") " pod="openstack/glance-default-external-api-0" Nov 25 11:56:06 crc kubenswrapper[4706]: I1125 11:56:06.787916 4706 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 25 11:56:06 crc kubenswrapper[4706]: W1125 11:56:06.800633 4706 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9392449e_c392_4d77_b36a_67b6d8c716c7.slice/crio-0b63784f4e9d790670ac0533e443398bfd97f89108e44b97598fc8eedd2ed3a0 WatchSource:0}: 
Error finding container 0b63784f4e9d790670ac0533e443398bfd97f89108e44b97598fc8eedd2ed3a0: Status 404 returned error can't find the container with id 0b63784f4e9d790670ac0533e443398bfd97f89108e44b97598fc8eedd2ed3a0 Nov 25 11:56:06 crc kubenswrapper[4706]: I1125 11:56:06.862851 4706 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Nov 25 11:56:07 crc kubenswrapper[4706]: I1125 11:56:07.405399 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-xbn9h" event={"ID":"4586fb7b-8269-4dca-87d4-f3c66518b999","Type":"ContainerStarted","Data":"6b810764d35ead1f050b80c6c6624b912e1e9a1ea6ace0dac10af543213a2552"} Nov 25 11:56:07 crc kubenswrapper[4706]: I1125 11:56:07.406032 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-xbn9h" event={"ID":"4586fb7b-8269-4dca-87d4-f3c66518b999","Type":"ContainerStarted","Data":"31fffa560a5085faec571632835e43173c0b4debfcc9112a050c441b12c10c86"} Nov 25 11:56:07 crc kubenswrapper[4706]: I1125 11:56:07.412240 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"9392449e-c392-4d77-b36a-67b6d8c716c7","Type":"ContainerStarted","Data":"0b63784f4e9d790670ac0533e443398bfd97f89108e44b97598fc8eedd2ed3a0"} Nov 25 11:56:07 crc kubenswrapper[4706]: I1125 11:56:07.414166 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6899b4bd6f-vwrfh" event={"ID":"c785321d-b637-4f3a-9e69-bc237eb1e9c2","Type":"ContainerStarted","Data":"4817762576b72f1f7ec6a73dfc5771238bc51194d1e5bb978c08087145039f4d"} Nov 25 11:56:07 crc kubenswrapper[4706]: I1125 11:56:07.417976 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-ntkr9" event={"ID":"fff3e0d5-0608-4e15-9a92-376b6a2b7d17","Type":"ContainerStarted","Data":"5d06646f2e40933938174b706f1cbfb7279ba1f4da52a991d69893ade768872e"} Nov 25 11:56:07 crc kubenswrapper[4706]: 
I1125 11:56:07.424509 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5d6465f55b-zdrth" event={"ID":"74b33eb1-0020-4037-918c-9e747dcfd61f","Type":"ContainerStarted","Data":"779cce40cf4cc4947bddf2063a31d045574d3997800d880ef7c40c01c42a4f70"} Nov 25 11:56:07 crc kubenswrapper[4706]: I1125 11:56:07.429843 4706 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-xbn9h" podStartSLOduration=14.42982588 podStartE2EDuration="14.42982588s" podCreationTimestamp="2025-11-25 11:55:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 11:56:07.420189397 +0000 UTC m=+1176.334746778" watchObservedRunningTime="2025-11-25 11:56:07.42982588 +0000 UTC m=+1176.344383261" Nov 25 11:56:07 crc kubenswrapper[4706]: I1125 11:56:07.434499 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6f66ccf8d9-g7z69" event={"ID":"a2972ef2-0543-48bd-9982-4f1c88711e0d","Type":"ContainerStarted","Data":"63a95daf4ab5d5a244b24ec8e7154621aad984a12bcb6a3a7d6be1c0e61157e0"} Nov 25 11:56:07 crc kubenswrapper[4706]: I1125 11:56:07.434548 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6f66ccf8d9-g7z69" event={"ID":"a2972ef2-0543-48bd-9982-4f1c88711e0d","Type":"ContainerStarted","Data":"2a410c3eb2cffad7492f2f267cf609f01c6deadfc79957b4d1eb2f1a688f7768"} Nov 25 11:56:07 crc kubenswrapper[4706]: I1125 11:56:07.434596 4706 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-6f66ccf8d9-g7z69" podUID="a2972ef2-0543-48bd-9982-4f1c88711e0d" containerName="horizon" containerID="cri-o://63a95daf4ab5d5a244b24ec8e7154621aad984a12bcb6a3a7d6be1c0e61157e0" gracePeriod=30 Nov 25 11:56:07 crc kubenswrapper[4706]: I1125 11:56:07.434574 4706 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-6f66ccf8d9-g7z69" 
podUID="a2972ef2-0543-48bd-9982-4f1c88711e0d" containerName="horizon-log" containerID="cri-o://2a410c3eb2cffad7492f2f267cf609f01c6deadfc79957b4d1eb2f1a688f7768" gracePeriod=30 Nov 25 11:56:07 crc kubenswrapper[4706]: I1125 11:56:07.446634 4706 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 25 11:56:07 crc kubenswrapper[4706]: I1125 11:56:07.447232 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-85664bf4f6-ws67w" event={"ID":"66bfb4a4-e60d-4f75-ad0b-1ad3e8ff1bf5","Type":"ContainerStarted","Data":"694ffaa5f18d054342c5bd7209ec2560950612752aa2801a2ff348c629194510"} Nov 25 11:56:07 crc kubenswrapper[4706]: I1125 11:56:07.447259 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-85664bf4f6-ws67w" event={"ID":"66bfb4a4-e60d-4f75-ad0b-1ad3e8ff1bf5","Type":"ContainerStarted","Data":"12fc4f1a9cbdbc56fa3a00572a7e7a59c41e44eb9f807e3c3bdf084d1c47ed26"} Nov 25 11:56:07 crc kubenswrapper[4706]: I1125 11:56:07.455003 4706 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-db-sync-ntkr9" podStartSLOduration=6.854081021 podStartE2EDuration="32.454987863s" podCreationTimestamp="2025-11-25 11:55:35 +0000 UTC" firstStartedPulling="2025-11-25 11:55:38.915592608 +0000 UTC m=+1147.830149989" lastFinishedPulling="2025-11-25 11:56:04.51649945 +0000 UTC m=+1173.431056831" observedRunningTime="2025-11-25 11:56:07.442763166 +0000 UTC m=+1176.357320547" watchObservedRunningTime="2025-11-25 11:56:07.454987863 +0000 UTC m=+1176.369545244" Nov 25 11:56:07 crc kubenswrapper[4706]: I1125 11:56:07.474353 4706 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-6f66ccf8d9-g7z69" podStartSLOduration=4.956788796 podStartE2EDuration="30.47432821s" podCreationTimestamp="2025-11-25 11:55:37 +0000 UTC" firstStartedPulling="2025-11-25 11:55:39.001257114 +0000 UTC m=+1147.915814495" lastFinishedPulling="2025-11-25 
11:56:04.518796518 +0000 UTC m=+1173.433353909" observedRunningTime="2025-11-25 11:56:07.470121714 +0000 UTC m=+1176.384679115" watchObservedRunningTime="2025-11-25 11:56:07.47432821 +0000 UTC m=+1176.388885591" Nov 25 11:56:07 crc kubenswrapper[4706]: W1125 11:56:07.862014 4706 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6fb9e8f3_e03d_40bd_ba5c_8ce7715af21f.slice/crio-15dbd546f22d882eda5dae12c0821ef606a618ec396a6d36996fb7875d89239d WatchSource:0}: Error finding container 15dbd546f22d882eda5dae12c0821ef606a618ec396a6d36996fb7875d89239d: Status 404 returned error can't find the container with id 15dbd546f22d882eda5dae12c0821ef606a618ec396a6d36996fb7875d89239d Nov 25 11:56:07 crc kubenswrapper[4706]: I1125 11:56:07.962064 4706 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e0c3d5f1-1ac9-4f5f-bef2-232cf6055061" path="/var/lib/kubelet/pods/e0c3d5f1-1ac9-4f5f-bef2-232cf6055061/volumes" Nov 25 11:56:08 crc kubenswrapper[4706]: I1125 11:56:08.096745 4706 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-6f66ccf8d9-g7z69" Nov 25 11:56:08 crc kubenswrapper[4706]: I1125 11:56:08.460006 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5d6465f55b-zdrth" event={"ID":"74b33eb1-0020-4037-918c-9e747dcfd61f","Type":"ContainerStarted","Data":"5f702a091e203894b9c68bd117079bc8a175269c6b226c33e9f95d472f2849bf"} Nov 25 11:56:08 crc kubenswrapper[4706]: I1125 11:56:08.466749 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-85664bf4f6-ws67w" event={"ID":"66bfb4a4-e60d-4f75-ad0b-1ad3e8ff1bf5","Type":"ContainerStarted","Data":"f93f7dfc3d74eeac9a1206302dba951d487f36961f2359ae4cfa7417e1387cd5"} Nov 25 11:56:08 crc kubenswrapper[4706]: I1125 11:56:08.468921 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" 
event={"ID":"6fb9e8f3-e03d-40bd-ba5c-8ce7715af21f","Type":"ContainerStarted","Data":"15dbd546f22d882eda5dae12c0821ef606a618ec396a6d36996fb7875d89239d"} Nov 25 11:56:08 crc kubenswrapper[4706]: I1125 11:56:08.470760 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"db4e7aed-28ec-49cd-8f0b-e01df112bf54","Type":"ContainerStarted","Data":"8fc86a2c1073d99eefaa9c298eca352f7130fb64903b505f7a478749a7d6acc1"} Nov 25 11:56:08 crc kubenswrapper[4706]: I1125 11:56:08.472258 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"9392449e-c392-4d77-b36a-67b6d8c716c7","Type":"ContainerStarted","Data":"17525079762a657aaaa7ddedbe78c41ea63e1654951381a5ee6b864ec29cb169"} Nov 25 11:56:08 crc kubenswrapper[4706]: I1125 11:56:08.473796 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6899b4bd6f-vwrfh" event={"ID":"c785321d-b637-4f3a-9e69-bc237eb1e9c2","Type":"ContainerStarted","Data":"c4a013e0fb3180c3b1cbcb24ceee6c1e232c442bf84a3c119951be9b3e401dad"} Nov 25 11:56:08 crc kubenswrapper[4706]: I1125 11:56:08.474575 4706 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-6899b4bd6f-vwrfh" podUID="c785321d-b637-4f3a-9e69-bc237eb1e9c2" containerName="horizon-log" containerID="cri-o://4817762576b72f1f7ec6a73dfc5771238bc51194d1e5bb978c08087145039f4d" gracePeriod=30 Nov 25 11:56:08 crc kubenswrapper[4706]: I1125 11:56:08.474681 4706 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-6899b4bd6f-vwrfh" podUID="c785321d-b637-4f3a-9e69-bc237eb1e9c2" containerName="horizon" containerID="cri-o://c4a013e0fb3180c3b1cbcb24ceee6c1e232c442bf84a3c119951be9b3e401dad" gracePeriod=30 Nov 25 11:56:08 crc kubenswrapper[4706]: I1125 11:56:08.497136 4706 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-5d6465f55b-zdrth" podStartSLOduration=24.49709604 
podStartE2EDuration="24.49709604s" podCreationTimestamp="2025-11-25 11:55:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 11:56:08.488406812 +0000 UTC m=+1177.402964193" watchObservedRunningTime="2025-11-25 11:56:08.49709604 +0000 UTC m=+1177.411653441" Nov 25 11:56:08 crc kubenswrapper[4706]: I1125 11:56:08.511678 4706 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-85664bf4f6-ws67w" podStartSLOduration=24.511643256 podStartE2EDuration="24.511643256s" podCreationTimestamp="2025-11-25 11:55:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 11:56:08.509657976 +0000 UTC m=+1177.424215357" watchObservedRunningTime="2025-11-25 11:56:08.511643256 +0000 UTC m=+1177.426200637" Nov 25 11:56:08 crc kubenswrapper[4706]: I1125 11:56:08.532802 4706 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-6899b4bd6f-vwrfh" podStartSLOduration=6.270561775 podStartE2EDuration="33.532527992s" podCreationTimestamp="2025-11-25 11:55:35 +0000 UTC" firstStartedPulling="2025-11-25 11:55:38.916945352 +0000 UTC m=+1147.831502733" lastFinishedPulling="2025-11-25 11:56:06.178911569 +0000 UTC m=+1175.093468950" observedRunningTime="2025-11-25 11:56:08.527586618 +0000 UTC m=+1177.442144019" watchObservedRunningTime="2025-11-25 11:56:08.532527992 +0000 UTC m=+1177.447085373" Nov 25 11:56:09 crc kubenswrapper[4706]: I1125 11:56:09.418658 4706 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-74f6bcbc87-dqgdx" podUID="d377cf62-3246-4d83-86b8-f55d354a2d5c" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.133:5353: i/o timeout" Nov 25 11:56:09 crc kubenswrapper[4706]: I1125 11:56:09.482979 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/glance-default-external-api-0" event={"ID":"6fb9e8f3-e03d-40bd-ba5c-8ce7715af21f","Type":"ContainerStarted","Data":"b5e1097ae896ce3cc97fa565106e38e6095eb00fc75f3d3d729b4dea2824be11"} Nov 25 11:56:09 crc kubenswrapper[4706]: I1125 11:56:09.484690 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"9392449e-c392-4d77-b36a-67b6d8c716c7","Type":"ContainerStarted","Data":"ad219d52a5cb7380348da742495450a2737dd6d4946c87d7529be684c28d8619"} Nov 25 11:56:10 crc kubenswrapper[4706]: I1125 11:56:10.515280 4706 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=6.515259062 podStartE2EDuration="6.515259062s" podCreationTimestamp="2025-11-25 11:56:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 11:56:10.508877612 +0000 UTC m=+1179.423435003" watchObservedRunningTime="2025-11-25 11:56:10.515259062 +0000 UTC m=+1179.429816443" Nov 25 11:56:12 crc kubenswrapper[4706]: I1125 11:56:12.521841 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"6fb9e8f3-e03d-40bd-ba5c-8ce7715af21f","Type":"ContainerStarted","Data":"cea2a1a48ebbafa7abdc43558125cc84b06d937577b4fc75c50451664c420801"} Nov 25 11:56:13 crc kubenswrapper[4706]: I1125 11:56:13.580803 4706 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=7.580780082 podStartE2EDuration="7.580780082s" podCreationTimestamp="2025-11-25 11:56:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 11:56:13.573627082 +0000 UTC m=+1182.488184483" watchObservedRunningTime="2025-11-25 11:56:13.580780082 +0000 UTC m=+1182.495337463" Nov 25 11:56:14 crc 
kubenswrapper[4706]: I1125 11:56:14.682174 4706 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-5d6465f55b-zdrth" Nov 25 11:56:14 crc kubenswrapper[4706]: I1125 11:56:14.682526 4706 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-5d6465f55b-zdrth" Nov 25 11:56:14 crc kubenswrapper[4706]: I1125 11:56:14.750748 4706 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-85664bf4f6-ws67w" Nov 25 11:56:14 crc kubenswrapper[4706]: I1125 11:56:14.752124 4706 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-85664bf4f6-ws67w" Nov 25 11:56:14 crc kubenswrapper[4706]: I1125 11:56:14.766874 4706 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Nov 25 11:56:14 crc kubenswrapper[4706]: I1125 11:56:14.766980 4706 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Nov 25 11:56:14 crc kubenswrapper[4706]: I1125 11:56:14.806747 4706 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Nov 25 11:56:14 crc kubenswrapper[4706]: I1125 11:56:14.837855 4706 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Nov 25 11:56:15 crc kubenswrapper[4706]: I1125 11:56:15.571969 4706 generic.go:334] "Generic (PLEG): container finished" podID="4586fb7b-8269-4dca-87d4-f3c66518b999" containerID="6b810764d35ead1f050b80c6c6624b912e1e9a1ea6ace0dac10af543213a2552" exitCode=0 Nov 25 11:56:15 crc kubenswrapper[4706]: I1125 11:56:15.572051 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-xbn9h" event={"ID":"4586fb7b-8269-4dca-87d4-f3c66518b999","Type":"ContainerDied","Data":"6b810764d35ead1f050b80c6c6624b912e1e9a1ea6ace0dac10af543213a2552"} Nov 25 
11:56:15 crc kubenswrapper[4706]: I1125 11:56:15.572614 4706 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Nov 25 11:56:15 crc kubenswrapper[4706]: I1125 11:56:15.572673 4706 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Nov 25 11:56:16 crc kubenswrapper[4706]: I1125 11:56:16.172228 4706 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-6899b4bd6f-vwrfh" Nov 25 11:56:16 crc kubenswrapper[4706]: I1125 11:56:16.583241 4706 generic.go:334] "Generic (PLEG): container finished" podID="fff3e0d5-0608-4e15-9a92-376b6a2b7d17" containerID="5d06646f2e40933938174b706f1cbfb7279ba1f4da52a991d69893ade768872e" exitCode=0 Nov 25 11:56:16 crc kubenswrapper[4706]: I1125 11:56:16.583345 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-ntkr9" event={"ID":"fff3e0d5-0608-4e15-9a92-376b6a2b7d17","Type":"ContainerDied","Data":"5d06646f2e40933938174b706f1cbfb7279ba1f4da52a991d69893ade768872e"} Nov 25 11:56:16 crc kubenswrapper[4706]: I1125 11:56:16.863469 4706 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Nov 25 11:56:16 crc kubenswrapper[4706]: I1125 11:56:16.863822 4706 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Nov 25 11:56:16 crc kubenswrapper[4706]: I1125 11:56:16.908832 4706 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Nov 25 11:56:16 crc kubenswrapper[4706]: I1125 11:56:16.946526 4706 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Nov 25 11:56:17 crc kubenswrapper[4706]: I1125 11:56:17.591371 4706 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" 
Nov 25 11:56:17 crc kubenswrapper[4706]: I1125 11:56:17.591424 4706 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Nov 25 11:56:17 crc kubenswrapper[4706]: I1125 11:56:17.767396 4706 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Nov 25 11:56:17 crc kubenswrapper[4706]: I1125 11:56:17.767505 4706 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 25 11:56:17 crc kubenswrapper[4706]: I1125 11:56:17.850556 4706 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Nov 25 11:56:19 crc kubenswrapper[4706]: I1125 11:56:19.611181 4706 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-xbn9h" Nov 25 11:56:19 crc kubenswrapper[4706]: I1125 11:56:19.627059 4706 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-ntkr9" Nov 25 11:56:19 crc kubenswrapper[4706]: I1125 11:56:19.668537 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-ntkr9" event={"ID":"fff3e0d5-0608-4e15-9a92-376b6a2b7d17","Type":"ContainerDied","Data":"c0f9fa42b710cbeabc270be3787e6cbb65cf5c657bbb33d07043233eb7c0be34"} Nov 25 11:56:19 crc kubenswrapper[4706]: I1125 11:56:19.668577 4706 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c0f9fa42b710cbeabc270be3787e6cbb65cf5c657bbb33d07043233eb7c0be34" Nov 25 11:56:19 crc kubenswrapper[4706]: I1125 11:56:19.668623 4706 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-ntkr9" Nov 25 11:56:19 crc kubenswrapper[4706]: I1125 11:56:19.673995 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-xbn9h" event={"ID":"4586fb7b-8269-4dca-87d4-f3c66518b999","Type":"ContainerDied","Data":"31fffa560a5085faec571632835e43173c0b4debfcc9112a050c441b12c10c86"} Nov 25 11:56:19 crc kubenswrapper[4706]: I1125 11:56:19.674023 4706 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="31fffa560a5085faec571632835e43173c0b4debfcc9112a050c441b12c10c86" Nov 25 11:56:19 crc kubenswrapper[4706]: I1125 11:56:19.674068 4706 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-xbn9h" Nov 25 11:56:19 crc kubenswrapper[4706]: I1125 11:56:19.678848 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c25zl\" (UniqueName: \"kubernetes.io/projected/fff3e0d5-0608-4e15-9a92-376b6a2b7d17-kube-api-access-c25zl\") pod \"fff3e0d5-0608-4e15-9a92-376b6a2b7d17\" (UID: \"fff3e0d5-0608-4e15-9a92-376b6a2b7d17\") " Nov 25 11:56:19 crc kubenswrapper[4706]: I1125 11:56:19.678885 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fff3e0d5-0608-4e15-9a92-376b6a2b7d17-logs\") pod \"fff3e0d5-0608-4e15-9a92-376b6a2b7d17\" (UID: \"fff3e0d5-0608-4e15-9a92-376b6a2b7d17\") " Nov 25 11:56:19 crc kubenswrapper[4706]: I1125 11:56:19.678916 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/4586fb7b-8269-4dca-87d4-f3c66518b999-fernet-keys\") pod \"4586fb7b-8269-4dca-87d4-f3c66518b999\" (UID: \"4586fb7b-8269-4dca-87d4-f3c66518b999\") " Nov 25 11:56:19 crc kubenswrapper[4706]: I1125 11:56:19.678947 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/fff3e0d5-0608-4e15-9a92-376b6a2b7d17-scripts\") pod \"fff3e0d5-0608-4e15-9a92-376b6a2b7d17\" (UID: \"fff3e0d5-0608-4e15-9a92-376b6a2b7d17\") " Nov 25 11:56:19 crc kubenswrapper[4706]: I1125 11:56:19.678983 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4586fb7b-8269-4dca-87d4-f3c66518b999-config-data\") pod \"4586fb7b-8269-4dca-87d4-f3c66518b999\" (UID: \"4586fb7b-8269-4dca-87d4-f3c66518b999\") " Nov 25 11:56:19 crc kubenswrapper[4706]: I1125 11:56:19.679002 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/4586fb7b-8269-4dca-87d4-f3c66518b999-credential-keys\") pod \"4586fb7b-8269-4dca-87d4-f3c66518b999\" (UID: \"4586fb7b-8269-4dca-87d4-f3c66518b999\") " Nov 25 11:56:19 crc kubenswrapper[4706]: I1125 11:56:19.679043 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fff3e0d5-0608-4e15-9a92-376b6a2b7d17-config-data\") pod \"fff3e0d5-0608-4e15-9a92-376b6a2b7d17\" (UID: \"fff3e0d5-0608-4e15-9a92-376b6a2b7d17\") " Nov 25 11:56:19 crc kubenswrapper[4706]: I1125 11:56:19.679092 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fff3e0d5-0608-4e15-9a92-376b6a2b7d17-combined-ca-bundle\") pod \"fff3e0d5-0608-4e15-9a92-376b6a2b7d17\" (UID: \"fff3e0d5-0608-4e15-9a92-376b6a2b7d17\") " Nov 25 11:56:19 crc kubenswrapper[4706]: I1125 11:56:19.679130 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-56zs6\" (UniqueName: \"kubernetes.io/projected/4586fb7b-8269-4dca-87d4-f3c66518b999-kube-api-access-56zs6\") pod \"4586fb7b-8269-4dca-87d4-f3c66518b999\" (UID: \"4586fb7b-8269-4dca-87d4-f3c66518b999\") " Nov 25 11:56:19 crc kubenswrapper[4706]: I1125 11:56:19.679187 
4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4586fb7b-8269-4dca-87d4-f3c66518b999-combined-ca-bundle\") pod \"4586fb7b-8269-4dca-87d4-f3c66518b999\" (UID: \"4586fb7b-8269-4dca-87d4-f3c66518b999\") " Nov 25 11:56:19 crc kubenswrapper[4706]: I1125 11:56:19.679218 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4586fb7b-8269-4dca-87d4-f3c66518b999-scripts\") pod \"4586fb7b-8269-4dca-87d4-f3c66518b999\" (UID: \"4586fb7b-8269-4dca-87d4-f3c66518b999\") " Nov 25 11:56:19 crc kubenswrapper[4706]: I1125 11:56:19.684072 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fff3e0d5-0608-4e15-9a92-376b6a2b7d17-logs" (OuterVolumeSpecName: "logs") pod "fff3e0d5-0608-4e15-9a92-376b6a2b7d17" (UID: "fff3e0d5-0608-4e15-9a92-376b6a2b7d17"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 11:56:19 crc kubenswrapper[4706]: I1125 11:56:19.686798 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4586fb7b-8269-4dca-87d4-f3c66518b999-scripts" (OuterVolumeSpecName: "scripts") pod "4586fb7b-8269-4dca-87d4-f3c66518b999" (UID: "4586fb7b-8269-4dca-87d4-f3c66518b999"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 11:56:19 crc kubenswrapper[4706]: I1125 11:56:19.691877 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4586fb7b-8269-4dca-87d4-f3c66518b999-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "4586fb7b-8269-4dca-87d4-f3c66518b999" (UID: "4586fb7b-8269-4dca-87d4-f3c66518b999"). InnerVolumeSpecName "credential-keys". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 11:56:19 crc kubenswrapper[4706]: I1125 11:56:19.691994 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4586fb7b-8269-4dca-87d4-f3c66518b999-kube-api-access-56zs6" (OuterVolumeSpecName: "kube-api-access-56zs6") pod "4586fb7b-8269-4dca-87d4-f3c66518b999" (UID: "4586fb7b-8269-4dca-87d4-f3c66518b999"). InnerVolumeSpecName "kube-api-access-56zs6". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 11:56:19 crc kubenswrapper[4706]: I1125 11:56:19.694077 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fff3e0d5-0608-4e15-9a92-376b6a2b7d17-kube-api-access-c25zl" (OuterVolumeSpecName: "kube-api-access-c25zl") pod "fff3e0d5-0608-4e15-9a92-376b6a2b7d17" (UID: "fff3e0d5-0608-4e15-9a92-376b6a2b7d17"). InnerVolumeSpecName "kube-api-access-c25zl". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 11:56:19 crc kubenswrapper[4706]: I1125 11:56:19.698137 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4586fb7b-8269-4dca-87d4-f3c66518b999-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "4586fb7b-8269-4dca-87d4-f3c66518b999" (UID: "4586fb7b-8269-4dca-87d4-f3c66518b999"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 11:56:19 crc kubenswrapper[4706]: I1125 11:56:19.712703 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fff3e0d5-0608-4e15-9a92-376b6a2b7d17-scripts" (OuterVolumeSpecName: "scripts") pod "fff3e0d5-0608-4e15-9a92-376b6a2b7d17" (UID: "fff3e0d5-0608-4e15-9a92-376b6a2b7d17"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 11:56:19 crc kubenswrapper[4706]: I1125 11:56:19.728107 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4586fb7b-8269-4dca-87d4-f3c66518b999-config-data" (OuterVolumeSpecName: "config-data") pod "4586fb7b-8269-4dca-87d4-f3c66518b999" (UID: "4586fb7b-8269-4dca-87d4-f3c66518b999"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 11:56:19 crc kubenswrapper[4706]: I1125 11:56:19.733057 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4586fb7b-8269-4dca-87d4-f3c66518b999-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4586fb7b-8269-4dca-87d4-f3c66518b999" (UID: "4586fb7b-8269-4dca-87d4-f3c66518b999"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 11:56:19 crc kubenswrapper[4706]: I1125 11:56:19.738205 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fff3e0d5-0608-4e15-9a92-376b6a2b7d17-config-data" (OuterVolumeSpecName: "config-data") pod "fff3e0d5-0608-4e15-9a92-376b6a2b7d17" (UID: "fff3e0d5-0608-4e15-9a92-376b6a2b7d17"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 11:56:19 crc kubenswrapper[4706]: I1125 11:56:19.769502 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fff3e0d5-0608-4e15-9a92-376b6a2b7d17-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "fff3e0d5-0608-4e15-9a92-376b6a2b7d17" (UID: "fff3e0d5-0608-4e15-9a92-376b6a2b7d17"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 11:56:19 crc kubenswrapper[4706]: I1125 11:56:19.781602 4706 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-56zs6\" (UniqueName: \"kubernetes.io/projected/4586fb7b-8269-4dca-87d4-f3c66518b999-kube-api-access-56zs6\") on node \"crc\" DevicePath \"\"" Nov 25 11:56:19 crc kubenswrapper[4706]: I1125 11:56:19.781638 4706 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4586fb7b-8269-4dca-87d4-f3c66518b999-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 11:56:19 crc kubenswrapper[4706]: I1125 11:56:19.781648 4706 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4586fb7b-8269-4dca-87d4-f3c66518b999-scripts\") on node \"crc\" DevicePath \"\"" Nov 25 11:56:19 crc kubenswrapper[4706]: I1125 11:56:19.781657 4706 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c25zl\" (UniqueName: \"kubernetes.io/projected/fff3e0d5-0608-4e15-9a92-376b6a2b7d17-kube-api-access-c25zl\") on node \"crc\" DevicePath \"\"" Nov 25 11:56:19 crc kubenswrapper[4706]: I1125 11:56:19.781667 4706 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fff3e0d5-0608-4e15-9a92-376b6a2b7d17-logs\") on node \"crc\" DevicePath \"\"" Nov 25 11:56:19 crc kubenswrapper[4706]: I1125 11:56:19.781675 4706 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/4586fb7b-8269-4dca-87d4-f3c66518b999-fernet-keys\") on node \"crc\" DevicePath \"\"" Nov 25 11:56:19 crc kubenswrapper[4706]: I1125 11:56:19.781683 4706 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fff3e0d5-0608-4e15-9a92-376b6a2b7d17-scripts\") on node \"crc\" DevicePath \"\"" Nov 25 11:56:19 crc kubenswrapper[4706]: I1125 11:56:19.781691 4706 reconciler_common.go:293] 
"Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4586fb7b-8269-4dca-87d4-f3c66518b999-config-data\") on node \"crc\" DevicePath \"\"" Nov 25 11:56:19 crc kubenswrapper[4706]: I1125 11:56:19.781698 4706 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/4586fb7b-8269-4dca-87d4-f3c66518b999-credential-keys\") on node \"crc\" DevicePath \"\"" Nov 25 11:56:19 crc kubenswrapper[4706]: I1125 11:56:19.781706 4706 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fff3e0d5-0608-4e15-9a92-376b6a2b7d17-config-data\") on node \"crc\" DevicePath \"\"" Nov 25 11:56:19 crc kubenswrapper[4706]: I1125 11:56:19.781713 4706 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fff3e0d5-0608-4e15-9a92-376b6a2b7d17-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 11:56:20 crc kubenswrapper[4706]: I1125 11:56:20.162079 4706 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Nov 25 11:56:20 crc kubenswrapper[4706]: I1125 11:56:20.162434 4706 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 25 11:56:20 crc kubenswrapper[4706]: I1125 11:56:20.163832 4706 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Nov 25 11:56:20 crc kubenswrapper[4706]: I1125 11:56:20.712999 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"db4e7aed-28ec-49cd-8f0b-e01df112bf54","Type":"ContainerStarted","Data":"1b0f99a9c2d7134db91d0dc3c0f7d3e579a75185b06822b489e2cf538487e522"} Nov 25 11:56:20 crc kubenswrapper[4706]: I1125 11:56:20.772449 4706 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-854bff779d-k8bjv"] Nov 25 11:56:20 crc kubenswrapper[4706]: E1125 11:56:20.772881 4706 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fff3e0d5-0608-4e15-9a92-376b6a2b7d17" containerName="placement-db-sync" Nov 25 11:56:20 crc kubenswrapper[4706]: I1125 11:56:20.772902 4706 state_mem.go:107] "Deleted CPUSet assignment" podUID="fff3e0d5-0608-4e15-9a92-376b6a2b7d17" containerName="placement-db-sync" Nov 25 11:56:20 crc kubenswrapper[4706]: E1125 11:56:20.772918 4706 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4586fb7b-8269-4dca-87d4-f3c66518b999" containerName="keystone-bootstrap" Nov 25 11:56:20 crc kubenswrapper[4706]: I1125 11:56:20.772928 4706 state_mem.go:107] "Deleted CPUSet assignment" podUID="4586fb7b-8269-4dca-87d4-f3c66518b999" containerName="keystone-bootstrap" Nov 25 11:56:20 crc kubenswrapper[4706]: I1125 11:56:20.773120 4706 memory_manager.go:354] "RemoveStaleState removing state" podUID="4586fb7b-8269-4dca-87d4-f3c66518b999" containerName="keystone-bootstrap" Nov 25 11:56:20 crc kubenswrapper[4706]: I1125 11:56:20.773148 4706 memory_manager.go:354] "RemoveStaleState removing state" podUID="fff3e0d5-0608-4e15-9a92-376b6a2b7d17" containerName="placement-db-sync" Nov 25 11:56:20 crc kubenswrapper[4706]: I1125 11:56:20.785002 4706 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-854bff779d-k8bjv" Nov 25 11:56:20 crc kubenswrapper[4706]: I1125 11:56:20.789414 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Nov 25 11:56:20 crc kubenswrapper[4706]: I1125 11:56:20.789655 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Nov 25 11:56:20 crc kubenswrapper[4706]: I1125 11:56:20.789832 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Nov 25 11:56:20 crc kubenswrapper[4706]: I1125 11:56:20.790015 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-internal-svc" Nov 25 11:56:20 crc kubenswrapper[4706]: I1125 11:56:20.790124 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-public-svc" Nov 25 11:56:20 crc kubenswrapper[4706]: I1125 11:56:20.790233 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-p74gc" Nov 25 11:56:20 crc kubenswrapper[4706]: I1125 11:56:20.799654 4706 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-854bff779d-k8bjv"] Nov 25 11:56:20 crc kubenswrapper[4706]: I1125 11:56:20.869283 4706 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-5bfcb97b8-lmwjc"] Nov 25 11:56:20 crc kubenswrapper[4706]: I1125 11:56:20.870950 4706 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-5bfcb97b8-lmwjc" Nov 25 11:56:20 crc kubenswrapper[4706]: I1125 11:56:20.873987 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Nov 25 11:56:20 crc kubenswrapper[4706]: I1125 11:56:20.874221 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Nov 25 11:56:20 crc kubenswrapper[4706]: I1125 11:56:20.874357 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-wfhgp" Nov 25 11:56:20 crc kubenswrapper[4706]: I1125 11:56:20.874434 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-internal-svc" Nov 25 11:56:20 crc kubenswrapper[4706]: I1125 11:56:20.874466 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-public-svc" Nov 25 11:56:20 crc kubenswrapper[4706]: I1125 11:56:20.893803 4706 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-5bfcb97b8-lmwjc"] Nov 25 11:56:20 crc kubenswrapper[4706]: I1125 11:56:20.913177 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/df1ddb84-cafd-4f7f-b1cf-c6fb37b7e92e-credential-keys\") pod \"keystone-854bff779d-k8bjv\" (UID: \"df1ddb84-cafd-4f7f-b1cf-c6fb37b7e92e\") " pod="openstack/keystone-854bff779d-k8bjv" Nov 25 11:56:20 crc kubenswrapper[4706]: I1125 11:56:20.913238 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/df1ddb84-cafd-4f7f-b1cf-c6fb37b7e92e-scripts\") pod \"keystone-854bff779d-k8bjv\" (UID: \"df1ddb84-cafd-4f7f-b1cf-c6fb37b7e92e\") " pod="openstack/keystone-854bff779d-k8bjv" Nov 25 11:56:20 crc kubenswrapper[4706]: I1125 11:56:20.913292 4706 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/df1ddb84-cafd-4f7f-b1cf-c6fb37b7e92e-combined-ca-bundle\") pod \"keystone-854bff779d-k8bjv\" (UID: \"df1ddb84-cafd-4f7f-b1cf-c6fb37b7e92e\") " pod="openstack/keystone-854bff779d-k8bjv" Nov 25 11:56:20 crc kubenswrapper[4706]: I1125 11:56:20.913350 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/df1ddb84-cafd-4f7f-b1cf-c6fb37b7e92e-internal-tls-certs\") pod \"keystone-854bff779d-k8bjv\" (UID: \"df1ddb84-cafd-4f7f-b1cf-c6fb37b7e92e\") " pod="openstack/keystone-854bff779d-k8bjv" Nov 25 11:56:20 crc kubenswrapper[4706]: I1125 11:56:20.913389 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/df1ddb84-cafd-4f7f-b1cf-c6fb37b7e92e-config-data\") pod \"keystone-854bff779d-k8bjv\" (UID: \"df1ddb84-cafd-4f7f-b1cf-c6fb37b7e92e\") " pod="openstack/keystone-854bff779d-k8bjv" Nov 25 11:56:20 crc kubenswrapper[4706]: I1125 11:56:20.913405 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/df1ddb84-cafd-4f7f-b1cf-c6fb37b7e92e-public-tls-certs\") pod \"keystone-854bff779d-k8bjv\" (UID: \"df1ddb84-cafd-4f7f-b1cf-c6fb37b7e92e\") " pod="openstack/keystone-854bff779d-k8bjv" Nov 25 11:56:20 crc kubenswrapper[4706]: I1125 11:56:20.913447 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p5k4l\" (UniqueName: \"kubernetes.io/projected/df1ddb84-cafd-4f7f-b1cf-c6fb37b7e92e-kube-api-access-p5k4l\") pod \"keystone-854bff779d-k8bjv\" (UID: \"df1ddb84-cafd-4f7f-b1cf-c6fb37b7e92e\") " pod="openstack/keystone-854bff779d-k8bjv" Nov 25 11:56:20 crc kubenswrapper[4706]: I1125 11:56:20.913483 4706 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/df1ddb84-cafd-4f7f-b1cf-c6fb37b7e92e-fernet-keys\") pod \"keystone-854bff779d-k8bjv\" (UID: \"df1ddb84-cafd-4f7f-b1cf-c6fb37b7e92e\") " pod="openstack/keystone-854bff779d-k8bjv" Nov 25 11:56:21 crc kubenswrapper[4706]: I1125 11:56:21.015105 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/df1ddb84-cafd-4f7f-b1cf-c6fb37b7e92e-public-tls-certs\") pod \"keystone-854bff779d-k8bjv\" (UID: \"df1ddb84-cafd-4f7f-b1cf-c6fb37b7e92e\") " pod="openstack/keystone-854bff779d-k8bjv" Nov 25 11:56:21 crc kubenswrapper[4706]: I1125 11:56:21.015488 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2dab0780-5792-4f20-9553-a780aa94ebba-scripts\") pod \"placement-5bfcb97b8-lmwjc\" (UID: \"2dab0780-5792-4f20-9553-a780aa94ebba\") " pod="openstack/placement-5bfcb97b8-lmwjc" Nov 25 11:56:21 crc kubenswrapper[4706]: I1125 11:56:21.015507 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2dab0780-5792-4f20-9553-a780aa94ebba-config-data\") pod \"placement-5bfcb97b8-lmwjc\" (UID: \"2dab0780-5792-4f20-9553-a780aa94ebba\") " pod="openstack/placement-5bfcb97b8-lmwjc" Nov 25 11:56:21 crc kubenswrapper[4706]: I1125 11:56:21.015547 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p5k4l\" (UniqueName: \"kubernetes.io/projected/df1ddb84-cafd-4f7f-b1cf-c6fb37b7e92e-kube-api-access-p5k4l\") pod \"keystone-854bff779d-k8bjv\" (UID: \"df1ddb84-cafd-4f7f-b1cf-c6fb37b7e92e\") " pod="openstack/keystone-854bff779d-k8bjv" Nov 25 11:56:21 crc kubenswrapper[4706]: I1125 11:56:21.015588 4706 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gktp5\" (UniqueName: \"kubernetes.io/projected/2dab0780-5792-4f20-9553-a780aa94ebba-kube-api-access-gktp5\") pod \"placement-5bfcb97b8-lmwjc\" (UID: \"2dab0780-5792-4f20-9553-a780aa94ebba\") " pod="openstack/placement-5bfcb97b8-lmwjc" Nov 25 11:56:21 crc kubenswrapper[4706]: I1125 11:56:21.015609 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/df1ddb84-cafd-4f7f-b1cf-c6fb37b7e92e-fernet-keys\") pod \"keystone-854bff779d-k8bjv\" (UID: \"df1ddb84-cafd-4f7f-b1cf-c6fb37b7e92e\") " pod="openstack/keystone-854bff779d-k8bjv" Nov 25 11:56:21 crc kubenswrapper[4706]: I1125 11:56:21.015667 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2dab0780-5792-4f20-9553-a780aa94ebba-public-tls-certs\") pod \"placement-5bfcb97b8-lmwjc\" (UID: \"2dab0780-5792-4f20-9553-a780aa94ebba\") " pod="openstack/placement-5bfcb97b8-lmwjc" Nov 25 11:56:21 crc kubenswrapper[4706]: I1125 11:56:21.015687 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2dab0780-5792-4f20-9553-a780aa94ebba-combined-ca-bundle\") pod \"placement-5bfcb97b8-lmwjc\" (UID: \"2dab0780-5792-4f20-9553-a780aa94ebba\") " pod="openstack/placement-5bfcb97b8-lmwjc" Nov 25 11:56:21 crc kubenswrapper[4706]: I1125 11:56:21.015705 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/df1ddb84-cafd-4f7f-b1cf-c6fb37b7e92e-credential-keys\") pod \"keystone-854bff779d-k8bjv\" (UID: \"df1ddb84-cafd-4f7f-b1cf-c6fb37b7e92e\") " pod="openstack/keystone-854bff779d-k8bjv" Nov 25 11:56:21 crc kubenswrapper[4706]: I1125 11:56:21.015754 4706 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/df1ddb84-cafd-4f7f-b1cf-c6fb37b7e92e-scripts\") pod \"keystone-854bff779d-k8bjv\" (UID: \"df1ddb84-cafd-4f7f-b1cf-c6fb37b7e92e\") " pod="openstack/keystone-854bff779d-k8bjv" Nov 25 11:56:21 crc kubenswrapper[4706]: I1125 11:56:21.015803 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/2dab0780-5792-4f20-9553-a780aa94ebba-internal-tls-certs\") pod \"placement-5bfcb97b8-lmwjc\" (UID: \"2dab0780-5792-4f20-9553-a780aa94ebba\") " pod="openstack/placement-5bfcb97b8-lmwjc" Nov 25 11:56:21 crc kubenswrapper[4706]: I1125 11:56:21.015825 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/df1ddb84-cafd-4f7f-b1cf-c6fb37b7e92e-combined-ca-bundle\") pod \"keystone-854bff779d-k8bjv\" (UID: \"df1ddb84-cafd-4f7f-b1cf-c6fb37b7e92e\") " pod="openstack/keystone-854bff779d-k8bjv" Nov 25 11:56:21 crc kubenswrapper[4706]: I1125 11:56:21.015843 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2dab0780-5792-4f20-9553-a780aa94ebba-logs\") pod \"placement-5bfcb97b8-lmwjc\" (UID: \"2dab0780-5792-4f20-9553-a780aa94ebba\") " pod="openstack/placement-5bfcb97b8-lmwjc" Nov 25 11:56:21 crc kubenswrapper[4706]: I1125 11:56:21.015859 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/df1ddb84-cafd-4f7f-b1cf-c6fb37b7e92e-internal-tls-certs\") pod \"keystone-854bff779d-k8bjv\" (UID: \"df1ddb84-cafd-4f7f-b1cf-c6fb37b7e92e\") " pod="openstack/keystone-854bff779d-k8bjv" Nov 25 11:56:21 crc kubenswrapper[4706]: I1125 11:56:21.015887 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" 
(UniqueName: \"kubernetes.io/secret/df1ddb84-cafd-4f7f-b1cf-c6fb37b7e92e-config-data\") pod \"keystone-854bff779d-k8bjv\" (UID: \"df1ddb84-cafd-4f7f-b1cf-c6fb37b7e92e\") " pod="openstack/keystone-854bff779d-k8bjv" Nov 25 11:56:21 crc kubenswrapper[4706]: I1125 11:56:21.019329 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/df1ddb84-cafd-4f7f-b1cf-c6fb37b7e92e-public-tls-certs\") pod \"keystone-854bff779d-k8bjv\" (UID: \"df1ddb84-cafd-4f7f-b1cf-c6fb37b7e92e\") " pod="openstack/keystone-854bff779d-k8bjv" Nov 25 11:56:21 crc kubenswrapper[4706]: I1125 11:56:21.020610 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/df1ddb84-cafd-4f7f-b1cf-c6fb37b7e92e-fernet-keys\") pod \"keystone-854bff779d-k8bjv\" (UID: \"df1ddb84-cafd-4f7f-b1cf-c6fb37b7e92e\") " pod="openstack/keystone-854bff779d-k8bjv" Nov 25 11:56:21 crc kubenswrapper[4706]: I1125 11:56:21.020863 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/df1ddb84-cafd-4f7f-b1cf-c6fb37b7e92e-scripts\") pod \"keystone-854bff779d-k8bjv\" (UID: \"df1ddb84-cafd-4f7f-b1cf-c6fb37b7e92e\") " pod="openstack/keystone-854bff779d-k8bjv" Nov 25 11:56:21 crc kubenswrapper[4706]: I1125 11:56:21.024263 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/df1ddb84-cafd-4f7f-b1cf-c6fb37b7e92e-internal-tls-certs\") pod \"keystone-854bff779d-k8bjv\" (UID: \"df1ddb84-cafd-4f7f-b1cf-c6fb37b7e92e\") " pod="openstack/keystone-854bff779d-k8bjv" Nov 25 11:56:21 crc kubenswrapper[4706]: I1125 11:56:21.027670 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/df1ddb84-cafd-4f7f-b1cf-c6fb37b7e92e-combined-ca-bundle\") pod \"keystone-854bff779d-k8bjv\" (UID: 
\"df1ddb84-cafd-4f7f-b1cf-c6fb37b7e92e\") " pod="openstack/keystone-854bff779d-k8bjv" Nov 25 11:56:21 crc kubenswrapper[4706]: I1125 11:56:21.029341 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/df1ddb84-cafd-4f7f-b1cf-c6fb37b7e92e-credential-keys\") pod \"keystone-854bff779d-k8bjv\" (UID: \"df1ddb84-cafd-4f7f-b1cf-c6fb37b7e92e\") " pod="openstack/keystone-854bff779d-k8bjv" Nov 25 11:56:21 crc kubenswrapper[4706]: I1125 11:56:21.029940 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/df1ddb84-cafd-4f7f-b1cf-c6fb37b7e92e-config-data\") pod \"keystone-854bff779d-k8bjv\" (UID: \"df1ddb84-cafd-4f7f-b1cf-c6fb37b7e92e\") " pod="openstack/keystone-854bff779d-k8bjv" Nov 25 11:56:21 crc kubenswrapper[4706]: I1125 11:56:21.058981 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p5k4l\" (UniqueName: \"kubernetes.io/projected/df1ddb84-cafd-4f7f-b1cf-c6fb37b7e92e-kube-api-access-p5k4l\") pod \"keystone-854bff779d-k8bjv\" (UID: \"df1ddb84-cafd-4f7f-b1cf-c6fb37b7e92e\") " pod="openstack/keystone-854bff779d-k8bjv" Nov 25 11:56:21 crc kubenswrapper[4706]: I1125 11:56:21.117214 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/2dab0780-5792-4f20-9553-a780aa94ebba-internal-tls-certs\") pod \"placement-5bfcb97b8-lmwjc\" (UID: \"2dab0780-5792-4f20-9553-a780aa94ebba\") " pod="openstack/placement-5bfcb97b8-lmwjc" Nov 25 11:56:21 crc kubenswrapper[4706]: I1125 11:56:21.117274 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2dab0780-5792-4f20-9553-a780aa94ebba-logs\") pod \"placement-5bfcb97b8-lmwjc\" (UID: \"2dab0780-5792-4f20-9553-a780aa94ebba\") " pod="openstack/placement-5bfcb97b8-lmwjc" Nov 25 11:56:21 crc 
kubenswrapper[4706]: I1125 11:56:21.117342 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2dab0780-5792-4f20-9553-a780aa94ebba-scripts\") pod \"placement-5bfcb97b8-lmwjc\" (UID: \"2dab0780-5792-4f20-9553-a780aa94ebba\") " pod="openstack/placement-5bfcb97b8-lmwjc"
Nov 25 11:56:21 crc kubenswrapper[4706]: I1125 11:56:21.117361 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2dab0780-5792-4f20-9553-a780aa94ebba-config-data\") pod \"placement-5bfcb97b8-lmwjc\" (UID: \"2dab0780-5792-4f20-9553-a780aa94ebba\") " pod="openstack/placement-5bfcb97b8-lmwjc"
Nov 25 11:56:21 crc kubenswrapper[4706]: I1125 11:56:21.117391 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gktp5\" (UniqueName: \"kubernetes.io/projected/2dab0780-5792-4f20-9553-a780aa94ebba-kube-api-access-gktp5\") pod \"placement-5bfcb97b8-lmwjc\" (UID: \"2dab0780-5792-4f20-9553-a780aa94ebba\") " pod="openstack/placement-5bfcb97b8-lmwjc"
Nov 25 11:56:21 crc kubenswrapper[4706]: I1125 11:56:21.117432 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2dab0780-5792-4f20-9553-a780aa94ebba-public-tls-certs\") pod \"placement-5bfcb97b8-lmwjc\" (UID: \"2dab0780-5792-4f20-9553-a780aa94ebba\") " pod="openstack/placement-5bfcb97b8-lmwjc"
Nov 25 11:56:21 crc kubenswrapper[4706]: I1125 11:56:21.117451 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2dab0780-5792-4f20-9553-a780aa94ebba-combined-ca-bundle\") pod \"placement-5bfcb97b8-lmwjc\" (UID: \"2dab0780-5792-4f20-9553-a780aa94ebba\") " pod="openstack/placement-5bfcb97b8-lmwjc"
Nov 25 11:56:21 crc kubenswrapper[4706]: I1125 11:56:21.119093 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2dab0780-5792-4f20-9553-a780aa94ebba-logs\") pod \"placement-5bfcb97b8-lmwjc\" (UID: \"2dab0780-5792-4f20-9553-a780aa94ebba\") " pod="openstack/placement-5bfcb97b8-lmwjc"
Nov 25 11:56:21 crc kubenswrapper[4706]: I1125 11:56:21.122132 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/2dab0780-5792-4f20-9553-a780aa94ebba-internal-tls-certs\") pod \"placement-5bfcb97b8-lmwjc\" (UID: \"2dab0780-5792-4f20-9553-a780aa94ebba\") " pod="openstack/placement-5bfcb97b8-lmwjc"
Nov 25 11:56:21 crc kubenswrapper[4706]: I1125 11:56:21.122415 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2dab0780-5792-4f20-9553-a780aa94ebba-scripts\") pod \"placement-5bfcb97b8-lmwjc\" (UID: \"2dab0780-5792-4f20-9553-a780aa94ebba\") " pod="openstack/placement-5bfcb97b8-lmwjc"
Nov 25 11:56:21 crc kubenswrapper[4706]: I1125 11:56:21.124562 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2dab0780-5792-4f20-9553-a780aa94ebba-config-data\") pod \"placement-5bfcb97b8-lmwjc\" (UID: \"2dab0780-5792-4f20-9553-a780aa94ebba\") " pod="openstack/placement-5bfcb97b8-lmwjc"
Nov 25 11:56:21 crc kubenswrapper[4706]: I1125 11:56:21.127219 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2dab0780-5792-4f20-9553-a780aa94ebba-public-tls-certs\") pod \"placement-5bfcb97b8-lmwjc\" (UID: \"2dab0780-5792-4f20-9553-a780aa94ebba\") " pod="openstack/placement-5bfcb97b8-lmwjc"
Nov 25 11:56:21 crc kubenswrapper[4706]: I1125 11:56:21.127999 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2dab0780-5792-4f20-9553-a780aa94ebba-combined-ca-bundle\") pod \"placement-5bfcb97b8-lmwjc\" (UID: \"2dab0780-5792-4f20-9553-a780aa94ebba\") " pod="openstack/placement-5bfcb97b8-lmwjc"
Nov 25 11:56:21 crc kubenswrapper[4706]: I1125 11:56:21.142163 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gktp5\" (UniqueName: \"kubernetes.io/projected/2dab0780-5792-4f20-9553-a780aa94ebba-kube-api-access-gktp5\") pod \"placement-5bfcb97b8-lmwjc\" (UID: \"2dab0780-5792-4f20-9553-a780aa94ebba\") " pod="openstack/placement-5bfcb97b8-lmwjc"
Nov 25 11:56:21 crc kubenswrapper[4706]: I1125 11:56:21.186747 4706 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-854bff779d-k8bjv"
Nov 25 11:56:21 crc kubenswrapper[4706]: I1125 11:56:21.225431 4706 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-5bfcb97b8-lmwjc"
Nov 25 11:56:21 crc kubenswrapper[4706]: I1125 11:56:21.715362 4706 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-854bff779d-k8bjv"]
Nov 25 11:56:21 crc kubenswrapper[4706]: I1125 11:56:21.722532 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-v6lvb" event={"ID":"08ef6ec0-ba09-40a2-94d0-a1ddbba8644a","Type":"ContainerStarted","Data":"ed658060da60348d51178754a8fc3e5be804e83ded14e615faea142e1c49e58d"}
Nov 25 11:56:21 crc kubenswrapper[4706]: W1125 11:56:21.723647 4706 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddf1ddb84_cafd_4f7f_b1cf_c6fb37b7e92e.slice/crio-2e001c9277b1f7b5eeeb1c79221f926d09ab0da592052202dc73798474e35c5a WatchSource:0}: Error finding container 2e001c9277b1f7b5eeeb1c79221f926d09ab0da592052202dc73798474e35c5a: Status 404 returned error can't find the container with id 2e001c9277b1f7b5eeeb1c79221f926d09ab0da592052202dc73798474e35c5a
Nov 25 11:56:21 crc kubenswrapper[4706]: I1125 11:56:21.743545 4706 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-db-sync-v6lvb" podStartSLOduration=5.070576585 podStartE2EDuration="46.743526144s" podCreationTimestamp="2025-11-25 11:55:35 +0000 UTC" firstStartedPulling="2025-11-25 11:55:38.982920973 +0000 UTC m=+1147.897478354" lastFinishedPulling="2025-11-25 11:56:20.655870532 +0000 UTC m=+1189.570427913" observedRunningTime="2025-11-25 11:56:21.737285297 +0000 UTC m=+1190.651842678" watchObservedRunningTime="2025-11-25 11:56:21.743526144 +0000 UTC m=+1190.658083525"
Nov 25 11:56:21 crc kubenswrapper[4706]: I1125 11:56:21.843657 4706 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-5bfcb97b8-lmwjc"]
Nov 25 11:56:22 crc kubenswrapper[4706]: I1125 11:56:22.733340 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-854bff779d-k8bjv" event={"ID":"df1ddb84-cafd-4f7f-b1cf-c6fb37b7e92e","Type":"ContainerStarted","Data":"f5128ad1264618650af14febac5f7e6e67a1bbdeeeea8aa1510ebf1258cfae69"}
Nov 25 11:56:22 crc kubenswrapper[4706]: I1125 11:56:22.733919 4706 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/keystone-854bff779d-k8bjv"
Nov 25 11:56:22 crc kubenswrapper[4706]: I1125 11:56:22.733935 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-854bff779d-k8bjv" event={"ID":"df1ddb84-cafd-4f7f-b1cf-c6fb37b7e92e","Type":"ContainerStarted","Data":"2e001c9277b1f7b5eeeb1c79221f926d09ab0da592052202dc73798474e35c5a"}
Nov 25 11:56:22 crc kubenswrapper[4706]: I1125 11:56:22.736063 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-5bfcb97b8-lmwjc" event={"ID":"2dab0780-5792-4f20-9553-a780aa94ebba","Type":"ContainerStarted","Data":"100131dbce90be37ef227b634f9093533d91b25f09414b0795c8ccbb2623e232"}
Nov 25 11:56:22 crc kubenswrapper[4706]: I1125 11:56:22.736377 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-5bfcb97b8-lmwjc" event={"ID":"2dab0780-5792-4f20-9553-a780aa94ebba","Type":"ContainerStarted","Data":"14d672bd61c9d524db6e0d803f379716a89fb96434bbeb7d00132898c98752df"}
Nov 25 11:56:22 crc kubenswrapper[4706]: I1125 11:56:22.760225 4706 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-854bff779d-k8bjv" podStartSLOduration=2.760208942 podStartE2EDuration="2.760208942s" podCreationTimestamp="2025-11-25 11:56:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 11:56:22.755056412 +0000 UTC m=+1191.669613803" watchObservedRunningTime="2025-11-25 11:56:22.760208942 +0000 UTC m=+1191.674766333"
Nov 25 11:56:24 crc kubenswrapper[4706]: I1125 11:56:24.683368 4706 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-5d6465f55b-zdrth" podUID="74b33eb1-0020-4037-918c-9e747dcfd61f" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.147:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.147:8443: connect: connection refused"
Nov 25 11:56:24 crc kubenswrapper[4706]: I1125 11:56:24.751927 4706 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-85664bf4f6-ws67w" podUID="66bfb4a4-e60d-4f75-ad0b-1ad3e8ff1bf5" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.148:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.148:8443: connect: connection refused"
Nov 25 11:56:24 crc kubenswrapper[4706]: I1125 11:56:24.776118 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-fd7sf" event={"ID":"424f303d-41b7-4fd6-be4a-017148ed95da","Type":"ContainerStarted","Data":"797c773a68a2cefa511a2d83c42ec2cf0c6e8966351b19ccb7c9050e4a68b766"}
Nov 25 11:56:24 crc kubenswrapper[4706]: I1125 11:56:24.788618 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-5bfcb97b8-lmwjc" event={"ID":"2dab0780-5792-4f20-9553-a780aa94ebba","Type":"ContainerStarted","Data":"1dc502dcefd9793bc94139138f1c509d63c056141596810b692126cb69ce1304"}
Nov 25 11:56:24 crc kubenswrapper[4706]: I1125 11:56:24.788828 4706 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-5bfcb97b8-lmwjc"
Nov 25 11:56:24 crc kubenswrapper[4706]: I1125 11:56:24.788864 4706 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-5bfcb97b8-lmwjc"
Nov 25 11:56:24 crc kubenswrapper[4706]: I1125 11:56:24.820752 4706 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-5bfcb97b8-lmwjc" podStartSLOduration=4.82073195 podStartE2EDuration="4.82073195s" podCreationTimestamp="2025-11-25 11:56:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 11:56:24.817758685 +0000 UTC m=+1193.732316066" watchObservedRunningTime="2025-11-25 11:56:24.82073195 +0000 UTC m=+1193.735289331"
Nov 25 11:56:24 crc kubenswrapper[4706]: I1125 11:56:24.824738 4706 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-db-sync-fd7sf" podStartSLOduration=4.696595085 podStartE2EDuration="49.82472788s" podCreationTimestamp="2025-11-25 11:55:35 +0000 UTC" firstStartedPulling="2025-11-25 11:55:38.503663862 +0000 UTC m=+1147.418221243" lastFinishedPulling="2025-11-25 11:56:23.631796657 +0000 UTC m=+1192.546354038" observedRunningTime="2025-11-25 11:56:24.793999237 +0000 UTC m=+1193.708556618" watchObservedRunningTime="2025-11-25 11:56:24.82472788 +0000 UTC m=+1193.739285261"
Nov 25 11:56:26 crc kubenswrapper[4706]: I1125 11:56:26.826252 4706 generic.go:334] "Generic (PLEG): container finished" podID="08ef6ec0-ba09-40a2-94d0-a1ddbba8644a" containerID="ed658060da60348d51178754a8fc3e5be804e83ded14e615faea142e1c49e58d" exitCode=0
Nov 25 11:56:26 crc kubenswrapper[4706]: I1125 11:56:26.826419 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-v6lvb" event={"ID":"08ef6ec0-ba09-40a2-94d0-a1ddbba8644a","Type":"ContainerDied","Data":"ed658060da60348d51178754a8fc3e5be804e83ded14e615faea142e1c49e58d"}
Nov 25 11:56:27 crc kubenswrapper[4706]: I1125 11:56:27.839804 4706 generic.go:334] "Generic (PLEG): container finished" podID="27e5b2d0-6fcf-4fb5-8bc4-e086370f5eaf" containerID="69b75dc8ced52c1b496484cab28676106b2584ed034f5af05537be0814a73094" exitCode=0
Nov 25 11:56:27 crc kubenswrapper[4706]: I1125 11:56:27.839921 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-hdbbw" event={"ID":"27e5b2d0-6fcf-4fb5-8bc4-e086370f5eaf","Type":"ContainerDied","Data":"69b75dc8ced52c1b496484cab28676106b2584ed034f5af05537be0814a73094"}
Nov 25 11:56:28 crc kubenswrapper[4706]: I1125 11:56:28.658490 4706 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-v6lvb"
Nov 25 11:56:28 crc kubenswrapper[4706]: I1125 11:56:28.768189 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/08ef6ec0-ba09-40a2-94d0-a1ddbba8644a-combined-ca-bundle\") pod \"08ef6ec0-ba09-40a2-94d0-a1ddbba8644a\" (UID: \"08ef6ec0-ba09-40a2-94d0-a1ddbba8644a\") "
Nov 25 11:56:28 crc kubenswrapper[4706]: I1125 11:56:28.769318 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/08ef6ec0-ba09-40a2-94d0-a1ddbba8644a-db-sync-config-data\") pod \"08ef6ec0-ba09-40a2-94d0-a1ddbba8644a\" (UID: \"08ef6ec0-ba09-40a2-94d0-a1ddbba8644a\") "
Nov 25 11:56:28 crc kubenswrapper[4706]: I1125 11:56:28.769537 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7zgll\" (UniqueName: \"kubernetes.io/projected/08ef6ec0-ba09-40a2-94d0-a1ddbba8644a-kube-api-access-7zgll\") pod \"08ef6ec0-ba09-40a2-94d0-a1ddbba8644a\" (UID: \"08ef6ec0-ba09-40a2-94d0-a1ddbba8644a\") "
Nov 25 11:56:28 crc kubenswrapper[4706]: I1125 11:56:28.777446 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/08ef6ec0-ba09-40a2-94d0-a1ddbba8644a-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "08ef6ec0-ba09-40a2-94d0-a1ddbba8644a" (UID: "08ef6ec0-ba09-40a2-94d0-a1ddbba8644a"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 25 11:56:28 crc kubenswrapper[4706]: I1125 11:56:28.791949 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/08ef6ec0-ba09-40a2-94d0-a1ddbba8644a-kube-api-access-7zgll" (OuterVolumeSpecName: "kube-api-access-7zgll") pod "08ef6ec0-ba09-40a2-94d0-a1ddbba8644a" (UID: "08ef6ec0-ba09-40a2-94d0-a1ddbba8644a"). InnerVolumeSpecName "kube-api-access-7zgll". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 25 11:56:28 crc kubenswrapper[4706]: I1125 11:56:28.803453 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/08ef6ec0-ba09-40a2-94d0-a1ddbba8644a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "08ef6ec0-ba09-40a2-94d0-a1ddbba8644a" (UID: "08ef6ec0-ba09-40a2-94d0-a1ddbba8644a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 25 11:56:28 crc kubenswrapper[4706]: I1125 11:56:28.887677 4706 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7zgll\" (UniqueName: \"kubernetes.io/projected/08ef6ec0-ba09-40a2-94d0-a1ddbba8644a-kube-api-access-7zgll\") on node \"crc\" DevicePath \"\""
Nov 25 11:56:28 crc kubenswrapper[4706]: I1125 11:56:28.887828 4706 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/08ef6ec0-ba09-40a2-94d0-a1ddbba8644a-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Nov 25 11:56:28 crc kubenswrapper[4706]: I1125 11:56:28.887840 4706 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/08ef6ec0-ba09-40a2-94d0-a1ddbba8644a-db-sync-config-data\") on node \"crc\" DevicePath \"\""
Nov 25 11:56:28 crc kubenswrapper[4706]: I1125 11:56:28.899600 4706 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-v6lvb"
Nov 25 11:56:28 crc kubenswrapper[4706]: I1125 11:56:28.899822 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-v6lvb" event={"ID":"08ef6ec0-ba09-40a2-94d0-a1ddbba8644a","Type":"ContainerDied","Data":"023f91948ac374bc83b0ff75394095462e4880da46dd64744048e7c8174c282e"}
Nov 25 11:56:28 crc kubenswrapper[4706]: I1125 11:56:28.899943 4706 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="023f91948ac374bc83b0ff75394095462e4880da46dd64744048e7c8174c282e"
Nov 25 11:56:29 crc kubenswrapper[4706]: I1125 11:56:29.107023 4706 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-worker-7fc64dc5d7-m6cqm"]
Nov 25 11:56:29 crc kubenswrapper[4706]: E1125 11:56:29.132483 4706 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="08ef6ec0-ba09-40a2-94d0-a1ddbba8644a" containerName="barbican-db-sync"
Nov 25 11:56:29 crc kubenswrapper[4706]: I1125 11:56:29.132519 4706 state_mem.go:107] "Deleted CPUSet assignment" podUID="08ef6ec0-ba09-40a2-94d0-a1ddbba8644a" containerName="barbican-db-sync"
Nov 25 11:56:29 crc kubenswrapper[4706]: I1125 11:56:29.132772 4706 memory_manager.go:354] "RemoveStaleState removing state" podUID="08ef6ec0-ba09-40a2-94d0-a1ddbba8644a" containerName="barbican-db-sync"
Nov 25 11:56:29 crc kubenswrapper[4706]: I1125 11:56:29.133846 4706 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-7fc64dc5d7-m6cqm"
Nov 25 11:56:29 crc kubenswrapper[4706]: I1125 11:56:29.136188 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data"
Nov 25 11:56:29 crc kubenswrapper[4706]: I1125 11:56:29.136640 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-worker-config-data"
Nov 25 11:56:29 crc kubenswrapper[4706]: I1125 11:56:29.140182 4706 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-keystone-listener-6c9c496566-jrgpl"]
Nov 25 11:56:29 crc kubenswrapper[4706]: I1125 11:56:29.141836 4706 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-6c9c496566-jrgpl"
Nov 25 11:56:29 crc kubenswrapper[4706]: I1125 11:56:29.143638 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-whr6h"
Nov 25 11:56:29 crc kubenswrapper[4706]: I1125 11:56:29.147175 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-keystone-listener-config-data"
Nov 25 11:56:29 crc kubenswrapper[4706]: I1125 11:56:29.174968 4706 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-7fc64dc5d7-m6cqm"]
Nov 25 11:56:29 crc kubenswrapper[4706]: I1125 11:56:29.193970 4706 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-6c9c496566-jrgpl"]
Nov 25 11:56:29 crc kubenswrapper[4706]: I1125 11:56:29.252974 4706 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-586bdc5f9-d2sx2"]
Nov 25 11:56:29 crc kubenswrapper[4706]: I1125 11:56:29.254907 4706 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-586bdc5f9-d2sx2"
Nov 25 11:56:29 crc kubenswrapper[4706]: I1125 11:56:29.271662 4706 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-586bdc5f9-d2sx2"]
Nov 25 11:56:29 crc kubenswrapper[4706]: I1125 11:56:29.295160 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2ea4caef-6e53-42ac-9202-cf4b05a28041-config-data\") pod \"barbican-keystone-listener-6c9c496566-jrgpl\" (UID: \"2ea4caef-6e53-42ac-9202-cf4b05a28041\") " pod="openstack/barbican-keystone-listener-6c9c496566-jrgpl"
Nov 25 11:56:29 crc kubenswrapper[4706]: I1125 11:56:29.295205 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2ea4caef-6e53-42ac-9202-cf4b05a28041-combined-ca-bundle\") pod \"barbican-keystone-listener-6c9c496566-jrgpl\" (UID: \"2ea4caef-6e53-42ac-9202-cf4b05a28041\") " pod="openstack/barbican-keystone-listener-6c9c496566-jrgpl"
Nov 25 11:56:29 crc kubenswrapper[4706]: I1125 11:56:29.295257 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ac9c3625-3935-48b4-abf3-a8330d99152d-combined-ca-bundle\") pod \"barbican-worker-7fc64dc5d7-m6cqm\" (UID: \"ac9c3625-3935-48b4-abf3-a8330d99152d\") " pod="openstack/barbican-worker-7fc64dc5d7-m6cqm"
Nov 25 11:56:29 crc kubenswrapper[4706]: I1125 11:56:29.295273 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2ea4caef-6e53-42ac-9202-cf4b05a28041-config-data-custom\") pod \"barbican-keystone-listener-6c9c496566-jrgpl\" (UID: \"2ea4caef-6e53-42ac-9202-cf4b05a28041\") " pod="openstack/barbican-keystone-listener-6c9c496566-jrgpl"
Nov 25 11:56:29 crc kubenswrapper[4706]: I1125 11:56:29.295291 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ac9c3625-3935-48b4-abf3-a8330d99152d-config-data\") pod \"barbican-worker-7fc64dc5d7-m6cqm\" (UID: \"ac9c3625-3935-48b4-abf3-a8330d99152d\") " pod="openstack/barbican-worker-7fc64dc5d7-m6cqm"
Nov 25 11:56:29 crc kubenswrapper[4706]: I1125 11:56:29.295330 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ac9c3625-3935-48b4-abf3-a8330d99152d-logs\") pod \"barbican-worker-7fc64dc5d7-m6cqm\" (UID: \"ac9c3625-3935-48b4-abf3-a8330d99152d\") " pod="openstack/barbican-worker-7fc64dc5d7-m6cqm"
Nov 25 11:56:29 crc kubenswrapper[4706]: I1125 11:56:29.295360 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c5kp6\" (UniqueName: \"kubernetes.io/projected/ac9c3625-3935-48b4-abf3-a8330d99152d-kube-api-access-c5kp6\") pod \"barbican-worker-7fc64dc5d7-m6cqm\" (UID: \"ac9c3625-3935-48b4-abf3-a8330d99152d\") " pod="openstack/barbican-worker-7fc64dc5d7-m6cqm"
Nov 25 11:56:29 crc kubenswrapper[4706]: I1125 11:56:29.295389 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2ea4caef-6e53-42ac-9202-cf4b05a28041-logs\") pod \"barbican-keystone-listener-6c9c496566-jrgpl\" (UID: \"2ea4caef-6e53-42ac-9202-cf4b05a28041\") " pod="openstack/barbican-keystone-listener-6c9c496566-jrgpl"
Nov 25 11:56:29 crc kubenswrapper[4706]: I1125 11:56:29.295420 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9dkmz\" (UniqueName: \"kubernetes.io/projected/2ea4caef-6e53-42ac-9202-cf4b05a28041-kube-api-access-9dkmz\") pod \"barbican-keystone-listener-6c9c496566-jrgpl\" (UID: \"2ea4caef-6e53-42ac-9202-cf4b05a28041\") " pod="openstack/barbican-keystone-listener-6c9c496566-jrgpl"
Nov 25 11:56:29 crc kubenswrapper[4706]: I1125 11:56:29.295448 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ac9c3625-3935-48b4-abf3-a8330d99152d-config-data-custom\") pod \"barbican-worker-7fc64dc5d7-m6cqm\" (UID: \"ac9c3625-3935-48b4-abf3-a8330d99152d\") " pod="openstack/barbican-worker-7fc64dc5d7-m6cqm"
Nov 25 11:56:29 crc kubenswrapper[4706]: I1125 11:56:29.369566 4706 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-69546b67d6-65q22"]
Nov 25 11:56:29 crc kubenswrapper[4706]: I1125 11:56:29.373747 4706 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-69546b67d6-65q22"
Nov 25 11:56:29 crc kubenswrapper[4706]: I1125 11:56:29.376899 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-api-config-data"
Nov 25 11:56:29 crc kubenswrapper[4706]: I1125 11:56:29.382965 4706 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-69546b67d6-65q22"]
Nov 25 11:56:29 crc kubenswrapper[4706]: I1125 11:56:29.397724 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c5kp6\" (UniqueName: \"kubernetes.io/projected/ac9c3625-3935-48b4-abf3-a8330d99152d-kube-api-access-c5kp6\") pod \"barbican-worker-7fc64dc5d7-m6cqm\" (UID: \"ac9c3625-3935-48b4-abf3-a8330d99152d\") " pod="openstack/barbican-worker-7fc64dc5d7-m6cqm"
Nov 25 11:56:29 crc kubenswrapper[4706]: I1125 11:56:29.397791 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2ea4caef-6e53-42ac-9202-cf4b05a28041-logs\") pod \"barbican-keystone-listener-6c9c496566-jrgpl\" (UID: \"2ea4caef-6e53-42ac-9202-cf4b05a28041\") " pod="openstack/barbican-keystone-listener-6c9c496566-jrgpl"
Nov 25 11:56:29 crc kubenswrapper[4706]: I1125 11:56:29.397819 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-44r97\" (UniqueName: \"kubernetes.io/projected/78bb410c-2722-4620-ad4a-1a9d189d8c92-kube-api-access-44r97\") pod \"dnsmasq-dns-586bdc5f9-d2sx2\" (UID: \"78bb410c-2722-4620-ad4a-1a9d189d8c92\") " pod="openstack/dnsmasq-dns-586bdc5f9-d2sx2"
Nov 25 11:56:29 crc kubenswrapper[4706]: I1125 11:56:29.397879 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/78bb410c-2722-4620-ad4a-1a9d189d8c92-config\") pod \"dnsmasq-dns-586bdc5f9-d2sx2\" (UID: \"78bb410c-2722-4620-ad4a-1a9d189d8c92\") " pod="openstack/dnsmasq-dns-586bdc5f9-d2sx2"
Nov 25 11:56:29 crc kubenswrapper[4706]: I1125 11:56:29.397898 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/78bb410c-2722-4620-ad4a-1a9d189d8c92-ovsdbserver-nb\") pod \"dnsmasq-dns-586bdc5f9-d2sx2\" (UID: \"78bb410c-2722-4620-ad4a-1a9d189d8c92\") " pod="openstack/dnsmasq-dns-586bdc5f9-d2sx2"
Nov 25 11:56:29 crc kubenswrapper[4706]: I1125 11:56:29.397943 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9dkmz\" (UniqueName: \"kubernetes.io/projected/2ea4caef-6e53-42ac-9202-cf4b05a28041-kube-api-access-9dkmz\") pod \"barbican-keystone-listener-6c9c496566-jrgpl\" (UID: \"2ea4caef-6e53-42ac-9202-cf4b05a28041\") " pod="openstack/barbican-keystone-listener-6c9c496566-jrgpl"
Nov 25 11:56:29 crc kubenswrapper[4706]: I1125 11:56:29.397978 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ac9c3625-3935-48b4-abf3-a8330d99152d-config-data-custom\") pod \"barbican-worker-7fc64dc5d7-m6cqm\" (UID: \"ac9c3625-3935-48b4-abf3-a8330d99152d\") " pod="openstack/barbican-worker-7fc64dc5d7-m6cqm"
Nov 25 11:56:29 crc kubenswrapper[4706]: I1125 11:56:29.398059 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2ea4caef-6e53-42ac-9202-cf4b05a28041-config-data\") pod \"barbican-keystone-listener-6c9c496566-jrgpl\" (UID: \"2ea4caef-6e53-42ac-9202-cf4b05a28041\") " pod="openstack/barbican-keystone-listener-6c9c496566-jrgpl"
Nov 25 11:56:29 crc kubenswrapper[4706]: I1125 11:56:29.398099 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2ea4caef-6e53-42ac-9202-cf4b05a28041-combined-ca-bundle\") pod \"barbican-keystone-listener-6c9c496566-jrgpl\" (UID: \"2ea4caef-6e53-42ac-9202-cf4b05a28041\") " pod="openstack/barbican-keystone-listener-6c9c496566-jrgpl"
Nov 25 11:56:29 crc kubenswrapper[4706]: I1125 11:56:29.398123 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/78bb410c-2722-4620-ad4a-1a9d189d8c92-dns-svc\") pod \"dnsmasq-dns-586bdc5f9-d2sx2\" (UID: \"78bb410c-2722-4620-ad4a-1a9d189d8c92\") " pod="openstack/dnsmasq-dns-586bdc5f9-d2sx2"
Nov 25 11:56:29 crc kubenswrapper[4706]: I1125 11:56:29.398150 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/78bb410c-2722-4620-ad4a-1a9d189d8c92-ovsdbserver-sb\") pod \"dnsmasq-dns-586bdc5f9-d2sx2\" (UID: \"78bb410c-2722-4620-ad4a-1a9d189d8c92\") " pod="openstack/dnsmasq-dns-586bdc5f9-d2sx2"
Nov 25 11:56:29 crc kubenswrapper[4706]: I1125 11:56:29.398211 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ac9c3625-3935-48b4-abf3-a8330d99152d-combined-ca-bundle\") pod \"barbican-worker-7fc64dc5d7-m6cqm\" (UID: \"ac9c3625-3935-48b4-abf3-a8330d99152d\") " pod="openstack/barbican-worker-7fc64dc5d7-m6cqm"
Nov 25 11:56:29 crc kubenswrapper[4706]: I1125 11:56:29.398255 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ac9c3625-3935-48b4-abf3-a8330d99152d-config-data\") pod \"barbican-worker-7fc64dc5d7-m6cqm\" (UID: \"ac9c3625-3935-48b4-abf3-a8330d99152d\") " pod="openstack/barbican-worker-7fc64dc5d7-m6cqm"
Nov 25 11:56:29 crc kubenswrapper[4706]: I1125 11:56:29.398277 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2ea4caef-6e53-42ac-9202-cf4b05a28041-config-data-custom\") pod \"barbican-keystone-listener-6c9c496566-jrgpl\" (UID: \"2ea4caef-6e53-42ac-9202-cf4b05a28041\") " pod="openstack/barbican-keystone-listener-6c9c496566-jrgpl"
Nov 25 11:56:29 crc kubenswrapper[4706]: I1125 11:56:29.398337 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ac9c3625-3935-48b4-abf3-a8330d99152d-logs\") pod \"barbican-worker-7fc64dc5d7-m6cqm\" (UID: \"ac9c3625-3935-48b4-abf3-a8330d99152d\") " pod="openstack/barbican-worker-7fc64dc5d7-m6cqm"
Nov 25 11:56:29 crc kubenswrapper[4706]: I1125 11:56:29.398367 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/78bb410c-2722-4620-ad4a-1a9d189d8c92-dns-swift-storage-0\") pod \"dnsmasq-dns-586bdc5f9-d2sx2\" (UID: \"78bb410c-2722-4620-ad4a-1a9d189d8c92\") " pod="openstack/dnsmasq-dns-586bdc5f9-d2sx2"
Nov 25 11:56:29 crc kubenswrapper[4706]: I1125 11:56:29.399144 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2ea4caef-6e53-42ac-9202-cf4b05a28041-logs\") pod \"barbican-keystone-listener-6c9c496566-jrgpl\" (UID: \"2ea4caef-6e53-42ac-9202-cf4b05a28041\") " pod="openstack/barbican-keystone-listener-6c9c496566-jrgpl"
Nov 25 11:56:29 crc kubenswrapper[4706]: I1125 11:56:29.403237 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ac9c3625-3935-48b4-abf3-a8330d99152d-logs\") pod \"barbican-worker-7fc64dc5d7-m6cqm\" (UID: \"ac9c3625-3935-48b4-abf3-a8330d99152d\") " pod="openstack/barbican-worker-7fc64dc5d7-m6cqm"
Nov 25 11:56:29 crc kubenswrapper[4706]: I1125 11:56:29.407363 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ac9c3625-3935-48b4-abf3-a8330d99152d-combined-ca-bundle\") pod \"barbican-worker-7fc64dc5d7-m6cqm\" (UID: \"ac9c3625-3935-48b4-abf3-a8330d99152d\") " pod="openstack/barbican-worker-7fc64dc5d7-m6cqm"
Nov 25 11:56:29 crc kubenswrapper[4706]: I1125 11:56:29.409458 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2ea4caef-6e53-42ac-9202-cf4b05a28041-config-data-custom\") pod \"barbican-keystone-listener-6c9c496566-jrgpl\" (UID: \"2ea4caef-6e53-42ac-9202-cf4b05a28041\") " pod="openstack/barbican-keystone-listener-6c9c496566-jrgpl"
Nov 25 11:56:29 crc kubenswrapper[4706]: I1125 11:56:29.409574 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2ea4caef-6e53-42ac-9202-cf4b05a28041-config-data\") pod \"barbican-keystone-listener-6c9c496566-jrgpl\" (UID: \"2ea4caef-6e53-42ac-9202-cf4b05a28041\") " pod="openstack/barbican-keystone-listener-6c9c496566-jrgpl"
Nov 25 11:56:29 crc kubenswrapper[4706]: I1125 11:56:29.410824 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2ea4caef-6e53-42ac-9202-cf4b05a28041-combined-ca-bundle\") pod \"barbican-keystone-listener-6c9c496566-jrgpl\" (UID: \"2ea4caef-6e53-42ac-9202-cf4b05a28041\") " pod="openstack/barbican-keystone-listener-6c9c496566-jrgpl"
Nov 25 11:56:29 crc kubenswrapper[4706]: I1125 11:56:29.419218 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ac9c3625-3935-48b4-abf3-a8330d99152d-config-data-custom\") pod \"barbican-worker-7fc64dc5d7-m6cqm\" (UID: \"ac9c3625-3935-48b4-abf3-a8330d99152d\") " pod="openstack/barbican-worker-7fc64dc5d7-m6cqm"
Nov 25 11:56:29 crc kubenswrapper[4706]: I1125 11:56:29.425978 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ac9c3625-3935-48b4-abf3-a8330d99152d-config-data\") pod \"barbican-worker-7fc64dc5d7-m6cqm\" (UID: \"ac9c3625-3935-48b4-abf3-a8330d99152d\") " pod="openstack/barbican-worker-7fc64dc5d7-m6cqm"
Nov 25 11:56:29 crc kubenswrapper[4706]: I1125 11:56:29.428678 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c5kp6\" (UniqueName: \"kubernetes.io/projected/ac9c3625-3935-48b4-abf3-a8330d99152d-kube-api-access-c5kp6\") pod \"barbican-worker-7fc64dc5d7-m6cqm\" (UID: \"ac9c3625-3935-48b4-abf3-a8330d99152d\") " pod="openstack/barbican-worker-7fc64dc5d7-m6cqm"
Nov 25 11:56:29 crc kubenswrapper[4706]: I1125 11:56:29.435726 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9dkmz\" (UniqueName: \"kubernetes.io/projected/2ea4caef-6e53-42ac-9202-cf4b05a28041-kube-api-access-9dkmz\") pod \"barbican-keystone-listener-6c9c496566-jrgpl\" (UID: \"2ea4caef-6e53-42ac-9202-cf4b05a28041\") " pod="openstack/barbican-keystone-listener-6c9c496566-jrgpl"
Nov 25 11:56:29 crc kubenswrapper[4706]: I1125 11:56:29.471427 4706 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-7fc64dc5d7-m6cqm"
Nov 25 11:56:29 crc kubenswrapper[4706]: I1125 11:56:29.490448 4706 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-6c9c496566-jrgpl"
Nov 25 11:56:29 crc kubenswrapper[4706]: I1125 11:56:29.505725 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/78bb410c-2722-4620-ad4a-1a9d189d8c92-dns-swift-storage-0\") pod \"dnsmasq-dns-586bdc5f9-d2sx2\" (UID: \"78bb410c-2722-4620-ad4a-1a9d189d8c92\") " pod="openstack/dnsmasq-dns-586bdc5f9-d2sx2"
Nov 25 11:56:29 crc kubenswrapper[4706]: I1125 11:56:29.505829 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-44r97\" (UniqueName: \"kubernetes.io/projected/78bb410c-2722-4620-ad4a-1a9d189d8c92-kube-api-access-44r97\") pod \"dnsmasq-dns-586bdc5f9-d2sx2\" (UID: \"78bb410c-2722-4620-ad4a-1a9d189d8c92\") " pod="openstack/dnsmasq-dns-586bdc5f9-d2sx2"
Nov 25 11:56:29 crc kubenswrapper[4706]: I1125 11:56:29.505883 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/78bb410c-2722-4620-ad4a-1a9d189d8c92-config\") pod \"dnsmasq-dns-586bdc5f9-d2sx2\" (UID: \"78bb410c-2722-4620-ad4a-1a9d189d8c92\") " pod="openstack/dnsmasq-dns-586bdc5f9-d2sx2"
Nov 25 11:56:29 crc kubenswrapper[4706]: I1125 11:56:29.506022 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4fdb06a5-d894-4b1a-ae3c-34c092b4172f-logs\") pod \"barbican-api-69546b67d6-65q22\" (UID: \"4fdb06a5-d894-4b1a-ae3c-34c092b4172f\") " pod="openstack/barbican-api-69546b67d6-65q22"
Nov 25 11:56:29 crc kubenswrapper[4706]: I1125 11:56:29.506085 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/78bb410c-2722-4620-ad4a-1a9d189d8c92-ovsdbserver-nb\") pod \"dnsmasq-dns-586bdc5f9-d2sx2\" (UID: \"78bb410c-2722-4620-ad4a-1a9d189d8c92\") " pod="openstack/dnsmasq-dns-586bdc5f9-d2sx2"
Nov 25 11:56:29 crc kubenswrapper[4706]: I1125 11:56:29.506126 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rn69v\" (UniqueName: \"kubernetes.io/projected/4fdb06a5-d894-4b1a-ae3c-34c092b4172f-kube-api-access-rn69v\") pod \"barbican-api-69546b67d6-65q22\" (UID: \"4fdb06a5-d894-4b1a-ae3c-34c092b4172f\") " pod="openstack/barbican-api-69546b67d6-65q22"
Nov 25 11:56:29 crc kubenswrapper[4706]: I1125 11:56:29.506148 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4fdb06a5-d894-4b1a-ae3c-34c092b4172f-config-data\") pod \"barbican-api-69546b67d6-65q22\" (UID: \"4fdb06a5-d894-4b1a-ae3c-34c092b4172f\") " pod="openstack/barbican-api-69546b67d6-65q22"
Nov 25 11:56:29 crc kubenswrapper[4706]: I1125 11:56:29.506204 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4fdb06a5-d894-4b1a-ae3c-34c092b4172f-combined-ca-bundle\") pod \"barbican-api-69546b67d6-65q22\" (UID: \"4fdb06a5-d894-4b1a-ae3c-34c092b4172f\") " pod="openstack/barbican-api-69546b67d6-65q22"
Nov 25 11:56:29 crc kubenswrapper[4706]: I1125 11:56:29.506288 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/78bb410c-2722-4620-ad4a-1a9d189d8c92-dns-svc\") pod \"dnsmasq-dns-586bdc5f9-d2sx2\" (UID: \"78bb410c-2722-4620-ad4a-1a9d189d8c92\") " pod="openstack/dnsmasq-dns-586bdc5f9-d2sx2"
Nov 25 11:56:29 crc kubenswrapper[4706]: I1125 11:56:29.506351 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\"
(UniqueName: \"kubernetes.io/configmap/78bb410c-2722-4620-ad4a-1a9d189d8c92-ovsdbserver-sb\") pod \"dnsmasq-dns-586bdc5f9-d2sx2\" (UID: \"78bb410c-2722-4620-ad4a-1a9d189d8c92\") " pod="openstack/dnsmasq-dns-586bdc5f9-d2sx2" Nov 25 11:56:29 crc kubenswrapper[4706]: I1125 11:56:29.506393 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4fdb06a5-d894-4b1a-ae3c-34c092b4172f-config-data-custom\") pod \"barbican-api-69546b67d6-65q22\" (UID: \"4fdb06a5-d894-4b1a-ae3c-34c092b4172f\") " pod="openstack/barbican-api-69546b67d6-65q22" Nov 25 11:56:29 crc kubenswrapper[4706]: I1125 11:56:29.506898 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/78bb410c-2722-4620-ad4a-1a9d189d8c92-dns-swift-storage-0\") pod \"dnsmasq-dns-586bdc5f9-d2sx2\" (UID: \"78bb410c-2722-4620-ad4a-1a9d189d8c92\") " pod="openstack/dnsmasq-dns-586bdc5f9-d2sx2" Nov 25 11:56:29 crc kubenswrapper[4706]: I1125 11:56:29.508366 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/78bb410c-2722-4620-ad4a-1a9d189d8c92-ovsdbserver-sb\") pod \"dnsmasq-dns-586bdc5f9-d2sx2\" (UID: \"78bb410c-2722-4620-ad4a-1a9d189d8c92\") " pod="openstack/dnsmasq-dns-586bdc5f9-d2sx2" Nov 25 11:56:29 crc kubenswrapper[4706]: I1125 11:56:29.508781 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/78bb410c-2722-4620-ad4a-1a9d189d8c92-config\") pod \"dnsmasq-dns-586bdc5f9-d2sx2\" (UID: \"78bb410c-2722-4620-ad4a-1a9d189d8c92\") " pod="openstack/dnsmasq-dns-586bdc5f9-d2sx2" Nov 25 11:56:29 crc kubenswrapper[4706]: I1125 11:56:29.508986 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: 
\"kubernetes.io/configmap/78bb410c-2722-4620-ad4a-1a9d189d8c92-dns-svc\") pod \"dnsmasq-dns-586bdc5f9-d2sx2\" (UID: \"78bb410c-2722-4620-ad4a-1a9d189d8c92\") " pod="openstack/dnsmasq-dns-586bdc5f9-d2sx2" Nov 25 11:56:29 crc kubenswrapper[4706]: I1125 11:56:29.509510 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/78bb410c-2722-4620-ad4a-1a9d189d8c92-ovsdbserver-nb\") pod \"dnsmasq-dns-586bdc5f9-d2sx2\" (UID: \"78bb410c-2722-4620-ad4a-1a9d189d8c92\") " pod="openstack/dnsmasq-dns-586bdc5f9-d2sx2" Nov 25 11:56:29 crc kubenswrapper[4706]: I1125 11:56:29.523266 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-44r97\" (UniqueName: \"kubernetes.io/projected/78bb410c-2722-4620-ad4a-1a9d189d8c92-kube-api-access-44r97\") pod \"dnsmasq-dns-586bdc5f9-d2sx2\" (UID: \"78bb410c-2722-4620-ad4a-1a9d189d8c92\") " pod="openstack/dnsmasq-dns-586bdc5f9-d2sx2" Nov 25 11:56:29 crc kubenswrapper[4706]: I1125 11:56:29.597744 4706 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-586bdc5f9-d2sx2" Nov 25 11:56:29 crc kubenswrapper[4706]: I1125 11:56:29.607730 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4fdb06a5-d894-4b1a-ae3c-34c092b4172f-logs\") pod \"barbican-api-69546b67d6-65q22\" (UID: \"4fdb06a5-d894-4b1a-ae3c-34c092b4172f\") " pod="openstack/barbican-api-69546b67d6-65q22" Nov 25 11:56:29 crc kubenswrapper[4706]: I1125 11:56:29.607797 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rn69v\" (UniqueName: \"kubernetes.io/projected/4fdb06a5-d894-4b1a-ae3c-34c092b4172f-kube-api-access-rn69v\") pod \"barbican-api-69546b67d6-65q22\" (UID: \"4fdb06a5-d894-4b1a-ae3c-34c092b4172f\") " pod="openstack/barbican-api-69546b67d6-65q22" Nov 25 11:56:29 crc kubenswrapper[4706]: I1125 11:56:29.607829 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4fdb06a5-d894-4b1a-ae3c-34c092b4172f-config-data\") pod \"barbican-api-69546b67d6-65q22\" (UID: \"4fdb06a5-d894-4b1a-ae3c-34c092b4172f\") " pod="openstack/barbican-api-69546b67d6-65q22" Nov 25 11:56:29 crc kubenswrapper[4706]: I1125 11:56:29.607872 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4fdb06a5-d894-4b1a-ae3c-34c092b4172f-combined-ca-bundle\") pod \"barbican-api-69546b67d6-65q22\" (UID: \"4fdb06a5-d894-4b1a-ae3c-34c092b4172f\") " pod="openstack/barbican-api-69546b67d6-65q22" Nov 25 11:56:29 crc kubenswrapper[4706]: I1125 11:56:29.607948 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4fdb06a5-d894-4b1a-ae3c-34c092b4172f-config-data-custom\") pod \"barbican-api-69546b67d6-65q22\" (UID: \"4fdb06a5-d894-4b1a-ae3c-34c092b4172f\") " 
pod="openstack/barbican-api-69546b67d6-65q22" Nov 25 11:56:29 crc kubenswrapper[4706]: I1125 11:56:29.609178 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4fdb06a5-d894-4b1a-ae3c-34c092b4172f-logs\") pod \"barbican-api-69546b67d6-65q22\" (UID: \"4fdb06a5-d894-4b1a-ae3c-34c092b4172f\") " pod="openstack/barbican-api-69546b67d6-65q22" Nov 25 11:56:29 crc kubenswrapper[4706]: I1125 11:56:29.614847 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4fdb06a5-d894-4b1a-ae3c-34c092b4172f-config-data-custom\") pod \"barbican-api-69546b67d6-65q22\" (UID: \"4fdb06a5-d894-4b1a-ae3c-34c092b4172f\") " pod="openstack/barbican-api-69546b67d6-65q22" Nov 25 11:56:29 crc kubenswrapper[4706]: I1125 11:56:29.618249 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4fdb06a5-d894-4b1a-ae3c-34c092b4172f-combined-ca-bundle\") pod \"barbican-api-69546b67d6-65q22\" (UID: \"4fdb06a5-d894-4b1a-ae3c-34c092b4172f\") " pod="openstack/barbican-api-69546b67d6-65q22" Nov 25 11:56:29 crc kubenswrapper[4706]: I1125 11:56:29.619410 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4fdb06a5-d894-4b1a-ae3c-34c092b4172f-config-data\") pod \"barbican-api-69546b67d6-65q22\" (UID: \"4fdb06a5-d894-4b1a-ae3c-34c092b4172f\") " pod="openstack/barbican-api-69546b67d6-65q22" Nov 25 11:56:29 crc kubenswrapper[4706]: I1125 11:56:29.631899 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rn69v\" (UniqueName: \"kubernetes.io/projected/4fdb06a5-d894-4b1a-ae3c-34c092b4172f-kube-api-access-rn69v\") pod \"barbican-api-69546b67d6-65q22\" (UID: \"4fdb06a5-d894-4b1a-ae3c-34c092b4172f\") " pod="openstack/barbican-api-69546b67d6-65q22" Nov 25 11:56:29 crc kubenswrapper[4706]: 
I1125 11:56:29.709148 4706 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-69546b67d6-65q22" Nov 25 11:56:29 crc kubenswrapper[4706]: I1125 11:56:29.911768 4706 generic.go:334] "Generic (PLEG): container finished" podID="424f303d-41b7-4fd6-be4a-017148ed95da" containerID="797c773a68a2cefa511a2d83c42ec2cf0c6e8966351b19ccb7c9050e4a68b766" exitCode=0 Nov 25 11:56:29 crc kubenswrapper[4706]: I1125 11:56:29.911812 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-fd7sf" event={"ID":"424f303d-41b7-4fd6-be4a-017148ed95da","Type":"ContainerDied","Data":"797c773a68a2cefa511a2d83c42ec2cf0c6e8966351b19ccb7c9050e4a68b766"} Nov 25 11:56:30 crc kubenswrapper[4706]: I1125 11:56:30.717044 4706 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-hdbbw" Nov 25 11:56:30 crc kubenswrapper[4706]: I1125 11:56:30.825587 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6brcv\" (UniqueName: \"kubernetes.io/projected/27e5b2d0-6fcf-4fb5-8bc4-e086370f5eaf-kube-api-access-6brcv\") pod \"27e5b2d0-6fcf-4fb5-8bc4-e086370f5eaf\" (UID: \"27e5b2d0-6fcf-4fb5-8bc4-e086370f5eaf\") " Nov 25 11:56:30 crc kubenswrapper[4706]: I1125 11:56:30.825663 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/27e5b2d0-6fcf-4fb5-8bc4-e086370f5eaf-combined-ca-bundle\") pod \"27e5b2d0-6fcf-4fb5-8bc4-e086370f5eaf\" (UID: \"27e5b2d0-6fcf-4fb5-8bc4-e086370f5eaf\") " Nov 25 11:56:30 crc kubenswrapper[4706]: I1125 11:56:30.825872 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/27e5b2d0-6fcf-4fb5-8bc4-e086370f5eaf-config\") pod \"27e5b2d0-6fcf-4fb5-8bc4-e086370f5eaf\" (UID: \"27e5b2d0-6fcf-4fb5-8bc4-e086370f5eaf\") " Nov 25 11:56:30 crc kubenswrapper[4706]: I1125 
11:56:30.837983 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/27e5b2d0-6fcf-4fb5-8bc4-e086370f5eaf-kube-api-access-6brcv" (OuterVolumeSpecName: "kube-api-access-6brcv") pod "27e5b2d0-6fcf-4fb5-8bc4-e086370f5eaf" (UID: "27e5b2d0-6fcf-4fb5-8bc4-e086370f5eaf"). InnerVolumeSpecName "kube-api-access-6brcv". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 11:56:30 crc kubenswrapper[4706]: I1125 11:56:30.883981 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/27e5b2d0-6fcf-4fb5-8bc4-e086370f5eaf-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "27e5b2d0-6fcf-4fb5-8bc4-e086370f5eaf" (UID: "27e5b2d0-6fcf-4fb5-8bc4-e086370f5eaf"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 11:56:30 crc kubenswrapper[4706]: I1125 11:56:30.892469 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/27e5b2d0-6fcf-4fb5-8bc4-e086370f5eaf-config" (OuterVolumeSpecName: "config") pod "27e5b2d0-6fcf-4fb5-8bc4-e086370f5eaf" (UID: "27e5b2d0-6fcf-4fb5-8bc4-e086370f5eaf"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 11:56:30 crc kubenswrapper[4706]: I1125 11:56:30.930632 4706 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/27e5b2d0-6fcf-4fb5-8bc4-e086370f5eaf-config\") on node \"crc\" DevicePath \"\"" Nov 25 11:56:30 crc kubenswrapper[4706]: I1125 11:56:30.930666 4706 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6brcv\" (UniqueName: \"kubernetes.io/projected/27e5b2d0-6fcf-4fb5-8bc4-e086370f5eaf-kube-api-access-6brcv\") on node \"crc\" DevicePath \"\"" Nov 25 11:56:30 crc kubenswrapper[4706]: I1125 11:56:30.930690 4706 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/27e5b2d0-6fcf-4fb5-8bc4-e086370f5eaf-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 11:56:30 crc kubenswrapper[4706]: I1125 11:56:30.937597 4706 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-hdbbw" Nov 25 11:56:30 crc kubenswrapper[4706]: I1125 11:56:30.937905 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-hdbbw" event={"ID":"27e5b2d0-6fcf-4fb5-8bc4-e086370f5eaf","Type":"ContainerDied","Data":"24da31dada44e6f20e6e6f10fd7b5aa6a25b5647da33550051402225dcffd3bb"} Nov 25 11:56:30 crc kubenswrapper[4706]: I1125 11:56:30.937978 4706 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="24da31dada44e6f20e6e6f10fd7b5aa6a25b5647da33550051402225dcffd3bb" Nov 25 11:56:31 crc kubenswrapper[4706]: I1125 11:56:31.128153 4706 patch_prober.go:28] interesting pod/machine-config-daemon-dhfpm container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 25 11:56:31 crc kubenswrapper[4706]: I1125 11:56:31.129211 4706 prober.go:107] 
"Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dhfpm" podUID="0930887a-320c-4506-8c9c-f94d6d64516a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 25 11:56:31 crc kubenswrapper[4706]: I1125 11:56:31.129455 4706 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-dhfpm" Nov 25 11:56:31 crc kubenswrapper[4706]: I1125 11:56:31.130369 4706 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"11a32543eabb96f028f5772afd04ba615397c2a8e9b4fc94ea299c44af45edfc"} pod="openshift-machine-config-operator/machine-config-daemon-dhfpm" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 25 11:56:31 crc kubenswrapper[4706]: I1125 11:56:31.132388 4706 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-dhfpm" podUID="0930887a-320c-4506-8c9c-f94d6d64516a" containerName="machine-config-daemon" containerID="cri-o://11a32543eabb96f028f5772afd04ba615397c2a8e9b4fc94ea299c44af45edfc" gracePeriod=600 Nov 25 11:56:31 crc kubenswrapper[4706]: E1125 11:56:31.202728 4706 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/ceilometer-0" podUID="db4e7aed-28ec-49cd-8f0b-e01df112bf54" Nov 25 11:56:31 crc kubenswrapper[4706]: I1125 11:56:31.317028 4706 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-sync-fd7sf" Nov 25 11:56:31 crc kubenswrapper[4706]: I1125 11:56:31.354768 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/424f303d-41b7-4fd6-be4a-017148ed95da-etc-machine-id\") pod \"424f303d-41b7-4fd6-be4a-017148ed95da\" (UID: \"424f303d-41b7-4fd6-be4a-017148ed95da\") " Nov 25 11:56:31 crc kubenswrapper[4706]: I1125 11:56:31.354849 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/424f303d-41b7-4fd6-be4a-017148ed95da-combined-ca-bundle\") pod \"424f303d-41b7-4fd6-be4a-017148ed95da\" (UID: \"424f303d-41b7-4fd6-be4a-017148ed95da\") " Nov 25 11:56:31 crc kubenswrapper[4706]: I1125 11:56:31.354972 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2dkcz\" (UniqueName: \"kubernetes.io/projected/424f303d-41b7-4fd6-be4a-017148ed95da-kube-api-access-2dkcz\") pod \"424f303d-41b7-4fd6-be4a-017148ed95da\" (UID: \"424f303d-41b7-4fd6-be4a-017148ed95da\") " Nov 25 11:56:31 crc kubenswrapper[4706]: I1125 11:56:31.355023 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/424f303d-41b7-4fd6-be4a-017148ed95da-scripts\") pod \"424f303d-41b7-4fd6-be4a-017148ed95da\" (UID: \"424f303d-41b7-4fd6-be4a-017148ed95da\") " Nov 25 11:56:31 crc kubenswrapper[4706]: I1125 11:56:31.355067 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/424f303d-41b7-4fd6-be4a-017148ed95da-config-data\") pod \"424f303d-41b7-4fd6-be4a-017148ed95da\" (UID: \"424f303d-41b7-4fd6-be4a-017148ed95da\") " Nov 25 11:56:31 crc kubenswrapper[4706]: I1125 11:56:31.355120 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" 
(UniqueName: \"kubernetes.io/secret/424f303d-41b7-4fd6-be4a-017148ed95da-db-sync-config-data\") pod \"424f303d-41b7-4fd6-be4a-017148ed95da\" (UID: \"424f303d-41b7-4fd6-be4a-017148ed95da\") " Nov 25 11:56:31 crc kubenswrapper[4706]: I1125 11:56:31.356238 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/424f303d-41b7-4fd6-be4a-017148ed95da-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "424f303d-41b7-4fd6-be4a-017148ed95da" (UID: "424f303d-41b7-4fd6-be4a-017148ed95da"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 25 11:56:31 crc kubenswrapper[4706]: I1125 11:56:31.369448 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/424f303d-41b7-4fd6-be4a-017148ed95da-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "424f303d-41b7-4fd6-be4a-017148ed95da" (UID: "424f303d-41b7-4fd6-be4a-017148ed95da"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 11:56:31 crc kubenswrapper[4706]: I1125 11:56:31.369733 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/424f303d-41b7-4fd6-be4a-017148ed95da-kube-api-access-2dkcz" (OuterVolumeSpecName: "kube-api-access-2dkcz") pod "424f303d-41b7-4fd6-be4a-017148ed95da" (UID: "424f303d-41b7-4fd6-be4a-017148ed95da"). InnerVolumeSpecName "kube-api-access-2dkcz". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 11:56:31 crc kubenswrapper[4706]: I1125 11:56:31.370405 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/424f303d-41b7-4fd6-be4a-017148ed95da-scripts" (OuterVolumeSpecName: "scripts") pod "424f303d-41b7-4fd6-be4a-017148ed95da" (UID: "424f303d-41b7-4fd6-be4a-017148ed95da"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 11:56:31 crc kubenswrapper[4706]: I1125 11:56:31.397115 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/424f303d-41b7-4fd6-be4a-017148ed95da-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "424f303d-41b7-4fd6-be4a-017148ed95da" (UID: "424f303d-41b7-4fd6-be4a-017148ed95da"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 11:56:31 crc kubenswrapper[4706]: I1125 11:56:31.418994 4706 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-69546b67d6-65q22"] Nov 25 11:56:31 crc kubenswrapper[4706]: I1125 11:56:31.424289 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/424f303d-41b7-4fd6-be4a-017148ed95da-config-data" (OuterVolumeSpecName: "config-data") pod "424f303d-41b7-4fd6-be4a-017148ed95da" (UID: "424f303d-41b7-4fd6-be4a-017148ed95da"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 11:56:31 crc kubenswrapper[4706]: I1125 11:56:31.427989 4706 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-6c9c496566-jrgpl"] Nov 25 11:56:31 crc kubenswrapper[4706]: I1125 11:56:31.478625 4706 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/424f303d-41b7-4fd6-be4a-017148ed95da-etc-machine-id\") on node \"crc\" DevicePath \"\"" Nov 25 11:56:31 crc kubenswrapper[4706]: I1125 11:56:31.478655 4706 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/424f303d-41b7-4fd6-be4a-017148ed95da-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 11:56:31 crc kubenswrapper[4706]: I1125 11:56:31.478665 4706 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2dkcz\" (UniqueName: \"kubernetes.io/projected/424f303d-41b7-4fd6-be4a-017148ed95da-kube-api-access-2dkcz\") on node \"crc\" DevicePath \"\"" Nov 25 11:56:31 crc kubenswrapper[4706]: I1125 11:56:31.478674 4706 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/424f303d-41b7-4fd6-be4a-017148ed95da-scripts\") on node \"crc\" DevicePath \"\"" Nov 25 11:56:31 crc kubenswrapper[4706]: I1125 11:56:31.478682 4706 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/424f303d-41b7-4fd6-be4a-017148ed95da-config-data\") on node \"crc\" DevicePath \"\"" Nov 25 11:56:31 crc kubenswrapper[4706]: I1125 11:56:31.478690 4706 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/424f303d-41b7-4fd6-be4a-017148ed95da-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Nov 25 11:56:31 crc kubenswrapper[4706]: I1125 11:56:31.607198 4706 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack/barbican-worker-7fc64dc5d7-m6cqm"] Nov 25 11:56:31 crc kubenswrapper[4706]: I1125 11:56:31.624205 4706 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-586bdc5f9-d2sx2"] Nov 25 11:56:31 crc kubenswrapper[4706]: I1125 11:56:31.974815 4706 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-586bdc5f9-d2sx2"] Nov 25 11:56:31 crc kubenswrapper[4706]: I1125 11:56:31.983359 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-6c9c496566-jrgpl" event={"ID":"2ea4caef-6e53-42ac-9202-cf4b05a28041","Type":"ContainerStarted","Data":"23e017d65af7f0ac43ee1a360b8dbfabd67bb70bfc0714f1e653a168e25f6ff6"} Nov 25 11:56:31 crc kubenswrapper[4706]: I1125 11:56:31.994206 4706 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-85ff748b95-9mz7s"] Nov 25 11:56:31 crc kubenswrapper[4706]: E1125 11:56:31.994677 4706 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="27e5b2d0-6fcf-4fb5-8bc4-e086370f5eaf" containerName="neutron-db-sync" Nov 25 11:56:31 crc kubenswrapper[4706]: I1125 11:56:31.994701 4706 state_mem.go:107] "Deleted CPUSet assignment" podUID="27e5b2d0-6fcf-4fb5-8bc4-e086370f5eaf" containerName="neutron-db-sync" Nov 25 11:56:31 crc kubenswrapper[4706]: E1125 11:56:31.994725 4706 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="424f303d-41b7-4fd6-be4a-017148ed95da" containerName="cinder-db-sync" Nov 25 11:56:31 crc kubenswrapper[4706]: I1125 11:56:31.994735 4706 state_mem.go:107] "Deleted CPUSet assignment" podUID="424f303d-41b7-4fd6-be4a-017148ed95da" containerName="cinder-db-sync" Nov 25 11:56:31 crc kubenswrapper[4706]: I1125 11:56:31.994960 4706 memory_manager.go:354] "RemoveStaleState removing state" podUID="424f303d-41b7-4fd6-be4a-017148ed95da" containerName="cinder-db-sync" Nov 25 11:56:31 crc kubenswrapper[4706]: I1125 11:56:31.994997 4706 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="27e5b2d0-6fcf-4fb5-8bc4-e086370f5eaf" containerName="neutron-db-sync" Nov 25 11:56:31 crc kubenswrapper[4706]: I1125 11:56:31.997354 4706 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-85ff748b95-9mz7s" Nov 25 11:56:32 crc kubenswrapper[4706]: I1125 11:56:32.019560 4706 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-fd7sf" Nov 25 11:56:32 crc kubenswrapper[4706]: I1125 11:56:32.019605 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-fd7sf" event={"ID":"424f303d-41b7-4fd6-be4a-017148ed95da","Type":"ContainerDied","Data":"15a1b4a846ce3378a6f418aa01a64b670ecf60b1f4848afd4675e03bcaad9ae8"} Nov 25 11:56:32 crc kubenswrapper[4706]: I1125 11:56:32.019652 4706 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="15a1b4a846ce3378a6f418aa01a64b670ecf60b1f4848afd4675e03bcaad9ae8" Nov 25 11:56:32 crc kubenswrapper[4706]: I1125 11:56:32.029346 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-586bdc5f9-d2sx2" event={"ID":"78bb410c-2722-4620-ad4a-1a9d189d8c92","Type":"ContainerStarted","Data":"13649aa229d49bdb1a3515188d6688aa1c989b7fd8cac5473bdcea872dd71baf"} Nov 25 11:56:32 crc kubenswrapper[4706]: I1125 11:56:32.029494 4706 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-586bdc5f9-d2sx2" podUID="78bb410c-2722-4620-ad4a-1a9d189d8c92" containerName="init" containerID="cri-o://df15a61b18c182628568be18e562837544b3b13b406ad2936113a9d40d73e498" gracePeriod=10 Nov 25 11:56:32 crc kubenswrapper[4706]: I1125 11:56:32.037339 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"db4e7aed-28ec-49cd-8f0b-e01df112bf54","Type":"ContainerStarted","Data":"dbc374ca3fd943ed0b6a3b06b2a79b442e6b233bc5b02afea8999bcc193ee4f2"} Nov 25 11:56:32 crc kubenswrapper[4706]: I1125 11:56:32.037525 4706 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="db4e7aed-28ec-49cd-8f0b-e01df112bf54" containerName="ceilometer-notification-agent" containerID="cri-o://8fc86a2c1073d99eefaa9c298eca352f7130fb64903b505f7a478749a7d6acc1" gracePeriod=30 Nov 25 11:56:32 crc kubenswrapper[4706]: I1125 11:56:32.037620 4706 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Nov 25 11:56:32 crc kubenswrapper[4706]: I1125 11:56:32.037670 4706 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="db4e7aed-28ec-49cd-8f0b-e01df112bf54" containerName="proxy-httpd" containerID="cri-o://dbc374ca3fd943ed0b6a3b06b2a79b442e6b233bc5b02afea8999bcc193ee4f2" gracePeriod=30 Nov 25 11:56:32 crc kubenswrapper[4706]: I1125 11:56:32.037719 4706 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="db4e7aed-28ec-49cd-8f0b-e01df112bf54" containerName="sg-core" containerID="cri-o://1b0f99a9c2d7134db91d0dc3c0f7d3e579a75185b06822b489e2cf538487e522" gracePeriod=30 Nov 25 11:56:32 crc kubenswrapper[4706]: I1125 11:56:32.054595 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-69546b67d6-65q22" event={"ID":"4fdb06a5-d894-4b1a-ae3c-34c092b4172f","Type":"ContainerStarted","Data":"a7d352f8c4acb86b308a76af9b28adf5569dc024b09a5943335f004141888d8e"} Nov 25 11:56:32 crc kubenswrapper[4706]: I1125 11:56:32.054660 4706 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-85ff748b95-9mz7s"] Nov 25 11:56:32 crc kubenswrapper[4706]: I1125 11:56:32.069961 4706 generic.go:334] "Generic (PLEG): container finished" podID="0930887a-320c-4506-8c9c-f94d6d64516a" containerID="11a32543eabb96f028f5772afd04ba615397c2a8e9b4fc94ea299c44af45edfc" exitCode=0 Nov 25 11:56:32 crc kubenswrapper[4706]: I1125 11:56:32.070033 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-dhfpm" event={"ID":"0930887a-320c-4506-8c9c-f94d6d64516a","Type":"ContainerDied","Data":"11a32543eabb96f028f5772afd04ba615397c2a8e9b4fc94ea299c44af45edfc"} Nov 25 11:56:32 crc kubenswrapper[4706]: I1125 11:56:32.070061 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-dhfpm" event={"ID":"0930887a-320c-4506-8c9c-f94d6d64516a","Type":"ContainerStarted","Data":"f685f0473c39af27d83f9b8acef23bb16392c6964cab02224e6cb60acc8e8ad1"} Nov 25 11:56:32 crc kubenswrapper[4706]: I1125 11:56:32.070084 4706 scope.go:117] "RemoveContainer" containerID="fdd2404bf73191f443033ee21a4507eceb1c00713641b2459642f00fc3611d21" Nov 25 11:56:32 crc kubenswrapper[4706]: I1125 11:56:32.100661 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2228dc73-369b-4b00-987a-955d0d1ea8c8-ovsdbserver-sb\") pod \"dnsmasq-dns-85ff748b95-9mz7s\" (UID: \"2228dc73-369b-4b00-987a-955d0d1ea8c8\") " pod="openstack/dnsmasq-dns-85ff748b95-9mz7s" Nov 25 11:56:32 crc kubenswrapper[4706]: I1125 11:56:32.100709 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2228dc73-369b-4b00-987a-955d0d1ea8c8-config\") pod \"dnsmasq-dns-85ff748b95-9mz7s\" (UID: \"2228dc73-369b-4b00-987a-955d0d1ea8c8\") " pod="openstack/dnsmasq-dns-85ff748b95-9mz7s" Nov 25 11:56:32 crc kubenswrapper[4706]: I1125 11:56:32.100732 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2228dc73-369b-4b00-987a-955d0d1ea8c8-ovsdbserver-nb\") pod \"dnsmasq-dns-85ff748b95-9mz7s\" (UID: \"2228dc73-369b-4b00-987a-955d0d1ea8c8\") " pod="openstack/dnsmasq-dns-85ff748b95-9mz7s" Nov 25 11:56:32 crc kubenswrapper[4706]: I1125 
11:56:32.100890 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2228dc73-369b-4b00-987a-955d0d1ea8c8-dns-svc\") pod \"dnsmasq-dns-85ff748b95-9mz7s\" (UID: \"2228dc73-369b-4b00-987a-955d0d1ea8c8\") " pod="openstack/dnsmasq-dns-85ff748b95-9mz7s" Nov 25 11:56:32 crc kubenswrapper[4706]: I1125 11:56:32.100936 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/2228dc73-369b-4b00-987a-955d0d1ea8c8-dns-swift-storage-0\") pod \"dnsmasq-dns-85ff748b95-9mz7s\" (UID: \"2228dc73-369b-4b00-987a-955d0d1ea8c8\") " pod="openstack/dnsmasq-dns-85ff748b95-9mz7s" Nov 25 11:56:32 crc kubenswrapper[4706]: I1125 11:56:32.100975 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xs8gg\" (UniqueName: \"kubernetes.io/projected/2228dc73-369b-4b00-987a-955d0d1ea8c8-kube-api-access-xs8gg\") pod \"dnsmasq-dns-85ff748b95-9mz7s\" (UID: \"2228dc73-369b-4b00-987a-955d0d1ea8c8\") " pod="openstack/dnsmasq-dns-85ff748b95-9mz7s" Nov 25 11:56:32 crc kubenswrapper[4706]: I1125 11:56:32.126424 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-7fc64dc5d7-m6cqm" event={"ID":"ac9c3625-3935-48b4-abf3-a8330d99152d","Type":"ContainerStarted","Data":"e2c6cec7ae5268db274ec0564a85ac0dc2744a58aea2dbbc935874f141c9ed22"} Nov 25 11:56:32 crc kubenswrapper[4706]: I1125 11:56:32.157631 4706 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-779dc76bb8-fwppw"] Nov 25 11:56:32 crc kubenswrapper[4706]: I1125 11:56:32.182451 4706 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-779dc76bb8-fwppw" Nov 25 11:56:32 crc kubenswrapper[4706]: I1125 11:56:32.184728 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-ovndbs" Nov 25 11:56:32 crc kubenswrapper[4706]: I1125 11:56:32.187865 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Nov 25 11:56:32 crc kubenswrapper[4706]: I1125 11:56:32.189891 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Nov 25 11:56:32 crc kubenswrapper[4706]: I1125 11:56:32.189969 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-5bbq6" Nov 25 11:56:32 crc kubenswrapper[4706]: I1125 11:56:32.205724 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2228dc73-369b-4b00-987a-955d0d1ea8c8-ovsdbserver-sb\") pod \"dnsmasq-dns-85ff748b95-9mz7s\" (UID: \"2228dc73-369b-4b00-987a-955d0d1ea8c8\") " pod="openstack/dnsmasq-dns-85ff748b95-9mz7s" Nov 25 11:56:32 crc kubenswrapper[4706]: I1125 11:56:32.205785 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2228dc73-369b-4b00-987a-955d0d1ea8c8-config\") pod \"dnsmasq-dns-85ff748b95-9mz7s\" (UID: \"2228dc73-369b-4b00-987a-955d0d1ea8c8\") " pod="openstack/dnsmasq-dns-85ff748b95-9mz7s" Nov 25 11:56:32 crc kubenswrapper[4706]: I1125 11:56:32.205823 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2228dc73-369b-4b00-987a-955d0d1ea8c8-ovsdbserver-nb\") pod \"dnsmasq-dns-85ff748b95-9mz7s\" (UID: \"2228dc73-369b-4b00-987a-955d0d1ea8c8\") " pod="openstack/dnsmasq-dns-85ff748b95-9mz7s" Nov 25 11:56:32 crc kubenswrapper[4706]: I1125 11:56:32.205979 4706 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2228dc73-369b-4b00-987a-955d0d1ea8c8-dns-svc\") pod \"dnsmasq-dns-85ff748b95-9mz7s\" (UID: \"2228dc73-369b-4b00-987a-955d0d1ea8c8\") " pod="openstack/dnsmasq-dns-85ff748b95-9mz7s" Nov 25 11:56:32 crc kubenswrapper[4706]: I1125 11:56:32.206025 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/2228dc73-369b-4b00-987a-955d0d1ea8c8-dns-swift-storage-0\") pod \"dnsmasq-dns-85ff748b95-9mz7s\" (UID: \"2228dc73-369b-4b00-987a-955d0d1ea8c8\") " pod="openstack/dnsmasq-dns-85ff748b95-9mz7s" Nov 25 11:56:32 crc kubenswrapper[4706]: I1125 11:56:32.206069 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xs8gg\" (UniqueName: \"kubernetes.io/projected/2228dc73-369b-4b00-987a-955d0d1ea8c8-kube-api-access-xs8gg\") pod \"dnsmasq-dns-85ff748b95-9mz7s\" (UID: \"2228dc73-369b-4b00-987a-955d0d1ea8c8\") " pod="openstack/dnsmasq-dns-85ff748b95-9mz7s" Nov 25 11:56:32 crc kubenswrapper[4706]: I1125 11:56:32.210060 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2228dc73-369b-4b00-987a-955d0d1ea8c8-dns-svc\") pod \"dnsmasq-dns-85ff748b95-9mz7s\" (UID: \"2228dc73-369b-4b00-987a-955d0d1ea8c8\") " pod="openstack/dnsmasq-dns-85ff748b95-9mz7s" Nov 25 11:56:32 crc kubenswrapper[4706]: I1125 11:56:32.210679 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2228dc73-369b-4b00-987a-955d0d1ea8c8-config\") pod \"dnsmasq-dns-85ff748b95-9mz7s\" (UID: \"2228dc73-369b-4b00-987a-955d0d1ea8c8\") " pod="openstack/dnsmasq-dns-85ff748b95-9mz7s" Nov 25 11:56:32 crc kubenswrapper[4706]: I1125 11:56:32.211214 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: 
\"kubernetes.io/configmap/2228dc73-369b-4b00-987a-955d0d1ea8c8-ovsdbserver-nb\") pod \"dnsmasq-dns-85ff748b95-9mz7s\" (UID: \"2228dc73-369b-4b00-987a-955d0d1ea8c8\") " pod="openstack/dnsmasq-dns-85ff748b95-9mz7s" Nov 25 11:56:32 crc kubenswrapper[4706]: I1125 11:56:32.212245 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2228dc73-369b-4b00-987a-955d0d1ea8c8-ovsdbserver-sb\") pod \"dnsmasq-dns-85ff748b95-9mz7s\" (UID: \"2228dc73-369b-4b00-987a-955d0d1ea8c8\") " pod="openstack/dnsmasq-dns-85ff748b95-9mz7s" Nov 25 11:56:32 crc kubenswrapper[4706]: I1125 11:56:32.214081 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/2228dc73-369b-4b00-987a-955d0d1ea8c8-dns-swift-storage-0\") pod \"dnsmasq-dns-85ff748b95-9mz7s\" (UID: \"2228dc73-369b-4b00-987a-955d0d1ea8c8\") " pod="openstack/dnsmasq-dns-85ff748b95-9mz7s" Nov 25 11:56:32 crc kubenswrapper[4706]: I1125 11:56:32.279817 4706 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-779dc76bb8-fwppw"] Nov 25 11:56:32 crc kubenswrapper[4706]: I1125 11:56:32.314417 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4b4x8\" (UniqueName: \"kubernetes.io/projected/6d2de783-5f62-4740-87d8-cef1b4941953-kube-api-access-4b4x8\") pod \"neutron-779dc76bb8-fwppw\" (UID: \"6d2de783-5f62-4740-87d8-cef1b4941953\") " pod="openstack/neutron-779dc76bb8-fwppw" Nov 25 11:56:32 crc kubenswrapper[4706]: I1125 11:56:32.314477 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/6d2de783-5f62-4740-87d8-cef1b4941953-ovndb-tls-certs\") pod \"neutron-779dc76bb8-fwppw\" (UID: \"6d2de783-5f62-4740-87d8-cef1b4941953\") " pod="openstack/neutron-779dc76bb8-fwppw" Nov 25 11:56:32 crc kubenswrapper[4706]: 
I1125 11:56:32.314552 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6d2de783-5f62-4740-87d8-cef1b4941953-combined-ca-bundle\") pod \"neutron-779dc76bb8-fwppw\" (UID: \"6d2de783-5f62-4740-87d8-cef1b4941953\") " pod="openstack/neutron-779dc76bb8-fwppw" Nov 25 11:56:32 crc kubenswrapper[4706]: I1125 11:56:32.314611 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/6d2de783-5f62-4740-87d8-cef1b4941953-httpd-config\") pod \"neutron-779dc76bb8-fwppw\" (UID: \"6d2de783-5f62-4740-87d8-cef1b4941953\") " pod="openstack/neutron-779dc76bb8-fwppw" Nov 25 11:56:32 crc kubenswrapper[4706]: I1125 11:56:32.314650 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/6d2de783-5f62-4740-87d8-cef1b4941953-config\") pod \"neutron-779dc76bb8-fwppw\" (UID: \"6d2de783-5f62-4740-87d8-cef1b4941953\") " pod="openstack/neutron-779dc76bb8-fwppw" Nov 25 11:56:32 crc kubenswrapper[4706]: I1125 11:56:32.324509 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xs8gg\" (UniqueName: \"kubernetes.io/projected/2228dc73-369b-4b00-987a-955d0d1ea8c8-kube-api-access-xs8gg\") pod \"dnsmasq-dns-85ff748b95-9mz7s\" (UID: \"2228dc73-369b-4b00-987a-955d0d1ea8c8\") " pod="openstack/dnsmasq-dns-85ff748b95-9mz7s" Nov 25 11:56:32 crc kubenswrapper[4706]: I1125 11:56:32.382820 4706 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Nov 25 11:56:32 crc kubenswrapper[4706]: I1125 11:56:32.384285 4706 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Nov 25 11:56:32 crc kubenswrapper[4706]: I1125 11:56:32.399491 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-n4npr" Nov 25 11:56:32 crc kubenswrapper[4706]: I1125 11:56:32.399650 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Nov 25 11:56:32 crc kubenswrapper[4706]: I1125 11:56:32.399785 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Nov 25 11:56:32 crc kubenswrapper[4706]: I1125 11:56:32.406620 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Nov 25 11:56:32 crc kubenswrapper[4706]: I1125 11:56:32.424789 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4b4x8\" (UniqueName: \"kubernetes.io/projected/6d2de783-5f62-4740-87d8-cef1b4941953-kube-api-access-4b4x8\") pod \"neutron-779dc76bb8-fwppw\" (UID: \"6d2de783-5f62-4740-87d8-cef1b4941953\") " pod="openstack/neutron-779dc76bb8-fwppw" Nov 25 11:56:32 crc kubenswrapper[4706]: I1125 11:56:32.425241 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/6d2de783-5f62-4740-87d8-cef1b4941953-ovndb-tls-certs\") pod \"neutron-779dc76bb8-fwppw\" (UID: \"6d2de783-5f62-4740-87d8-cef1b4941953\") " pod="openstack/neutron-779dc76bb8-fwppw" Nov 25 11:56:32 crc kubenswrapper[4706]: I1125 11:56:32.425428 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6d2de783-5f62-4740-87d8-cef1b4941953-combined-ca-bundle\") pod \"neutron-779dc76bb8-fwppw\" (UID: \"6d2de783-5f62-4740-87d8-cef1b4941953\") " pod="openstack/neutron-779dc76bb8-fwppw" Nov 25 11:56:32 crc kubenswrapper[4706]: I1125 11:56:32.427436 4706 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/6d2de783-5f62-4740-87d8-cef1b4941953-httpd-config\") pod \"neutron-779dc76bb8-fwppw\" (UID: \"6d2de783-5f62-4740-87d8-cef1b4941953\") " pod="openstack/neutron-779dc76bb8-fwppw" Nov 25 11:56:32 crc kubenswrapper[4706]: I1125 11:56:32.427622 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/6d2de783-5f62-4740-87d8-cef1b4941953-config\") pod \"neutron-779dc76bb8-fwppw\" (UID: \"6d2de783-5f62-4740-87d8-cef1b4941953\") " pod="openstack/neutron-779dc76bb8-fwppw" Nov 25 11:56:32 crc kubenswrapper[4706]: I1125 11:56:32.449489 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/6d2de783-5f62-4740-87d8-cef1b4941953-ovndb-tls-certs\") pod \"neutron-779dc76bb8-fwppw\" (UID: \"6d2de783-5f62-4740-87d8-cef1b4941953\") " pod="openstack/neutron-779dc76bb8-fwppw" Nov 25 11:56:32 crc kubenswrapper[4706]: I1125 11:56:32.449964 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/6d2de783-5f62-4740-87d8-cef1b4941953-httpd-config\") pod \"neutron-779dc76bb8-fwppw\" (UID: \"6d2de783-5f62-4740-87d8-cef1b4941953\") " pod="openstack/neutron-779dc76bb8-fwppw" Nov 25 11:56:32 crc kubenswrapper[4706]: I1125 11:56:32.454646 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/6d2de783-5f62-4740-87d8-cef1b4941953-config\") pod \"neutron-779dc76bb8-fwppw\" (UID: \"6d2de783-5f62-4740-87d8-cef1b4941953\") " pod="openstack/neutron-779dc76bb8-fwppw" Nov 25 11:56:32 crc kubenswrapper[4706]: I1125 11:56:32.459008 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6d2de783-5f62-4740-87d8-cef1b4941953-combined-ca-bundle\") pod 
\"neutron-779dc76bb8-fwppw\" (UID: \"6d2de783-5f62-4740-87d8-cef1b4941953\") " pod="openstack/neutron-779dc76bb8-fwppw" Nov 25 11:56:32 crc kubenswrapper[4706]: I1125 11:56:32.462863 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4b4x8\" (UniqueName: \"kubernetes.io/projected/6d2de783-5f62-4740-87d8-cef1b4941953-kube-api-access-4b4x8\") pod \"neutron-779dc76bb8-fwppw\" (UID: \"6d2de783-5f62-4740-87d8-cef1b4941953\") " pod="openstack/neutron-779dc76bb8-fwppw" Nov 25 11:56:32 crc kubenswrapper[4706]: I1125 11:56:32.475167 4706 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Nov 25 11:56:32 crc kubenswrapper[4706]: I1125 11:56:32.488521 4706 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-779dc76bb8-fwppw" Nov 25 11:56:32 crc kubenswrapper[4706]: I1125 11:56:32.521105 4706 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-85ff748b95-9mz7s"] Nov 25 11:56:32 crc kubenswrapper[4706]: I1125 11:56:32.521850 4706 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-85ff748b95-9mz7s" Nov 25 11:56:32 crc kubenswrapper[4706]: I1125 11:56:32.532578 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/52550d3a-83c6-44fd-87bd-e14b2b6645d9-scripts\") pod \"cinder-scheduler-0\" (UID: \"52550d3a-83c6-44fd-87bd-e14b2b6645d9\") " pod="openstack/cinder-scheduler-0" Nov 25 11:56:32 crc kubenswrapper[4706]: I1125 11:56:32.532675 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/52550d3a-83c6-44fd-87bd-e14b2b6645d9-config-data\") pod \"cinder-scheduler-0\" (UID: \"52550d3a-83c6-44fd-87bd-e14b2b6645d9\") " pod="openstack/cinder-scheduler-0" Nov 25 11:56:32 crc kubenswrapper[4706]: I1125 11:56:32.533227 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/52550d3a-83c6-44fd-87bd-e14b2b6645d9-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"52550d3a-83c6-44fd-87bd-e14b2b6645d9\") " pod="openstack/cinder-scheduler-0" Nov 25 11:56:32 crc kubenswrapper[4706]: I1125 11:56:32.533257 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zv6rp\" (UniqueName: \"kubernetes.io/projected/52550d3a-83c6-44fd-87bd-e14b2b6645d9-kube-api-access-zv6rp\") pod \"cinder-scheduler-0\" (UID: \"52550d3a-83c6-44fd-87bd-e14b2b6645d9\") " pod="openstack/cinder-scheduler-0" Nov 25 11:56:32 crc kubenswrapper[4706]: I1125 11:56:32.533360 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/52550d3a-83c6-44fd-87bd-e14b2b6645d9-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"52550d3a-83c6-44fd-87bd-e14b2b6645d9\") " 
pod="openstack/cinder-scheduler-0" Nov 25 11:56:32 crc kubenswrapper[4706]: I1125 11:56:32.533387 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/52550d3a-83c6-44fd-87bd-e14b2b6645d9-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"52550d3a-83c6-44fd-87bd-e14b2b6645d9\") " pod="openstack/cinder-scheduler-0" Nov 25 11:56:32 crc kubenswrapper[4706]: I1125 11:56:32.541360 4706 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5c9776ccc5-kv96j"] Nov 25 11:56:32 crc kubenswrapper[4706]: I1125 11:56:32.542955 4706 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5c9776ccc5-kv96j" Nov 25 11:56:32 crc kubenswrapper[4706]: I1125 11:56:32.570506 4706 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5c9776ccc5-kv96j"] Nov 25 11:56:32 crc kubenswrapper[4706]: I1125 11:56:32.605546 4706 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-85c7db76fd-f64jq"] Nov 25 11:56:32 crc kubenswrapper[4706]: I1125 11:56:32.610775 4706 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Nov 25 11:56:32 crc kubenswrapper[4706]: I1125 11:56:32.612668 4706 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-85c7db76fd-f64jq" Nov 25 11:56:32 crc kubenswrapper[4706]: I1125 11:56:32.613920 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-public-svc" Nov 25 11:56:32 crc kubenswrapper[4706]: I1125 11:56:32.616423 4706 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Nov 25 11:56:32 crc kubenswrapper[4706]: I1125 11:56:32.617546 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-internal-svc" Nov 25 11:56:32 crc kubenswrapper[4706]: I1125 11:56:32.618124 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Nov 25 11:56:32 crc kubenswrapper[4706]: I1125 11:56:32.632854 4706 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-85c7db76fd-f64jq"] Nov 25 11:56:32 crc kubenswrapper[4706]: I1125 11:56:32.635193 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/52550d3a-83c6-44fd-87bd-e14b2b6645d9-config-data\") pod \"cinder-scheduler-0\" (UID: \"52550d3a-83c6-44fd-87bd-e14b2b6645d9\") " pod="openstack/cinder-scheduler-0" Nov 25 11:56:32 crc kubenswrapper[4706]: I1125 11:56:32.635233 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/52550d3a-83c6-44fd-87bd-e14b2b6645d9-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"52550d3a-83c6-44fd-87bd-e14b2b6645d9\") " pod="openstack/cinder-scheduler-0" Nov 25 11:56:32 crc kubenswrapper[4706]: I1125 11:56:32.635252 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zv6rp\" (UniqueName: \"kubernetes.io/projected/52550d3a-83c6-44fd-87bd-e14b2b6645d9-kube-api-access-zv6rp\") pod \"cinder-scheduler-0\" (UID: \"52550d3a-83c6-44fd-87bd-e14b2b6645d9\") " pod="openstack/cinder-scheduler-0" Nov 25 11:56:32 crc kubenswrapper[4706]: I1125 11:56:32.635279 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/60e3d8af-641e-4c2c-b105-3d1b4b98904f-combined-ca-bundle\") pod \"cinder-api-0\" (UID: 
\"60e3d8af-641e-4c2c-b105-3d1b4b98904f\") " pod="openstack/cinder-api-0" Nov 25 11:56:32 crc kubenswrapper[4706]: I1125 11:56:32.635320 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9d560e53-d5ef-4b6b-af31-d1b5856dbf47-ovsdbserver-nb\") pod \"dnsmasq-dns-5c9776ccc5-kv96j\" (UID: \"9d560e53-d5ef-4b6b-af31-d1b5856dbf47\") " pod="openstack/dnsmasq-dns-5c9776ccc5-kv96j" Nov 25 11:56:32 crc kubenswrapper[4706]: I1125 11:56:32.635337 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9d560e53-d5ef-4b6b-af31-d1b5856dbf47-dns-svc\") pod \"dnsmasq-dns-5c9776ccc5-kv96j\" (UID: \"9d560e53-d5ef-4b6b-af31-d1b5856dbf47\") " pod="openstack/dnsmasq-dns-5c9776ccc5-kv96j" Nov 25 11:56:32 crc kubenswrapper[4706]: I1125 11:56:32.635354 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/500c37cc-45dd-444f-a630-19356ac8d1e3-internal-tls-certs\") pod \"barbican-api-85c7db76fd-f64jq\" (UID: \"500c37cc-45dd-444f-a630-19356ac8d1e3\") " pod="openstack/barbican-api-85c7db76fd-f64jq" Nov 25 11:56:32 crc kubenswrapper[4706]: I1125 11:56:32.635369 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/60e3d8af-641e-4c2c-b105-3d1b4b98904f-logs\") pod \"cinder-api-0\" (UID: \"60e3d8af-641e-4c2c-b105-3d1b4b98904f\") " pod="openstack/cinder-api-0" Nov 25 11:56:32 crc kubenswrapper[4706]: I1125 11:56:32.635393 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/52550d3a-83c6-44fd-87bd-e14b2b6645d9-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"52550d3a-83c6-44fd-87bd-e14b2b6645d9\") " 
pod="openstack/cinder-scheduler-0" Nov 25 11:56:32 crc kubenswrapper[4706]: I1125 11:56:32.635411 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/60e3d8af-641e-4c2c-b105-3d1b4b98904f-scripts\") pod \"cinder-api-0\" (UID: \"60e3d8af-641e-4c2c-b105-3d1b4b98904f\") " pod="openstack/cinder-api-0" Nov 25 11:56:32 crc kubenswrapper[4706]: I1125 11:56:32.635428 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/52550d3a-83c6-44fd-87bd-e14b2b6645d9-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"52550d3a-83c6-44fd-87bd-e14b2b6645d9\") " pod="openstack/cinder-scheduler-0" Nov 25 11:56:32 crc kubenswrapper[4706]: I1125 11:56:32.635452 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/500c37cc-45dd-444f-a630-19356ac8d1e3-logs\") pod \"barbican-api-85c7db76fd-f64jq\" (UID: \"500c37cc-45dd-444f-a630-19356ac8d1e3\") " pod="openstack/barbican-api-85c7db76fd-f64jq" Nov 25 11:56:32 crc kubenswrapper[4706]: I1125 11:56:32.635503 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/60e3d8af-641e-4c2c-b105-3d1b4b98904f-etc-machine-id\") pod \"cinder-api-0\" (UID: \"60e3d8af-641e-4c2c-b105-3d1b4b98904f\") " pod="openstack/cinder-api-0" Nov 25 11:56:32 crc kubenswrapper[4706]: I1125 11:56:32.635524 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/500c37cc-45dd-444f-a630-19356ac8d1e3-config-data-custom\") pod \"barbican-api-85c7db76fd-f64jq\" (UID: \"500c37cc-45dd-444f-a630-19356ac8d1e3\") " pod="openstack/barbican-api-85c7db76fd-f64jq" Nov 25 11:56:32 crc kubenswrapper[4706]: I1125 
11:56:32.635539 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/500c37cc-45dd-444f-a630-19356ac8d1e3-public-tls-certs\") pod \"barbican-api-85c7db76fd-f64jq\" (UID: \"500c37cc-45dd-444f-a630-19356ac8d1e3\") " pod="openstack/barbican-api-85c7db76fd-f64jq" Nov 25 11:56:32 crc kubenswrapper[4706]: I1125 11:56:32.635556 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9d560e53-d5ef-4b6b-af31-d1b5856dbf47-ovsdbserver-sb\") pod \"dnsmasq-dns-5c9776ccc5-kv96j\" (UID: \"9d560e53-d5ef-4b6b-af31-d1b5856dbf47\") " pod="openstack/dnsmasq-dns-5c9776ccc5-kv96j" Nov 25 11:56:32 crc kubenswrapper[4706]: I1125 11:56:32.635572 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s57fz\" (UniqueName: \"kubernetes.io/projected/500c37cc-45dd-444f-a630-19356ac8d1e3-kube-api-access-s57fz\") pod \"barbican-api-85c7db76fd-f64jq\" (UID: \"500c37cc-45dd-444f-a630-19356ac8d1e3\") " pod="openstack/barbican-api-85c7db76fd-f64jq" Nov 25 11:56:32 crc kubenswrapper[4706]: I1125 11:56:32.635587 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hn8xf\" (UniqueName: \"kubernetes.io/projected/9d560e53-d5ef-4b6b-af31-d1b5856dbf47-kube-api-access-hn8xf\") pod \"dnsmasq-dns-5c9776ccc5-kv96j\" (UID: \"9d560e53-d5ef-4b6b-af31-d1b5856dbf47\") " pod="openstack/dnsmasq-dns-5c9776ccc5-kv96j" Nov 25 11:56:32 crc kubenswrapper[4706]: I1125 11:56:32.635605 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/9d560e53-d5ef-4b6b-af31-d1b5856dbf47-dns-swift-storage-0\") pod \"dnsmasq-dns-5c9776ccc5-kv96j\" (UID: \"9d560e53-d5ef-4b6b-af31-d1b5856dbf47\") " 
pod="openstack/dnsmasq-dns-5c9776ccc5-kv96j" Nov 25 11:56:32 crc kubenswrapper[4706]: I1125 11:56:32.635627 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/52550d3a-83c6-44fd-87bd-e14b2b6645d9-scripts\") pod \"cinder-scheduler-0\" (UID: \"52550d3a-83c6-44fd-87bd-e14b2b6645d9\") " pod="openstack/cinder-scheduler-0" Nov 25 11:56:32 crc kubenswrapper[4706]: I1125 11:56:32.635650 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/500c37cc-45dd-444f-a630-19356ac8d1e3-combined-ca-bundle\") pod \"barbican-api-85c7db76fd-f64jq\" (UID: \"500c37cc-45dd-444f-a630-19356ac8d1e3\") " pod="openstack/barbican-api-85c7db76fd-f64jq" Nov 25 11:56:32 crc kubenswrapper[4706]: I1125 11:56:32.635672 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/60e3d8af-641e-4c2c-b105-3d1b4b98904f-config-data-custom\") pod \"cinder-api-0\" (UID: \"60e3d8af-641e-4c2c-b105-3d1b4b98904f\") " pod="openstack/cinder-api-0" Nov 25 11:56:32 crc kubenswrapper[4706]: I1125 11:56:32.635692 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d560e53-d5ef-4b6b-af31-d1b5856dbf47-config\") pod \"dnsmasq-dns-5c9776ccc5-kv96j\" (UID: \"9d560e53-d5ef-4b6b-af31-d1b5856dbf47\") " pod="openstack/dnsmasq-dns-5c9776ccc5-kv96j" Nov 25 11:56:32 crc kubenswrapper[4706]: I1125 11:56:32.635710 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/500c37cc-45dd-444f-a630-19356ac8d1e3-config-data\") pod \"barbican-api-85c7db76fd-f64jq\" (UID: \"500c37cc-45dd-444f-a630-19356ac8d1e3\") " pod="openstack/barbican-api-85c7db76fd-f64jq" Nov 25 
11:56:32 crc kubenswrapper[4706]: I1125 11:56:32.635727 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/60e3d8af-641e-4c2c-b105-3d1b4b98904f-config-data\") pod \"cinder-api-0\" (UID: \"60e3d8af-641e-4c2c-b105-3d1b4b98904f\") " pod="openstack/cinder-api-0" Nov 25 11:56:32 crc kubenswrapper[4706]: I1125 11:56:32.635742 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q9h4r\" (UniqueName: \"kubernetes.io/projected/60e3d8af-641e-4c2c-b105-3d1b4b98904f-kube-api-access-q9h4r\") pod \"cinder-api-0\" (UID: \"60e3d8af-641e-4c2c-b105-3d1b4b98904f\") " pod="openstack/cinder-api-0" Nov 25 11:56:32 crc kubenswrapper[4706]: I1125 11:56:32.636949 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/52550d3a-83c6-44fd-87bd-e14b2b6645d9-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"52550d3a-83c6-44fd-87bd-e14b2b6645d9\") " pod="openstack/cinder-scheduler-0" Nov 25 11:56:32 crc kubenswrapper[4706]: I1125 11:56:32.660617 4706 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Nov 25 11:56:32 crc kubenswrapper[4706]: I1125 11:56:32.737220 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/500c37cc-45dd-444f-a630-19356ac8d1e3-logs\") pod \"barbican-api-85c7db76fd-f64jq\" (UID: \"500c37cc-45dd-444f-a630-19356ac8d1e3\") " pod="openstack/barbican-api-85c7db76fd-f64jq" Nov 25 11:56:32 crc kubenswrapper[4706]: I1125 11:56:32.737286 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/60e3d8af-641e-4c2c-b105-3d1b4b98904f-etc-machine-id\") pod \"cinder-api-0\" (UID: \"60e3d8af-641e-4c2c-b105-3d1b4b98904f\") " pod="openstack/cinder-api-0" Nov 25 
11:56:32 crc kubenswrapper[4706]: I1125 11:56:32.737328 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/500c37cc-45dd-444f-a630-19356ac8d1e3-config-data-custom\") pod \"barbican-api-85c7db76fd-f64jq\" (UID: \"500c37cc-45dd-444f-a630-19356ac8d1e3\") " pod="openstack/barbican-api-85c7db76fd-f64jq" Nov 25 11:56:32 crc kubenswrapper[4706]: I1125 11:56:32.737349 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/500c37cc-45dd-444f-a630-19356ac8d1e3-public-tls-certs\") pod \"barbican-api-85c7db76fd-f64jq\" (UID: \"500c37cc-45dd-444f-a630-19356ac8d1e3\") " pod="openstack/barbican-api-85c7db76fd-f64jq" Nov 25 11:56:32 crc kubenswrapper[4706]: I1125 11:56:32.737390 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s57fz\" (UniqueName: \"kubernetes.io/projected/500c37cc-45dd-444f-a630-19356ac8d1e3-kube-api-access-s57fz\") pod \"barbican-api-85c7db76fd-f64jq\" (UID: \"500c37cc-45dd-444f-a630-19356ac8d1e3\") " pod="openstack/barbican-api-85c7db76fd-f64jq" Nov 25 11:56:32 crc kubenswrapper[4706]: I1125 11:56:32.737407 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9d560e53-d5ef-4b6b-af31-d1b5856dbf47-ovsdbserver-sb\") pod \"dnsmasq-dns-5c9776ccc5-kv96j\" (UID: \"9d560e53-d5ef-4b6b-af31-d1b5856dbf47\") " pod="openstack/dnsmasq-dns-5c9776ccc5-kv96j" Nov 25 11:56:32 crc kubenswrapper[4706]: I1125 11:56:32.737424 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hn8xf\" (UniqueName: \"kubernetes.io/projected/9d560e53-d5ef-4b6b-af31-d1b5856dbf47-kube-api-access-hn8xf\") pod \"dnsmasq-dns-5c9776ccc5-kv96j\" (UID: \"9d560e53-d5ef-4b6b-af31-d1b5856dbf47\") " pod="openstack/dnsmasq-dns-5c9776ccc5-kv96j" Nov 25 11:56:32 crc 
kubenswrapper[4706]: I1125 11:56:32.737440 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/9d560e53-d5ef-4b6b-af31-d1b5856dbf47-dns-swift-storage-0\") pod \"dnsmasq-dns-5c9776ccc5-kv96j\" (UID: \"9d560e53-d5ef-4b6b-af31-d1b5856dbf47\") " pod="openstack/dnsmasq-dns-5c9776ccc5-kv96j" Nov 25 11:56:32 crc kubenswrapper[4706]: I1125 11:56:32.737482 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/500c37cc-45dd-444f-a630-19356ac8d1e3-combined-ca-bundle\") pod \"barbican-api-85c7db76fd-f64jq\" (UID: \"500c37cc-45dd-444f-a630-19356ac8d1e3\") " pod="openstack/barbican-api-85c7db76fd-f64jq" Nov 25 11:56:32 crc kubenswrapper[4706]: I1125 11:56:32.737505 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/60e3d8af-641e-4c2c-b105-3d1b4b98904f-config-data-custom\") pod \"cinder-api-0\" (UID: \"60e3d8af-641e-4c2c-b105-3d1b4b98904f\") " pod="openstack/cinder-api-0" Nov 25 11:56:32 crc kubenswrapper[4706]: I1125 11:56:32.737530 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d560e53-d5ef-4b6b-af31-d1b5856dbf47-config\") pod \"dnsmasq-dns-5c9776ccc5-kv96j\" (UID: \"9d560e53-d5ef-4b6b-af31-d1b5856dbf47\") " pod="openstack/dnsmasq-dns-5c9776ccc5-kv96j" Nov 25 11:56:32 crc kubenswrapper[4706]: I1125 11:56:32.737555 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/500c37cc-45dd-444f-a630-19356ac8d1e3-config-data\") pod \"barbican-api-85c7db76fd-f64jq\" (UID: \"500c37cc-45dd-444f-a630-19356ac8d1e3\") " pod="openstack/barbican-api-85c7db76fd-f64jq" Nov 25 11:56:32 crc kubenswrapper[4706]: I1125 11:56:32.737576 4706 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/60e3d8af-641e-4c2c-b105-3d1b4b98904f-config-data\") pod \"cinder-api-0\" (UID: \"60e3d8af-641e-4c2c-b105-3d1b4b98904f\") " pod="openstack/cinder-api-0" Nov 25 11:56:32 crc kubenswrapper[4706]: I1125 11:56:32.737597 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q9h4r\" (UniqueName: \"kubernetes.io/projected/60e3d8af-641e-4c2c-b105-3d1b4b98904f-kube-api-access-q9h4r\") pod \"cinder-api-0\" (UID: \"60e3d8af-641e-4c2c-b105-3d1b4b98904f\") " pod="openstack/cinder-api-0" Nov 25 11:56:32 crc kubenswrapper[4706]: I1125 11:56:32.737665 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/60e3d8af-641e-4c2c-b105-3d1b4b98904f-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"60e3d8af-641e-4c2c-b105-3d1b4b98904f\") " pod="openstack/cinder-api-0" Nov 25 11:56:32 crc kubenswrapper[4706]: I1125 11:56:32.737692 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9d560e53-d5ef-4b6b-af31-d1b5856dbf47-ovsdbserver-nb\") pod \"dnsmasq-dns-5c9776ccc5-kv96j\" (UID: \"9d560e53-d5ef-4b6b-af31-d1b5856dbf47\") " pod="openstack/dnsmasq-dns-5c9776ccc5-kv96j" Nov 25 11:56:32 crc kubenswrapper[4706]: I1125 11:56:32.737709 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9d560e53-d5ef-4b6b-af31-d1b5856dbf47-dns-svc\") pod \"dnsmasq-dns-5c9776ccc5-kv96j\" (UID: \"9d560e53-d5ef-4b6b-af31-d1b5856dbf47\") " pod="openstack/dnsmasq-dns-5c9776ccc5-kv96j" Nov 25 11:56:32 crc kubenswrapper[4706]: I1125 11:56:32.737725 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/60e3d8af-641e-4c2c-b105-3d1b4b98904f-logs\") pod 
\"cinder-api-0\" (UID: \"60e3d8af-641e-4c2c-b105-3d1b4b98904f\") " pod="openstack/cinder-api-0" Nov 25 11:56:32 crc kubenswrapper[4706]: I1125 11:56:32.737740 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/500c37cc-45dd-444f-a630-19356ac8d1e3-internal-tls-certs\") pod \"barbican-api-85c7db76fd-f64jq\" (UID: \"500c37cc-45dd-444f-a630-19356ac8d1e3\") " pod="openstack/barbican-api-85c7db76fd-f64jq" Nov 25 11:56:32 crc kubenswrapper[4706]: I1125 11:56:32.737766 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/60e3d8af-641e-4c2c-b105-3d1b4b98904f-scripts\") pod \"cinder-api-0\" (UID: \"60e3d8af-641e-4c2c-b105-3d1b4b98904f\") " pod="openstack/cinder-api-0" Nov 25 11:56:32 crc kubenswrapper[4706]: I1125 11:56:32.738361 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/60e3d8af-641e-4c2c-b105-3d1b4b98904f-etc-machine-id\") pod \"cinder-api-0\" (UID: \"60e3d8af-641e-4c2c-b105-3d1b4b98904f\") " pod="openstack/cinder-api-0" Nov 25 11:56:32 crc kubenswrapper[4706]: I1125 11:56:32.743406 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/9d560e53-d5ef-4b6b-af31-d1b5856dbf47-dns-swift-storage-0\") pod \"dnsmasq-dns-5c9776ccc5-kv96j\" (UID: \"9d560e53-d5ef-4b6b-af31-d1b5856dbf47\") " pod="openstack/dnsmasq-dns-5c9776ccc5-kv96j" Nov 25 11:56:32 crc kubenswrapper[4706]: I1125 11:56:32.743901 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d560e53-d5ef-4b6b-af31-d1b5856dbf47-config\") pod \"dnsmasq-dns-5c9776ccc5-kv96j\" (UID: \"9d560e53-d5ef-4b6b-af31-d1b5856dbf47\") " pod="openstack/dnsmasq-dns-5c9776ccc5-kv96j" Nov 25 11:56:32 crc kubenswrapper[4706]: I1125 11:56:32.745762 
4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9d560e53-d5ef-4b6b-af31-d1b5856dbf47-ovsdbserver-nb\") pod \"dnsmasq-dns-5c9776ccc5-kv96j\" (UID: \"9d560e53-d5ef-4b6b-af31-d1b5856dbf47\") " pod="openstack/dnsmasq-dns-5c9776ccc5-kv96j" Nov 25 11:56:32 crc kubenswrapper[4706]: I1125 11:56:32.746294 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9d560e53-d5ef-4b6b-af31-d1b5856dbf47-dns-svc\") pod \"dnsmasq-dns-5c9776ccc5-kv96j\" (UID: \"9d560e53-d5ef-4b6b-af31-d1b5856dbf47\") " pod="openstack/dnsmasq-dns-5c9776ccc5-kv96j" Nov 25 11:56:32 crc kubenswrapper[4706]: I1125 11:56:32.752981 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zv6rp\" (UniqueName: \"kubernetes.io/projected/52550d3a-83c6-44fd-87bd-e14b2b6645d9-kube-api-access-zv6rp\") pod \"cinder-scheduler-0\" (UID: \"52550d3a-83c6-44fd-87bd-e14b2b6645d9\") " pod="openstack/cinder-scheduler-0" Nov 25 11:56:32 crc kubenswrapper[4706]: I1125 11:56:32.753796 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/52550d3a-83c6-44fd-87bd-e14b2b6645d9-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"52550d3a-83c6-44fd-87bd-e14b2b6645d9\") " pod="openstack/cinder-scheduler-0" Nov 25 11:56:32 crc kubenswrapper[4706]: I1125 11:56:32.754335 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/52550d3a-83c6-44fd-87bd-e14b2b6645d9-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"52550d3a-83c6-44fd-87bd-e14b2b6645d9\") " pod="openstack/cinder-scheduler-0" Nov 25 11:56:32 crc kubenswrapper[4706]: I1125 11:56:32.754333 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: 
\"kubernetes.io/configmap/9d560e53-d5ef-4b6b-af31-d1b5856dbf47-ovsdbserver-sb\") pod \"dnsmasq-dns-5c9776ccc5-kv96j\" (UID: \"9d560e53-d5ef-4b6b-af31-d1b5856dbf47\") " pod="openstack/dnsmasq-dns-5c9776ccc5-kv96j" Nov 25 11:56:32 crc kubenswrapper[4706]: I1125 11:56:32.755271 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/500c37cc-45dd-444f-a630-19356ac8d1e3-logs\") pod \"barbican-api-85c7db76fd-f64jq\" (UID: \"500c37cc-45dd-444f-a630-19356ac8d1e3\") " pod="openstack/barbican-api-85c7db76fd-f64jq" Nov 25 11:56:32 crc kubenswrapper[4706]: I1125 11:56:32.756929 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/52550d3a-83c6-44fd-87bd-e14b2b6645d9-config-data\") pod \"cinder-scheduler-0\" (UID: \"52550d3a-83c6-44fd-87bd-e14b2b6645d9\") " pod="openstack/cinder-scheduler-0" Nov 25 11:56:32 crc kubenswrapper[4706]: I1125 11:56:32.757777 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hn8xf\" (UniqueName: \"kubernetes.io/projected/9d560e53-d5ef-4b6b-af31-d1b5856dbf47-kube-api-access-hn8xf\") pod \"dnsmasq-dns-5c9776ccc5-kv96j\" (UID: \"9d560e53-d5ef-4b6b-af31-d1b5856dbf47\") " pod="openstack/dnsmasq-dns-5c9776ccc5-kv96j" Nov 25 11:56:32 crc kubenswrapper[4706]: I1125 11:56:32.760709 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/52550d3a-83c6-44fd-87bd-e14b2b6645d9-scripts\") pod \"cinder-scheduler-0\" (UID: \"52550d3a-83c6-44fd-87bd-e14b2b6645d9\") " pod="openstack/cinder-scheduler-0" Nov 25 11:56:32 crc kubenswrapper[4706]: I1125 11:56:32.762851 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/500c37cc-45dd-444f-a630-19356ac8d1e3-config-data-custom\") pod \"barbican-api-85c7db76fd-f64jq\" (UID: 
\"500c37cc-45dd-444f-a630-19356ac8d1e3\") " pod="openstack/barbican-api-85c7db76fd-f64jq" Nov 25 11:56:32 crc kubenswrapper[4706]: I1125 11:56:32.764869 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q9h4r\" (UniqueName: \"kubernetes.io/projected/60e3d8af-641e-4c2c-b105-3d1b4b98904f-kube-api-access-q9h4r\") pod \"cinder-api-0\" (UID: \"60e3d8af-641e-4c2c-b105-3d1b4b98904f\") " pod="openstack/cinder-api-0" Nov 25 11:56:32 crc kubenswrapper[4706]: I1125 11:56:32.769919 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s57fz\" (UniqueName: \"kubernetes.io/projected/500c37cc-45dd-444f-a630-19356ac8d1e3-kube-api-access-s57fz\") pod \"barbican-api-85c7db76fd-f64jq\" (UID: \"500c37cc-45dd-444f-a630-19356ac8d1e3\") " pod="openstack/barbican-api-85c7db76fd-f64jq" Nov 25 11:56:32 crc kubenswrapper[4706]: I1125 11:56:32.775641 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/60e3d8af-641e-4c2c-b105-3d1b4b98904f-logs\") pod \"cinder-api-0\" (UID: \"60e3d8af-641e-4c2c-b105-3d1b4b98904f\") " pod="openstack/cinder-api-0" Nov 25 11:56:32 crc kubenswrapper[4706]: I1125 11:56:32.778401 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/500c37cc-45dd-444f-a630-19356ac8d1e3-config-data\") pod \"barbican-api-85c7db76fd-f64jq\" (UID: \"500c37cc-45dd-444f-a630-19356ac8d1e3\") " pod="openstack/barbican-api-85c7db76fd-f64jq" Nov 25 11:56:32 crc kubenswrapper[4706]: I1125 11:56:32.779916 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/500c37cc-45dd-444f-a630-19356ac8d1e3-public-tls-certs\") pod \"barbican-api-85c7db76fd-f64jq\" (UID: \"500c37cc-45dd-444f-a630-19356ac8d1e3\") " pod="openstack/barbican-api-85c7db76fd-f64jq" Nov 25 11:56:32 crc kubenswrapper[4706]: I1125 11:56:32.782659 
4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/60e3d8af-641e-4c2c-b105-3d1b4b98904f-scripts\") pod \"cinder-api-0\" (UID: \"60e3d8af-641e-4c2c-b105-3d1b4b98904f\") " pod="openstack/cinder-api-0" Nov 25 11:56:32 crc kubenswrapper[4706]: I1125 11:56:32.783334 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/500c37cc-45dd-444f-a630-19356ac8d1e3-combined-ca-bundle\") pod \"barbican-api-85c7db76fd-f64jq\" (UID: \"500c37cc-45dd-444f-a630-19356ac8d1e3\") " pod="openstack/barbican-api-85c7db76fd-f64jq" Nov 25 11:56:32 crc kubenswrapper[4706]: I1125 11:56:32.784016 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/60e3d8af-641e-4c2c-b105-3d1b4b98904f-config-data-custom\") pod \"cinder-api-0\" (UID: \"60e3d8af-641e-4c2c-b105-3d1b4b98904f\") " pod="openstack/cinder-api-0" Nov 25 11:56:32 crc kubenswrapper[4706]: I1125 11:56:32.803079 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/500c37cc-45dd-444f-a630-19356ac8d1e3-internal-tls-certs\") pod \"barbican-api-85c7db76fd-f64jq\" (UID: \"500c37cc-45dd-444f-a630-19356ac8d1e3\") " pod="openstack/barbican-api-85c7db76fd-f64jq" Nov 25 11:56:32 crc kubenswrapper[4706]: I1125 11:56:32.803189 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/60e3d8af-641e-4c2c-b105-3d1b4b98904f-config-data\") pod \"cinder-api-0\" (UID: \"60e3d8af-641e-4c2c-b105-3d1b4b98904f\") " pod="openstack/cinder-api-0" Nov 25 11:56:32 crc kubenswrapper[4706]: I1125 11:56:32.803725 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/60e3d8af-641e-4c2c-b105-3d1b4b98904f-combined-ca-bundle\") pod 
\"cinder-api-0\" (UID: \"60e3d8af-641e-4c2c-b105-3d1b4b98904f\") " pod="openstack/cinder-api-0" Nov 25 11:56:32 crc kubenswrapper[4706]: I1125 11:56:32.833516 4706 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Nov 25 11:56:33 crc kubenswrapper[4706]: I1125 11:56:33.043811 4706 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5c9776ccc5-kv96j" Nov 25 11:56:33 crc kubenswrapper[4706]: I1125 11:56:33.049732 4706 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-85c7db76fd-f64jq" Nov 25 11:56:33 crc kubenswrapper[4706]: I1125 11:56:33.056787 4706 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Nov 25 11:56:33 crc kubenswrapper[4706]: I1125 11:56:33.153970 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-69546b67d6-65q22" event={"ID":"4fdb06a5-d894-4b1a-ae3c-34c092b4172f","Type":"ContainerStarted","Data":"1b8345c5537388476a73513d1ba19833895f18c5c970fba92ca16f8e77697522"} Nov 25 11:56:33 crc kubenswrapper[4706]: I1125 11:56:33.163536 4706 generic.go:334] "Generic (PLEG): container finished" podID="78bb410c-2722-4620-ad4a-1a9d189d8c92" containerID="df15a61b18c182628568be18e562837544b3b13b406ad2936113a9d40d73e498" exitCode=0 Nov 25 11:56:33 crc kubenswrapper[4706]: I1125 11:56:33.163616 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-586bdc5f9-d2sx2" event={"ID":"78bb410c-2722-4620-ad4a-1a9d189d8c92","Type":"ContainerDied","Data":"df15a61b18c182628568be18e562837544b3b13b406ad2936113a9d40d73e498"} Nov 25 11:56:33 crc kubenswrapper[4706]: I1125 11:56:33.166340 4706 generic.go:334] "Generic (PLEG): container finished" podID="db4e7aed-28ec-49cd-8f0b-e01df112bf54" containerID="dbc374ca3fd943ed0b6a3b06b2a79b442e6b233bc5b02afea8999bcc193ee4f2" exitCode=0 Nov 25 11:56:33 crc kubenswrapper[4706]: I1125 
11:56:33.166364 4706 generic.go:334] "Generic (PLEG): container finished" podID="db4e7aed-28ec-49cd-8f0b-e01df112bf54" containerID="1b0f99a9c2d7134db91d0dc3c0f7d3e579a75185b06822b489e2cf538487e522" exitCode=2 Nov 25 11:56:33 crc kubenswrapper[4706]: I1125 11:56:33.166407 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"db4e7aed-28ec-49cd-8f0b-e01df112bf54","Type":"ContainerDied","Data":"dbc374ca3fd943ed0b6a3b06b2a79b442e6b233bc5b02afea8999bcc193ee4f2"} Nov 25 11:56:33 crc kubenswrapper[4706]: I1125 11:56:33.166433 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"db4e7aed-28ec-49cd-8f0b-e01df112bf54","Type":"ContainerDied","Data":"1b0f99a9c2d7134db91d0dc3c0f7d3e579a75185b06822b489e2cf538487e522"} Nov 25 11:56:33 crc kubenswrapper[4706]: I1125 11:56:33.509387 4706 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-779dc76bb8-fwppw"] Nov 25 11:56:33 crc kubenswrapper[4706]: I1125 11:56:33.536207 4706 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-85ff748b95-9mz7s"] Nov 25 11:56:33 crc kubenswrapper[4706]: I1125 11:56:33.682486 4706 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Nov 25 11:56:33 crc kubenswrapper[4706]: I1125 11:56:33.881769 4706 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-586bdc5f9-d2sx2" Nov 25 11:56:33 crc kubenswrapper[4706]: I1125 11:56:33.943840 4706 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-85c7db76fd-f64jq"] Nov 25 11:56:33 crc kubenswrapper[4706]: I1125 11:56:33.962759 4706 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Nov 25 11:56:33 crc kubenswrapper[4706]: I1125 11:56:33.982636 4706 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5c9776ccc5-kv96j"] Nov 25 11:56:33 crc kubenswrapper[4706]: I1125 11:56:33.988700 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/78bb410c-2722-4620-ad4a-1a9d189d8c92-dns-swift-storage-0\") pod \"78bb410c-2722-4620-ad4a-1a9d189d8c92\" (UID: \"78bb410c-2722-4620-ad4a-1a9d189d8c92\") " Nov 25 11:56:33 crc kubenswrapper[4706]: I1125 11:56:33.988854 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/78bb410c-2722-4620-ad4a-1a9d189d8c92-dns-svc\") pod \"78bb410c-2722-4620-ad4a-1a9d189d8c92\" (UID: \"78bb410c-2722-4620-ad4a-1a9d189d8c92\") " Nov 25 11:56:33 crc kubenswrapper[4706]: I1125 11:56:33.988926 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-44r97\" (UniqueName: \"kubernetes.io/projected/78bb410c-2722-4620-ad4a-1a9d189d8c92-kube-api-access-44r97\") pod \"78bb410c-2722-4620-ad4a-1a9d189d8c92\" (UID: \"78bb410c-2722-4620-ad4a-1a9d189d8c92\") " Nov 25 11:56:33 crc kubenswrapper[4706]: I1125 11:56:33.988957 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/78bb410c-2722-4620-ad4a-1a9d189d8c92-ovsdbserver-sb\") pod \"78bb410c-2722-4620-ad4a-1a9d189d8c92\" (UID: \"78bb410c-2722-4620-ad4a-1a9d189d8c92\") " Nov 25 11:56:33 crc 
kubenswrapper[4706]: I1125 11:56:33.988974 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/78bb410c-2722-4620-ad4a-1a9d189d8c92-ovsdbserver-nb\") pod \"78bb410c-2722-4620-ad4a-1a9d189d8c92\" (UID: \"78bb410c-2722-4620-ad4a-1a9d189d8c92\") " Nov 25 11:56:33 crc kubenswrapper[4706]: I1125 11:56:33.989033 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/78bb410c-2722-4620-ad4a-1a9d189d8c92-config\") pod \"78bb410c-2722-4620-ad4a-1a9d189d8c92\" (UID: \"78bb410c-2722-4620-ad4a-1a9d189d8c92\") " Nov 25 11:56:34 crc kubenswrapper[4706]: I1125 11:56:34.003607 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/78bb410c-2722-4620-ad4a-1a9d189d8c92-kube-api-access-44r97" (OuterVolumeSpecName: "kube-api-access-44r97") pod "78bb410c-2722-4620-ad4a-1a9d189d8c92" (UID: "78bb410c-2722-4620-ad4a-1a9d189d8c92"). InnerVolumeSpecName "kube-api-access-44r97". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 11:56:34 crc kubenswrapper[4706]: I1125 11:56:34.091603 4706 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-44r97\" (UniqueName: \"kubernetes.io/projected/78bb410c-2722-4620-ad4a-1a9d189d8c92-kube-api-access-44r97\") on node \"crc\" DevicePath \"\"" Nov 25 11:56:34 crc kubenswrapper[4706]: I1125 11:56:34.095386 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/78bb410c-2722-4620-ad4a-1a9d189d8c92-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "78bb410c-2722-4620-ad4a-1a9d189d8c92" (UID: "78bb410c-2722-4620-ad4a-1a9d189d8c92"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 11:56:34 crc kubenswrapper[4706]: I1125 11:56:34.108427 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/78bb410c-2722-4620-ad4a-1a9d189d8c92-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "78bb410c-2722-4620-ad4a-1a9d189d8c92" (UID: "78bb410c-2722-4620-ad4a-1a9d189d8c92"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 11:56:34 crc kubenswrapper[4706]: I1125 11:56:34.120077 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/78bb410c-2722-4620-ad4a-1a9d189d8c92-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "78bb410c-2722-4620-ad4a-1a9d189d8c92" (UID: "78bb410c-2722-4620-ad4a-1a9d189d8c92"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 11:56:34 crc kubenswrapper[4706]: I1125 11:56:34.123717 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/78bb410c-2722-4620-ad4a-1a9d189d8c92-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "78bb410c-2722-4620-ad4a-1a9d189d8c92" (UID: "78bb410c-2722-4620-ad4a-1a9d189d8c92"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 11:56:34 crc kubenswrapper[4706]: I1125 11:56:34.129656 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/78bb410c-2722-4620-ad4a-1a9d189d8c92-config" (OuterVolumeSpecName: "config") pod "78bb410c-2722-4620-ad4a-1a9d189d8c92" (UID: "78bb410c-2722-4620-ad4a-1a9d189d8c92"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 11:56:34 crc kubenswrapper[4706]: I1125 11:56:34.176950 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-69546b67d6-65q22" event={"ID":"4fdb06a5-d894-4b1a-ae3c-34c092b4172f","Type":"ContainerStarted","Data":"208bf2801a5486d50ebfd06ece5a6213f8ea35ba740aa0f51f6b82f0ceae874c"} Nov 25 11:56:34 crc kubenswrapper[4706]: I1125 11:56:34.177051 4706 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-69546b67d6-65q22" Nov 25 11:56:34 crc kubenswrapper[4706]: I1125 11:56:34.177975 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"52550d3a-83c6-44fd-87bd-e14b2b6645d9","Type":"ContainerStarted","Data":"2aed63d04f12b4bf0a76fd1dc15d3806b0be471aade220e51ac4ae25615b4d26"} Nov 25 11:56:34 crc kubenswrapper[4706]: I1125 11:56:34.179394 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-85c7db76fd-f64jq" event={"ID":"500c37cc-45dd-444f-a630-19356ac8d1e3","Type":"ContainerStarted","Data":"a689e746de18ff606bbfd87d0d4ce6f78648b9d0ad6652f6db8b19b3f7f99c0e"} Nov 25 11:56:34 crc kubenswrapper[4706]: I1125 11:56:34.180764 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-85ff748b95-9mz7s" event={"ID":"2228dc73-369b-4b00-987a-955d0d1ea8c8","Type":"ContainerStarted","Data":"57323b19add16eaad20847287e485b250421fd28f2ce23edf95b036715607a1d"} Nov 25 11:56:34 crc kubenswrapper[4706]: I1125 11:56:34.182586 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-779dc76bb8-fwppw" event={"ID":"6d2de783-5f62-4740-87d8-cef1b4941953","Type":"ContainerStarted","Data":"f016b2f47a82468b9ae3115f6bcfea425e1d701710857e6fc3451b60b8096f52"} Nov 25 11:56:34 crc kubenswrapper[4706]: I1125 11:56:34.185344 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" 
event={"ID":"60e3d8af-641e-4c2c-b105-3d1b4b98904f","Type":"ContainerStarted","Data":"90de65fd65ef7e1f3a095d1a78255ea80bdd49236032c6a6be6f79b440ec2c55"} Nov 25 11:56:34 crc kubenswrapper[4706]: I1125 11:56:34.187850 4706 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-586bdc5f9-d2sx2" Nov 25 11:56:34 crc kubenswrapper[4706]: I1125 11:56:34.187865 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-586bdc5f9-d2sx2" event={"ID":"78bb410c-2722-4620-ad4a-1a9d189d8c92","Type":"ContainerDied","Data":"13649aa229d49bdb1a3515188d6688aa1c989b7fd8cac5473bdcea872dd71baf"} Nov 25 11:56:34 crc kubenswrapper[4706]: I1125 11:56:34.187947 4706 scope.go:117] "RemoveContainer" containerID="df15a61b18c182628568be18e562837544b3b13b406ad2936113a9d40d73e498" Nov 25 11:56:34 crc kubenswrapper[4706]: I1125 11:56:34.189909 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c9776ccc5-kv96j" event={"ID":"9d560e53-d5ef-4b6b-af31-d1b5856dbf47","Type":"ContainerStarted","Data":"4dcce9be8e09ecea236f24ee7576fed32a1655d2b9e2046d7ffd735c91b0e3a8"} Nov 25 11:56:34 crc kubenswrapper[4706]: I1125 11:56:34.193045 4706 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/78bb410c-2722-4620-ad4a-1a9d189d8c92-config\") on node \"crc\" DevicePath \"\"" Nov 25 11:56:34 crc kubenswrapper[4706]: I1125 11:56:34.193105 4706 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/78bb410c-2722-4620-ad4a-1a9d189d8c92-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Nov 25 11:56:34 crc kubenswrapper[4706]: I1125 11:56:34.193118 4706 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/78bb410c-2722-4620-ad4a-1a9d189d8c92-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 25 11:56:34 crc kubenswrapper[4706]: I1125 11:56:34.193127 4706 
reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/78bb410c-2722-4620-ad4a-1a9d189d8c92-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 25 11:56:34 crc kubenswrapper[4706]: I1125 11:56:34.193136 4706 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/78bb410c-2722-4620-ad4a-1a9d189d8c92-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 25 11:56:34 crc kubenswrapper[4706]: I1125 11:56:34.201709 4706 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-69546b67d6-65q22" podStartSLOduration=5.201690551 podStartE2EDuration="5.201690551s" podCreationTimestamp="2025-11-25 11:56:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 11:56:34.195786332 +0000 UTC m=+1203.110343713" watchObservedRunningTime="2025-11-25 11:56:34.201690551 +0000 UTC m=+1203.116247932" Nov 25 11:56:34 crc kubenswrapper[4706]: I1125 11:56:34.256660 4706 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-586bdc5f9-d2sx2"] Nov 25 11:56:34 crc kubenswrapper[4706]: I1125 11:56:34.264468 4706 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-586bdc5f9-d2sx2"] Nov 25 11:56:34 crc kubenswrapper[4706]: I1125 11:56:34.709978 4706 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-69546b67d6-65q22" Nov 25 11:56:35 crc kubenswrapper[4706]: I1125 11:56:35.203463 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-85c7db76fd-f64jq" event={"ID":"500c37cc-45dd-444f-a630-19356ac8d1e3","Type":"ContainerStarted","Data":"b2e44eb3f6b93704938fe194c89b3c98b73012463fbe700dc23325d3f2d7abad"} Nov 25 11:56:35 crc kubenswrapper[4706]: I1125 11:56:35.207347 4706 generic.go:334] "Generic (PLEG): container finished" 
podID="2228dc73-369b-4b00-987a-955d0d1ea8c8" containerID="db6400c04e85d2a56c96e2c984a47347b40c115ad0c787396f14c8f518a9385a" exitCode=0 Nov 25 11:56:35 crc kubenswrapper[4706]: I1125 11:56:35.207421 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-85ff748b95-9mz7s" event={"ID":"2228dc73-369b-4b00-987a-955d0d1ea8c8","Type":"ContainerDied","Data":"db6400c04e85d2a56c96e2c984a47347b40c115ad0c787396f14c8f518a9385a"} Nov 25 11:56:35 crc kubenswrapper[4706]: I1125 11:56:35.209949 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"60e3d8af-641e-4c2c-b105-3d1b4b98904f","Type":"ContainerStarted","Data":"9172c3a5a4d92a4d142d21b37162e6f96520ff62c861e838243fbc680cab004a"} Nov 25 11:56:35 crc kubenswrapper[4706]: I1125 11:56:35.213659 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-779dc76bb8-fwppw" event={"ID":"6d2de783-5f62-4740-87d8-cef1b4941953","Type":"ContainerStarted","Data":"90ed7f1fe46c3e584ef27ec512a9e5f7978715acab3cc385b2aa03d78bbad7f5"} Nov 25 11:56:35 crc kubenswrapper[4706]: I1125 11:56:35.217819 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c9776ccc5-kv96j" event={"ID":"9d560e53-d5ef-4b6b-af31-d1b5856dbf47","Type":"ContainerDied","Data":"e698be1e556a47e20b0e5192bfed96ae46f7943e750ec588dbcc95dab5a6675f"} Nov 25 11:56:35 crc kubenswrapper[4706]: I1125 11:56:35.217961 4706 generic.go:334] "Generic (PLEG): container finished" podID="9d560e53-d5ef-4b6b-af31-d1b5856dbf47" containerID="e698be1e556a47e20b0e5192bfed96ae46f7943e750ec588dbcc95dab5a6675f" exitCode=0 Nov 25 11:56:35 crc kubenswrapper[4706]: I1125 11:56:35.231604 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"db4e7aed-28ec-49cd-8f0b-e01df112bf54","Type":"ContainerDied","Data":"8fc86a2c1073d99eefaa9c298eca352f7130fb64903b505f7a478749a7d6acc1"} Nov 25 11:56:35 crc kubenswrapper[4706]: I1125 11:56:35.231527 4706 
generic.go:334] "Generic (PLEG): container finished" podID="db4e7aed-28ec-49cd-8f0b-e01df112bf54" containerID="8fc86a2c1073d99eefaa9c298eca352f7130fb64903b505f7a478749a7d6acc1" exitCode=0 Nov 25 11:56:35 crc kubenswrapper[4706]: I1125 11:56:35.661229 4706 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Nov 25 11:56:35 crc kubenswrapper[4706]: I1125 11:56:35.937519 4706 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="78bb410c-2722-4620-ad4a-1a9d189d8c92" path="/var/lib/kubelet/pods/78bb410c-2722-4620-ad4a-1a9d189d8c92/volumes" Nov 25 11:56:36 crc kubenswrapper[4706]: I1125 11:56:36.184744 4706 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 25 11:56:36 crc kubenswrapper[4706]: I1125 11:56:36.237706 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/db4e7aed-28ec-49cd-8f0b-e01df112bf54-run-httpd\") pod \"db4e7aed-28ec-49cd-8f0b-e01df112bf54\" (UID: \"db4e7aed-28ec-49cd-8f0b-e01df112bf54\") " Nov 25 11:56:36 crc kubenswrapper[4706]: I1125 11:56:36.237892 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/db4e7aed-28ec-49cd-8f0b-e01df112bf54-log-httpd\") pod \"db4e7aed-28ec-49cd-8f0b-e01df112bf54\" (UID: \"db4e7aed-28ec-49cd-8f0b-e01df112bf54\") " Nov 25 11:56:36 crc kubenswrapper[4706]: I1125 11:56:36.238071 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/db4e7aed-28ec-49cd-8f0b-e01df112bf54-config-data\") pod \"db4e7aed-28ec-49cd-8f0b-e01df112bf54\" (UID: \"db4e7aed-28ec-49cd-8f0b-e01df112bf54\") " Nov 25 11:56:36 crc kubenswrapper[4706]: I1125 11:56:36.238138 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fsmhl\" (UniqueName: 
\"kubernetes.io/projected/db4e7aed-28ec-49cd-8f0b-e01df112bf54-kube-api-access-fsmhl\") pod \"db4e7aed-28ec-49cd-8f0b-e01df112bf54\" (UID: \"db4e7aed-28ec-49cd-8f0b-e01df112bf54\") " Nov 25 11:56:36 crc kubenswrapper[4706]: I1125 11:56:36.238164 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/db4e7aed-28ec-49cd-8f0b-e01df112bf54-scripts\") pod \"db4e7aed-28ec-49cd-8f0b-e01df112bf54\" (UID: \"db4e7aed-28ec-49cd-8f0b-e01df112bf54\") " Nov 25 11:56:36 crc kubenswrapper[4706]: I1125 11:56:36.238218 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/db4e7aed-28ec-49cd-8f0b-e01df112bf54-combined-ca-bundle\") pod \"db4e7aed-28ec-49cd-8f0b-e01df112bf54\" (UID: \"db4e7aed-28ec-49cd-8f0b-e01df112bf54\") " Nov 25 11:56:36 crc kubenswrapper[4706]: I1125 11:56:36.238260 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/db4e7aed-28ec-49cd-8f0b-e01df112bf54-sg-core-conf-yaml\") pod \"db4e7aed-28ec-49cd-8f0b-e01df112bf54\" (UID: \"db4e7aed-28ec-49cd-8f0b-e01df112bf54\") " Nov 25 11:56:36 crc kubenswrapper[4706]: I1125 11:56:36.238503 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/db4e7aed-28ec-49cd-8f0b-e01df112bf54-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "db4e7aed-28ec-49cd-8f0b-e01df112bf54" (UID: "db4e7aed-28ec-49cd-8f0b-e01df112bf54"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 11:56:36 crc kubenswrapper[4706]: I1125 11:56:36.238728 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/db4e7aed-28ec-49cd-8f0b-e01df112bf54-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "db4e7aed-28ec-49cd-8f0b-e01df112bf54" (UID: "db4e7aed-28ec-49cd-8f0b-e01df112bf54"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 11:56:36 crc kubenswrapper[4706]: I1125 11:56:36.239664 4706 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/db4e7aed-28ec-49cd-8f0b-e01df112bf54-run-httpd\") on node \"crc\" DevicePath \"\"" Nov 25 11:56:36 crc kubenswrapper[4706]: I1125 11:56:36.239962 4706 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/db4e7aed-28ec-49cd-8f0b-e01df112bf54-log-httpd\") on node \"crc\" DevicePath \"\"" Nov 25 11:56:36 crc kubenswrapper[4706]: I1125 11:56:36.245391 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/db4e7aed-28ec-49cd-8f0b-e01df112bf54-scripts" (OuterVolumeSpecName: "scripts") pod "db4e7aed-28ec-49cd-8f0b-e01df112bf54" (UID: "db4e7aed-28ec-49cd-8f0b-e01df112bf54"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 11:56:36 crc kubenswrapper[4706]: I1125 11:56:36.247871 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/db4e7aed-28ec-49cd-8f0b-e01df112bf54-kube-api-access-fsmhl" (OuterVolumeSpecName: "kube-api-access-fsmhl") pod "db4e7aed-28ec-49cd-8f0b-e01df112bf54" (UID: "db4e7aed-28ec-49cd-8f0b-e01df112bf54"). InnerVolumeSpecName "kube-api-access-fsmhl". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 11:56:36 crc kubenswrapper[4706]: I1125 11:56:36.254188 4706 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 25 11:56:36 crc kubenswrapper[4706]: I1125 11:56:36.254524 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"db4e7aed-28ec-49cd-8f0b-e01df112bf54","Type":"ContainerDied","Data":"955cb3fc2e165c948c48331956713b1450de967f17f453c17e3c8ee3c435554a"} Nov 25 11:56:36 crc kubenswrapper[4706]: I1125 11:56:36.254599 4706 scope.go:117] "RemoveContainer" containerID="dbc374ca3fd943ed0b6a3b06b2a79b442e6b233bc5b02afea8999bcc193ee4f2" Nov 25 11:56:36 crc kubenswrapper[4706]: I1125 11:56:36.324479 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/db4e7aed-28ec-49cd-8f0b-e01df112bf54-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "db4e7aed-28ec-49cd-8f0b-e01df112bf54" (UID: "db4e7aed-28ec-49cd-8f0b-e01df112bf54"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 11:56:36 crc kubenswrapper[4706]: I1125 11:56:36.341817 4706 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fsmhl\" (UniqueName: \"kubernetes.io/projected/db4e7aed-28ec-49cd-8f0b-e01df112bf54-kube-api-access-fsmhl\") on node \"crc\" DevicePath \"\"" Nov 25 11:56:36 crc kubenswrapper[4706]: I1125 11:56:36.341857 4706 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/db4e7aed-28ec-49cd-8f0b-e01df112bf54-scripts\") on node \"crc\" DevicePath \"\"" Nov 25 11:56:36 crc kubenswrapper[4706]: I1125 11:56:36.341871 4706 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/db4e7aed-28ec-49cd-8f0b-e01df112bf54-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Nov 25 11:56:36 crc kubenswrapper[4706]: I1125 11:56:36.394551 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/db4e7aed-28ec-49cd-8f0b-e01df112bf54-combined-ca-bundle" 
(OuterVolumeSpecName: "combined-ca-bundle") pod "db4e7aed-28ec-49cd-8f0b-e01df112bf54" (UID: "db4e7aed-28ec-49cd-8f0b-e01df112bf54"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 11:56:36 crc kubenswrapper[4706]: I1125 11:56:36.395370 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/db4e7aed-28ec-49cd-8f0b-e01df112bf54-config-data" (OuterVolumeSpecName: "config-data") pod "db4e7aed-28ec-49cd-8f0b-e01df112bf54" (UID: "db4e7aed-28ec-49cd-8f0b-e01df112bf54"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 11:56:36 crc kubenswrapper[4706]: I1125 11:56:36.443615 4706 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/db4e7aed-28ec-49cd-8f0b-e01df112bf54-config-data\") on node \"crc\" DevicePath \"\"" Nov 25 11:56:36 crc kubenswrapper[4706]: I1125 11:56:36.443644 4706 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/db4e7aed-28ec-49cd-8f0b-e01df112bf54-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 11:56:36 crc kubenswrapper[4706]: I1125 11:56:36.533429 4706 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-85ff748b95-9mz7s" Nov 25 11:56:36 crc kubenswrapper[4706]: I1125 11:56:36.635932 4706 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 25 11:56:36 crc kubenswrapper[4706]: I1125 11:56:36.646803 4706 scope.go:117] "RemoveContainer" containerID="1b0f99a9c2d7134db91d0dc3c0f7d3e579a75185b06822b489e2cf538487e522" Nov 25 11:56:36 crc kubenswrapper[4706]: I1125 11:56:36.647735 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xs8gg\" (UniqueName: \"kubernetes.io/projected/2228dc73-369b-4b00-987a-955d0d1ea8c8-kube-api-access-xs8gg\") pod \"2228dc73-369b-4b00-987a-955d0d1ea8c8\" (UID: \"2228dc73-369b-4b00-987a-955d0d1ea8c8\") " Nov 25 11:56:36 crc kubenswrapper[4706]: I1125 11:56:36.647792 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/2228dc73-369b-4b00-987a-955d0d1ea8c8-dns-swift-storage-0\") pod \"2228dc73-369b-4b00-987a-955d0d1ea8c8\" (UID: \"2228dc73-369b-4b00-987a-955d0d1ea8c8\") " Nov 25 11:56:36 crc kubenswrapper[4706]: I1125 11:56:36.647882 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2228dc73-369b-4b00-987a-955d0d1ea8c8-ovsdbserver-sb\") pod \"2228dc73-369b-4b00-987a-955d0d1ea8c8\" (UID: \"2228dc73-369b-4b00-987a-955d0d1ea8c8\") " Nov 25 11:56:36 crc kubenswrapper[4706]: I1125 11:56:36.647937 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2228dc73-369b-4b00-987a-955d0d1ea8c8-ovsdbserver-nb\") pod \"2228dc73-369b-4b00-987a-955d0d1ea8c8\" (UID: \"2228dc73-369b-4b00-987a-955d0d1ea8c8\") " Nov 25 11:56:36 crc kubenswrapper[4706]: I1125 11:56:36.647989 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" 
(UniqueName: \"kubernetes.io/configmap/2228dc73-369b-4b00-987a-955d0d1ea8c8-dns-svc\") pod \"2228dc73-369b-4b00-987a-955d0d1ea8c8\" (UID: \"2228dc73-369b-4b00-987a-955d0d1ea8c8\") " Nov 25 11:56:36 crc kubenswrapper[4706]: I1125 11:56:36.648015 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2228dc73-369b-4b00-987a-955d0d1ea8c8-config\") pod \"2228dc73-369b-4b00-987a-955d0d1ea8c8\" (UID: \"2228dc73-369b-4b00-987a-955d0d1ea8c8\") " Nov 25 11:56:36 crc kubenswrapper[4706]: I1125 11:56:36.653560 4706 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Nov 25 11:56:36 crc kubenswrapper[4706]: I1125 11:56:36.660480 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2228dc73-369b-4b00-987a-955d0d1ea8c8-kube-api-access-xs8gg" (OuterVolumeSpecName: "kube-api-access-xs8gg") pod "2228dc73-369b-4b00-987a-955d0d1ea8c8" (UID: "2228dc73-369b-4b00-987a-955d0d1ea8c8"). InnerVolumeSpecName "kube-api-access-xs8gg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 11:56:36 crc kubenswrapper[4706]: I1125 11:56:36.665496 4706 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Nov 25 11:56:36 crc kubenswrapper[4706]: E1125 11:56:36.665897 4706 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="db4e7aed-28ec-49cd-8f0b-e01df112bf54" containerName="ceilometer-notification-agent" Nov 25 11:56:36 crc kubenswrapper[4706]: I1125 11:56:36.665914 4706 state_mem.go:107] "Deleted CPUSet assignment" podUID="db4e7aed-28ec-49cd-8f0b-e01df112bf54" containerName="ceilometer-notification-agent" Nov 25 11:56:36 crc kubenswrapper[4706]: E1125 11:56:36.665930 4706 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="db4e7aed-28ec-49cd-8f0b-e01df112bf54" containerName="proxy-httpd" Nov 25 11:56:36 crc kubenswrapper[4706]: I1125 11:56:36.665938 4706 state_mem.go:107] "Deleted CPUSet assignment" podUID="db4e7aed-28ec-49cd-8f0b-e01df112bf54" containerName="proxy-httpd" Nov 25 11:56:36 crc kubenswrapper[4706]: E1125 11:56:36.665953 4706 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="db4e7aed-28ec-49cd-8f0b-e01df112bf54" containerName="sg-core" Nov 25 11:56:36 crc kubenswrapper[4706]: I1125 11:56:36.665962 4706 state_mem.go:107] "Deleted CPUSet assignment" podUID="db4e7aed-28ec-49cd-8f0b-e01df112bf54" containerName="sg-core" Nov 25 11:56:36 crc kubenswrapper[4706]: E1125 11:56:36.665981 4706 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="78bb410c-2722-4620-ad4a-1a9d189d8c92" containerName="init" Nov 25 11:56:36 crc kubenswrapper[4706]: I1125 11:56:36.665989 4706 state_mem.go:107] "Deleted CPUSet assignment" podUID="78bb410c-2722-4620-ad4a-1a9d189d8c92" containerName="init" Nov 25 11:56:36 crc kubenswrapper[4706]: E1125 11:56:36.665996 4706 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2228dc73-369b-4b00-987a-955d0d1ea8c8" containerName="init" Nov 25 11:56:36 crc 
kubenswrapper[4706]: I1125 11:56:36.666002 4706 state_mem.go:107] "Deleted CPUSet assignment" podUID="2228dc73-369b-4b00-987a-955d0d1ea8c8" containerName="init" Nov 25 11:56:36 crc kubenswrapper[4706]: I1125 11:56:36.666168 4706 memory_manager.go:354] "RemoveStaleState removing state" podUID="2228dc73-369b-4b00-987a-955d0d1ea8c8" containerName="init" Nov 25 11:56:36 crc kubenswrapper[4706]: I1125 11:56:36.666181 4706 memory_manager.go:354] "RemoveStaleState removing state" podUID="78bb410c-2722-4620-ad4a-1a9d189d8c92" containerName="init" Nov 25 11:56:36 crc kubenswrapper[4706]: I1125 11:56:36.666191 4706 memory_manager.go:354] "RemoveStaleState removing state" podUID="db4e7aed-28ec-49cd-8f0b-e01df112bf54" containerName="sg-core" Nov 25 11:56:36 crc kubenswrapper[4706]: I1125 11:56:36.666205 4706 memory_manager.go:354] "RemoveStaleState removing state" podUID="db4e7aed-28ec-49cd-8f0b-e01df112bf54" containerName="proxy-httpd" Nov 25 11:56:36 crc kubenswrapper[4706]: I1125 11:56:36.666212 4706 memory_manager.go:354] "RemoveStaleState removing state" podUID="db4e7aed-28ec-49cd-8f0b-e01df112bf54" containerName="ceilometer-notification-agent" Nov 25 11:56:36 crc kubenswrapper[4706]: I1125 11:56:36.667806 4706 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 25 11:56:36 crc kubenswrapper[4706]: I1125 11:56:36.678424 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Nov 25 11:56:36 crc kubenswrapper[4706]: I1125 11:56:36.678633 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Nov 25 11:56:36 crc kubenswrapper[4706]: I1125 11:56:36.691949 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2228dc73-369b-4b00-987a-955d0d1ea8c8-config" (OuterVolumeSpecName: "config") pod "2228dc73-369b-4b00-987a-955d0d1ea8c8" (UID: "2228dc73-369b-4b00-987a-955d0d1ea8c8"). 
InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 11:56:36 crc kubenswrapper[4706]: I1125 11:56:36.698856 4706 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 25 11:56:36 crc kubenswrapper[4706]: I1125 11:56:36.711794 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2228dc73-369b-4b00-987a-955d0d1ea8c8-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "2228dc73-369b-4b00-987a-955d0d1ea8c8" (UID: "2228dc73-369b-4b00-987a-955d0d1ea8c8"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 11:56:36 crc kubenswrapper[4706]: I1125 11:56:36.737963 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2228dc73-369b-4b00-987a-955d0d1ea8c8-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "2228dc73-369b-4b00-987a-955d0d1ea8c8" (UID: "2228dc73-369b-4b00-987a-955d0d1ea8c8"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 11:56:36 crc kubenswrapper[4706]: I1125 11:56:36.739947 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2228dc73-369b-4b00-987a-955d0d1ea8c8-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "2228dc73-369b-4b00-987a-955d0d1ea8c8" (UID: "2228dc73-369b-4b00-987a-955d0d1ea8c8"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 11:56:36 crc kubenswrapper[4706]: I1125 11:56:36.740404 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2228dc73-369b-4b00-987a-955d0d1ea8c8-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "2228dc73-369b-4b00-987a-955d0d1ea8c8" (UID: "2228dc73-369b-4b00-987a-955d0d1ea8c8"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 11:56:36 crc kubenswrapper[4706]: I1125 11:56:36.751074 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4edea425-7eb5-458b-8e80-3e04fe787998-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"4edea425-7eb5-458b-8e80-3e04fe787998\") " pod="openstack/ceilometer-0" Nov 25 11:56:36 crc kubenswrapper[4706]: I1125 11:56:36.751140 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4edea425-7eb5-458b-8e80-3e04fe787998-run-httpd\") pod \"ceilometer-0\" (UID: \"4edea425-7eb5-458b-8e80-3e04fe787998\") " pod="openstack/ceilometer-0" Nov 25 11:56:36 crc kubenswrapper[4706]: I1125 11:56:36.751165 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/4edea425-7eb5-458b-8e80-3e04fe787998-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"4edea425-7eb5-458b-8e80-3e04fe787998\") " pod="openstack/ceilometer-0" Nov 25 11:56:36 crc kubenswrapper[4706]: I1125 11:56:36.751209 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4edea425-7eb5-458b-8e80-3e04fe787998-config-data\") pod \"ceilometer-0\" (UID: \"4edea425-7eb5-458b-8e80-3e04fe787998\") " pod="openstack/ceilometer-0" Nov 25 11:56:36 crc kubenswrapper[4706]: I1125 11:56:36.751266 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2dkqc\" (UniqueName: \"kubernetes.io/projected/4edea425-7eb5-458b-8e80-3e04fe787998-kube-api-access-2dkqc\") pod \"ceilometer-0\" (UID: \"4edea425-7eb5-458b-8e80-3e04fe787998\") " pod="openstack/ceilometer-0" Nov 25 11:56:36 crc kubenswrapper[4706]: I1125 
11:56:36.751289 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4edea425-7eb5-458b-8e80-3e04fe787998-log-httpd\") pod \"ceilometer-0\" (UID: \"4edea425-7eb5-458b-8e80-3e04fe787998\") " pod="openstack/ceilometer-0" Nov 25 11:56:36 crc kubenswrapper[4706]: I1125 11:56:36.751333 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4edea425-7eb5-458b-8e80-3e04fe787998-scripts\") pod \"ceilometer-0\" (UID: \"4edea425-7eb5-458b-8e80-3e04fe787998\") " pod="openstack/ceilometer-0" Nov 25 11:56:36 crc kubenswrapper[4706]: I1125 11:56:36.751378 4706 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2228dc73-369b-4b00-987a-955d0d1ea8c8-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 25 11:56:36 crc kubenswrapper[4706]: I1125 11:56:36.751391 4706 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2228dc73-369b-4b00-987a-955d0d1ea8c8-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 25 11:56:36 crc kubenswrapper[4706]: I1125 11:56:36.751401 4706 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2228dc73-369b-4b00-987a-955d0d1ea8c8-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 25 11:56:36 crc kubenswrapper[4706]: I1125 11:56:36.751410 4706 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2228dc73-369b-4b00-987a-955d0d1ea8c8-config\") on node \"crc\" DevicePath \"\"" Nov 25 11:56:36 crc kubenswrapper[4706]: I1125 11:56:36.751420 4706 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xs8gg\" (UniqueName: \"kubernetes.io/projected/2228dc73-369b-4b00-987a-955d0d1ea8c8-kube-api-access-xs8gg\") on node \"crc\" DevicePath 
\"\"" Nov 25 11:56:36 crc kubenswrapper[4706]: I1125 11:56:36.751430 4706 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/2228dc73-369b-4b00-987a-955d0d1ea8c8-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Nov 25 11:56:36 crc kubenswrapper[4706]: I1125 11:56:36.853160 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4edea425-7eb5-458b-8e80-3e04fe787998-run-httpd\") pod \"ceilometer-0\" (UID: \"4edea425-7eb5-458b-8e80-3e04fe787998\") " pod="openstack/ceilometer-0" Nov 25 11:56:36 crc kubenswrapper[4706]: I1125 11:56:36.853214 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/4edea425-7eb5-458b-8e80-3e04fe787998-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"4edea425-7eb5-458b-8e80-3e04fe787998\") " pod="openstack/ceilometer-0" Nov 25 11:56:36 crc kubenswrapper[4706]: I1125 11:56:36.853257 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4edea425-7eb5-458b-8e80-3e04fe787998-config-data\") pod \"ceilometer-0\" (UID: \"4edea425-7eb5-458b-8e80-3e04fe787998\") " pod="openstack/ceilometer-0" Nov 25 11:56:36 crc kubenswrapper[4706]: I1125 11:56:36.853295 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2dkqc\" (UniqueName: \"kubernetes.io/projected/4edea425-7eb5-458b-8e80-3e04fe787998-kube-api-access-2dkqc\") pod \"ceilometer-0\" (UID: \"4edea425-7eb5-458b-8e80-3e04fe787998\") " pod="openstack/ceilometer-0" Nov 25 11:56:36 crc kubenswrapper[4706]: I1125 11:56:36.853341 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4edea425-7eb5-458b-8e80-3e04fe787998-log-httpd\") pod \"ceilometer-0\" (UID: 
\"4edea425-7eb5-458b-8e80-3e04fe787998\") " pod="openstack/ceilometer-0" Nov 25 11:56:36 crc kubenswrapper[4706]: I1125 11:56:36.853370 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4edea425-7eb5-458b-8e80-3e04fe787998-scripts\") pod \"ceilometer-0\" (UID: \"4edea425-7eb5-458b-8e80-3e04fe787998\") " pod="openstack/ceilometer-0" Nov 25 11:56:36 crc kubenswrapper[4706]: I1125 11:56:36.853409 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4edea425-7eb5-458b-8e80-3e04fe787998-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"4edea425-7eb5-458b-8e80-3e04fe787998\") " pod="openstack/ceilometer-0" Nov 25 11:56:36 crc kubenswrapper[4706]: I1125 11:56:36.856715 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4edea425-7eb5-458b-8e80-3e04fe787998-run-httpd\") pod \"ceilometer-0\" (UID: \"4edea425-7eb5-458b-8e80-3e04fe787998\") " pod="openstack/ceilometer-0" Nov 25 11:56:36 crc kubenswrapper[4706]: I1125 11:56:36.857064 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4edea425-7eb5-458b-8e80-3e04fe787998-log-httpd\") pod \"ceilometer-0\" (UID: \"4edea425-7eb5-458b-8e80-3e04fe787998\") " pod="openstack/ceilometer-0" Nov 25 11:56:36 crc kubenswrapper[4706]: I1125 11:56:36.859145 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4edea425-7eb5-458b-8e80-3e04fe787998-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"4edea425-7eb5-458b-8e80-3e04fe787998\") " pod="openstack/ceilometer-0" Nov 25 11:56:36 crc kubenswrapper[4706]: I1125 11:56:36.862111 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/4edea425-7eb5-458b-8e80-3e04fe787998-config-data\") pod \"ceilometer-0\" (UID: \"4edea425-7eb5-458b-8e80-3e04fe787998\") " pod="openstack/ceilometer-0" Nov 25 11:56:36 crc kubenswrapper[4706]: I1125 11:56:36.868847 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4edea425-7eb5-458b-8e80-3e04fe787998-scripts\") pod \"ceilometer-0\" (UID: \"4edea425-7eb5-458b-8e80-3e04fe787998\") " pod="openstack/ceilometer-0" Nov 25 11:56:36 crc kubenswrapper[4706]: I1125 11:56:36.869323 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/4edea425-7eb5-458b-8e80-3e04fe787998-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"4edea425-7eb5-458b-8e80-3e04fe787998\") " pod="openstack/ceilometer-0" Nov 25 11:56:36 crc kubenswrapper[4706]: I1125 11:56:36.885562 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2dkqc\" (UniqueName: \"kubernetes.io/projected/4edea425-7eb5-458b-8e80-3e04fe787998-kube-api-access-2dkqc\") pod \"ceilometer-0\" (UID: \"4edea425-7eb5-458b-8e80-3e04fe787998\") " pod="openstack/ceilometer-0" Nov 25 11:56:37 crc kubenswrapper[4706]: I1125 11:56:37.006759 4706 scope.go:117] "RemoveContainer" containerID="8fc86a2c1073d99eefaa9c298eca352f7130fb64903b505f7a478749a7d6acc1" Nov 25 11:56:37 crc kubenswrapper[4706]: I1125 11:56:37.006860 4706 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 25 11:56:37 crc kubenswrapper[4706]: I1125 11:56:37.040198 4706 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/horizon-5d6465f55b-zdrth" Nov 25 11:56:37 crc kubenswrapper[4706]: I1125 11:56:37.079967 4706 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/horizon-85664bf4f6-ws67w" Nov 25 11:56:37 crc kubenswrapper[4706]: I1125 11:56:37.281942 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c9776ccc5-kv96j" event={"ID":"9d560e53-d5ef-4b6b-af31-d1b5856dbf47","Type":"ContainerStarted","Data":"7f2e50c7556c207faec757081b15999603dd75cd8b3f0374eb95524e497fdc26"} Nov 25 11:56:37 crc kubenswrapper[4706]: I1125 11:56:37.282482 4706 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5c9776ccc5-kv96j" Nov 25 11:56:37 crc kubenswrapper[4706]: I1125 11:56:37.290485 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-85ff748b95-9mz7s" event={"ID":"2228dc73-369b-4b00-987a-955d0d1ea8c8","Type":"ContainerDied","Data":"57323b19add16eaad20847287e485b250421fd28f2ce23edf95b036715607a1d"} Nov 25 11:56:37 crc kubenswrapper[4706]: I1125 11:56:37.290533 4706 scope.go:117] "RemoveContainer" containerID="db6400c04e85d2a56c96e2c984a47347b40c115ad0c787396f14c8f518a9385a" Nov 25 11:56:37 crc kubenswrapper[4706]: I1125 11:56:37.290659 4706 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-85ff748b95-9mz7s" Nov 25 11:56:37 crc kubenswrapper[4706]: I1125 11:56:37.302282 4706 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5c9776ccc5-kv96j" podStartSLOduration=5.302266463 podStartE2EDuration="5.302266463s" podCreationTimestamp="2025-11-25 11:56:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 11:56:37.29894313 +0000 UTC m=+1206.213500511" watchObservedRunningTime="2025-11-25 11:56:37.302266463 +0000 UTC m=+1206.216823834" Nov 25 11:56:37 crc kubenswrapper[4706]: I1125 11:56:37.452456 4706 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-85ff748b95-9mz7s"] Nov 25 11:56:37 crc kubenswrapper[4706]: I1125 11:56:37.464104 4706 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-85ff748b95-9mz7s"] Nov 25 11:56:37 crc kubenswrapper[4706]: W1125 11:56:37.683709 4706 watcher.go:93] Error while processing event ("/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2228dc73_369b_4b00_987a_955d0d1ea8c8.slice/crio-conmon-db6400c04e85d2a56c96e2c984a47347b40c115ad0c787396f14c8f518a9385a.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2228dc73_369b_4b00_987a_955d0d1ea8c8.slice/crio-conmon-db6400c04e85d2a56c96e2c984a47347b40c115ad0c787396f14c8f518a9385a.scope: no such file or directory Nov 25 11:56:37 crc kubenswrapper[4706]: W1125 11:56:37.684139 4706 watcher.go:93] Error while processing event ("/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2228dc73_369b_4b00_987a_955d0d1ea8c8.slice/crio-db6400c04e85d2a56c96e2c984a47347b40c115ad0c787396f14c8f518a9385a.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch 
/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2228dc73_369b_4b00_987a_955d0d1ea8c8.slice/crio-db6400c04e85d2a56c96e2c984a47347b40c115ad0c787396f14c8f518a9385a.scope: no such file or directory Nov 25 11:56:37 crc kubenswrapper[4706]: W1125 11:56:37.684688 4706 watcher.go:93] Error while processing event ("/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9d560e53_d5ef_4b6b_af31_d1b5856dbf47.slice/crio-conmon-e698be1e556a47e20b0e5192bfed96ae46f7943e750ec588dbcc95dab5a6675f.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9d560e53_d5ef_4b6b_af31_d1b5856dbf47.slice/crio-conmon-e698be1e556a47e20b0e5192bfed96ae46f7943e750ec588dbcc95dab5a6675f.scope: no such file or directory Nov 25 11:56:37 crc kubenswrapper[4706]: W1125 11:56:37.684726 4706 watcher.go:93] Error while processing event ("/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9d560e53_d5ef_4b6b_af31_d1b5856dbf47.slice/crio-e698be1e556a47e20b0e5192bfed96ae46f7943e750ec588dbcc95dab5a6675f.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9d560e53_d5ef_4b6b_af31_d1b5856dbf47.slice/crio-e698be1e556a47e20b0e5192bfed96ae46f7943e750ec588dbcc95dab5a6675f.scope: no such file or directory Nov 25 11:56:37 crc kubenswrapper[4706]: I1125 11:56:37.725562 4706 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 25 11:56:37 crc kubenswrapper[4706]: I1125 11:56:37.977406 4706 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2228dc73-369b-4b00-987a-955d0d1ea8c8" path="/var/lib/kubelet/pods/2228dc73-369b-4b00-987a-955d0d1ea8c8/volumes" Nov 25 11:56:37 crc kubenswrapper[4706]: I1125 11:56:37.978096 4706 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="db4e7aed-28ec-49cd-8f0b-e01df112bf54" 
path="/var/lib/kubelet/pods/db4e7aed-28ec-49cd-8f0b-e01df112bf54/volumes" Nov 25 11:56:38 crc kubenswrapper[4706]: I1125 11:56:38.315035 4706 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-6f66ccf8d9-g7z69" Nov 25 11:56:38 crc kubenswrapper[4706]: I1125 11:56:38.318520 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-85c7db76fd-f64jq" event={"ID":"500c37cc-45dd-444f-a630-19356ac8d1e3","Type":"ContainerStarted","Data":"d0db325fb5b94108e8cad639cbbed3b4a9c2059970cca7daaec3511d453f4481"} Nov 25 11:56:38 crc kubenswrapper[4706]: I1125 11:56:38.318734 4706 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-85c7db76fd-f64jq" Nov 25 11:56:38 crc kubenswrapper[4706]: I1125 11:56:38.318819 4706 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-85c7db76fd-f64jq" Nov 25 11:56:38 crc kubenswrapper[4706]: I1125 11:56:38.322679 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"60e3d8af-641e-4c2c-b105-3d1b4b98904f","Type":"ContainerStarted","Data":"98d8b014a535b17e29ca946fbcc980dcf786569a83ca1d31e699b4f7a9197dae"} Nov 25 11:56:38 crc kubenswrapper[4706]: I1125 11:56:38.322791 4706 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="60e3d8af-641e-4c2c-b105-3d1b4b98904f" containerName="cinder-api-log" containerID="cri-o://9172c3a5a4d92a4d142d21b37162e6f96520ff62c861e838243fbc680cab004a" gracePeriod=30 Nov 25 11:56:38 crc kubenswrapper[4706]: I1125 11:56:38.322973 4706 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Nov 25 11:56:38 crc kubenswrapper[4706]: I1125 11:56:38.323010 4706 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="60e3d8af-641e-4c2c-b105-3d1b4b98904f" containerName="cinder-api" 
containerID="cri-o://98d8b014a535b17e29ca946fbcc980dcf786569a83ca1d31e699b4f7a9197dae" gracePeriod=30 Nov 25 11:56:38 crc kubenswrapper[4706]: I1125 11:56:38.327166 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-779dc76bb8-fwppw" event={"ID":"6d2de783-5f62-4740-87d8-cef1b4941953","Type":"ContainerStarted","Data":"02b48970b5c92dfb6a9103f7137e53df7dd178574e3611a855155f1b079a9a9e"} Nov 25 11:56:38 crc kubenswrapper[4706]: I1125 11:56:38.327946 4706 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-779dc76bb8-fwppw" Nov 25 11:56:38 crc kubenswrapper[4706]: I1125 11:56:38.349234 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4edea425-7eb5-458b-8e80-3e04fe787998","Type":"ContainerStarted","Data":"c97d8c07a84b25192fe7846c3bb693d44d48fe82befc96c781f5f9d4db45db19"} Nov 25 11:56:38 crc kubenswrapper[4706]: I1125 11:56:38.370533 4706 generic.go:334] "Generic (PLEG): container finished" podID="a2972ef2-0543-48bd-9982-4f1c88711e0d" containerID="63a95daf4ab5d5a244b24ec8e7154621aad984a12bcb6a3a7d6be1c0e61157e0" exitCode=137 Nov 25 11:56:38 crc kubenswrapper[4706]: I1125 11:56:38.370815 4706 generic.go:334] "Generic (PLEG): container finished" podID="a2972ef2-0543-48bd-9982-4f1c88711e0d" containerID="2a410c3eb2cffad7492f2f267cf609f01c6deadfc79957b4d1eb2f1a688f7768" exitCode=137 Nov 25 11:56:38 crc kubenswrapper[4706]: I1125 11:56:38.370887 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6f66ccf8d9-g7z69" event={"ID":"a2972ef2-0543-48bd-9982-4f1c88711e0d","Type":"ContainerDied","Data":"63a95daf4ab5d5a244b24ec8e7154621aad984a12bcb6a3a7d6be1c0e61157e0"} Nov 25 11:56:38 crc kubenswrapper[4706]: I1125 11:56:38.370913 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6f66ccf8d9-g7z69" 
event={"ID":"a2972ef2-0543-48bd-9982-4f1c88711e0d","Type":"ContainerDied","Data":"2a410c3eb2cffad7492f2f267cf609f01c6deadfc79957b4d1eb2f1a688f7768"} Nov 25 11:56:38 crc kubenswrapper[4706]: I1125 11:56:38.370928 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6f66ccf8d9-g7z69" event={"ID":"a2972ef2-0543-48bd-9982-4f1c88711e0d","Type":"ContainerDied","Data":"f2eb1430ae89e9d9827ec74a43c3f19f436d90232cfcbecd3fda70e64a994340"} Nov 25 11:56:38 crc kubenswrapper[4706]: I1125 11:56:38.370944 4706 scope.go:117] "RemoveContainer" containerID="63a95daf4ab5d5a244b24ec8e7154621aad984a12bcb6a3a7d6be1c0e61157e0" Nov 25 11:56:38 crc kubenswrapper[4706]: I1125 11:56:38.371090 4706 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-6f66ccf8d9-g7z69" Nov 25 11:56:38 crc kubenswrapper[4706]: I1125 11:56:38.380247 4706 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-85c7db76fd-f64jq" podStartSLOduration=6.380223773 podStartE2EDuration="6.380223773s" podCreationTimestamp="2025-11-25 11:56:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 11:56:38.360685641 +0000 UTC m=+1207.275243022" watchObservedRunningTime="2025-11-25 11:56:38.380223773 +0000 UTC m=+1207.294781154" Nov 25 11:56:38 crc kubenswrapper[4706]: I1125 11:56:38.394326 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-7fc64dc5d7-m6cqm" event={"ID":"ac9c3625-3935-48b4-abf3-a8330d99152d","Type":"ContainerStarted","Data":"300c36d740c2822609b2a757685cdd79802045b58530ce91fa4c9caf43b3de52"} Nov 25 11:56:38 crc kubenswrapper[4706]: I1125 11:56:38.394371 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-7fc64dc5d7-m6cqm" 
event={"ID":"ac9c3625-3935-48b4-abf3-a8330d99152d","Type":"ContainerStarted","Data":"a4fc96a20082bc0324fe2e9a7974e34b2a431a29bd3c46885164f362bf372312"} Nov 25 11:56:38 crc kubenswrapper[4706]: I1125 11:56:38.416103 4706 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-779dc76bb8-fwppw" podStartSLOduration=6.416084285 podStartE2EDuration="6.416084285s" podCreationTimestamp="2025-11-25 11:56:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 11:56:38.391351123 +0000 UTC m=+1207.305908504" watchObservedRunningTime="2025-11-25 11:56:38.416084285 +0000 UTC m=+1207.330641666" Nov 25 11:56:38 crc kubenswrapper[4706]: I1125 11:56:38.423708 4706 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=6.423689107 podStartE2EDuration="6.423689107s" podCreationTimestamp="2025-11-25 11:56:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 11:56:38.423453691 +0000 UTC m=+1207.338011072" watchObservedRunningTime="2025-11-25 11:56:38.423689107 +0000 UTC m=+1207.338246488" Nov 25 11:56:38 crc kubenswrapper[4706]: I1125 11:56:38.436434 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-6c9c496566-jrgpl" event={"ID":"2ea4caef-6e53-42ac-9202-cf4b05a28041","Type":"ContainerStarted","Data":"a4835c6e179faf2994130df1b2ca6ccbcf529a9d522eca8c4067da4d666be185"} Nov 25 11:56:38 crc kubenswrapper[4706]: I1125 11:56:38.436483 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-6c9c496566-jrgpl" event={"ID":"2ea4caef-6e53-42ac-9202-cf4b05a28041","Type":"ContainerStarted","Data":"ee5b4d60f76ecb93ab04f3fab72068a2975cb297639609ddeed7649c9cffad33"} Nov 25 11:56:38 crc kubenswrapper[4706]: I1125 11:56:38.458215 4706 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-worker-7fc64dc5d7-m6cqm" podStartSLOduration=4.352357426 podStartE2EDuration="9.458195705s" podCreationTimestamp="2025-11-25 11:56:29 +0000 UTC" firstStartedPulling="2025-11-25 11:56:31.640599726 +0000 UTC m=+1200.555157107" lastFinishedPulling="2025-11-25 11:56:36.746438005 +0000 UTC m=+1205.660995386" observedRunningTime="2025-11-25 11:56:38.452548003 +0000 UTC m=+1207.367105384" watchObservedRunningTime="2025-11-25 11:56:38.458195705 +0000 UTC m=+1207.372753086" Nov 25 11:56:38 crc kubenswrapper[4706]: I1125 11:56:38.472713 4706 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-keystone-listener-6c9c496566-jrgpl" podStartSLOduration=3.91341559 podStartE2EDuration="9.47268242s" podCreationTimestamp="2025-11-25 11:56:29 +0000 UTC" firstStartedPulling="2025-11-25 11:56:31.437737731 +0000 UTC m=+1200.352295112" lastFinishedPulling="2025-11-25 11:56:36.997004561 +0000 UTC m=+1205.911561942" observedRunningTime="2025-11-25 11:56:38.470149736 +0000 UTC m=+1207.384707117" watchObservedRunningTime="2025-11-25 11:56:38.47268242 +0000 UTC m=+1207.387239791" Nov 25 11:56:38 crc kubenswrapper[4706]: I1125 11:56:38.503659 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a2972ef2-0543-48bd-9982-4f1c88711e0d-scripts\") pod \"a2972ef2-0543-48bd-9982-4f1c88711e0d\" (UID: \"a2972ef2-0543-48bd-9982-4f1c88711e0d\") " Nov 25 11:56:38 crc kubenswrapper[4706]: I1125 11:56:38.504277 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a2972ef2-0543-48bd-9982-4f1c88711e0d-logs\") pod \"a2972ef2-0543-48bd-9982-4f1c88711e0d\" (UID: \"a2972ef2-0543-48bd-9982-4f1c88711e0d\") " Nov 25 11:56:38 crc kubenswrapper[4706]: I1125 11:56:38.504344 4706 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/a2972ef2-0543-48bd-9982-4f1c88711e0d-horizon-secret-key\") pod \"a2972ef2-0543-48bd-9982-4f1c88711e0d\" (UID: \"a2972ef2-0543-48bd-9982-4f1c88711e0d\") " Nov 25 11:56:38 crc kubenswrapper[4706]: I1125 11:56:38.504369 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x5vmm\" (UniqueName: \"kubernetes.io/projected/a2972ef2-0543-48bd-9982-4f1c88711e0d-kube-api-access-x5vmm\") pod \"a2972ef2-0543-48bd-9982-4f1c88711e0d\" (UID: \"a2972ef2-0543-48bd-9982-4f1c88711e0d\") " Nov 25 11:56:38 crc kubenswrapper[4706]: I1125 11:56:38.504391 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/a2972ef2-0543-48bd-9982-4f1c88711e0d-config-data\") pod \"a2972ef2-0543-48bd-9982-4f1c88711e0d\" (UID: \"a2972ef2-0543-48bd-9982-4f1c88711e0d\") " Nov 25 11:56:38 crc kubenswrapper[4706]: I1125 11:56:38.506365 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a2972ef2-0543-48bd-9982-4f1c88711e0d-logs" (OuterVolumeSpecName: "logs") pod "a2972ef2-0543-48bd-9982-4f1c88711e0d" (UID: "a2972ef2-0543-48bd-9982-4f1c88711e0d"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 11:56:38 crc kubenswrapper[4706]: I1125 11:56:38.508923 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a2972ef2-0543-48bd-9982-4f1c88711e0d-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "a2972ef2-0543-48bd-9982-4f1c88711e0d" (UID: "a2972ef2-0543-48bd-9982-4f1c88711e0d"). InnerVolumeSpecName "horizon-secret-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 11:56:38 crc kubenswrapper[4706]: I1125 11:56:38.516597 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a2972ef2-0543-48bd-9982-4f1c88711e0d-kube-api-access-x5vmm" (OuterVolumeSpecName: "kube-api-access-x5vmm") pod "a2972ef2-0543-48bd-9982-4f1c88711e0d" (UID: "a2972ef2-0543-48bd-9982-4f1c88711e0d"). InnerVolumeSpecName "kube-api-access-x5vmm". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 11:56:38 crc kubenswrapper[4706]: I1125 11:56:38.542569 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a2972ef2-0543-48bd-9982-4f1c88711e0d-config-data" (OuterVolumeSpecName: "config-data") pod "a2972ef2-0543-48bd-9982-4f1c88711e0d" (UID: "a2972ef2-0543-48bd-9982-4f1c88711e0d"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 11:56:38 crc kubenswrapper[4706]: I1125 11:56:38.552081 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a2972ef2-0543-48bd-9982-4f1c88711e0d-scripts" (OuterVolumeSpecName: "scripts") pod "a2972ef2-0543-48bd-9982-4f1c88711e0d" (UID: "a2972ef2-0543-48bd-9982-4f1c88711e0d"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 11:56:38 crc kubenswrapper[4706]: I1125 11:56:38.606094 4706 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a2972ef2-0543-48bd-9982-4f1c88711e0d-scripts\") on node \"crc\" DevicePath \"\"" Nov 25 11:56:38 crc kubenswrapper[4706]: I1125 11:56:38.606126 4706 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a2972ef2-0543-48bd-9982-4f1c88711e0d-logs\") on node \"crc\" DevicePath \"\"" Nov 25 11:56:38 crc kubenswrapper[4706]: I1125 11:56:38.606135 4706 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/a2972ef2-0543-48bd-9982-4f1c88711e0d-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Nov 25 11:56:38 crc kubenswrapper[4706]: I1125 11:56:38.606145 4706 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x5vmm\" (UniqueName: \"kubernetes.io/projected/a2972ef2-0543-48bd-9982-4f1c88711e0d-kube-api-access-x5vmm\") on node \"crc\" DevicePath \"\"" Nov 25 11:56:38 crc kubenswrapper[4706]: I1125 11:56:38.606154 4706 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/a2972ef2-0543-48bd-9982-4f1c88711e0d-config-data\") on node \"crc\" DevicePath \"\"" Nov 25 11:56:38 crc kubenswrapper[4706]: I1125 11:56:38.702507 4706 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-6f66ccf8d9-g7z69"] Nov 25 11:56:38 crc kubenswrapper[4706]: I1125 11:56:38.720221 4706 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-6f66ccf8d9-g7z69"] Nov 25 11:56:38 crc kubenswrapper[4706]: I1125 11:56:38.911773 4706 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-69546b67d6-65q22" Nov 25 11:56:39 crc kubenswrapper[4706]: I1125 11:56:39.447270 4706 generic.go:334] "Generic (PLEG): container finished" 
podID="c785321d-b637-4f3a-9e69-bc237eb1e9c2" containerID="c4a013e0fb3180c3b1cbcb24ceee6c1e232c442bf84a3c119951be9b3e401dad" exitCode=137 Nov 25 11:56:39 crc kubenswrapper[4706]: I1125 11:56:39.447324 4706 generic.go:334] "Generic (PLEG): container finished" podID="c785321d-b637-4f3a-9e69-bc237eb1e9c2" containerID="4817762576b72f1f7ec6a73dfc5771238bc51194d1e5bb978c08087145039f4d" exitCode=137 Nov 25 11:56:39 crc kubenswrapper[4706]: I1125 11:56:39.447370 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6899b4bd6f-vwrfh" event={"ID":"c785321d-b637-4f3a-9e69-bc237eb1e9c2","Type":"ContainerDied","Data":"c4a013e0fb3180c3b1cbcb24ceee6c1e232c442bf84a3c119951be9b3e401dad"} Nov 25 11:56:39 crc kubenswrapper[4706]: I1125 11:56:39.447396 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6899b4bd6f-vwrfh" event={"ID":"c785321d-b637-4f3a-9e69-bc237eb1e9c2","Type":"ContainerDied","Data":"4817762576b72f1f7ec6a73dfc5771238bc51194d1e5bb978c08087145039f4d"} Nov 25 11:56:39 crc kubenswrapper[4706]: I1125 11:56:39.448888 4706 generic.go:334] "Generic (PLEG): container finished" podID="60e3d8af-641e-4c2c-b105-3d1b4b98904f" containerID="9172c3a5a4d92a4d142d21b37162e6f96520ff62c861e838243fbc680cab004a" exitCode=143 Nov 25 11:56:39 crc kubenswrapper[4706]: I1125 11:56:39.448936 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"60e3d8af-641e-4c2c-b105-3d1b4b98904f","Type":"ContainerDied","Data":"9172c3a5a4d92a4d142d21b37162e6f96520ff62c861e838243fbc680cab004a"} Nov 25 11:56:39 crc kubenswrapper[4706]: I1125 11:56:39.451521 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"52550d3a-83c6-44fd-87bd-e14b2b6645d9","Type":"ContainerStarted","Data":"7200f253342a4606b46ecca291ca17699ff36d5cee3f8441314bdde5ef17f081"} Nov 25 11:56:39 crc kubenswrapper[4706]: I1125 11:56:39.870253 4706 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack/neutron-7964f7f8cc-7zjzw"] Nov 25 11:56:39 crc kubenswrapper[4706]: E1125 11:56:39.879103 4706 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a2972ef2-0543-48bd-9982-4f1c88711e0d" containerName="horizon" Nov 25 11:56:39 crc kubenswrapper[4706]: I1125 11:56:39.879143 4706 state_mem.go:107] "Deleted CPUSet assignment" podUID="a2972ef2-0543-48bd-9982-4f1c88711e0d" containerName="horizon" Nov 25 11:56:39 crc kubenswrapper[4706]: E1125 11:56:39.879168 4706 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a2972ef2-0543-48bd-9982-4f1c88711e0d" containerName="horizon-log" Nov 25 11:56:39 crc kubenswrapper[4706]: I1125 11:56:39.879178 4706 state_mem.go:107] "Deleted CPUSet assignment" podUID="a2972ef2-0543-48bd-9982-4f1c88711e0d" containerName="horizon-log" Nov 25 11:56:39 crc kubenswrapper[4706]: I1125 11:56:39.879577 4706 memory_manager.go:354] "RemoveStaleState removing state" podUID="a2972ef2-0543-48bd-9982-4f1c88711e0d" containerName="horizon-log" Nov 25 11:56:39 crc kubenswrapper[4706]: I1125 11:56:39.879622 4706 memory_manager.go:354] "RemoveStaleState removing state" podUID="a2972ef2-0543-48bd-9982-4f1c88711e0d" containerName="horizon" Nov 25 11:56:39 crc kubenswrapper[4706]: I1125 11:56:39.880804 4706 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-7964f7f8cc-7zjzw" Nov 25 11:56:39 crc kubenswrapper[4706]: I1125 11:56:39.883839 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-public-svc" Nov 25 11:56:39 crc kubenswrapper[4706]: I1125 11:56:39.884720 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-internal-svc" Nov 25 11:56:39 crc kubenswrapper[4706]: I1125 11:56:39.895210 4706 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-7964f7f8cc-7zjzw"] Nov 25 11:56:39 crc kubenswrapper[4706]: I1125 11:56:39.934461 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/b108b69d-0dd8-4945-aa38-c2caee99bac1-config\") pod \"neutron-7964f7f8cc-7zjzw\" (UID: \"b108b69d-0dd8-4945-aa38-c2caee99bac1\") " pod="openstack/neutron-7964f7f8cc-7zjzw" Nov 25 11:56:39 crc kubenswrapper[4706]: I1125 11:56:39.934510 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/b108b69d-0dd8-4945-aa38-c2caee99bac1-httpd-config\") pod \"neutron-7964f7f8cc-7zjzw\" (UID: \"b108b69d-0dd8-4945-aa38-c2caee99bac1\") " pod="openstack/neutron-7964f7f8cc-7zjzw" Nov 25 11:56:39 crc kubenswrapper[4706]: I1125 11:56:39.934586 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b108b69d-0dd8-4945-aa38-c2caee99bac1-combined-ca-bundle\") pod \"neutron-7964f7f8cc-7zjzw\" (UID: \"b108b69d-0dd8-4945-aa38-c2caee99bac1\") " pod="openstack/neutron-7964f7f8cc-7zjzw" Nov 25 11:56:39 crc kubenswrapper[4706]: I1125 11:56:39.934663 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/b108b69d-0dd8-4945-aa38-c2caee99bac1-ovndb-tls-certs\") pod \"neutron-7964f7f8cc-7zjzw\" (UID: \"b108b69d-0dd8-4945-aa38-c2caee99bac1\") " pod="openstack/neutron-7964f7f8cc-7zjzw" Nov 25 11:56:39 crc kubenswrapper[4706]: I1125 11:56:39.934688 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pz22n\" (UniqueName: \"kubernetes.io/projected/b108b69d-0dd8-4945-aa38-c2caee99bac1-kube-api-access-pz22n\") pod \"neutron-7964f7f8cc-7zjzw\" (UID: \"b108b69d-0dd8-4945-aa38-c2caee99bac1\") " pod="openstack/neutron-7964f7f8cc-7zjzw" Nov 25 11:56:39 crc kubenswrapper[4706]: I1125 11:56:39.934717 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b108b69d-0dd8-4945-aa38-c2caee99bac1-internal-tls-certs\") pod \"neutron-7964f7f8cc-7zjzw\" (UID: \"b108b69d-0dd8-4945-aa38-c2caee99bac1\") " pod="openstack/neutron-7964f7f8cc-7zjzw" Nov 25 11:56:39 crc kubenswrapper[4706]: I1125 11:56:39.934739 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b108b69d-0dd8-4945-aa38-c2caee99bac1-public-tls-certs\") pod \"neutron-7964f7f8cc-7zjzw\" (UID: \"b108b69d-0dd8-4945-aa38-c2caee99bac1\") " pod="openstack/neutron-7964f7f8cc-7zjzw" Nov 25 11:56:39 crc kubenswrapper[4706]: I1125 11:56:39.935383 4706 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a2972ef2-0543-48bd-9982-4f1c88711e0d" path="/var/lib/kubelet/pods/a2972ef2-0543-48bd-9982-4f1c88711e0d/volumes" Nov 25 11:56:40 crc kubenswrapper[4706]: I1125 11:56:40.036760 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/b108b69d-0dd8-4945-aa38-c2caee99bac1-httpd-config\") pod \"neutron-7964f7f8cc-7zjzw\" (UID: 
\"b108b69d-0dd8-4945-aa38-c2caee99bac1\") " pod="openstack/neutron-7964f7f8cc-7zjzw" Nov 25 11:56:40 crc kubenswrapper[4706]: I1125 11:56:40.036841 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b108b69d-0dd8-4945-aa38-c2caee99bac1-combined-ca-bundle\") pod \"neutron-7964f7f8cc-7zjzw\" (UID: \"b108b69d-0dd8-4945-aa38-c2caee99bac1\") " pod="openstack/neutron-7964f7f8cc-7zjzw" Nov 25 11:56:40 crc kubenswrapper[4706]: I1125 11:56:40.036900 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/b108b69d-0dd8-4945-aa38-c2caee99bac1-ovndb-tls-certs\") pod \"neutron-7964f7f8cc-7zjzw\" (UID: \"b108b69d-0dd8-4945-aa38-c2caee99bac1\") " pod="openstack/neutron-7964f7f8cc-7zjzw" Nov 25 11:56:40 crc kubenswrapper[4706]: I1125 11:56:40.036919 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pz22n\" (UniqueName: \"kubernetes.io/projected/b108b69d-0dd8-4945-aa38-c2caee99bac1-kube-api-access-pz22n\") pod \"neutron-7964f7f8cc-7zjzw\" (UID: \"b108b69d-0dd8-4945-aa38-c2caee99bac1\") " pod="openstack/neutron-7964f7f8cc-7zjzw" Nov 25 11:56:40 crc kubenswrapper[4706]: I1125 11:56:40.036942 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b108b69d-0dd8-4945-aa38-c2caee99bac1-internal-tls-certs\") pod \"neutron-7964f7f8cc-7zjzw\" (UID: \"b108b69d-0dd8-4945-aa38-c2caee99bac1\") " pod="openstack/neutron-7964f7f8cc-7zjzw" Nov 25 11:56:40 crc kubenswrapper[4706]: I1125 11:56:40.036960 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b108b69d-0dd8-4945-aa38-c2caee99bac1-public-tls-certs\") pod \"neutron-7964f7f8cc-7zjzw\" (UID: \"b108b69d-0dd8-4945-aa38-c2caee99bac1\") " 
pod="openstack/neutron-7964f7f8cc-7zjzw" Nov 25 11:56:40 crc kubenswrapper[4706]: I1125 11:56:40.037031 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/b108b69d-0dd8-4945-aa38-c2caee99bac1-config\") pod \"neutron-7964f7f8cc-7zjzw\" (UID: \"b108b69d-0dd8-4945-aa38-c2caee99bac1\") " pod="openstack/neutron-7964f7f8cc-7zjzw" Nov 25 11:56:40 crc kubenswrapper[4706]: I1125 11:56:40.044235 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/b108b69d-0dd8-4945-aa38-c2caee99bac1-config\") pod \"neutron-7964f7f8cc-7zjzw\" (UID: \"b108b69d-0dd8-4945-aa38-c2caee99bac1\") " pod="openstack/neutron-7964f7f8cc-7zjzw" Nov 25 11:56:40 crc kubenswrapper[4706]: I1125 11:56:40.045529 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/b108b69d-0dd8-4945-aa38-c2caee99bac1-ovndb-tls-certs\") pod \"neutron-7964f7f8cc-7zjzw\" (UID: \"b108b69d-0dd8-4945-aa38-c2caee99bac1\") " pod="openstack/neutron-7964f7f8cc-7zjzw" Nov 25 11:56:40 crc kubenswrapper[4706]: I1125 11:56:40.046023 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b108b69d-0dd8-4945-aa38-c2caee99bac1-public-tls-certs\") pod \"neutron-7964f7f8cc-7zjzw\" (UID: \"b108b69d-0dd8-4945-aa38-c2caee99bac1\") " pod="openstack/neutron-7964f7f8cc-7zjzw" Nov 25 11:56:40 crc kubenswrapper[4706]: I1125 11:56:40.049796 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/b108b69d-0dd8-4945-aa38-c2caee99bac1-httpd-config\") pod \"neutron-7964f7f8cc-7zjzw\" (UID: \"b108b69d-0dd8-4945-aa38-c2caee99bac1\") " pod="openstack/neutron-7964f7f8cc-7zjzw" Nov 25 11:56:40 crc kubenswrapper[4706]: I1125 11:56:40.050535 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b108b69d-0dd8-4945-aa38-c2caee99bac1-internal-tls-certs\") pod \"neutron-7964f7f8cc-7zjzw\" (UID: \"b108b69d-0dd8-4945-aa38-c2caee99bac1\") " pod="openstack/neutron-7964f7f8cc-7zjzw" Nov 25 11:56:40 crc kubenswrapper[4706]: I1125 11:56:40.062185 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pz22n\" (UniqueName: \"kubernetes.io/projected/b108b69d-0dd8-4945-aa38-c2caee99bac1-kube-api-access-pz22n\") pod \"neutron-7964f7f8cc-7zjzw\" (UID: \"b108b69d-0dd8-4945-aa38-c2caee99bac1\") " pod="openstack/neutron-7964f7f8cc-7zjzw" Nov 25 11:56:40 crc kubenswrapper[4706]: I1125 11:56:40.065640 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b108b69d-0dd8-4945-aa38-c2caee99bac1-combined-ca-bundle\") pod \"neutron-7964f7f8cc-7zjzw\" (UID: \"b108b69d-0dd8-4945-aa38-c2caee99bac1\") " pod="openstack/neutron-7964f7f8cc-7zjzw" Nov 25 11:56:40 crc kubenswrapper[4706]: I1125 11:56:40.198742 4706 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-7964f7f8cc-7zjzw" Nov 25 11:56:40 crc kubenswrapper[4706]: I1125 11:56:40.636588 4706 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/horizon-85664bf4f6-ws67w" Nov 25 11:56:40 crc kubenswrapper[4706]: I1125 11:56:40.689405 4706 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/horizon-5d6465f55b-zdrth" Nov 25 11:56:40 crc kubenswrapper[4706]: I1125 11:56:40.718917 4706 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-5d6465f55b-zdrth"] Nov 25 11:56:41 crc kubenswrapper[4706]: I1125 11:56:41.077156 4706 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-69546b67d6-65q22" Nov 25 11:56:41 crc kubenswrapper[4706]: I1125 11:56:41.479955 4706 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-5d6465f55b-zdrth" podUID="74b33eb1-0020-4037-918c-9e747dcfd61f" containerName="horizon-log" containerID="cri-o://779cce40cf4cc4947bddf2063a31d045574d3997800d880ef7c40c01c42a4f70" gracePeriod=30 Nov 25 11:56:41 crc kubenswrapper[4706]: I1125 11:56:41.480017 4706 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-5d6465f55b-zdrth" podUID="74b33eb1-0020-4037-918c-9e747dcfd61f" containerName="horizon" containerID="cri-o://5f702a091e203894b9c68bd117079bc8a175269c6b226c33e9f95d472f2849bf" gracePeriod=30 Nov 25 11:56:41 crc kubenswrapper[4706]: I1125 11:56:41.708966 4706 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-85c7db76fd-f64jq" Nov 25 11:56:43 crc kubenswrapper[4706]: I1125 11:56:43.046465 4706 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5c9776ccc5-kv96j" Nov 25 11:56:43 crc kubenswrapper[4706]: I1125 11:56:43.120819 4706 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-785d8bcb8c-vhqcg"] 
Nov 25 11:56:43 crc kubenswrapper[4706]: I1125 11:56:43.121086 4706 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-785d8bcb8c-vhqcg" podUID="3e3d141e-c4bd-479f-998d-a3ecfcf87156" containerName="dnsmasq-dns" containerID="cri-o://c646a9abef8d5cb12444aeaed4a6d33c4f4e34dd5b4a8eee3c936cc5f06db823" gracePeriod=10 Nov 25 11:56:43 crc kubenswrapper[4706]: I1125 11:56:43.910659 4706 scope.go:117] "RemoveContainer" containerID="2a410c3eb2cffad7492f2f267cf609f01c6deadfc79957b4d1eb2f1a688f7768" Nov 25 11:56:44 crc kubenswrapper[4706]: I1125 11:56:44.000653 4706 scope.go:117] "RemoveContainer" containerID="63a95daf4ab5d5a244b24ec8e7154621aad984a12bcb6a3a7d6be1c0e61157e0" Nov 25 11:56:44 crc kubenswrapper[4706]: E1125 11:56:44.019163 4706 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"63a95daf4ab5d5a244b24ec8e7154621aad984a12bcb6a3a7d6be1c0e61157e0\": container with ID starting with 63a95daf4ab5d5a244b24ec8e7154621aad984a12bcb6a3a7d6be1c0e61157e0 not found: ID does not exist" containerID="63a95daf4ab5d5a244b24ec8e7154621aad984a12bcb6a3a7d6be1c0e61157e0" Nov 25 11:56:44 crc kubenswrapper[4706]: I1125 11:56:44.019219 4706 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"63a95daf4ab5d5a244b24ec8e7154621aad984a12bcb6a3a7d6be1c0e61157e0"} err="failed to get container status \"63a95daf4ab5d5a244b24ec8e7154621aad984a12bcb6a3a7d6be1c0e61157e0\": rpc error: code = NotFound desc = could not find container \"63a95daf4ab5d5a244b24ec8e7154621aad984a12bcb6a3a7d6be1c0e61157e0\": container with ID starting with 63a95daf4ab5d5a244b24ec8e7154621aad984a12bcb6a3a7d6be1c0e61157e0 not found: ID does not exist" Nov 25 11:56:44 crc kubenswrapper[4706]: I1125 11:56:44.019250 4706 scope.go:117] "RemoveContainer" containerID="2a410c3eb2cffad7492f2f267cf609f01c6deadfc79957b4d1eb2f1a688f7768" Nov 25 11:56:44 crc kubenswrapper[4706]: 
E1125 11:56:44.028465 4706 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2a410c3eb2cffad7492f2f267cf609f01c6deadfc79957b4d1eb2f1a688f7768\": container with ID starting with 2a410c3eb2cffad7492f2f267cf609f01c6deadfc79957b4d1eb2f1a688f7768 not found: ID does not exist" containerID="2a410c3eb2cffad7492f2f267cf609f01c6deadfc79957b4d1eb2f1a688f7768" Nov 25 11:56:44 crc kubenswrapper[4706]: I1125 11:56:44.028517 4706 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2a410c3eb2cffad7492f2f267cf609f01c6deadfc79957b4d1eb2f1a688f7768"} err="failed to get container status \"2a410c3eb2cffad7492f2f267cf609f01c6deadfc79957b4d1eb2f1a688f7768\": rpc error: code = NotFound desc = could not find container \"2a410c3eb2cffad7492f2f267cf609f01c6deadfc79957b4d1eb2f1a688f7768\": container with ID starting with 2a410c3eb2cffad7492f2f267cf609f01c6deadfc79957b4d1eb2f1a688f7768 not found: ID does not exist" Nov 25 11:56:44 crc kubenswrapper[4706]: I1125 11:56:44.028548 4706 scope.go:117] "RemoveContainer" containerID="63a95daf4ab5d5a244b24ec8e7154621aad984a12bcb6a3a7d6be1c0e61157e0" Nov 25 11:56:44 crc kubenswrapper[4706]: I1125 11:56:44.029932 4706 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"63a95daf4ab5d5a244b24ec8e7154621aad984a12bcb6a3a7d6be1c0e61157e0"} err="failed to get container status \"63a95daf4ab5d5a244b24ec8e7154621aad984a12bcb6a3a7d6be1c0e61157e0\": rpc error: code = NotFound desc = could not find container \"63a95daf4ab5d5a244b24ec8e7154621aad984a12bcb6a3a7d6be1c0e61157e0\": container with ID starting with 63a95daf4ab5d5a244b24ec8e7154621aad984a12bcb6a3a7d6be1c0e61157e0 not found: ID does not exist" Nov 25 11:56:44 crc kubenswrapper[4706]: I1125 11:56:44.030008 4706 scope.go:117] "RemoveContainer" containerID="2a410c3eb2cffad7492f2f267cf609f01c6deadfc79957b4d1eb2f1a688f7768" Nov 25 11:56:44 crc 
kubenswrapper[4706]: I1125 11:56:44.030512 4706 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2a410c3eb2cffad7492f2f267cf609f01c6deadfc79957b4d1eb2f1a688f7768"} err="failed to get container status \"2a410c3eb2cffad7492f2f267cf609f01c6deadfc79957b4d1eb2f1a688f7768\": rpc error: code = NotFound desc = could not find container \"2a410c3eb2cffad7492f2f267cf609f01c6deadfc79957b4d1eb2f1a688f7768\": container with ID starting with 2a410c3eb2cffad7492f2f267cf609f01c6deadfc79957b4d1eb2f1a688f7768 not found: ID does not exist" Nov 25 11:56:44 crc kubenswrapper[4706]: I1125 11:56:44.186630 4706 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-6899b4bd6f-vwrfh" Nov 25 11:56:44 crc kubenswrapper[4706]: I1125 11:56:44.263343 4706 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-785d8bcb8c-vhqcg" Nov 25 11:56:44 crc kubenswrapper[4706]: I1125 11:56:44.312069 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/c785321d-b637-4f3a-9e69-bc237eb1e9c2-horizon-secret-key\") pod \"c785321d-b637-4f3a-9e69-bc237eb1e9c2\" (UID: \"c785321d-b637-4f3a-9e69-bc237eb1e9c2\") " Nov 25 11:56:44 crc kubenswrapper[4706]: I1125 11:56:44.312380 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/c785321d-b637-4f3a-9e69-bc237eb1e9c2-config-data\") pod \"c785321d-b637-4f3a-9e69-bc237eb1e9c2\" (UID: \"c785321d-b637-4f3a-9e69-bc237eb1e9c2\") " Nov 25 11:56:44 crc kubenswrapper[4706]: I1125 11:56:44.312479 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c785321d-b637-4f3a-9e69-bc237eb1e9c2-scripts\") pod \"c785321d-b637-4f3a-9e69-bc237eb1e9c2\" (UID: \"c785321d-b637-4f3a-9e69-bc237eb1e9c2\") " Nov 
25 11:56:44 crc kubenswrapper[4706]: I1125 11:56:44.312656 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bn5w7\" (UniqueName: \"kubernetes.io/projected/c785321d-b637-4f3a-9e69-bc237eb1e9c2-kube-api-access-bn5w7\") pod \"c785321d-b637-4f3a-9e69-bc237eb1e9c2\" (UID: \"c785321d-b637-4f3a-9e69-bc237eb1e9c2\") " Nov 25 11:56:44 crc kubenswrapper[4706]: I1125 11:56:44.312723 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c785321d-b637-4f3a-9e69-bc237eb1e9c2-logs\") pod \"c785321d-b637-4f3a-9e69-bc237eb1e9c2\" (UID: \"c785321d-b637-4f3a-9e69-bc237eb1e9c2\") " Nov 25 11:56:44 crc kubenswrapper[4706]: I1125 11:56:44.314003 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c785321d-b637-4f3a-9e69-bc237eb1e9c2-logs" (OuterVolumeSpecName: "logs") pod "c785321d-b637-4f3a-9e69-bc237eb1e9c2" (UID: "c785321d-b637-4f3a-9e69-bc237eb1e9c2"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 11:56:44 crc kubenswrapper[4706]: I1125 11:56:44.317857 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c785321d-b637-4f3a-9e69-bc237eb1e9c2-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "c785321d-b637-4f3a-9e69-bc237eb1e9c2" (UID: "c785321d-b637-4f3a-9e69-bc237eb1e9c2"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 11:56:44 crc kubenswrapper[4706]: I1125 11:56:44.321927 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c785321d-b637-4f3a-9e69-bc237eb1e9c2-kube-api-access-bn5w7" (OuterVolumeSpecName: "kube-api-access-bn5w7") pod "c785321d-b637-4f3a-9e69-bc237eb1e9c2" (UID: "c785321d-b637-4f3a-9e69-bc237eb1e9c2"). InnerVolumeSpecName "kube-api-access-bn5w7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 11:56:44 crc kubenswrapper[4706]: I1125 11:56:44.353371 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c785321d-b637-4f3a-9e69-bc237eb1e9c2-config-data" (OuterVolumeSpecName: "config-data") pod "c785321d-b637-4f3a-9e69-bc237eb1e9c2" (UID: "c785321d-b637-4f3a-9e69-bc237eb1e9c2"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 11:56:44 crc kubenswrapper[4706]: I1125 11:56:44.370133 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c785321d-b637-4f3a-9e69-bc237eb1e9c2-scripts" (OuterVolumeSpecName: "scripts") pod "c785321d-b637-4f3a-9e69-bc237eb1e9c2" (UID: "c785321d-b637-4f3a-9e69-bc237eb1e9c2"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 11:56:44 crc kubenswrapper[4706]: I1125 11:56:44.414624 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zgwhj\" (UniqueName: \"kubernetes.io/projected/3e3d141e-c4bd-479f-998d-a3ecfcf87156-kube-api-access-zgwhj\") pod \"3e3d141e-c4bd-479f-998d-a3ecfcf87156\" (UID: \"3e3d141e-c4bd-479f-998d-a3ecfcf87156\") " Nov 25 11:56:44 crc kubenswrapper[4706]: I1125 11:56:44.414753 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/3e3d141e-c4bd-479f-998d-a3ecfcf87156-dns-swift-storage-0\") pod \"3e3d141e-c4bd-479f-998d-a3ecfcf87156\" (UID: \"3e3d141e-c4bd-479f-998d-a3ecfcf87156\") " Nov 25 11:56:44 crc kubenswrapper[4706]: I1125 11:56:44.415119 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3e3d141e-c4bd-479f-998d-a3ecfcf87156-ovsdbserver-nb\") pod \"3e3d141e-c4bd-479f-998d-a3ecfcf87156\" (UID: \"3e3d141e-c4bd-479f-998d-a3ecfcf87156\") " Nov 
25 11:56:44 crc kubenswrapper[4706]: I1125 11:56:44.415187 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3e3d141e-c4bd-479f-998d-a3ecfcf87156-dns-svc\") pod \"3e3d141e-c4bd-479f-998d-a3ecfcf87156\" (UID: \"3e3d141e-c4bd-479f-998d-a3ecfcf87156\") " Nov 25 11:56:44 crc kubenswrapper[4706]: I1125 11:56:44.415249 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3e3d141e-c4bd-479f-998d-a3ecfcf87156-ovsdbserver-sb\") pod \"3e3d141e-c4bd-479f-998d-a3ecfcf87156\" (UID: \"3e3d141e-c4bd-479f-998d-a3ecfcf87156\") " Nov 25 11:56:44 crc kubenswrapper[4706]: I1125 11:56:44.415478 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3e3d141e-c4bd-479f-998d-a3ecfcf87156-config\") pod \"3e3d141e-c4bd-479f-998d-a3ecfcf87156\" (UID: \"3e3d141e-c4bd-479f-998d-a3ecfcf87156\") " Nov 25 11:56:44 crc kubenswrapper[4706]: I1125 11:56:44.415952 4706 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bn5w7\" (UniqueName: \"kubernetes.io/projected/c785321d-b637-4f3a-9e69-bc237eb1e9c2-kube-api-access-bn5w7\") on node \"crc\" DevicePath \"\"" Nov 25 11:56:44 crc kubenswrapper[4706]: I1125 11:56:44.415974 4706 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c785321d-b637-4f3a-9e69-bc237eb1e9c2-logs\") on node \"crc\" DevicePath \"\"" Nov 25 11:56:44 crc kubenswrapper[4706]: I1125 11:56:44.415987 4706 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/c785321d-b637-4f3a-9e69-bc237eb1e9c2-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Nov 25 11:56:44 crc kubenswrapper[4706]: I1125 11:56:44.416000 4706 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: 
\"kubernetes.io/configmap/c785321d-b637-4f3a-9e69-bc237eb1e9c2-config-data\") on node \"crc\" DevicePath \"\"" Nov 25 11:56:44 crc kubenswrapper[4706]: I1125 11:56:44.416011 4706 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c785321d-b637-4f3a-9e69-bc237eb1e9c2-scripts\") on node \"crc\" DevicePath \"\"" Nov 25 11:56:44 crc kubenswrapper[4706]: I1125 11:56:44.417603 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3e3d141e-c4bd-479f-998d-a3ecfcf87156-kube-api-access-zgwhj" (OuterVolumeSpecName: "kube-api-access-zgwhj") pod "3e3d141e-c4bd-479f-998d-a3ecfcf87156" (UID: "3e3d141e-c4bd-479f-998d-a3ecfcf87156"). InnerVolumeSpecName "kube-api-access-zgwhj". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 11:56:44 crc kubenswrapper[4706]: I1125 11:56:44.469801 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3e3d141e-c4bd-479f-998d-a3ecfcf87156-config" (OuterVolumeSpecName: "config") pod "3e3d141e-c4bd-479f-998d-a3ecfcf87156" (UID: "3e3d141e-c4bd-479f-998d-a3ecfcf87156"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 11:56:44 crc kubenswrapper[4706]: I1125 11:56:44.470833 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3e3d141e-c4bd-479f-998d-a3ecfcf87156-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "3e3d141e-c4bd-479f-998d-a3ecfcf87156" (UID: "3e3d141e-c4bd-479f-998d-a3ecfcf87156"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 11:56:44 crc kubenswrapper[4706]: I1125 11:56:44.476388 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3e3d141e-c4bd-479f-998d-a3ecfcf87156-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "3e3d141e-c4bd-479f-998d-a3ecfcf87156" (UID: "3e3d141e-c4bd-479f-998d-a3ecfcf87156"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 11:56:44 crc kubenswrapper[4706]: I1125 11:56:44.483992 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3e3d141e-c4bd-479f-998d-a3ecfcf87156-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "3e3d141e-c4bd-479f-998d-a3ecfcf87156" (UID: "3e3d141e-c4bd-479f-998d-a3ecfcf87156"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 11:56:44 crc kubenswrapper[4706]: I1125 11:56:44.492035 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3e3d141e-c4bd-479f-998d-a3ecfcf87156-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "3e3d141e-c4bd-479f-998d-a3ecfcf87156" (UID: "3e3d141e-c4bd-479f-998d-a3ecfcf87156"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 11:56:44 crc kubenswrapper[4706]: I1125 11:56:44.513367 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4edea425-7eb5-458b-8e80-3e04fe787998","Type":"ContainerStarted","Data":"9939373b15134b1719de1987d50545c2cfa39a6d3e179e0ca908a425e5b68532"} Nov 25 11:56:44 crc kubenswrapper[4706]: I1125 11:56:44.517137 4706 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3e3d141e-c4bd-479f-998d-a3ecfcf87156-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 25 11:56:44 crc kubenswrapper[4706]: I1125 11:56:44.517161 4706 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3e3d141e-c4bd-479f-998d-a3ecfcf87156-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 25 11:56:44 crc kubenswrapper[4706]: I1125 11:56:44.517170 4706 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3e3d141e-c4bd-479f-998d-a3ecfcf87156-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 25 11:56:44 crc kubenswrapper[4706]: I1125 11:56:44.517178 4706 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3e3d141e-c4bd-479f-998d-a3ecfcf87156-config\") on node \"crc\" DevicePath \"\"" Nov 25 11:56:44 crc kubenswrapper[4706]: I1125 11:56:44.517188 4706 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zgwhj\" (UniqueName: \"kubernetes.io/projected/3e3d141e-c4bd-479f-998d-a3ecfcf87156-kube-api-access-zgwhj\") on node \"crc\" DevicePath \"\"" Nov 25 11:56:44 crc kubenswrapper[4706]: I1125 11:56:44.517197 4706 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/3e3d141e-c4bd-479f-998d-a3ecfcf87156-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Nov 25 11:56:44 crc kubenswrapper[4706]: 
I1125 11:56:44.521336 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6899b4bd6f-vwrfh" event={"ID":"c785321d-b637-4f3a-9e69-bc237eb1e9c2","Type":"ContainerDied","Data":"5eaa56f42f6412675dc9c60f4529f3d1f87ca00e542a17c07d190c59afc633c3"} Nov 25 11:56:44 crc kubenswrapper[4706]: I1125 11:56:44.521372 4706 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-6899b4bd6f-vwrfh" Nov 25 11:56:44 crc kubenswrapper[4706]: I1125 11:56:44.521381 4706 scope.go:117] "RemoveContainer" containerID="c4a013e0fb3180c3b1cbcb24ceee6c1e232c442bf84a3c119951be9b3e401dad" Nov 25 11:56:44 crc kubenswrapper[4706]: I1125 11:56:44.524716 4706 generic.go:334] "Generic (PLEG): container finished" podID="3e3d141e-c4bd-479f-998d-a3ecfcf87156" containerID="c646a9abef8d5cb12444aeaed4a6d33c4f4e34dd5b4a8eee3c936cc5f06db823" exitCode=0 Nov 25 11:56:44 crc kubenswrapper[4706]: I1125 11:56:44.524762 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-785d8bcb8c-vhqcg" event={"ID":"3e3d141e-c4bd-479f-998d-a3ecfcf87156","Type":"ContainerDied","Data":"c646a9abef8d5cb12444aeaed4a6d33c4f4e34dd5b4a8eee3c936cc5f06db823"} Nov 25 11:56:44 crc kubenswrapper[4706]: I1125 11:56:44.524795 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-785d8bcb8c-vhqcg" event={"ID":"3e3d141e-c4bd-479f-998d-a3ecfcf87156","Type":"ContainerDied","Data":"679ecb1e74993b3f971e280018c9c610d1bf4e1b24eef64f5a75a637d1a9e1aa"} Nov 25 11:56:44 crc kubenswrapper[4706]: I1125 11:56:44.524844 4706 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-785d8bcb8c-vhqcg" Nov 25 11:56:44 crc kubenswrapper[4706]: I1125 11:56:44.568813 4706 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-785d8bcb8c-vhqcg"] Nov 25 11:56:44 crc kubenswrapper[4706]: I1125 11:56:44.576751 4706 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-785d8bcb8c-vhqcg"] Nov 25 11:56:44 crc kubenswrapper[4706]: I1125 11:56:44.588149 4706 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-6899b4bd6f-vwrfh"] Nov 25 11:56:44 crc kubenswrapper[4706]: I1125 11:56:44.602233 4706 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-6899b4bd6f-vwrfh"] Nov 25 11:56:44 crc kubenswrapper[4706]: I1125 11:56:44.650176 4706 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-7964f7f8cc-7zjzw"] Nov 25 11:56:44 crc kubenswrapper[4706]: I1125 11:56:44.682232 4706 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-5d6465f55b-zdrth" podUID="74b33eb1-0020-4037-918c-9e747dcfd61f" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.147:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.147:8443: connect: connection refused" Nov 25 11:56:44 crc kubenswrapper[4706]: I1125 11:56:44.730103 4706 scope.go:117] "RemoveContainer" containerID="4817762576b72f1f7ec6a73dfc5771238bc51194d1e5bb978c08087145039f4d" Nov 25 11:56:44 crc kubenswrapper[4706]: W1125 11:56:44.734616 4706 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb108b69d_0dd8_4945_aa38_c2caee99bac1.slice/crio-f56ae713856a66142c33c049a1d41a4db9a66ff68b5b9bb0d762186eb5839312 WatchSource:0}: Error finding container f56ae713856a66142c33c049a1d41a4db9a66ff68b5b9bb0d762186eb5839312: Status 404 returned error can't find the container with id f56ae713856a66142c33c049a1d41a4db9a66ff68b5b9bb0d762186eb5839312 Nov 25 
11:56:44 crc kubenswrapper[4706]: I1125 11:56:44.988135 4706 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-85c7db76fd-f64jq" Nov 25 11:56:44 crc kubenswrapper[4706]: I1125 11:56:44.994344 4706 scope.go:117] "RemoveContainer" containerID="c646a9abef8d5cb12444aeaed4a6d33c4f4e34dd5b4a8eee3c936cc5f06db823" Nov 25 11:56:45 crc kubenswrapper[4706]: I1125 11:56:45.037580 4706 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-69546b67d6-65q22"] Nov 25 11:56:45 crc kubenswrapper[4706]: I1125 11:56:45.037778 4706 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-69546b67d6-65q22" podUID="4fdb06a5-d894-4b1a-ae3c-34c092b4172f" containerName="barbican-api-log" containerID="cri-o://1b8345c5537388476a73513d1ba19833895f18c5c970fba92ca16f8e77697522" gracePeriod=30 Nov 25 11:56:45 crc kubenswrapper[4706]: I1125 11:56:45.038127 4706 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-69546b67d6-65q22" podUID="4fdb06a5-d894-4b1a-ae3c-34c092b4172f" containerName="barbican-api" containerID="cri-o://208bf2801a5486d50ebfd06ece5a6213f8ea35ba740aa0f51f6b82f0ceae874c" gracePeriod=30 Nov 25 11:56:45 crc kubenswrapper[4706]: I1125 11:56:45.038471 4706 scope.go:117] "RemoveContainer" containerID="913d4321d424e69a6bdcfbd8200e69aa3977bf6954e3a6a96d637ecff3fcf51f" Nov 25 11:56:45 crc kubenswrapper[4706]: I1125 11:56:45.099535 4706 scope.go:117] "RemoveContainer" containerID="c646a9abef8d5cb12444aeaed4a6d33c4f4e34dd5b4a8eee3c936cc5f06db823" Nov 25 11:56:45 crc kubenswrapper[4706]: E1125 11:56:45.100008 4706 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c646a9abef8d5cb12444aeaed4a6d33c4f4e34dd5b4a8eee3c936cc5f06db823\": container with ID starting with c646a9abef8d5cb12444aeaed4a6d33c4f4e34dd5b4a8eee3c936cc5f06db823 not found: ID does not exist" 
containerID="c646a9abef8d5cb12444aeaed4a6d33c4f4e34dd5b4a8eee3c936cc5f06db823" Nov 25 11:56:45 crc kubenswrapper[4706]: I1125 11:56:45.100052 4706 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c646a9abef8d5cb12444aeaed4a6d33c4f4e34dd5b4a8eee3c936cc5f06db823"} err="failed to get container status \"c646a9abef8d5cb12444aeaed4a6d33c4f4e34dd5b4a8eee3c936cc5f06db823\": rpc error: code = NotFound desc = could not find container \"c646a9abef8d5cb12444aeaed4a6d33c4f4e34dd5b4a8eee3c936cc5f06db823\": container with ID starting with c646a9abef8d5cb12444aeaed4a6d33c4f4e34dd5b4a8eee3c936cc5f06db823 not found: ID does not exist" Nov 25 11:56:45 crc kubenswrapper[4706]: I1125 11:56:45.100080 4706 scope.go:117] "RemoveContainer" containerID="913d4321d424e69a6bdcfbd8200e69aa3977bf6954e3a6a96d637ecff3fcf51f" Nov 25 11:56:45 crc kubenswrapper[4706]: E1125 11:56:45.100447 4706 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"913d4321d424e69a6bdcfbd8200e69aa3977bf6954e3a6a96d637ecff3fcf51f\": container with ID starting with 913d4321d424e69a6bdcfbd8200e69aa3977bf6954e3a6a96d637ecff3fcf51f not found: ID does not exist" containerID="913d4321d424e69a6bdcfbd8200e69aa3977bf6954e3a6a96d637ecff3fcf51f" Nov 25 11:56:45 crc kubenswrapper[4706]: I1125 11:56:45.100476 4706 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"913d4321d424e69a6bdcfbd8200e69aa3977bf6954e3a6a96d637ecff3fcf51f"} err="failed to get container status \"913d4321d424e69a6bdcfbd8200e69aa3977bf6954e3a6a96d637ecff3fcf51f\": rpc error: code = NotFound desc = could not find container \"913d4321d424e69a6bdcfbd8200e69aa3977bf6954e3a6a96d637ecff3fcf51f\": container with ID starting with 913d4321d424e69a6bdcfbd8200e69aa3977bf6954e3a6a96d637ecff3fcf51f not found: ID does not exist" Nov 25 11:56:45 crc kubenswrapper[4706]: I1125 11:56:45.376747 4706 kubelet.go:2542] 
"SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cinder-api-0" Nov 25 11:56:45 crc kubenswrapper[4706]: I1125 11:56:45.550265 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7964f7f8cc-7zjzw" event={"ID":"b108b69d-0dd8-4945-aa38-c2caee99bac1","Type":"ContainerStarted","Data":"dea9cd01df4ba8cf8c651f530a511383f553a2cc14b20c48bf6f17f64b596dde"} Nov 25 11:56:45 crc kubenswrapper[4706]: I1125 11:56:45.550335 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7964f7f8cc-7zjzw" event={"ID":"b108b69d-0dd8-4945-aa38-c2caee99bac1","Type":"ContainerStarted","Data":"61d16983e237152d33670a65dd2213457b0dff0fce1b64431c8708f625c38632"} Nov 25 11:56:45 crc kubenswrapper[4706]: I1125 11:56:45.550351 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7964f7f8cc-7zjzw" event={"ID":"b108b69d-0dd8-4945-aa38-c2caee99bac1","Type":"ContainerStarted","Data":"f56ae713856a66142c33c049a1d41a4db9a66ff68b5b9bb0d762186eb5839312"} Nov 25 11:56:45 crc kubenswrapper[4706]: I1125 11:56:45.550399 4706 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-7964f7f8cc-7zjzw" Nov 25 11:56:45 crc kubenswrapper[4706]: I1125 11:56:45.557662 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4edea425-7eb5-458b-8e80-3e04fe787998","Type":"ContainerStarted","Data":"47a97f3bdaec06814548cc758ac0496c95daf22a53f74fb0ef28b454eb733c97"} Nov 25 11:56:45 crc kubenswrapper[4706]: I1125 11:56:45.559212 4706 generic.go:334] "Generic (PLEG): container finished" podID="74b33eb1-0020-4037-918c-9e747dcfd61f" containerID="5f702a091e203894b9c68bd117079bc8a175269c6b226c33e9f95d472f2849bf" exitCode=0 Nov 25 11:56:45 crc kubenswrapper[4706]: I1125 11:56:45.559254 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5d6465f55b-zdrth" 
event={"ID":"74b33eb1-0020-4037-918c-9e747dcfd61f","Type":"ContainerDied","Data":"5f702a091e203894b9c68bd117079bc8a175269c6b226c33e9f95d472f2849bf"} Nov 25 11:56:45 crc kubenswrapper[4706]: I1125 11:56:45.561687 4706 generic.go:334] "Generic (PLEG): container finished" podID="4fdb06a5-d894-4b1a-ae3c-34c092b4172f" containerID="1b8345c5537388476a73513d1ba19833895f18c5c970fba92ca16f8e77697522" exitCode=143 Nov 25 11:56:45 crc kubenswrapper[4706]: I1125 11:56:45.561726 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-69546b67d6-65q22" event={"ID":"4fdb06a5-d894-4b1a-ae3c-34c092b4172f","Type":"ContainerDied","Data":"1b8345c5537388476a73513d1ba19833895f18c5c970fba92ca16f8e77697522"} Nov 25 11:56:45 crc kubenswrapper[4706]: I1125 11:56:45.563206 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"52550d3a-83c6-44fd-87bd-e14b2b6645d9","Type":"ContainerStarted","Data":"c5bd04e6883225ee113b6c78562ccd17a0785d63ef82ee46cd93be8c0817442c"} Nov 25 11:56:45 crc kubenswrapper[4706]: I1125 11:56:45.574728 4706 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-7964f7f8cc-7zjzw" podStartSLOduration=6.574711647 podStartE2EDuration="6.574711647s" podCreationTimestamp="2025-11-25 11:56:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 11:56:45.573073666 +0000 UTC m=+1214.487631047" watchObservedRunningTime="2025-11-25 11:56:45.574711647 +0000 UTC m=+1214.489269028" Nov 25 11:56:45 crc kubenswrapper[4706]: I1125 11:56:45.601155 4706 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=10.125948322 podStartE2EDuration="13.601130742s" podCreationTimestamp="2025-11-25 11:56:32 +0000 UTC" firstStartedPulling="2025-11-25 11:56:33.708071678 +0000 UTC m=+1202.622629059" lastFinishedPulling="2025-11-25 
11:56:37.183254098 +0000 UTC m=+1206.097811479" observedRunningTime="2025-11-25 11:56:45.594085185 +0000 UTC m=+1214.508642566" watchObservedRunningTime="2025-11-25 11:56:45.601130742 +0000 UTC m=+1214.515688113" Nov 25 11:56:45 crc kubenswrapper[4706]: I1125 11:56:45.933135 4706 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3e3d141e-c4bd-479f-998d-a3ecfcf87156" path="/var/lib/kubelet/pods/3e3d141e-c4bd-479f-998d-a3ecfcf87156/volumes" Nov 25 11:56:45 crc kubenswrapper[4706]: I1125 11:56:45.934101 4706 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c785321d-b637-4f3a-9e69-bc237eb1e9c2" path="/var/lib/kubelet/pods/c785321d-b637-4f3a-9e69-bc237eb1e9c2/volumes" Nov 25 11:56:46 crc kubenswrapper[4706]: I1125 11:56:46.574237 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4edea425-7eb5-458b-8e80-3e04fe787998","Type":"ContainerStarted","Data":"a67668dcfd526e20e133d248873ac04478998be08ad46cdeabc6b648780977cf"} Nov 25 11:56:47 crc kubenswrapper[4706]: I1125 11:56:47.590792 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4edea425-7eb5-458b-8e80-3e04fe787998","Type":"ContainerStarted","Data":"da8d8f9f30a6576ef3b63ec5f392f77d70f616bc32d763fc3d425cf0de901590"} Nov 25 11:56:47 crc kubenswrapper[4706]: I1125 11:56:47.591363 4706 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Nov 25 11:56:47 crc kubenswrapper[4706]: I1125 11:56:47.632738 4706 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.3019326319999998 podStartE2EDuration="11.632715791s" podCreationTimestamp="2025-11-25 11:56:36 +0000 UTC" firstStartedPulling="2025-11-25 11:56:37.777405841 +0000 UTC m=+1206.691963212" lastFinishedPulling="2025-11-25 11:56:47.10818899 +0000 UTC m=+1216.022746371" observedRunningTime="2025-11-25 11:56:47.627680554 +0000 UTC 
m=+1216.542237935" watchObservedRunningTime="2025-11-25 11:56:47.632715791 +0000 UTC m=+1216.547273172" Nov 25 11:56:47 crc kubenswrapper[4706]: I1125 11:56:47.834695 4706 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Nov 25 11:56:48 crc kubenswrapper[4706]: I1125 11:56:48.063702 4706 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Nov 25 11:56:48 crc kubenswrapper[4706]: I1125 11:56:48.601388 4706 generic.go:334] "Generic (PLEG): container finished" podID="4fdb06a5-d894-4b1a-ae3c-34c092b4172f" containerID="208bf2801a5486d50ebfd06ece5a6213f8ea35ba740aa0f51f6b82f0ceae874c" exitCode=0 Nov 25 11:56:48 crc kubenswrapper[4706]: I1125 11:56:48.602385 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-69546b67d6-65q22" event={"ID":"4fdb06a5-d894-4b1a-ae3c-34c092b4172f","Type":"ContainerDied","Data":"208bf2801a5486d50ebfd06ece5a6213f8ea35ba740aa0f51f6b82f0ceae874c"} Nov 25 11:56:48 crc kubenswrapper[4706]: I1125 11:56:48.602435 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-69546b67d6-65q22" event={"ID":"4fdb06a5-d894-4b1a-ae3c-34c092b4172f","Type":"ContainerDied","Data":"a7d352f8c4acb86b308a76af9b28adf5569dc024b09a5943335f004141888d8e"} Nov 25 11:56:48 crc kubenswrapper[4706]: I1125 11:56:48.602451 4706 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a7d352f8c4acb86b308a76af9b28adf5569dc024b09a5943335f004141888d8e" Nov 25 11:56:48 crc kubenswrapper[4706]: I1125 11:56:48.607223 4706 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-69546b67d6-65q22" Nov 25 11:56:48 crc kubenswrapper[4706]: I1125 11:56:48.638774 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4fdb06a5-d894-4b1a-ae3c-34c092b4172f-config-data-custom\") pod \"4fdb06a5-d894-4b1a-ae3c-34c092b4172f\" (UID: \"4fdb06a5-d894-4b1a-ae3c-34c092b4172f\") " Nov 25 11:56:48 crc kubenswrapper[4706]: I1125 11:56:48.638822 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4fdb06a5-d894-4b1a-ae3c-34c092b4172f-logs\") pod \"4fdb06a5-d894-4b1a-ae3c-34c092b4172f\" (UID: \"4fdb06a5-d894-4b1a-ae3c-34c092b4172f\") " Nov 25 11:56:48 crc kubenswrapper[4706]: I1125 11:56:48.638855 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4fdb06a5-d894-4b1a-ae3c-34c092b4172f-config-data\") pod \"4fdb06a5-d894-4b1a-ae3c-34c092b4172f\" (UID: \"4fdb06a5-d894-4b1a-ae3c-34c092b4172f\") " Nov 25 11:56:48 crc kubenswrapper[4706]: I1125 11:56:48.638879 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rn69v\" (UniqueName: \"kubernetes.io/projected/4fdb06a5-d894-4b1a-ae3c-34c092b4172f-kube-api-access-rn69v\") pod \"4fdb06a5-d894-4b1a-ae3c-34c092b4172f\" (UID: \"4fdb06a5-d894-4b1a-ae3c-34c092b4172f\") " Nov 25 11:56:48 crc kubenswrapper[4706]: I1125 11:56:48.638915 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4fdb06a5-d894-4b1a-ae3c-34c092b4172f-combined-ca-bundle\") pod \"4fdb06a5-d894-4b1a-ae3c-34c092b4172f\" (UID: \"4fdb06a5-d894-4b1a-ae3c-34c092b4172f\") " Nov 25 11:56:48 crc kubenswrapper[4706]: I1125 11:56:48.645328 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/4fdb06a5-d894-4b1a-ae3c-34c092b4172f-logs" (OuterVolumeSpecName: "logs") pod "4fdb06a5-d894-4b1a-ae3c-34c092b4172f" (UID: "4fdb06a5-d894-4b1a-ae3c-34c092b4172f"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 11:56:48 crc kubenswrapper[4706]: I1125 11:56:48.647216 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4fdb06a5-d894-4b1a-ae3c-34c092b4172f-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "4fdb06a5-d894-4b1a-ae3c-34c092b4172f" (UID: "4fdb06a5-d894-4b1a-ae3c-34c092b4172f"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 11:56:48 crc kubenswrapper[4706]: I1125 11:56:48.655154 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4fdb06a5-d894-4b1a-ae3c-34c092b4172f-kube-api-access-rn69v" (OuterVolumeSpecName: "kube-api-access-rn69v") pod "4fdb06a5-d894-4b1a-ae3c-34c092b4172f" (UID: "4fdb06a5-d894-4b1a-ae3c-34c092b4172f"). InnerVolumeSpecName "kube-api-access-rn69v". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 11:56:48 crc kubenswrapper[4706]: I1125 11:56:48.662334 4706 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Nov 25 11:56:48 crc kubenswrapper[4706]: I1125 11:56:48.688817 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4fdb06a5-d894-4b1a-ae3c-34c092b4172f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4fdb06a5-d894-4b1a-ae3c-34c092b4172f" (UID: "4fdb06a5-d894-4b1a-ae3c-34c092b4172f"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 11:56:48 crc kubenswrapper[4706]: I1125 11:56:48.708632 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4fdb06a5-d894-4b1a-ae3c-34c092b4172f-config-data" (OuterVolumeSpecName: "config-data") pod "4fdb06a5-d894-4b1a-ae3c-34c092b4172f" (UID: "4fdb06a5-d894-4b1a-ae3c-34c092b4172f"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 11:56:48 crc kubenswrapper[4706]: I1125 11:56:48.740166 4706 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4fdb06a5-d894-4b1a-ae3c-34c092b4172f-config-data-custom\") on node \"crc\" DevicePath \"\"" Nov 25 11:56:48 crc kubenswrapper[4706]: I1125 11:56:48.740201 4706 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4fdb06a5-d894-4b1a-ae3c-34c092b4172f-logs\") on node \"crc\" DevicePath \"\"" Nov 25 11:56:48 crc kubenswrapper[4706]: I1125 11:56:48.740211 4706 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4fdb06a5-d894-4b1a-ae3c-34c092b4172f-config-data\") on node \"crc\" DevicePath \"\"" Nov 25 11:56:48 crc kubenswrapper[4706]: I1125 11:56:48.740220 4706 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rn69v\" (UniqueName: \"kubernetes.io/projected/4fdb06a5-d894-4b1a-ae3c-34c092b4172f-kube-api-access-rn69v\") on node \"crc\" DevicePath \"\"" Nov 25 11:56:48 crc kubenswrapper[4706]: I1125 11:56:48.740235 4706 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4fdb06a5-d894-4b1a-ae3c-34c092b4172f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 11:56:49 crc kubenswrapper[4706]: I1125 11:56:49.610019 4706 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-69546b67d6-65q22" Nov 25 11:56:49 crc kubenswrapper[4706]: I1125 11:56:49.610166 4706 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="52550d3a-83c6-44fd-87bd-e14b2b6645d9" containerName="cinder-scheduler" containerID="cri-o://7200f253342a4606b46ecca291ca17699ff36d5cee3f8441314bdde5ef17f081" gracePeriod=30 Nov 25 11:56:49 crc kubenswrapper[4706]: I1125 11:56:49.610331 4706 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="52550d3a-83c6-44fd-87bd-e14b2b6645d9" containerName="probe" containerID="cri-o://c5bd04e6883225ee113b6c78562ccd17a0785d63ef82ee46cd93be8c0817442c" gracePeriod=30 Nov 25 11:56:49 crc kubenswrapper[4706]: I1125 11:56:49.667938 4706 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-69546b67d6-65q22"] Nov 25 11:56:49 crc kubenswrapper[4706]: I1125 11:56:49.677033 4706 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-api-69546b67d6-65q22"] Nov 25 11:56:49 crc kubenswrapper[4706]: I1125 11:56:49.950945 4706 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4fdb06a5-d894-4b1a-ae3c-34c092b4172f" path="/var/lib/kubelet/pods/4fdb06a5-d894-4b1a-ae3c-34c092b4172f/volumes" Nov 25 11:56:50 crc kubenswrapper[4706]: I1125 11:56:50.619183 4706 generic.go:334] "Generic (PLEG): container finished" podID="52550d3a-83c6-44fd-87bd-e14b2b6645d9" containerID="c5bd04e6883225ee113b6c78562ccd17a0785d63ef82ee46cd93be8c0817442c" exitCode=0 Nov 25 11:56:50 crc kubenswrapper[4706]: I1125 11:56:50.619449 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"52550d3a-83c6-44fd-87bd-e14b2b6645d9","Type":"ContainerDied","Data":"c5bd04e6883225ee113b6c78562ccd17a0785d63ef82ee46cd93be8c0817442c"} Nov 25 11:56:51 crc kubenswrapper[4706]: I1125 11:56:51.097752 4706 util.go:48] "No ready 
sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Nov 25 11:56:51 crc kubenswrapper[4706]: I1125 11:56:51.289933 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zv6rp\" (UniqueName: \"kubernetes.io/projected/52550d3a-83c6-44fd-87bd-e14b2b6645d9-kube-api-access-zv6rp\") pod \"52550d3a-83c6-44fd-87bd-e14b2b6645d9\" (UID: \"52550d3a-83c6-44fd-87bd-e14b2b6645d9\") " Nov 25 11:56:51 crc kubenswrapper[4706]: I1125 11:56:51.290033 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/52550d3a-83c6-44fd-87bd-e14b2b6645d9-config-data\") pod \"52550d3a-83c6-44fd-87bd-e14b2b6645d9\" (UID: \"52550d3a-83c6-44fd-87bd-e14b2b6645d9\") " Nov 25 11:56:51 crc kubenswrapper[4706]: I1125 11:56:51.290114 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/52550d3a-83c6-44fd-87bd-e14b2b6645d9-combined-ca-bundle\") pod \"52550d3a-83c6-44fd-87bd-e14b2b6645d9\" (UID: \"52550d3a-83c6-44fd-87bd-e14b2b6645d9\") " Nov 25 11:56:51 crc kubenswrapper[4706]: I1125 11:56:51.290191 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/52550d3a-83c6-44fd-87bd-e14b2b6645d9-config-data-custom\") pod \"52550d3a-83c6-44fd-87bd-e14b2b6645d9\" (UID: \"52550d3a-83c6-44fd-87bd-e14b2b6645d9\") " Nov 25 11:56:51 crc kubenswrapper[4706]: I1125 11:56:51.290217 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/52550d3a-83c6-44fd-87bd-e14b2b6645d9-scripts\") pod \"52550d3a-83c6-44fd-87bd-e14b2b6645d9\" (UID: \"52550d3a-83c6-44fd-87bd-e14b2b6645d9\") " Nov 25 11:56:51 crc kubenswrapper[4706]: I1125 11:56:51.290255 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/52550d3a-83c6-44fd-87bd-e14b2b6645d9-etc-machine-id\") pod \"52550d3a-83c6-44fd-87bd-e14b2b6645d9\" (UID: \"52550d3a-83c6-44fd-87bd-e14b2b6645d9\") " Nov 25 11:56:51 crc kubenswrapper[4706]: I1125 11:56:51.290578 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/52550d3a-83c6-44fd-87bd-e14b2b6645d9-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "52550d3a-83c6-44fd-87bd-e14b2b6645d9" (UID: "52550d3a-83c6-44fd-87bd-e14b2b6645d9"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 25 11:56:51 crc kubenswrapper[4706]: I1125 11:56:51.290834 4706 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/52550d3a-83c6-44fd-87bd-e14b2b6645d9-etc-machine-id\") on node \"crc\" DevicePath \"\"" Nov 25 11:56:51 crc kubenswrapper[4706]: I1125 11:56:51.299542 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/52550d3a-83c6-44fd-87bd-e14b2b6645d9-kube-api-access-zv6rp" (OuterVolumeSpecName: "kube-api-access-zv6rp") pod "52550d3a-83c6-44fd-87bd-e14b2b6645d9" (UID: "52550d3a-83c6-44fd-87bd-e14b2b6645d9"). InnerVolumeSpecName "kube-api-access-zv6rp". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 11:56:51 crc kubenswrapper[4706]: I1125 11:56:51.306472 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/52550d3a-83c6-44fd-87bd-e14b2b6645d9-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "52550d3a-83c6-44fd-87bd-e14b2b6645d9" (UID: "52550d3a-83c6-44fd-87bd-e14b2b6645d9"). InnerVolumeSpecName "config-data-custom". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 11:56:51 crc kubenswrapper[4706]: I1125 11:56:51.309466 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/52550d3a-83c6-44fd-87bd-e14b2b6645d9-scripts" (OuterVolumeSpecName: "scripts") pod "52550d3a-83c6-44fd-87bd-e14b2b6645d9" (UID: "52550d3a-83c6-44fd-87bd-e14b2b6645d9"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 11:56:51 crc kubenswrapper[4706]: I1125 11:56:51.354469 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/52550d3a-83c6-44fd-87bd-e14b2b6645d9-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "52550d3a-83c6-44fd-87bd-e14b2b6645d9" (UID: "52550d3a-83c6-44fd-87bd-e14b2b6645d9"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 11:56:51 crc kubenswrapper[4706]: I1125 11:56:51.392844 4706 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/52550d3a-83c6-44fd-87bd-e14b2b6645d9-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 11:56:51 crc kubenswrapper[4706]: I1125 11:56:51.392883 4706 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/52550d3a-83c6-44fd-87bd-e14b2b6645d9-config-data-custom\") on node \"crc\" DevicePath \"\"" Nov 25 11:56:51 crc kubenswrapper[4706]: I1125 11:56:51.392897 4706 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/52550d3a-83c6-44fd-87bd-e14b2b6645d9-scripts\") on node \"crc\" DevicePath \"\"" Nov 25 11:56:51 crc kubenswrapper[4706]: I1125 11:56:51.392909 4706 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zv6rp\" (UniqueName: \"kubernetes.io/projected/52550d3a-83c6-44fd-87bd-e14b2b6645d9-kube-api-access-zv6rp\") on node \"crc\" DevicePath \"\"" Nov 
25 11:56:51 crc kubenswrapper[4706]: I1125 11:56:51.461626 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/52550d3a-83c6-44fd-87bd-e14b2b6645d9-config-data" (OuterVolumeSpecName: "config-data") pod "52550d3a-83c6-44fd-87bd-e14b2b6645d9" (UID: "52550d3a-83c6-44fd-87bd-e14b2b6645d9"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 11:56:51 crc kubenswrapper[4706]: I1125 11:56:51.494748 4706 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/52550d3a-83c6-44fd-87bd-e14b2b6645d9-config-data\") on node \"crc\" DevicePath \"\"" Nov 25 11:56:51 crc kubenswrapper[4706]: I1125 11:56:51.632577 4706 generic.go:334] "Generic (PLEG): container finished" podID="52550d3a-83c6-44fd-87bd-e14b2b6645d9" containerID="7200f253342a4606b46ecca291ca17699ff36d5cee3f8441314bdde5ef17f081" exitCode=0 Nov 25 11:56:51 crc kubenswrapper[4706]: I1125 11:56:51.632625 4706 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Nov 25 11:56:51 crc kubenswrapper[4706]: I1125 11:56:51.632634 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"52550d3a-83c6-44fd-87bd-e14b2b6645d9","Type":"ContainerDied","Data":"7200f253342a4606b46ecca291ca17699ff36d5cee3f8441314bdde5ef17f081"} Nov 25 11:56:51 crc kubenswrapper[4706]: I1125 11:56:51.632698 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"52550d3a-83c6-44fd-87bd-e14b2b6645d9","Type":"ContainerDied","Data":"2aed63d04f12b4bf0a76fd1dc15d3806b0be471aade220e51ac4ae25615b4d26"} Nov 25 11:56:51 crc kubenswrapper[4706]: I1125 11:56:51.632744 4706 scope.go:117] "RemoveContainer" containerID="c5bd04e6883225ee113b6c78562ccd17a0785d63ef82ee46cd93be8c0817442c" Nov 25 11:56:51 crc kubenswrapper[4706]: I1125 11:56:51.669092 4706 scope.go:117] "RemoveContainer" containerID="7200f253342a4606b46ecca291ca17699ff36d5cee3f8441314bdde5ef17f081" Nov 25 11:56:51 crc kubenswrapper[4706]: I1125 11:56:51.676329 4706 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Nov 25 11:56:51 crc kubenswrapper[4706]: I1125 11:56:51.688738 4706 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-scheduler-0"] Nov 25 11:56:51 crc kubenswrapper[4706]: I1125 11:56:51.693523 4706 scope.go:117] "RemoveContainer" containerID="c5bd04e6883225ee113b6c78562ccd17a0785d63ef82ee46cd93be8c0817442c" Nov 25 11:56:51 crc kubenswrapper[4706]: E1125 11:56:51.694971 4706 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c5bd04e6883225ee113b6c78562ccd17a0785d63ef82ee46cd93be8c0817442c\": container with ID starting with c5bd04e6883225ee113b6c78562ccd17a0785d63ef82ee46cd93be8c0817442c not found: ID does not exist" containerID="c5bd04e6883225ee113b6c78562ccd17a0785d63ef82ee46cd93be8c0817442c" Nov 25 11:56:51 crc 
kubenswrapper[4706]: I1125 11:56:51.695023 4706 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c5bd04e6883225ee113b6c78562ccd17a0785d63ef82ee46cd93be8c0817442c"} err="failed to get container status \"c5bd04e6883225ee113b6c78562ccd17a0785d63ef82ee46cd93be8c0817442c\": rpc error: code = NotFound desc = could not find container \"c5bd04e6883225ee113b6c78562ccd17a0785d63ef82ee46cd93be8c0817442c\": container with ID starting with c5bd04e6883225ee113b6c78562ccd17a0785d63ef82ee46cd93be8c0817442c not found: ID does not exist" Nov 25 11:56:51 crc kubenswrapper[4706]: I1125 11:56:51.695061 4706 scope.go:117] "RemoveContainer" containerID="7200f253342a4606b46ecca291ca17699ff36d5cee3f8441314bdde5ef17f081" Nov 25 11:56:51 crc kubenswrapper[4706]: E1125 11:56:51.695414 4706 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7200f253342a4606b46ecca291ca17699ff36d5cee3f8441314bdde5ef17f081\": container with ID starting with 7200f253342a4606b46ecca291ca17699ff36d5cee3f8441314bdde5ef17f081 not found: ID does not exist" containerID="7200f253342a4606b46ecca291ca17699ff36d5cee3f8441314bdde5ef17f081" Nov 25 11:56:51 crc kubenswrapper[4706]: I1125 11:56:51.695443 4706 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7200f253342a4606b46ecca291ca17699ff36d5cee3f8441314bdde5ef17f081"} err="failed to get container status \"7200f253342a4606b46ecca291ca17699ff36d5cee3f8441314bdde5ef17f081\": rpc error: code = NotFound desc = could not find container \"7200f253342a4606b46ecca291ca17699ff36d5cee3f8441314bdde5ef17f081\": container with ID starting with 7200f253342a4606b46ecca291ca17699ff36d5cee3f8441314bdde5ef17f081 not found: ID does not exist" Nov 25 11:56:51 crc kubenswrapper[4706]: I1125 11:56:51.708357 4706 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Nov 25 11:56:51 crc 
kubenswrapper[4706]: E1125 11:56:51.708821 4706 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3e3d141e-c4bd-479f-998d-a3ecfcf87156" containerName="dnsmasq-dns" Nov 25 11:56:51 crc kubenswrapper[4706]: I1125 11:56:51.708845 4706 state_mem.go:107] "Deleted CPUSet assignment" podUID="3e3d141e-c4bd-479f-998d-a3ecfcf87156" containerName="dnsmasq-dns" Nov 25 11:56:51 crc kubenswrapper[4706]: E1125 11:56:51.708860 4706 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4fdb06a5-d894-4b1a-ae3c-34c092b4172f" containerName="barbican-api" Nov 25 11:56:51 crc kubenswrapper[4706]: I1125 11:56:51.708868 4706 state_mem.go:107] "Deleted CPUSet assignment" podUID="4fdb06a5-d894-4b1a-ae3c-34c092b4172f" containerName="barbican-api" Nov 25 11:56:51 crc kubenswrapper[4706]: E1125 11:56:51.708885 4706 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c785321d-b637-4f3a-9e69-bc237eb1e9c2" containerName="horizon-log" Nov 25 11:56:51 crc kubenswrapper[4706]: I1125 11:56:51.708894 4706 state_mem.go:107] "Deleted CPUSet assignment" podUID="c785321d-b637-4f3a-9e69-bc237eb1e9c2" containerName="horizon-log" Nov 25 11:56:51 crc kubenswrapper[4706]: E1125 11:56:51.708914 4706 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="52550d3a-83c6-44fd-87bd-e14b2b6645d9" containerName="cinder-scheduler" Nov 25 11:56:51 crc kubenswrapper[4706]: I1125 11:56:51.708924 4706 state_mem.go:107] "Deleted CPUSet assignment" podUID="52550d3a-83c6-44fd-87bd-e14b2b6645d9" containerName="cinder-scheduler" Nov 25 11:56:51 crc kubenswrapper[4706]: E1125 11:56:51.708953 4706 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4fdb06a5-d894-4b1a-ae3c-34c092b4172f" containerName="barbican-api-log" Nov 25 11:56:51 crc kubenswrapper[4706]: I1125 11:56:51.708963 4706 state_mem.go:107] "Deleted CPUSet assignment" podUID="4fdb06a5-d894-4b1a-ae3c-34c092b4172f" containerName="barbican-api-log" Nov 25 11:56:51 crc kubenswrapper[4706]: E1125 
11:56:51.708980 4706 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="52550d3a-83c6-44fd-87bd-e14b2b6645d9" containerName="probe" Nov 25 11:56:51 crc kubenswrapper[4706]: I1125 11:56:51.708987 4706 state_mem.go:107] "Deleted CPUSet assignment" podUID="52550d3a-83c6-44fd-87bd-e14b2b6645d9" containerName="probe" Nov 25 11:56:51 crc kubenswrapper[4706]: E1125 11:56:51.708996 4706 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3e3d141e-c4bd-479f-998d-a3ecfcf87156" containerName="init" Nov 25 11:56:51 crc kubenswrapper[4706]: I1125 11:56:51.709003 4706 state_mem.go:107] "Deleted CPUSet assignment" podUID="3e3d141e-c4bd-479f-998d-a3ecfcf87156" containerName="init" Nov 25 11:56:51 crc kubenswrapper[4706]: E1125 11:56:51.709014 4706 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c785321d-b637-4f3a-9e69-bc237eb1e9c2" containerName="horizon" Nov 25 11:56:51 crc kubenswrapper[4706]: I1125 11:56:51.709022 4706 state_mem.go:107] "Deleted CPUSet assignment" podUID="c785321d-b637-4f3a-9e69-bc237eb1e9c2" containerName="horizon" Nov 25 11:56:51 crc kubenswrapper[4706]: I1125 11:56:51.709268 4706 memory_manager.go:354] "RemoveStaleState removing state" podUID="c785321d-b637-4f3a-9e69-bc237eb1e9c2" containerName="horizon" Nov 25 11:56:51 crc kubenswrapper[4706]: I1125 11:56:51.709286 4706 memory_manager.go:354] "RemoveStaleState removing state" podUID="4fdb06a5-d894-4b1a-ae3c-34c092b4172f" containerName="barbican-api" Nov 25 11:56:51 crc kubenswrapper[4706]: I1125 11:56:51.709377 4706 memory_manager.go:354] "RemoveStaleState removing state" podUID="3e3d141e-c4bd-479f-998d-a3ecfcf87156" containerName="dnsmasq-dns" Nov 25 11:56:51 crc kubenswrapper[4706]: I1125 11:56:51.709396 4706 memory_manager.go:354] "RemoveStaleState removing state" podUID="52550d3a-83c6-44fd-87bd-e14b2b6645d9" containerName="cinder-scheduler" Nov 25 11:56:51 crc kubenswrapper[4706]: I1125 11:56:51.709407 4706 memory_manager.go:354] "RemoveStaleState 
removing state" podUID="52550d3a-83c6-44fd-87bd-e14b2b6645d9" containerName="probe" Nov 25 11:56:51 crc kubenswrapper[4706]: I1125 11:56:51.709422 4706 memory_manager.go:354] "RemoveStaleState removing state" podUID="4fdb06a5-d894-4b1a-ae3c-34c092b4172f" containerName="barbican-api-log" Nov 25 11:56:51 crc kubenswrapper[4706]: I1125 11:56:51.709438 4706 memory_manager.go:354] "RemoveStaleState removing state" podUID="c785321d-b637-4f3a-9e69-bc237eb1e9c2" containerName="horizon-log" Nov 25 11:56:51 crc kubenswrapper[4706]: I1125 11:56:51.710627 4706 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Nov 25 11:56:51 crc kubenswrapper[4706]: I1125 11:56:51.712678 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Nov 25 11:56:51 crc kubenswrapper[4706]: I1125 11:56:51.721910 4706 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Nov 25 11:56:51 crc kubenswrapper[4706]: I1125 11:56:51.901567 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dnhd6\" (UniqueName: \"kubernetes.io/projected/f4dd78e0-575d-4188-b6f5-17ab8a12383c-kube-api-access-dnhd6\") pod \"cinder-scheduler-0\" (UID: \"f4dd78e0-575d-4188-b6f5-17ab8a12383c\") " pod="openstack/cinder-scheduler-0" Nov 25 11:56:51 crc kubenswrapper[4706]: I1125 11:56:51.901705 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f4dd78e0-575d-4188-b6f5-17ab8a12383c-config-data\") pod \"cinder-scheduler-0\" (UID: \"f4dd78e0-575d-4188-b6f5-17ab8a12383c\") " pod="openstack/cinder-scheduler-0" Nov 25 11:56:51 crc kubenswrapper[4706]: I1125 11:56:51.901744 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/f4dd78e0-575d-4188-b6f5-17ab8a12383c-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"f4dd78e0-575d-4188-b6f5-17ab8a12383c\") " pod="openstack/cinder-scheduler-0" Nov 25 11:56:51 crc kubenswrapper[4706]: I1125 11:56:51.901816 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f4dd78e0-575d-4188-b6f5-17ab8a12383c-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"f4dd78e0-575d-4188-b6f5-17ab8a12383c\") " pod="openstack/cinder-scheduler-0" Nov 25 11:56:51 crc kubenswrapper[4706]: I1125 11:56:51.901846 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/f4dd78e0-575d-4188-b6f5-17ab8a12383c-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"f4dd78e0-575d-4188-b6f5-17ab8a12383c\") " pod="openstack/cinder-scheduler-0" Nov 25 11:56:51 crc kubenswrapper[4706]: I1125 11:56:51.901900 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f4dd78e0-575d-4188-b6f5-17ab8a12383c-scripts\") pod \"cinder-scheduler-0\" (UID: \"f4dd78e0-575d-4188-b6f5-17ab8a12383c\") " pod="openstack/cinder-scheduler-0" Nov 25 11:56:51 crc kubenswrapper[4706]: I1125 11:56:51.935326 4706 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="52550d3a-83c6-44fd-87bd-e14b2b6645d9" path="/var/lib/kubelet/pods/52550d3a-83c6-44fd-87bd-e14b2b6645d9/volumes" Nov 25 11:56:52 crc kubenswrapper[4706]: I1125 11:56:52.003142 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f4dd78e0-575d-4188-b6f5-17ab8a12383c-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"f4dd78e0-575d-4188-b6f5-17ab8a12383c\") " pod="openstack/cinder-scheduler-0" Nov 25 11:56:52 crc 
kubenswrapper[4706]: I1125 11:56:52.003189 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/f4dd78e0-575d-4188-b6f5-17ab8a12383c-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"f4dd78e0-575d-4188-b6f5-17ab8a12383c\") " pod="openstack/cinder-scheduler-0" Nov 25 11:56:52 crc kubenswrapper[4706]: I1125 11:56:52.003240 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f4dd78e0-575d-4188-b6f5-17ab8a12383c-scripts\") pod \"cinder-scheduler-0\" (UID: \"f4dd78e0-575d-4188-b6f5-17ab8a12383c\") " pod="openstack/cinder-scheduler-0" Nov 25 11:56:52 crc kubenswrapper[4706]: I1125 11:56:52.003277 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dnhd6\" (UniqueName: \"kubernetes.io/projected/f4dd78e0-575d-4188-b6f5-17ab8a12383c-kube-api-access-dnhd6\") pod \"cinder-scheduler-0\" (UID: \"f4dd78e0-575d-4188-b6f5-17ab8a12383c\") " pod="openstack/cinder-scheduler-0" Nov 25 11:56:52 crc kubenswrapper[4706]: I1125 11:56:52.003382 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f4dd78e0-575d-4188-b6f5-17ab8a12383c-config-data\") pod \"cinder-scheduler-0\" (UID: \"f4dd78e0-575d-4188-b6f5-17ab8a12383c\") " pod="openstack/cinder-scheduler-0" Nov 25 11:56:52 crc kubenswrapper[4706]: I1125 11:56:52.003412 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f4dd78e0-575d-4188-b6f5-17ab8a12383c-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"f4dd78e0-575d-4188-b6f5-17ab8a12383c\") " pod="openstack/cinder-scheduler-0" Nov 25 11:56:52 crc kubenswrapper[4706]: I1125 11:56:52.003368 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: 
\"kubernetes.io/host-path/f4dd78e0-575d-4188-b6f5-17ab8a12383c-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"f4dd78e0-575d-4188-b6f5-17ab8a12383c\") " pod="openstack/cinder-scheduler-0" Nov 25 11:56:52 crc kubenswrapper[4706]: I1125 11:56:52.008108 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f4dd78e0-575d-4188-b6f5-17ab8a12383c-scripts\") pod \"cinder-scheduler-0\" (UID: \"f4dd78e0-575d-4188-b6f5-17ab8a12383c\") " pod="openstack/cinder-scheduler-0" Nov 25 11:56:52 crc kubenswrapper[4706]: I1125 11:56:52.012033 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f4dd78e0-575d-4188-b6f5-17ab8a12383c-config-data\") pod \"cinder-scheduler-0\" (UID: \"f4dd78e0-575d-4188-b6f5-17ab8a12383c\") " pod="openstack/cinder-scheduler-0" Nov 25 11:56:52 crc kubenswrapper[4706]: I1125 11:56:52.014826 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f4dd78e0-575d-4188-b6f5-17ab8a12383c-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"f4dd78e0-575d-4188-b6f5-17ab8a12383c\") " pod="openstack/cinder-scheduler-0" Nov 25 11:56:52 crc kubenswrapper[4706]: I1125 11:56:52.016847 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f4dd78e0-575d-4188-b6f5-17ab8a12383c-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"f4dd78e0-575d-4188-b6f5-17ab8a12383c\") " pod="openstack/cinder-scheduler-0" Nov 25 11:56:52 crc kubenswrapper[4706]: I1125 11:56:52.025844 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dnhd6\" (UniqueName: \"kubernetes.io/projected/f4dd78e0-575d-4188-b6f5-17ab8a12383c-kube-api-access-dnhd6\") pod \"cinder-scheduler-0\" (UID: \"f4dd78e0-575d-4188-b6f5-17ab8a12383c\") " pod="openstack/cinder-scheduler-0" Nov 
25 11:56:52 crc kubenswrapper[4706]: I1125 11:56:52.047721 4706 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Nov 25 11:56:52 crc kubenswrapper[4706]: I1125 11:56:52.573253 4706 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Nov 25 11:56:52 crc kubenswrapper[4706]: I1125 11:56:52.652678 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"f4dd78e0-575d-4188-b6f5-17ab8a12383c","Type":"ContainerStarted","Data":"43273ac9176b34d971b0b608cb2aa1a9b51388eb3f1360da9921badd37b34090"} Nov 25 11:56:53 crc kubenswrapper[4706]: I1125 11:56:53.127349 4706 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/keystone-854bff779d-k8bjv" Nov 25 11:56:53 crc kubenswrapper[4706]: I1125 11:56:53.343057 4706 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-5bfcb97b8-lmwjc" Nov 25 11:56:53 crc kubenswrapper[4706]: I1125 11:56:53.411113 4706 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-5bfcb97b8-lmwjc" Nov 25 11:56:53 crc kubenswrapper[4706]: I1125 11:56:53.675980 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"f4dd78e0-575d-4188-b6f5-17ab8a12383c","Type":"ContainerStarted","Data":"12d7595b9decdfbba5d800861a83f3f7102849602d0aa5fbcc1f9273052a96d6"} Nov 25 11:56:54 crc kubenswrapper[4706]: I1125 11:56:54.681908 4706 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-5d6465f55b-zdrth" podUID="74b33eb1-0020-4037-918c-9e747dcfd61f" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.147:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.147:8443: connect: connection refused" Nov 25 11:56:54 crc kubenswrapper[4706]: I1125 11:56:54.687911 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/cinder-scheduler-0" event={"ID":"f4dd78e0-575d-4188-b6f5-17ab8a12383c","Type":"ContainerStarted","Data":"112664b43cab74550c39acee8441b50f9a85de83fd682b38b08c870d8ab78bea"} Nov 25 11:56:54 crc kubenswrapper[4706]: I1125 11:56:54.721374 4706 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=3.721355191 podStartE2EDuration="3.721355191s" podCreationTimestamp="2025-11-25 11:56:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 11:56:54.717150205 +0000 UTC m=+1223.631707586" watchObservedRunningTime="2025-11-25 11:56:54.721355191 +0000 UTC m=+1223.635912572" Nov 25 11:56:57 crc kubenswrapper[4706]: I1125 11:56:57.048062 4706 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Nov 25 11:56:57 crc kubenswrapper[4706]: I1125 11:56:57.468439 4706 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"] Nov 25 11:56:57 crc kubenswrapper[4706]: I1125 11:56:57.470030 4706 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Nov 25 11:56:57 crc kubenswrapper[4706]: I1125 11:56:57.478695 4706 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config" Nov 25 11:56:57 crc kubenswrapper[4706]: I1125 11:56:57.479787 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstackclient-openstackclient-dockercfg-bsbgm" Nov 25 11:56:57 crc kubenswrapper[4706]: I1125 11:56:57.479905 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-config-secret" Nov 25 11:56:57 crc kubenswrapper[4706]: I1125 11:56:57.486754 4706 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Nov 25 11:56:57 crc kubenswrapper[4706]: I1125 11:56:57.558518 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/b3907bc3-a1dd-4f84-8b85-17faee9075f1-openstack-config\") pod \"openstackclient\" (UID: \"b3907bc3-a1dd-4f84-8b85-17faee9075f1\") " pod="openstack/openstackclient" Nov 25 11:56:57 crc kubenswrapper[4706]: I1125 11:56:57.558597 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/b3907bc3-a1dd-4f84-8b85-17faee9075f1-openstack-config-secret\") pod \"openstackclient\" (UID: \"b3907bc3-a1dd-4f84-8b85-17faee9075f1\") " pod="openstack/openstackclient" Nov 25 11:56:57 crc kubenswrapper[4706]: I1125 11:56:57.558687 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b3907bc3-a1dd-4f84-8b85-17faee9075f1-combined-ca-bundle\") pod \"openstackclient\" (UID: \"b3907bc3-a1dd-4f84-8b85-17faee9075f1\") " pod="openstack/openstackclient" Nov 25 11:56:57 crc kubenswrapper[4706]: I1125 11:56:57.568721 4706 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tw5dw\" (UniqueName: \"kubernetes.io/projected/b3907bc3-a1dd-4f84-8b85-17faee9075f1-kube-api-access-tw5dw\") pod \"openstackclient\" (UID: \"b3907bc3-a1dd-4f84-8b85-17faee9075f1\") " pod="openstack/openstackclient" Nov 25 11:56:57 crc kubenswrapper[4706]: I1125 11:56:57.670701 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tw5dw\" (UniqueName: \"kubernetes.io/projected/b3907bc3-a1dd-4f84-8b85-17faee9075f1-kube-api-access-tw5dw\") pod \"openstackclient\" (UID: \"b3907bc3-a1dd-4f84-8b85-17faee9075f1\") " pod="openstack/openstackclient" Nov 25 11:56:57 crc kubenswrapper[4706]: I1125 11:56:57.671101 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/b3907bc3-a1dd-4f84-8b85-17faee9075f1-openstack-config\") pod \"openstackclient\" (UID: \"b3907bc3-a1dd-4f84-8b85-17faee9075f1\") " pod="openstack/openstackclient" Nov 25 11:56:57 crc kubenswrapper[4706]: I1125 11:56:57.671248 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/b3907bc3-a1dd-4f84-8b85-17faee9075f1-openstack-config-secret\") pod \"openstackclient\" (UID: \"b3907bc3-a1dd-4f84-8b85-17faee9075f1\") " pod="openstack/openstackclient" Nov 25 11:56:57 crc kubenswrapper[4706]: I1125 11:56:57.671436 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b3907bc3-a1dd-4f84-8b85-17faee9075f1-combined-ca-bundle\") pod \"openstackclient\" (UID: \"b3907bc3-a1dd-4f84-8b85-17faee9075f1\") " pod="openstack/openstackclient" Nov 25 11:56:57 crc kubenswrapper[4706]: I1125 11:56:57.672250 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: 
\"kubernetes.io/configmap/b3907bc3-a1dd-4f84-8b85-17faee9075f1-openstack-config\") pod \"openstackclient\" (UID: \"b3907bc3-a1dd-4f84-8b85-17faee9075f1\") " pod="openstack/openstackclient" Nov 25 11:56:57 crc kubenswrapper[4706]: I1125 11:56:57.677072 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/b3907bc3-a1dd-4f84-8b85-17faee9075f1-openstack-config-secret\") pod \"openstackclient\" (UID: \"b3907bc3-a1dd-4f84-8b85-17faee9075f1\") " pod="openstack/openstackclient" Nov 25 11:56:57 crc kubenswrapper[4706]: I1125 11:56:57.679860 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b3907bc3-a1dd-4f84-8b85-17faee9075f1-combined-ca-bundle\") pod \"openstackclient\" (UID: \"b3907bc3-a1dd-4f84-8b85-17faee9075f1\") " pod="openstack/openstackclient" Nov 25 11:56:57 crc kubenswrapper[4706]: I1125 11:56:57.690365 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tw5dw\" (UniqueName: \"kubernetes.io/projected/b3907bc3-a1dd-4f84-8b85-17faee9075f1-kube-api-access-tw5dw\") pod \"openstackclient\" (UID: \"b3907bc3-a1dd-4f84-8b85-17faee9075f1\") " pod="openstack/openstackclient" Nov 25 11:56:57 crc kubenswrapper[4706]: I1125 11:56:57.834373 4706 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/openstackclient"] Nov 25 11:56:57 crc kubenswrapper[4706]: I1125 11:56:57.835191 4706 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Nov 25 11:56:57 crc kubenswrapper[4706]: I1125 11:56:57.858554 4706 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/openstackclient"] Nov 25 11:56:57 crc kubenswrapper[4706]: I1125 11:56:57.882474 4706 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"] Nov 25 11:56:57 crc kubenswrapper[4706]: I1125 11:56:57.884198 4706 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Nov 25 11:56:57 crc kubenswrapper[4706]: I1125 11:56:57.892917 4706 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Nov 25 11:56:57 crc kubenswrapper[4706]: I1125 11:56:57.976561 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/b8a85f10-0dcd-42f8-a4bc-f0b25f59cfe8-openstack-config\") pod \"openstackclient\" (UID: \"b8a85f10-0dcd-42f8-a4bc-f0b25f59cfe8\") " pod="openstack/openstackclient" Nov 25 11:56:57 crc kubenswrapper[4706]: I1125 11:56:57.976663 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bkd5w\" (UniqueName: \"kubernetes.io/projected/b8a85f10-0dcd-42f8-a4bc-f0b25f59cfe8-kube-api-access-bkd5w\") pod \"openstackclient\" (UID: \"b8a85f10-0dcd-42f8-a4bc-f0b25f59cfe8\") " pod="openstack/openstackclient" Nov 25 11:56:57 crc kubenswrapper[4706]: I1125 11:56:57.976786 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b8a85f10-0dcd-42f8-a4bc-f0b25f59cfe8-combined-ca-bundle\") pod \"openstackclient\" (UID: \"b8a85f10-0dcd-42f8-a4bc-f0b25f59cfe8\") " pod="openstack/openstackclient" Nov 25 11:56:57 crc kubenswrapper[4706]: I1125 11:56:57.976834 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/b8a85f10-0dcd-42f8-a4bc-f0b25f59cfe8-openstack-config-secret\") pod \"openstackclient\" (UID: \"b8a85f10-0dcd-42f8-a4bc-f0b25f59cfe8\") " pod="openstack/openstackclient" Nov 25 11:56:57 crc kubenswrapper[4706]: E1125 11:56:57.992155 4706 log.go:32] "RunPodSandbox from runtime service failed" err=< Nov 25 11:56:57 crc kubenswrapper[4706]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_openstackclient_openstack_b3907bc3-a1dd-4f84-8b85-17faee9075f1_0(4b6c446dbd1f8ef15a8b2ea0262c8b26f823f2afa3f8fcf1765c7700ea9ff265): error adding pod openstack_openstackclient to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"4b6c446dbd1f8ef15a8b2ea0262c8b26f823f2afa3f8fcf1765c7700ea9ff265" Netns:"/var/run/netns/c92173c9-eab7-46e4-8b57-28272af99a3d" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openstack;K8S_POD_NAME=openstackclient;K8S_POD_INFRA_CONTAINER_ID=4b6c446dbd1f8ef15a8b2ea0262c8b26f823f2afa3f8fcf1765c7700ea9ff265;K8S_POD_UID=b3907bc3-a1dd-4f84-8b85-17faee9075f1" Path:"" ERRORED: error configuring pod [openstack/openstackclient] networking: Multus: [openstack/openstackclient/b3907bc3-a1dd-4f84-8b85-17faee9075f1]: expected pod UID "b3907bc3-a1dd-4f84-8b85-17faee9075f1" but got "b8a85f10-0dcd-42f8-a4bc-f0b25f59cfe8" from Kube API Nov 25 11:56:57 crc kubenswrapper[4706]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Nov 25 11:56:57 crc kubenswrapper[4706]: > Nov 25 11:56:57 crc kubenswrapper[4706]: E1125 11:56:57.992267 4706 
kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err=< Nov 25 11:56:57 crc kubenswrapper[4706]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_openstackclient_openstack_b3907bc3-a1dd-4f84-8b85-17faee9075f1_0(4b6c446dbd1f8ef15a8b2ea0262c8b26f823f2afa3f8fcf1765c7700ea9ff265): error adding pod openstack_openstackclient to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"4b6c446dbd1f8ef15a8b2ea0262c8b26f823f2afa3f8fcf1765c7700ea9ff265" Netns:"/var/run/netns/c92173c9-eab7-46e4-8b57-28272af99a3d" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openstack;K8S_POD_NAME=openstackclient;K8S_POD_INFRA_CONTAINER_ID=4b6c446dbd1f8ef15a8b2ea0262c8b26f823f2afa3f8fcf1765c7700ea9ff265;K8S_POD_UID=b3907bc3-a1dd-4f84-8b85-17faee9075f1" Path:"" ERRORED: error configuring pod [openstack/openstackclient] networking: Multus: [openstack/openstackclient/b3907bc3-a1dd-4f84-8b85-17faee9075f1]: expected pod UID "b3907bc3-a1dd-4f84-8b85-17faee9075f1" but got "b8a85f10-0dcd-42f8-a4bc-f0b25f59cfe8" from Kube API Nov 25 11:56:57 crc kubenswrapper[4706]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Nov 25 11:56:57 crc kubenswrapper[4706]: > pod="openstack/openstackclient" Nov 25 11:56:58 crc kubenswrapper[4706]: I1125 11:56:58.079916 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/b8a85f10-0dcd-42f8-a4bc-f0b25f59cfe8-openstack-config\") pod \"openstackclient\" (UID: \"b8a85f10-0dcd-42f8-a4bc-f0b25f59cfe8\") " 
pod="openstack/openstackclient" Nov 25 11:56:58 crc kubenswrapper[4706]: I1125 11:56:58.080061 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bkd5w\" (UniqueName: \"kubernetes.io/projected/b8a85f10-0dcd-42f8-a4bc-f0b25f59cfe8-kube-api-access-bkd5w\") pod \"openstackclient\" (UID: \"b8a85f10-0dcd-42f8-a4bc-f0b25f59cfe8\") " pod="openstack/openstackclient" Nov 25 11:56:58 crc kubenswrapper[4706]: I1125 11:56:58.080232 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b8a85f10-0dcd-42f8-a4bc-f0b25f59cfe8-combined-ca-bundle\") pod \"openstackclient\" (UID: \"b8a85f10-0dcd-42f8-a4bc-f0b25f59cfe8\") " pod="openstack/openstackclient" Nov 25 11:56:58 crc kubenswrapper[4706]: I1125 11:56:58.080287 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/b8a85f10-0dcd-42f8-a4bc-f0b25f59cfe8-openstack-config-secret\") pod \"openstackclient\" (UID: \"b8a85f10-0dcd-42f8-a4bc-f0b25f59cfe8\") " pod="openstack/openstackclient" Nov 25 11:56:58 crc kubenswrapper[4706]: I1125 11:56:58.080967 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/b8a85f10-0dcd-42f8-a4bc-f0b25f59cfe8-openstack-config\") pod \"openstackclient\" (UID: \"b8a85f10-0dcd-42f8-a4bc-f0b25f59cfe8\") " pod="openstack/openstackclient" Nov 25 11:56:58 crc kubenswrapper[4706]: I1125 11:56:58.086719 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/b8a85f10-0dcd-42f8-a4bc-f0b25f59cfe8-openstack-config-secret\") pod \"openstackclient\" (UID: \"b8a85f10-0dcd-42f8-a4bc-f0b25f59cfe8\") " pod="openstack/openstackclient" Nov 25 11:56:58 crc kubenswrapper[4706]: I1125 11:56:58.087993 4706 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b8a85f10-0dcd-42f8-a4bc-f0b25f59cfe8-combined-ca-bundle\") pod \"openstackclient\" (UID: \"b8a85f10-0dcd-42f8-a4bc-f0b25f59cfe8\") " pod="openstack/openstackclient" Nov 25 11:56:58 crc kubenswrapper[4706]: I1125 11:56:58.103041 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bkd5w\" (UniqueName: \"kubernetes.io/projected/b8a85f10-0dcd-42f8-a4bc-f0b25f59cfe8-kube-api-access-bkd5w\") pod \"openstackclient\" (UID: \"b8a85f10-0dcd-42f8-a4bc-f0b25f59cfe8\") " pod="openstack/openstackclient" Nov 25 11:56:58 crc kubenswrapper[4706]: I1125 11:56:58.257091 4706 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Nov 25 11:56:58 crc kubenswrapper[4706]: I1125 11:56:58.727091 4706 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Nov 25 11:56:58 crc kubenswrapper[4706]: I1125 11:56:58.727538 4706 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Nov 25 11:56:58 crc kubenswrapper[4706]: W1125 11:56:58.746727 4706 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb8a85f10_0dcd_42f8_a4bc_f0b25f59cfe8.slice/crio-b81ca529cff1dbbd3e91b7a338226cb835731f7391349ca8776e4efc703cd737 WatchSource:0}: Error finding container b81ca529cff1dbbd3e91b7a338226cb835731f7391349ca8776e4efc703cd737: Status 404 returned error can't find the container with id b81ca529cff1dbbd3e91b7a338226cb835731f7391349ca8776e4efc703cd737 Nov 25 11:56:58 crc kubenswrapper[4706]: I1125 11:56:58.808786 4706 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Nov 25 11:56:58 crc kubenswrapper[4706]: I1125 11:56:58.812414 4706 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openstack/openstackclient" oldPodUID="b3907bc3-a1dd-4f84-8b85-17faee9075f1" podUID="b8a85f10-0dcd-42f8-a4bc-f0b25f59cfe8" Nov 25 11:56:58 crc kubenswrapper[4706]: I1125 11:56:58.898482 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b3907bc3-a1dd-4f84-8b85-17faee9075f1-combined-ca-bundle\") pod \"b3907bc3-a1dd-4f84-8b85-17faee9075f1\" (UID: \"b3907bc3-a1dd-4f84-8b85-17faee9075f1\") " Nov 25 11:56:58 crc kubenswrapper[4706]: I1125 11:56:58.898642 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/b3907bc3-a1dd-4f84-8b85-17faee9075f1-openstack-config-secret\") pod \"b3907bc3-a1dd-4f84-8b85-17faee9075f1\" (UID: \"b3907bc3-a1dd-4f84-8b85-17faee9075f1\") " Nov 25 11:56:58 crc kubenswrapper[4706]: I1125 11:56:58.898704 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/b3907bc3-a1dd-4f84-8b85-17faee9075f1-openstack-config\") pod \"b3907bc3-a1dd-4f84-8b85-17faee9075f1\" (UID: \"b3907bc3-a1dd-4f84-8b85-17faee9075f1\") " Nov 25 11:56:58 crc kubenswrapper[4706]: I1125 11:56:58.898730 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tw5dw\" (UniqueName: \"kubernetes.io/projected/b3907bc3-a1dd-4f84-8b85-17faee9075f1-kube-api-access-tw5dw\") pod \"b3907bc3-a1dd-4f84-8b85-17faee9075f1\" (UID: \"b3907bc3-a1dd-4f84-8b85-17faee9075f1\") " Nov 25 11:56:58 crc kubenswrapper[4706]: I1125 11:56:58.899322 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/b3907bc3-a1dd-4f84-8b85-17faee9075f1-openstack-config" (OuterVolumeSpecName: "openstack-config") pod "b3907bc3-a1dd-4f84-8b85-17faee9075f1" (UID: "b3907bc3-a1dd-4f84-8b85-17faee9075f1"). InnerVolumeSpecName "openstack-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 11:56:58 crc kubenswrapper[4706]: I1125 11:56:58.903803 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b3907bc3-a1dd-4f84-8b85-17faee9075f1-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b3907bc3-a1dd-4f84-8b85-17faee9075f1" (UID: "b3907bc3-a1dd-4f84-8b85-17faee9075f1"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 11:56:58 crc kubenswrapper[4706]: I1125 11:56:58.905858 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b3907bc3-a1dd-4f84-8b85-17faee9075f1-kube-api-access-tw5dw" (OuterVolumeSpecName: "kube-api-access-tw5dw") pod "b3907bc3-a1dd-4f84-8b85-17faee9075f1" (UID: "b3907bc3-a1dd-4f84-8b85-17faee9075f1"). InnerVolumeSpecName "kube-api-access-tw5dw". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 11:56:58 crc kubenswrapper[4706]: I1125 11:56:58.915551 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b3907bc3-a1dd-4f84-8b85-17faee9075f1-openstack-config-secret" (OuterVolumeSpecName: "openstack-config-secret") pod "b3907bc3-a1dd-4f84-8b85-17faee9075f1" (UID: "b3907bc3-a1dd-4f84-8b85-17faee9075f1"). InnerVolumeSpecName "openstack-config-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 11:56:58 crc kubenswrapper[4706]: I1125 11:56:58.955895 4706 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-proxy-65d9589979-xw964"] Nov 25 11:56:58 crc kubenswrapper[4706]: I1125 11:56:58.957498 4706 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-proxy-65d9589979-xw964" Nov 25 11:56:58 crc kubenswrapper[4706]: I1125 11:56:58.961261 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-public-svc" Nov 25 11:56:58 crc kubenswrapper[4706]: I1125 11:56:58.961603 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Nov 25 11:56:58 crc kubenswrapper[4706]: I1125 11:56:58.961827 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-internal-svc" Nov 25 11:56:58 crc kubenswrapper[4706]: I1125 11:56:58.969096 4706 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-65d9589979-xw964"] Nov 25 11:56:59 crc kubenswrapper[4706]: I1125 11:56:59.001971 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/64d9e8db-d554-4623-9a76-719df27fffef-config-data\") pod \"swift-proxy-65d9589979-xw964\" (UID: \"64d9e8db-d554-4623-9a76-719df27fffef\") " pod="openstack/swift-proxy-65d9589979-xw964" Nov 25 11:56:59 crc kubenswrapper[4706]: I1125 11:56:59.002391 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/64d9e8db-d554-4623-9a76-719df27fffef-log-httpd\") pod \"swift-proxy-65d9589979-xw964\" (UID: \"64d9e8db-d554-4623-9a76-719df27fffef\") " pod="openstack/swift-proxy-65d9589979-xw964" Nov 25 11:56:59 crc kubenswrapper[4706]: I1125 11:56:59.002438 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/64d9e8db-d554-4623-9a76-719df27fffef-combined-ca-bundle\") pod \"swift-proxy-65d9589979-xw964\" (UID: \"64d9e8db-d554-4623-9a76-719df27fffef\") " pod="openstack/swift-proxy-65d9589979-xw964" Nov 25 11:56:59 crc kubenswrapper[4706]: I1125 
11:56:59.002632 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cl9wb\" (UniqueName: \"kubernetes.io/projected/64d9e8db-d554-4623-9a76-719df27fffef-kube-api-access-cl9wb\") pod \"swift-proxy-65d9589979-xw964\" (UID: \"64d9e8db-d554-4623-9a76-719df27fffef\") " pod="openstack/swift-proxy-65d9589979-xw964" Nov 25 11:56:59 crc kubenswrapper[4706]: I1125 11:56:59.003327 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/64d9e8db-d554-4623-9a76-719df27fffef-internal-tls-certs\") pod \"swift-proxy-65d9589979-xw964\" (UID: \"64d9e8db-d554-4623-9a76-719df27fffef\") " pod="openstack/swift-proxy-65d9589979-xw964" Nov 25 11:56:59 crc kubenswrapper[4706]: I1125 11:56:59.004537 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/64d9e8db-d554-4623-9a76-719df27fffef-public-tls-certs\") pod \"swift-proxy-65d9589979-xw964\" (UID: \"64d9e8db-d554-4623-9a76-719df27fffef\") " pod="openstack/swift-proxy-65d9589979-xw964" Nov 25 11:56:59 crc kubenswrapper[4706]: I1125 11:56:59.004584 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/64d9e8db-d554-4623-9a76-719df27fffef-run-httpd\") pod \"swift-proxy-65d9589979-xw964\" (UID: \"64d9e8db-d554-4623-9a76-719df27fffef\") " pod="openstack/swift-proxy-65d9589979-xw964" Nov 25 11:56:59 crc kubenswrapper[4706]: I1125 11:56:59.004756 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/64d9e8db-d554-4623-9a76-719df27fffef-etc-swift\") pod \"swift-proxy-65d9589979-xw964\" (UID: \"64d9e8db-d554-4623-9a76-719df27fffef\") " pod="openstack/swift-proxy-65d9589979-xw964" Nov 25 
11:56:59 crc kubenswrapper[4706]: I1125 11:56:59.005028 4706 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b3907bc3-a1dd-4f84-8b85-17faee9075f1-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 11:56:59 crc kubenswrapper[4706]: I1125 11:56:59.005051 4706 reconciler_common.go:293] "Volume detached for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/b3907bc3-a1dd-4f84-8b85-17faee9075f1-openstack-config-secret\") on node \"crc\" DevicePath \"\"" Nov 25 11:56:59 crc kubenswrapper[4706]: I1125 11:56:59.005061 4706 reconciler_common.go:293] "Volume detached for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/b3907bc3-a1dd-4f84-8b85-17faee9075f1-openstack-config\") on node \"crc\" DevicePath \"\"" Nov 25 11:56:59 crc kubenswrapper[4706]: I1125 11:56:59.005069 4706 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tw5dw\" (UniqueName: \"kubernetes.io/projected/b3907bc3-a1dd-4f84-8b85-17faee9075f1-kube-api-access-tw5dw\") on node \"crc\" DevicePath \"\"" Nov 25 11:56:59 crc kubenswrapper[4706]: I1125 11:56:59.106484 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/64d9e8db-d554-4623-9a76-719df27fffef-run-httpd\") pod \"swift-proxy-65d9589979-xw964\" (UID: \"64d9e8db-d554-4623-9a76-719df27fffef\") " pod="openstack/swift-proxy-65d9589979-xw964" Nov 25 11:56:59 crc kubenswrapper[4706]: I1125 11:56:59.106559 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/64d9e8db-d554-4623-9a76-719df27fffef-etc-swift\") pod \"swift-proxy-65d9589979-xw964\" (UID: \"64d9e8db-d554-4623-9a76-719df27fffef\") " pod="openstack/swift-proxy-65d9589979-xw964" Nov 25 11:56:59 crc kubenswrapper[4706]: I1125 11:56:59.106619 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"config-data\" (UniqueName: \"kubernetes.io/secret/64d9e8db-d554-4623-9a76-719df27fffef-config-data\") pod \"swift-proxy-65d9589979-xw964\" (UID: \"64d9e8db-d554-4623-9a76-719df27fffef\") " pod="openstack/swift-proxy-65d9589979-xw964" Nov 25 11:56:59 crc kubenswrapper[4706]: I1125 11:56:59.106655 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/64d9e8db-d554-4623-9a76-719df27fffef-combined-ca-bundle\") pod \"swift-proxy-65d9589979-xw964\" (UID: \"64d9e8db-d554-4623-9a76-719df27fffef\") " pod="openstack/swift-proxy-65d9589979-xw964" Nov 25 11:56:59 crc kubenswrapper[4706]: I1125 11:56:59.106690 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/64d9e8db-d554-4623-9a76-719df27fffef-log-httpd\") pod \"swift-proxy-65d9589979-xw964\" (UID: \"64d9e8db-d554-4623-9a76-719df27fffef\") " pod="openstack/swift-proxy-65d9589979-xw964" Nov 25 11:56:59 crc kubenswrapper[4706]: I1125 11:56:59.106733 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cl9wb\" (UniqueName: \"kubernetes.io/projected/64d9e8db-d554-4623-9a76-719df27fffef-kube-api-access-cl9wb\") pod \"swift-proxy-65d9589979-xw964\" (UID: \"64d9e8db-d554-4623-9a76-719df27fffef\") " pod="openstack/swift-proxy-65d9589979-xw964" Nov 25 11:56:59 crc kubenswrapper[4706]: I1125 11:56:59.106837 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/64d9e8db-d554-4623-9a76-719df27fffef-internal-tls-certs\") pod \"swift-proxy-65d9589979-xw964\" (UID: \"64d9e8db-d554-4623-9a76-719df27fffef\") " pod="openstack/swift-proxy-65d9589979-xw964" Nov 25 11:56:59 crc kubenswrapper[4706]: I1125 11:56:59.106886 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/64d9e8db-d554-4623-9a76-719df27fffef-public-tls-certs\") pod \"swift-proxy-65d9589979-xw964\" (UID: \"64d9e8db-d554-4623-9a76-719df27fffef\") " pod="openstack/swift-proxy-65d9589979-xw964" Nov 25 11:56:59 crc kubenswrapper[4706]: I1125 11:56:59.107278 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/64d9e8db-d554-4623-9a76-719df27fffef-log-httpd\") pod \"swift-proxy-65d9589979-xw964\" (UID: \"64d9e8db-d554-4623-9a76-719df27fffef\") " pod="openstack/swift-proxy-65d9589979-xw964" Nov 25 11:56:59 crc kubenswrapper[4706]: I1125 11:56:59.107356 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/64d9e8db-d554-4623-9a76-719df27fffef-run-httpd\") pod \"swift-proxy-65d9589979-xw964\" (UID: \"64d9e8db-d554-4623-9a76-719df27fffef\") " pod="openstack/swift-proxy-65d9589979-xw964" Nov 25 11:56:59 crc kubenswrapper[4706]: I1125 11:56:59.111564 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/64d9e8db-d554-4623-9a76-719df27fffef-internal-tls-certs\") pod \"swift-proxy-65d9589979-xw964\" (UID: \"64d9e8db-d554-4623-9a76-719df27fffef\") " pod="openstack/swift-proxy-65d9589979-xw964" Nov 25 11:56:59 crc kubenswrapper[4706]: I1125 11:56:59.112693 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/64d9e8db-d554-4623-9a76-719df27fffef-etc-swift\") pod \"swift-proxy-65d9589979-xw964\" (UID: \"64d9e8db-d554-4623-9a76-719df27fffef\") " pod="openstack/swift-proxy-65d9589979-xw964" Nov 25 11:56:59 crc kubenswrapper[4706]: I1125 11:56:59.113028 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/64d9e8db-d554-4623-9a76-719df27fffef-config-data\") pod \"swift-proxy-65d9589979-xw964\" (UID: 
\"64d9e8db-d554-4623-9a76-719df27fffef\") " pod="openstack/swift-proxy-65d9589979-xw964" Nov 25 11:56:59 crc kubenswrapper[4706]: I1125 11:56:59.133734 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/64d9e8db-d554-4623-9a76-719df27fffef-combined-ca-bundle\") pod \"swift-proxy-65d9589979-xw964\" (UID: \"64d9e8db-d554-4623-9a76-719df27fffef\") " pod="openstack/swift-proxy-65d9589979-xw964" Nov 25 11:56:59 crc kubenswrapper[4706]: I1125 11:56:59.134420 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cl9wb\" (UniqueName: \"kubernetes.io/projected/64d9e8db-d554-4623-9a76-719df27fffef-kube-api-access-cl9wb\") pod \"swift-proxy-65d9589979-xw964\" (UID: \"64d9e8db-d554-4623-9a76-719df27fffef\") " pod="openstack/swift-proxy-65d9589979-xw964" Nov 25 11:56:59 crc kubenswrapper[4706]: I1125 11:56:59.147816 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/64d9e8db-d554-4623-9a76-719df27fffef-public-tls-certs\") pod \"swift-proxy-65d9589979-xw964\" (UID: \"64d9e8db-d554-4623-9a76-719df27fffef\") " pod="openstack/swift-proxy-65d9589979-xw964" Nov 25 11:56:59 crc kubenswrapper[4706]: I1125 11:56:59.315238 4706 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-proxy-65d9589979-xw964" Nov 25 11:56:59 crc kubenswrapper[4706]: I1125 11:56:59.742527 4706 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Nov 25 11:56:59 crc kubenswrapper[4706]: I1125 11:56:59.743957 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"b8a85f10-0dcd-42f8-a4bc-f0b25f59cfe8","Type":"ContainerStarted","Data":"b81ca529cff1dbbd3e91b7a338226cb835731f7391349ca8776e4efc703cd737"} Nov 25 11:56:59 crc kubenswrapper[4706]: I1125 11:56:59.770437 4706 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openstack/openstackclient" oldPodUID="b3907bc3-a1dd-4f84-8b85-17faee9075f1" podUID="b8a85f10-0dcd-42f8-a4bc-f0b25f59cfe8" Nov 25 11:56:59 crc kubenswrapper[4706]: E1125 11:56:59.805973 4706 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb3907bc3_a1dd_4f84_8b85_17faee9075f1.slice\": RecentStats: unable to find data in memory cache]" Nov 25 11:56:59 crc kubenswrapper[4706]: I1125 11:56:59.933259 4706 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b3907bc3-a1dd-4f84-8b85-17faee9075f1" path="/var/lib/kubelet/pods/b3907bc3-a1dd-4f84-8b85-17faee9075f1/volumes" Nov 25 11:57:00 crc kubenswrapper[4706]: I1125 11:56:59.999424 4706 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-65d9589979-xw964"] Nov 25 11:57:00 crc kubenswrapper[4706]: W1125 11:57:00.016922 4706 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod64d9e8db_d554_4623_9a76_719df27fffef.slice/crio-dd1c8c379bef42590ddbcdedf2fc1e1aed8816924e8b15f0410024bcec38340a WatchSource:0}: Error finding container dd1c8c379bef42590ddbcdedf2fc1e1aed8816924e8b15f0410024bcec38340a: Status 404 returned error can't find the container with id dd1c8c379bef42590ddbcdedf2fc1e1aed8816924e8b15f0410024bcec38340a Nov 25 11:57:00 crc kubenswrapper[4706]: I1125 11:57:00.284424 4706 
kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 25 11:57:00 crc kubenswrapper[4706]: I1125 11:57:00.285130 4706 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="4edea425-7eb5-458b-8e80-3e04fe787998" containerName="ceilometer-central-agent" containerID="cri-o://9939373b15134b1719de1987d50545c2cfa39a6d3e179e0ca908a425e5b68532" gracePeriod=30 Nov 25 11:57:00 crc kubenswrapper[4706]: I1125 11:57:00.286076 4706 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="4edea425-7eb5-458b-8e80-3e04fe787998" containerName="sg-core" containerID="cri-o://a67668dcfd526e20e133d248873ac04478998be08ad46cdeabc6b648780977cf" gracePeriod=30 Nov 25 11:57:00 crc kubenswrapper[4706]: I1125 11:57:00.286367 4706 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="4edea425-7eb5-458b-8e80-3e04fe787998" containerName="proxy-httpd" containerID="cri-o://da8d8f9f30a6576ef3b63ec5f392f77d70f616bc32d763fc3d425cf0de901590" gracePeriod=30 Nov 25 11:57:00 crc kubenswrapper[4706]: I1125 11:57:00.286437 4706 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="4edea425-7eb5-458b-8e80-3e04fe787998" containerName="ceilometer-notification-agent" containerID="cri-o://47a97f3bdaec06814548cc758ac0496c95daf22a53f74fb0ef28b454eb733c97" gracePeriod=30 Nov 25 11:57:00 crc kubenswrapper[4706]: I1125 11:57:00.291921 4706 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Nov 25 11:57:00 crc kubenswrapper[4706]: I1125 11:57:00.756665 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-65d9589979-xw964" event={"ID":"64d9e8db-d554-4623-9a76-719df27fffef","Type":"ContainerStarted","Data":"8bb96d0e626dfad421c601df80b10297c161e1a080b39081f13ced08c7bad62f"} Nov 25 11:57:00 crc 
kubenswrapper[4706]: I1125 11:57:00.757024 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-65d9589979-xw964" event={"ID":"64d9e8db-d554-4623-9a76-719df27fffef","Type":"ContainerStarted","Data":"dd1c8c379bef42590ddbcdedf2fc1e1aed8816924e8b15f0410024bcec38340a"} Nov 25 11:57:00 crc kubenswrapper[4706]: I1125 11:57:00.759869 4706 generic.go:334] "Generic (PLEG): container finished" podID="4edea425-7eb5-458b-8e80-3e04fe787998" containerID="da8d8f9f30a6576ef3b63ec5f392f77d70f616bc32d763fc3d425cf0de901590" exitCode=0 Nov 25 11:57:00 crc kubenswrapper[4706]: I1125 11:57:00.759902 4706 generic.go:334] "Generic (PLEG): container finished" podID="4edea425-7eb5-458b-8e80-3e04fe787998" containerID="a67668dcfd526e20e133d248873ac04478998be08ad46cdeabc6b648780977cf" exitCode=2 Nov 25 11:57:00 crc kubenswrapper[4706]: I1125 11:57:00.759912 4706 generic.go:334] "Generic (PLEG): container finished" podID="4edea425-7eb5-458b-8e80-3e04fe787998" containerID="9939373b15134b1719de1987d50545c2cfa39a6d3e179e0ca908a425e5b68532" exitCode=0 Nov 25 11:57:00 crc kubenswrapper[4706]: I1125 11:57:00.759945 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4edea425-7eb5-458b-8e80-3e04fe787998","Type":"ContainerDied","Data":"da8d8f9f30a6576ef3b63ec5f392f77d70f616bc32d763fc3d425cf0de901590"} Nov 25 11:57:00 crc kubenswrapper[4706]: I1125 11:57:00.760031 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4edea425-7eb5-458b-8e80-3e04fe787998","Type":"ContainerDied","Data":"a67668dcfd526e20e133d248873ac04478998be08ad46cdeabc6b648780977cf"} Nov 25 11:57:00 crc kubenswrapper[4706]: I1125 11:57:00.760048 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4edea425-7eb5-458b-8e80-3e04fe787998","Type":"ContainerDied","Data":"9939373b15134b1719de1987d50545c2cfa39a6d3e179e0ca908a425e5b68532"} Nov 25 11:57:01 crc kubenswrapper[4706]: I1125 
11:57:01.771084 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-65d9589979-xw964" event={"ID":"64d9e8db-d554-4623-9a76-719df27fffef","Type":"ContainerStarted","Data":"57d85f21a3b69224fc2c00c9a49a6ee80bb5bd7f8daaa31303cfa3aa8909be8b"} Nov 25 11:57:01 crc kubenswrapper[4706]: I1125 11:57:01.772910 4706 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-65d9589979-xw964" Nov 25 11:57:01 crc kubenswrapper[4706]: I1125 11:57:01.772953 4706 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-65d9589979-xw964" Nov 25 11:57:01 crc kubenswrapper[4706]: I1125 11:57:01.800389 4706 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-proxy-65d9589979-xw964" podStartSLOduration=3.800364579 podStartE2EDuration="3.800364579s" podCreationTimestamp="2025-11-25 11:56:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 11:57:01.79084933 +0000 UTC m=+1230.705406711" watchObservedRunningTime="2025-11-25 11:57:01.800364579 +0000 UTC m=+1230.714921970" Nov 25 11:57:02 crc kubenswrapper[4706]: I1125 11:57:02.012752 4706 pod_container_manager_linux.go:210] "Failed to delete cgroup paths" cgroupName=["kubepods","besteffort","pod424f303d-41b7-4fd6-be4a-017148ed95da"] err="unable to destroy cgroup paths for cgroup [kubepods besteffort pod424f303d-41b7-4fd6-be4a-017148ed95da] : Timed out while waiting for systemd to remove kubepods-besteffort-pod424f303d_41b7_4fd6_be4a_017148ed95da.slice" Nov 25 11:57:02 crc kubenswrapper[4706]: I1125 11:57:02.350015 4706 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Nov 25 11:57:02 crc kubenswrapper[4706]: I1125 11:57:02.498103 4706 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-779dc76bb8-fwppw" Nov 25 11:57:03 crc 
kubenswrapper[4706]: I1125 11:57:03.785823 4706 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 25 11:57:03 crc kubenswrapper[4706]: I1125 11:57:03.791892 4706 generic.go:334] "Generic (PLEG): container finished" podID="4edea425-7eb5-458b-8e80-3e04fe787998" containerID="47a97f3bdaec06814548cc758ac0496c95daf22a53f74fb0ef28b454eb733c97" exitCode=0 Nov 25 11:57:03 crc kubenswrapper[4706]: I1125 11:57:03.791942 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4edea425-7eb5-458b-8e80-3e04fe787998","Type":"ContainerDied","Data":"47a97f3bdaec06814548cc758ac0496c95daf22a53f74fb0ef28b454eb733c97"} Nov 25 11:57:03 crc kubenswrapper[4706]: I1125 11:57:03.791991 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4edea425-7eb5-458b-8e80-3e04fe787998","Type":"ContainerDied","Data":"c97d8c07a84b25192fe7846c3bb693d44d48fe82befc96c781f5f9d4db45db19"} Nov 25 11:57:03 crc kubenswrapper[4706]: I1125 11:57:03.792013 4706 scope.go:117] "RemoveContainer" containerID="da8d8f9f30a6576ef3b63ec5f392f77d70f616bc32d763fc3d425cf0de901590" Nov 25 11:57:03 crc kubenswrapper[4706]: I1125 11:57:03.832655 4706 scope.go:117] "RemoveContainer" containerID="a67668dcfd526e20e133d248873ac04478998be08ad46cdeabc6b648780977cf" Nov 25 11:57:03 crc kubenswrapper[4706]: I1125 11:57:03.874566 4706 scope.go:117] "RemoveContainer" containerID="47a97f3bdaec06814548cc758ac0496c95daf22a53f74fb0ef28b454eb733c97" Nov 25 11:57:03 crc kubenswrapper[4706]: I1125 11:57:03.897490 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4edea425-7eb5-458b-8e80-3e04fe787998-combined-ca-bundle\") pod \"4edea425-7eb5-458b-8e80-3e04fe787998\" (UID: \"4edea425-7eb5-458b-8e80-3e04fe787998\") " Nov 25 11:57:03 crc kubenswrapper[4706]: I1125 11:57:03.897765 4706 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4edea425-7eb5-458b-8e80-3e04fe787998-log-httpd\") pod \"4edea425-7eb5-458b-8e80-3e04fe787998\" (UID: \"4edea425-7eb5-458b-8e80-3e04fe787998\") " Nov 25 11:57:03 crc kubenswrapper[4706]: I1125 11:57:03.897902 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/4edea425-7eb5-458b-8e80-3e04fe787998-sg-core-conf-yaml\") pod \"4edea425-7eb5-458b-8e80-3e04fe787998\" (UID: \"4edea425-7eb5-458b-8e80-3e04fe787998\") " Nov 25 11:57:03 crc kubenswrapper[4706]: I1125 11:57:03.897942 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4edea425-7eb5-458b-8e80-3e04fe787998-run-httpd\") pod \"4edea425-7eb5-458b-8e80-3e04fe787998\" (UID: \"4edea425-7eb5-458b-8e80-3e04fe787998\") " Nov 25 11:57:03 crc kubenswrapper[4706]: I1125 11:57:03.898011 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4edea425-7eb5-458b-8e80-3e04fe787998-config-data\") pod \"4edea425-7eb5-458b-8e80-3e04fe787998\" (UID: \"4edea425-7eb5-458b-8e80-3e04fe787998\") " Nov 25 11:57:03 crc kubenswrapper[4706]: I1125 11:57:03.898057 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4edea425-7eb5-458b-8e80-3e04fe787998-scripts\") pod \"4edea425-7eb5-458b-8e80-3e04fe787998\" (UID: \"4edea425-7eb5-458b-8e80-3e04fe787998\") " Nov 25 11:57:03 crc kubenswrapper[4706]: I1125 11:57:03.898109 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2dkqc\" (UniqueName: \"kubernetes.io/projected/4edea425-7eb5-458b-8e80-3e04fe787998-kube-api-access-2dkqc\") pod \"4edea425-7eb5-458b-8e80-3e04fe787998\" (UID: \"4edea425-7eb5-458b-8e80-3e04fe787998\") 
" Nov 25 11:57:03 crc kubenswrapper[4706]: I1125 11:57:03.898810 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4edea425-7eb5-458b-8e80-3e04fe787998-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "4edea425-7eb5-458b-8e80-3e04fe787998" (UID: "4edea425-7eb5-458b-8e80-3e04fe787998"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 11:57:03 crc kubenswrapper[4706]: I1125 11:57:03.898903 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4edea425-7eb5-458b-8e80-3e04fe787998-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "4edea425-7eb5-458b-8e80-3e04fe787998" (UID: "4edea425-7eb5-458b-8e80-3e04fe787998"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 11:57:03 crc kubenswrapper[4706]: I1125 11:57:03.899491 4706 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4edea425-7eb5-458b-8e80-3e04fe787998-log-httpd\") on node \"crc\" DevicePath \"\"" Nov 25 11:57:03 crc kubenswrapper[4706]: I1125 11:57:03.899526 4706 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4edea425-7eb5-458b-8e80-3e04fe787998-run-httpd\") on node \"crc\" DevicePath \"\"" Nov 25 11:57:03 crc kubenswrapper[4706]: I1125 11:57:03.905104 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4edea425-7eb5-458b-8e80-3e04fe787998-scripts" (OuterVolumeSpecName: "scripts") pod "4edea425-7eb5-458b-8e80-3e04fe787998" (UID: "4edea425-7eb5-458b-8e80-3e04fe787998"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 11:57:03 crc kubenswrapper[4706]: I1125 11:57:03.918545 4706 scope.go:117] "RemoveContainer" containerID="9939373b15134b1719de1987d50545c2cfa39a6d3e179e0ca908a425e5b68532" Nov 25 11:57:03 crc kubenswrapper[4706]: I1125 11:57:03.920838 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4edea425-7eb5-458b-8e80-3e04fe787998-kube-api-access-2dkqc" (OuterVolumeSpecName: "kube-api-access-2dkqc") pod "4edea425-7eb5-458b-8e80-3e04fe787998" (UID: "4edea425-7eb5-458b-8e80-3e04fe787998"). InnerVolumeSpecName "kube-api-access-2dkqc". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 11:57:03 crc kubenswrapper[4706]: I1125 11:57:03.936141 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4edea425-7eb5-458b-8e80-3e04fe787998-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "4edea425-7eb5-458b-8e80-3e04fe787998" (UID: "4edea425-7eb5-458b-8e80-3e04fe787998"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 11:57:04 crc kubenswrapper[4706]: I1125 11:57:04.003779 4706 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/4edea425-7eb5-458b-8e80-3e04fe787998-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Nov 25 11:57:04 crc kubenswrapper[4706]: I1125 11:57:04.004144 4706 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4edea425-7eb5-458b-8e80-3e04fe787998-scripts\") on node \"crc\" DevicePath \"\"" Nov 25 11:57:04 crc kubenswrapper[4706]: I1125 11:57:04.004505 4706 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2dkqc\" (UniqueName: \"kubernetes.io/projected/4edea425-7eb5-458b-8e80-3e04fe787998-kube-api-access-2dkqc\") on node \"crc\" DevicePath \"\"" Nov 25 11:57:04 crc kubenswrapper[4706]: I1125 11:57:04.017888 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4edea425-7eb5-458b-8e80-3e04fe787998-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4edea425-7eb5-458b-8e80-3e04fe787998" (UID: "4edea425-7eb5-458b-8e80-3e04fe787998"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 11:57:04 crc kubenswrapper[4706]: I1125 11:57:04.063553 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4edea425-7eb5-458b-8e80-3e04fe787998-config-data" (OuterVolumeSpecName: "config-data") pod "4edea425-7eb5-458b-8e80-3e04fe787998" (UID: "4edea425-7eb5-458b-8e80-3e04fe787998"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 11:57:04 crc kubenswrapper[4706]: I1125 11:57:04.110782 4706 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4edea425-7eb5-458b-8e80-3e04fe787998-config-data\") on node \"crc\" DevicePath \"\"" Nov 25 11:57:04 crc kubenswrapper[4706]: I1125 11:57:04.110840 4706 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4edea425-7eb5-458b-8e80-3e04fe787998-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 11:57:04 crc kubenswrapper[4706]: I1125 11:57:04.150222 4706 scope.go:117] "RemoveContainer" containerID="da8d8f9f30a6576ef3b63ec5f392f77d70f616bc32d763fc3d425cf0de901590" Nov 25 11:57:04 crc kubenswrapper[4706]: E1125 11:57:04.150681 4706 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"da8d8f9f30a6576ef3b63ec5f392f77d70f616bc32d763fc3d425cf0de901590\": container with ID starting with da8d8f9f30a6576ef3b63ec5f392f77d70f616bc32d763fc3d425cf0de901590 not found: ID does not exist" containerID="da8d8f9f30a6576ef3b63ec5f392f77d70f616bc32d763fc3d425cf0de901590" Nov 25 11:57:04 crc kubenswrapper[4706]: I1125 11:57:04.150730 4706 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"da8d8f9f30a6576ef3b63ec5f392f77d70f616bc32d763fc3d425cf0de901590"} err="failed to get container status \"da8d8f9f30a6576ef3b63ec5f392f77d70f616bc32d763fc3d425cf0de901590\": rpc error: code = NotFound desc = could not find container \"da8d8f9f30a6576ef3b63ec5f392f77d70f616bc32d763fc3d425cf0de901590\": container with ID starting with da8d8f9f30a6576ef3b63ec5f392f77d70f616bc32d763fc3d425cf0de901590 not found: ID does not exist" Nov 25 11:57:04 crc kubenswrapper[4706]: I1125 11:57:04.150753 4706 scope.go:117] "RemoveContainer" 
containerID="a67668dcfd526e20e133d248873ac04478998be08ad46cdeabc6b648780977cf" Nov 25 11:57:04 crc kubenswrapper[4706]: E1125 11:57:04.151573 4706 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a67668dcfd526e20e133d248873ac04478998be08ad46cdeabc6b648780977cf\": container with ID starting with a67668dcfd526e20e133d248873ac04478998be08ad46cdeabc6b648780977cf not found: ID does not exist" containerID="a67668dcfd526e20e133d248873ac04478998be08ad46cdeabc6b648780977cf" Nov 25 11:57:04 crc kubenswrapper[4706]: I1125 11:57:04.151614 4706 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a67668dcfd526e20e133d248873ac04478998be08ad46cdeabc6b648780977cf"} err="failed to get container status \"a67668dcfd526e20e133d248873ac04478998be08ad46cdeabc6b648780977cf\": rpc error: code = NotFound desc = could not find container \"a67668dcfd526e20e133d248873ac04478998be08ad46cdeabc6b648780977cf\": container with ID starting with a67668dcfd526e20e133d248873ac04478998be08ad46cdeabc6b648780977cf not found: ID does not exist" Nov 25 11:57:04 crc kubenswrapper[4706]: I1125 11:57:04.151630 4706 scope.go:117] "RemoveContainer" containerID="47a97f3bdaec06814548cc758ac0496c95daf22a53f74fb0ef28b454eb733c97" Nov 25 11:57:04 crc kubenswrapper[4706]: E1125 11:57:04.151825 4706 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"47a97f3bdaec06814548cc758ac0496c95daf22a53f74fb0ef28b454eb733c97\": container with ID starting with 47a97f3bdaec06814548cc758ac0496c95daf22a53f74fb0ef28b454eb733c97 not found: ID does not exist" containerID="47a97f3bdaec06814548cc758ac0496c95daf22a53f74fb0ef28b454eb733c97" Nov 25 11:57:04 crc kubenswrapper[4706]: I1125 11:57:04.151844 4706 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"47a97f3bdaec06814548cc758ac0496c95daf22a53f74fb0ef28b454eb733c97"} err="failed to get container status \"47a97f3bdaec06814548cc758ac0496c95daf22a53f74fb0ef28b454eb733c97\": rpc error: code = NotFound desc = could not find container \"47a97f3bdaec06814548cc758ac0496c95daf22a53f74fb0ef28b454eb733c97\": container with ID starting with 47a97f3bdaec06814548cc758ac0496c95daf22a53f74fb0ef28b454eb733c97 not found: ID does not exist" Nov 25 11:57:04 crc kubenswrapper[4706]: I1125 11:57:04.151873 4706 scope.go:117] "RemoveContainer" containerID="9939373b15134b1719de1987d50545c2cfa39a6d3e179e0ca908a425e5b68532" Nov 25 11:57:04 crc kubenswrapper[4706]: E1125 11:57:04.152202 4706 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9939373b15134b1719de1987d50545c2cfa39a6d3e179e0ca908a425e5b68532\": container with ID starting with 9939373b15134b1719de1987d50545c2cfa39a6d3e179e0ca908a425e5b68532 not found: ID does not exist" containerID="9939373b15134b1719de1987d50545c2cfa39a6d3e179e0ca908a425e5b68532" Nov 25 11:57:04 crc kubenswrapper[4706]: I1125 11:57:04.152222 4706 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9939373b15134b1719de1987d50545c2cfa39a6d3e179e0ca908a425e5b68532"} err="failed to get container status \"9939373b15134b1719de1987d50545c2cfa39a6d3e179e0ca908a425e5b68532\": rpc error: code = NotFound desc = could not find container \"9939373b15134b1719de1987d50545c2cfa39a6d3e179e0ca908a425e5b68532\": container with ID starting with 9939373b15134b1719de1987d50545c2cfa39a6d3e179e0ca908a425e5b68532 not found: ID does not exist" Nov 25 11:57:04 crc kubenswrapper[4706]: I1125 11:57:04.683278 4706 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-5d6465f55b-zdrth" podUID="74b33eb1-0020-4037-918c-9e747dcfd61f" containerName="horizon" probeResult="failure" output="Get 
\"https://10.217.0.147:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.147:8443: connect: connection refused" Nov 25 11:57:04 crc kubenswrapper[4706]: I1125 11:57:04.683723 4706 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-5d6465f55b-zdrth" Nov 25 11:57:04 crc kubenswrapper[4706]: I1125 11:57:04.806229 4706 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 25 11:57:04 crc kubenswrapper[4706]: I1125 11:57:04.835585 4706 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 25 11:57:04 crc kubenswrapper[4706]: I1125 11:57:04.844766 4706 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Nov 25 11:57:04 crc kubenswrapper[4706]: I1125 11:57:04.856580 4706 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Nov 25 11:57:04 crc kubenswrapper[4706]: E1125 11:57:04.857209 4706 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4edea425-7eb5-458b-8e80-3e04fe787998" containerName="proxy-httpd" Nov 25 11:57:04 crc kubenswrapper[4706]: I1125 11:57:04.857324 4706 state_mem.go:107] "Deleted CPUSet assignment" podUID="4edea425-7eb5-458b-8e80-3e04fe787998" containerName="proxy-httpd" Nov 25 11:57:04 crc kubenswrapper[4706]: E1125 11:57:04.857431 4706 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4edea425-7eb5-458b-8e80-3e04fe787998" containerName="ceilometer-notification-agent" Nov 25 11:57:04 crc kubenswrapper[4706]: I1125 11:57:04.857490 4706 state_mem.go:107] "Deleted CPUSet assignment" podUID="4edea425-7eb5-458b-8e80-3e04fe787998" containerName="ceilometer-notification-agent" Nov 25 11:57:04 crc kubenswrapper[4706]: E1125 11:57:04.857546 4706 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4edea425-7eb5-458b-8e80-3e04fe787998" containerName="ceilometer-central-agent" Nov 25 11:57:04 crc kubenswrapper[4706]: I1125 
11:57:04.860721 4706 state_mem.go:107] "Deleted CPUSet assignment" podUID="4edea425-7eb5-458b-8e80-3e04fe787998" containerName="ceilometer-central-agent" Nov 25 11:57:04 crc kubenswrapper[4706]: E1125 11:57:04.860833 4706 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4edea425-7eb5-458b-8e80-3e04fe787998" containerName="sg-core" Nov 25 11:57:04 crc kubenswrapper[4706]: I1125 11:57:04.860915 4706 state_mem.go:107] "Deleted CPUSet assignment" podUID="4edea425-7eb5-458b-8e80-3e04fe787998" containerName="sg-core" Nov 25 11:57:04 crc kubenswrapper[4706]: I1125 11:57:04.861351 4706 memory_manager.go:354] "RemoveStaleState removing state" podUID="4edea425-7eb5-458b-8e80-3e04fe787998" containerName="sg-core" Nov 25 11:57:04 crc kubenswrapper[4706]: I1125 11:57:04.861479 4706 memory_manager.go:354] "RemoveStaleState removing state" podUID="4edea425-7eb5-458b-8e80-3e04fe787998" containerName="ceilometer-central-agent" Nov 25 11:57:04 crc kubenswrapper[4706]: I1125 11:57:04.861575 4706 memory_manager.go:354] "RemoveStaleState removing state" podUID="4edea425-7eb5-458b-8e80-3e04fe787998" containerName="proxy-httpd" Nov 25 11:57:04 crc kubenswrapper[4706]: I1125 11:57:04.861663 4706 memory_manager.go:354] "RemoveStaleState removing state" podUID="4edea425-7eb5-458b-8e80-3e04fe787998" containerName="ceilometer-notification-agent" Nov 25 11:57:04 crc kubenswrapper[4706]: I1125 11:57:04.865004 4706 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 25 11:57:04 crc kubenswrapper[4706]: I1125 11:57:04.867802 4706 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 25 11:57:04 crc kubenswrapper[4706]: I1125 11:57:04.869285 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Nov 25 11:57:04 crc kubenswrapper[4706]: I1125 11:57:04.869505 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Nov 25 11:57:04 crc kubenswrapper[4706]: I1125 11:57:04.923784 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wqqb9\" (UniqueName: \"kubernetes.io/projected/601bd00e-ad4b-4952-aa81-5dd731ac2ca9-kube-api-access-wqqb9\") pod \"ceilometer-0\" (UID: \"601bd00e-ad4b-4952-aa81-5dd731ac2ca9\") " pod="openstack/ceilometer-0" Nov 25 11:57:04 crc kubenswrapper[4706]: I1125 11:57:04.923879 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/601bd00e-ad4b-4952-aa81-5dd731ac2ca9-config-data\") pod \"ceilometer-0\" (UID: \"601bd00e-ad4b-4952-aa81-5dd731ac2ca9\") " pod="openstack/ceilometer-0" Nov 25 11:57:04 crc kubenswrapper[4706]: I1125 11:57:04.923927 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/601bd00e-ad4b-4952-aa81-5dd731ac2ca9-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"601bd00e-ad4b-4952-aa81-5dd731ac2ca9\") " pod="openstack/ceilometer-0" Nov 25 11:57:04 crc kubenswrapper[4706]: I1125 11:57:04.923948 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/601bd00e-ad4b-4952-aa81-5dd731ac2ca9-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: 
\"601bd00e-ad4b-4952-aa81-5dd731ac2ca9\") " pod="openstack/ceilometer-0" Nov 25 11:57:04 crc kubenswrapper[4706]: I1125 11:57:04.924021 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/601bd00e-ad4b-4952-aa81-5dd731ac2ca9-run-httpd\") pod \"ceilometer-0\" (UID: \"601bd00e-ad4b-4952-aa81-5dd731ac2ca9\") " pod="openstack/ceilometer-0" Nov 25 11:57:04 crc kubenswrapper[4706]: I1125 11:57:04.924066 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/601bd00e-ad4b-4952-aa81-5dd731ac2ca9-scripts\") pod \"ceilometer-0\" (UID: \"601bd00e-ad4b-4952-aa81-5dd731ac2ca9\") " pod="openstack/ceilometer-0" Nov 25 11:57:04 crc kubenswrapper[4706]: I1125 11:57:04.924121 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/601bd00e-ad4b-4952-aa81-5dd731ac2ca9-log-httpd\") pod \"ceilometer-0\" (UID: \"601bd00e-ad4b-4952-aa81-5dd731ac2ca9\") " pod="openstack/ceilometer-0" Nov 25 11:57:05 crc kubenswrapper[4706]: I1125 11:57:05.025683 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wqqb9\" (UniqueName: \"kubernetes.io/projected/601bd00e-ad4b-4952-aa81-5dd731ac2ca9-kube-api-access-wqqb9\") pod \"ceilometer-0\" (UID: \"601bd00e-ad4b-4952-aa81-5dd731ac2ca9\") " pod="openstack/ceilometer-0" Nov 25 11:57:05 crc kubenswrapper[4706]: I1125 11:57:05.025832 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/601bd00e-ad4b-4952-aa81-5dd731ac2ca9-config-data\") pod \"ceilometer-0\" (UID: \"601bd00e-ad4b-4952-aa81-5dd731ac2ca9\") " pod="openstack/ceilometer-0" Nov 25 11:57:05 crc kubenswrapper[4706]: I1125 11:57:05.025861 4706 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/601bd00e-ad4b-4952-aa81-5dd731ac2ca9-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"601bd00e-ad4b-4952-aa81-5dd731ac2ca9\") " pod="openstack/ceilometer-0" Nov 25 11:57:05 crc kubenswrapper[4706]: I1125 11:57:05.025881 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/601bd00e-ad4b-4952-aa81-5dd731ac2ca9-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"601bd00e-ad4b-4952-aa81-5dd731ac2ca9\") " pod="openstack/ceilometer-0" Nov 25 11:57:05 crc kubenswrapper[4706]: I1125 11:57:05.025922 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/601bd00e-ad4b-4952-aa81-5dd731ac2ca9-run-httpd\") pod \"ceilometer-0\" (UID: \"601bd00e-ad4b-4952-aa81-5dd731ac2ca9\") " pod="openstack/ceilometer-0" Nov 25 11:57:05 crc kubenswrapper[4706]: I1125 11:57:05.025943 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/601bd00e-ad4b-4952-aa81-5dd731ac2ca9-scripts\") pod \"ceilometer-0\" (UID: \"601bd00e-ad4b-4952-aa81-5dd731ac2ca9\") " pod="openstack/ceilometer-0" Nov 25 11:57:05 crc kubenswrapper[4706]: I1125 11:57:05.025977 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/601bd00e-ad4b-4952-aa81-5dd731ac2ca9-log-httpd\") pod \"ceilometer-0\" (UID: \"601bd00e-ad4b-4952-aa81-5dd731ac2ca9\") " pod="openstack/ceilometer-0" Nov 25 11:57:05 crc kubenswrapper[4706]: I1125 11:57:05.026491 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/601bd00e-ad4b-4952-aa81-5dd731ac2ca9-log-httpd\") pod \"ceilometer-0\" (UID: \"601bd00e-ad4b-4952-aa81-5dd731ac2ca9\") " 
pod="openstack/ceilometer-0" Nov 25 11:57:05 crc kubenswrapper[4706]: I1125 11:57:05.026716 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/601bd00e-ad4b-4952-aa81-5dd731ac2ca9-run-httpd\") pod \"ceilometer-0\" (UID: \"601bd00e-ad4b-4952-aa81-5dd731ac2ca9\") " pod="openstack/ceilometer-0" Nov 25 11:57:05 crc kubenswrapper[4706]: I1125 11:57:05.031180 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/601bd00e-ad4b-4952-aa81-5dd731ac2ca9-config-data\") pod \"ceilometer-0\" (UID: \"601bd00e-ad4b-4952-aa81-5dd731ac2ca9\") " pod="openstack/ceilometer-0" Nov 25 11:57:05 crc kubenswrapper[4706]: I1125 11:57:05.031444 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/601bd00e-ad4b-4952-aa81-5dd731ac2ca9-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"601bd00e-ad4b-4952-aa81-5dd731ac2ca9\") " pod="openstack/ceilometer-0" Nov 25 11:57:05 crc kubenswrapper[4706]: I1125 11:57:05.035040 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/601bd00e-ad4b-4952-aa81-5dd731ac2ca9-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"601bd00e-ad4b-4952-aa81-5dd731ac2ca9\") " pod="openstack/ceilometer-0" Nov 25 11:57:05 crc kubenswrapper[4706]: I1125 11:57:05.038228 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/601bd00e-ad4b-4952-aa81-5dd731ac2ca9-scripts\") pod \"ceilometer-0\" (UID: \"601bd00e-ad4b-4952-aa81-5dd731ac2ca9\") " pod="openstack/ceilometer-0" Nov 25 11:57:05 crc kubenswrapper[4706]: I1125 11:57:05.043997 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wqqb9\" (UniqueName: 
\"kubernetes.io/projected/601bd00e-ad4b-4952-aa81-5dd731ac2ca9-kube-api-access-wqqb9\") pod \"ceilometer-0\" (UID: \"601bd00e-ad4b-4952-aa81-5dd731ac2ca9\") " pod="openstack/ceilometer-0" Nov 25 11:57:05 crc kubenswrapper[4706]: I1125 11:57:05.198410 4706 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 25 11:57:05 crc kubenswrapper[4706]: I1125 11:57:05.937030 4706 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4edea425-7eb5-458b-8e80-3e04fe787998" path="/var/lib/kubelet/pods/4edea425-7eb5-458b-8e80-3e04fe787998/volumes" Nov 25 11:57:06 crc kubenswrapper[4706]: I1125 11:57:06.734178 4706 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 25 11:57:09 crc kubenswrapper[4706]: I1125 11:57:09.321432 4706 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-65d9589979-xw964" Nov 25 11:57:09 crc kubenswrapper[4706]: I1125 11:57:09.325707 4706 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-65d9589979-xw964" Nov 25 11:57:09 crc kubenswrapper[4706]: I1125 11:57:09.868229 4706 generic.go:334] "Generic (PLEG): container finished" podID="60e3d8af-641e-4c2c-b105-3d1b4b98904f" containerID="98d8b014a535b17e29ca946fbcc980dcf786569a83ca1d31e699b4f7a9197dae" exitCode=137 Nov 25 11:57:09 crc kubenswrapper[4706]: I1125 11:57:09.869313 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"60e3d8af-641e-4c2c-b105-3d1b4b98904f","Type":"ContainerDied","Data":"98d8b014a535b17e29ca946fbcc980dcf786569a83ca1d31e699b4f7a9197dae"} Nov 25 11:57:10 crc kubenswrapper[4706]: I1125 11:57:10.255567 4706 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-7964f7f8cc-7zjzw" Nov 25 11:57:10 crc kubenswrapper[4706]: I1125 11:57:10.376716 4706 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/neutron-779dc76bb8-fwppw"] Nov 25 11:57:10 crc kubenswrapper[4706]: I1125 11:57:10.376992 4706 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-779dc76bb8-fwppw" podUID="6d2de783-5f62-4740-87d8-cef1b4941953" containerName="neutron-api" containerID="cri-o://90ed7f1fe46c3e584ef27ec512a9e5f7978715acab3cc385b2aa03d78bbad7f5" gracePeriod=30 Nov 25 11:57:10 crc kubenswrapper[4706]: I1125 11:57:10.377496 4706 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-779dc76bb8-fwppw" podUID="6d2de783-5f62-4740-87d8-cef1b4941953" containerName="neutron-httpd" containerID="cri-o://02b48970b5c92dfb6a9103f7137e53df7dd178574e3611a855155f1b079a9a9e" gracePeriod=30 Nov 25 11:57:10 crc kubenswrapper[4706]: I1125 11:57:10.533170 4706 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Nov 25 11:57:10 crc kubenswrapper[4706]: I1125 11:57:10.539704 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/60e3d8af-641e-4c2c-b105-3d1b4b98904f-etc-machine-id\") pod \"60e3d8af-641e-4c2c-b105-3d1b4b98904f\" (UID: \"60e3d8af-641e-4c2c-b105-3d1b4b98904f\") " Nov 25 11:57:10 crc kubenswrapper[4706]: I1125 11:57:10.539758 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/60e3d8af-641e-4c2c-b105-3d1b4b98904f-config-data\") pod \"60e3d8af-641e-4c2c-b105-3d1b4b98904f\" (UID: \"60e3d8af-641e-4c2c-b105-3d1b4b98904f\") " Nov 25 11:57:10 crc kubenswrapper[4706]: I1125 11:57:10.539793 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/60e3d8af-641e-4c2c-b105-3d1b4b98904f-config-data-custom\") pod \"60e3d8af-641e-4c2c-b105-3d1b4b98904f\" (UID: \"60e3d8af-641e-4c2c-b105-3d1b4b98904f\") " Nov 25 
11:57:10 crc kubenswrapper[4706]: I1125 11:57:10.539894 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/60e3d8af-641e-4c2c-b105-3d1b4b98904f-logs\") pod \"60e3d8af-641e-4c2c-b105-3d1b4b98904f\" (UID: \"60e3d8af-641e-4c2c-b105-3d1b4b98904f\") " Nov 25 11:57:10 crc kubenswrapper[4706]: I1125 11:57:10.539937 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/60e3d8af-641e-4c2c-b105-3d1b4b98904f-scripts\") pod \"60e3d8af-641e-4c2c-b105-3d1b4b98904f\" (UID: \"60e3d8af-641e-4c2c-b105-3d1b4b98904f\") " Nov 25 11:57:10 crc kubenswrapper[4706]: I1125 11:57:10.540005 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q9h4r\" (UniqueName: \"kubernetes.io/projected/60e3d8af-641e-4c2c-b105-3d1b4b98904f-kube-api-access-q9h4r\") pod \"60e3d8af-641e-4c2c-b105-3d1b4b98904f\" (UID: \"60e3d8af-641e-4c2c-b105-3d1b4b98904f\") " Nov 25 11:57:10 crc kubenswrapper[4706]: I1125 11:57:10.540038 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/60e3d8af-641e-4c2c-b105-3d1b4b98904f-combined-ca-bundle\") pod \"60e3d8af-641e-4c2c-b105-3d1b4b98904f\" (UID: \"60e3d8af-641e-4c2c-b105-3d1b4b98904f\") " Nov 25 11:57:10 crc kubenswrapper[4706]: I1125 11:57:10.540982 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/60e3d8af-641e-4c2c-b105-3d1b4b98904f-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "60e3d8af-641e-4c2c-b105-3d1b4b98904f" (UID: "60e3d8af-641e-4c2c-b105-3d1b4b98904f"). InnerVolumeSpecName "etc-machine-id". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 25 11:57:10 crc kubenswrapper[4706]: I1125 11:57:10.541567 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/60e3d8af-641e-4c2c-b105-3d1b4b98904f-logs" (OuterVolumeSpecName: "logs") pod "60e3d8af-641e-4c2c-b105-3d1b4b98904f" (UID: "60e3d8af-641e-4c2c-b105-3d1b4b98904f"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 11:57:10 crc kubenswrapper[4706]: I1125 11:57:10.546279 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/60e3d8af-641e-4c2c-b105-3d1b4b98904f-scripts" (OuterVolumeSpecName: "scripts") pod "60e3d8af-641e-4c2c-b105-3d1b4b98904f" (UID: "60e3d8af-641e-4c2c-b105-3d1b4b98904f"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 11:57:10 crc kubenswrapper[4706]: I1125 11:57:10.557464 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/60e3d8af-641e-4c2c-b105-3d1b4b98904f-kube-api-access-q9h4r" (OuterVolumeSpecName: "kube-api-access-q9h4r") pod "60e3d8af-641e-4c2c-b105-3d1b4b98904f" (UID: "60e3d8af-641e-4c2c-b105-3d1b4b98904f"). InnerVolumeSpecName "kube-api-access-q9h4r". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 11:57:10 crc kubenswrapper[4706]: I1125 11:57:10.566221 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/60e3d8af-641e-4c2c-b105-3d1b4b98904f-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "60e3d8af-641e-4c2c-b105-3d1b4b98904f" (UID: "60e3d8af-641e-4c2c-b105-3d1b4b98904f"). InnerVolumeSpecName "config-data-custom". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 11:57:10 crc kubenswrapper[4706]: I1125 11:57:10.610480 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/60e3d8af-641e-4c2c-b105-3d1b4b98904f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "60e3d8af-641e-4c2c-b105-3d1b4b98904f" (UID: "60e3d8af-641e-4c2c-b105-3d1b4b98904f"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 11:57:10 crc kubenswrapper[4706]: I1125 11:57:10.642417 4706 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/60e3d8af-641e-4c2c-b105-3d1b4b98904f-logs\") on node \"crc\" DevicePath \"\"" Nov 25 11:57:10 crc kubenswrapper[4706]: I1125 11:57:10.642452 4706 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/60e3d8af-641e-4c2c-b105-3d1b4b98904f-scripts\") on node \"crc\" DevicePath \"\"" Nov 25 11:57:10 crc kubenswrapper[4706]: I1125 11:57:10.642461 4706 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q9h4r\" (UniqueName: \"kubernetes.io/projected/60e3d8af-641e-4c2c-b105-3d1b4b98904f-kube-api-access-q9h4r\") on node \"crc\" DevicePath \"\"" Nov 25 11:57:10 crc kubenswrapper[4706]: I1125 11:57:10.642470 4706 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/60e3d8af-641e-4c2c-b105-3d1b4b98904f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 11:57:10 crc kubenswrapper[4706]: I1125 11:57:10.642478 4706 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/60e3d8af-641e-4c2c-b105-3d1b4b98904f-etc-machine-id\") on node \"crc\" DevicePath \"\"" Nov 25 11:57:10 crc kubenswrapper[4706]: I1125 11:57:10.642486 4706 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: 
\"kubernetes.io/secret/60e3d8af-641e-4c2c-b105-3d1b4b98904f-config-data-custom\") on node \"crc\" DevicePath \"\"" Nov 25 11:57:10 crc kubenswrapper[4706]: I1125 11:57:10.647541 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/60e3d8af-641e-4c2c-b105-3d1b4b98904f-config-data" (OuterVolumeSpecName: "config-data") pod "60e3d8af-641e-4c2c-b105-3d1b4b98904f" (UID: "60e3d8af-641e-4c2c-b105-3d1b4b98904f"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 11:57:10 crc kubenswrapper[4706]: I1125 11:57:10.752817 4706 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/60e3d8af-641e-4c2c-b105-3d1b4b98904f-config-data\") on node \"crc\" DevicePath \"\"" Nov 25 11:57:10 crc kubenswrapper[4706]: I1125 11:57:10.798454 4706 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 25 11:57:10 crc kubenswrapper[4706]: W1125 11:57:10.812541 4706 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod601bd00e_ad4b_4952_aa81_5dd731ac2ca9.slice/crio-b51bb13bfa7a754bfc8c98946382ebc3ee3c10c764054a96baf0404194151d1f WatchSource:0}: Error finding container b51bb13bfa7a754bfc8c98946382ebc3ee3c10c764054a96baf0404194151d1f: Status 404 returned error can't find the container with id b51bb13bfa7a754bfc8c98946382ebc3ee3c10c764054a96baf0404194151d1f Nov 25 11:57:10 crc kubenswrapper[4706]: I1125 11:57:10.897240 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"601bd00e-ad4b-4952-aa81-5dd731ac2ca9","Type":"ContainerStarted","Data":"b51bb13bfa7a754bfc8c98946382ebc3ee3c10c764054a96baf0404194151d1f"} Nov 25 11:57:10 crc kubenswrapper[4706]: I1125 11:57:10.908905 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" 
event={"ID":"60e3d8af-641e-4c2c-b105-3d1b4b98904f","Type":"ContainerDied","Data":"90de65fd65ef7e1f3a095d1a78255ea80bdd49236032c6a6be6f79b440ec2c55"} Nov 25 11:57:10 crc kubenswrapper[4706]: I1125 11:57:10.908967 4706 scope.go:117] "RemoveContainer" containerID="98d8b014a535b17e29ca946fbcc980dcf786569a83ca1d31e699b4f7a9197dae" Nov 25 11:57:10 crc kubenswrapper[4706]: I1125 11:57:10.909013 4706 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Nov 25 11:57:10 crc kubenswrapper[4706]: I1125 11:57:10.915504 4706 generic.go:334] "Generic (PLEG): container finished" podID="6d2de783-5f62-4740-87d8-cef1b4941953" containerID="02b48970b5c92dfb6a9103f7137e53df7dd178574e3611a855155f1b079a9a9e" exitCode=0 Nov 25 11:57:10 crc kubenswrapper[4706]: I1125 11:57:10.915553 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-779dc76bb8-fwppw" event={"ID":"6d2de783-5f62-4740-87d8-cef1b4941953","Type":"ContainerDied","Data":"02b48970b5c92dfb6a9103f7137e53df7dd178574e3611a855155f1b079a9a9e"} Nov 25 11:57:10 crc kubenswrapper[4706]: I1125 11:57:10.937194 4706 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Nov 25 11:57:10 crc kubenswrapper[4706]: I1125 11:57:10.942442 4706 scope.go:117] "RemoveContainer" containerID="9172c3a5a4d92a4d142d21b37162e6f96520ff62c861e838243fbc680cab004a" Nov 25 11:57:10 crc kubenswrapper[4706]: I1125 11:57:10.970838 4706 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-api-0"] Nov 25 11:57:10 crc kubenswrapper[4706]: I1125 11:57:10.984279 4706 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Nov 25 11:57:10 crc kubenswrapper[4706]: E1125 11:57:10.984751 4706 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="60e3d8af-641e-4c2c-b105-3d1b4b98904f" containerName="cinder-api-log" Nov 25 11:57:10 crc kubenswrapper[4706]: I1125 11:57:10.984768 4706 state_mem.go:107] "Deleted CPUSet 
assignment" podUID="60e3d8af-641e-4c2c-b105-3d1b4b98904f" containerName="cinder-api-log" Nov 25 11:57:10 crc kubenswrapper[4706]: E1125 11:57:10.984787 4706 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="60e3d8af-641e-4c2c-b105-3d1b4b98904f" containerName="cinder-api" Nov 25 11:57:10 crc kubenswrapper[4706]: I1125 11:57:10.984795 4706 state_mem.go:107] "Deleted CPUSet assignment" podUID="60e3d8af-641e-4c2c-b105-3d1b4b98904f" containerName="cinder-api" Nov 25 11:57:10 crc kubenswrapper[4706]: I1125 11:57:10.984983 4706 memory_manager.go:354] "RemoveStaleState removing state" podUID="60e3d8af-641e-4c2c-b105-3d1b4b98904f" containerName="cinder-api" Nov 25 11:57:10 crc kubenswrapper[4706]: I1125 11:57:10.984995 4706 memory_manager.go:354] "RemoveStaleState removing state" podUID="60e3d8af-641e-4c2c-b105-3d1b4b98904f" containerName="cinder-api-log" Nov 25 11:57:10 crc kubenswrapper[4706]: I1125 11:57:10.992117 4706 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Nov 25 11:57:10 crc kubenswrapper[4706]: I1125 11:57:10.995415 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Nov 25 11:57:10 crc kubenswrapper[4706]: I1125 11:57:10.995968 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-public-svc" Nov 25 11:57:10 crc kubenswrapper[4706]: I1125 11:57:10.996186 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-internal-svc" Nov 25 11:57:11 crc kubenswrapper[4706]: I1125 11:57:11.013411 4706 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Nov 25 11:57:11 crc kubenswrapper[4706]: I1125 11:57:11.061187 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3f35fbd6-a7c7-4d44-af30-601512a5dfa4-config-data\") pod \"cinder-api-0\" (UID: 
\"3f35fbd6-a7c7-4d44-af30-601512a5dfa4\") " pod="openstack/cinder-api-0" Nov 25 11:57:11 crc kubenswrapper[4706]: I1125 11:57:11.061344 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3f35fbd6-a7c7-4d44-af30-601512a5dfa4-scripts\") pod \"cinder-api-0\" (UID: \"3f35fbd6-a7c7-4d44-af30-601512a5dfa4\") " pod="openstack/cinder-api-0" Nov 25 11:57:11 crc kubenswrapper[4706]: I1125 11:57:11.061549 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3f35fbd6-a7c7-4d44-af30-601512a5dfa4-logs\") pod \"cinder-api-0\" (UID: \"3f35fbd6-a7c7-4d44-af30-601512a5dfa4\") " pod="openstack/cinder-api-0" Nov 25 11:57:11 crc kubenswrapper[4706]: I1125 11:57:11.061592 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xx86g\" (UniqueName: \"kubernetes.io/projected/3f35fbd6-a7c7-4d44-af30-601512a5dfa4-kube-api-access-xx86g\") pod \"cinder-api-0\" (UID: \"3f35fbd6-a7c7-4d44-af30-601512a5dfa4\") " pod="openstack/cinder-api-0" Nov 25 11:57:11 crc kubenswrapper[4706]: I1125 11:57:11.061641 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3f35fbd6-a7c7-4d44-af30-601512a5dfa4-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"3f35fbd6-a7c7-4d44-af30-601512a5dfa4\") " pod="openstack/cinder-api-0" Nov 25 11:57:11 crc kubenswrapper[4706]: I1125 11:57:11.061667 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3f35fbd6-a7c7-4d44-af30-601512a5dfa4-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"3f35fbd6-a7c7-4d44-af30-601512a5dfa4\") " pod="openstack/cinder-api-0" Nov 25 11:57:11 crc kubenswrapper[4706]: I1125 
11:57:11.061711 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/3f35fbd6-a7c7-4d44-af30-601512a5dfa4-etc-machine-id\") pod \"cinder-api-0\" (UID: \"3f35fbd6-a7c7-4d44-af30-601512a5dfa4\") " pod="openstack/cinder-api-0" Nov 25 11:57:11 crc kubenswrapper[4706]: I1125 11:57:11.061783 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3f35fbd6-a7c7-4d44-af30-601512a5dfa4-config-data-custom\") pod \"cinder-api-0\" (UID: \"3f35fbd6-a7c7-4d44-af30-601512a5dfa4\") " pod="openstack/cinder-api-0" Nov 25 11:57:11 crc kubenswrapper[4706]: I1125 11:57:11.061818 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3f35fbd6-a7c7-4d44-af30-601512a5dfa4-public-tls-certs\") pod \"cinder-api-0\" (UID: \"3f35fbd6-a7c7-4d44-af30-601512a5dfa4\") " pod="openstack/cinder-api-0" Nov 25 11:57:11 crc kubenswrapper[4706]: I1125 11:57:11.166644 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3f35fbd6-a7c7-4d44-af30-601512a5dfa4-logs\") pod \"cinder-api-0\" (UID: \"3f35fbd6-a7c7-4d44-af30-601512a5dfa4\") " pod="openstack/cinder-api-0" Nov 25 11:57:11 crc kubenswrapper[4706]: I1125 11:57:11.166914 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xx86g\" (UniqueName: \"kubernetes.io/projected/3f35fbd6-a7c7-4d44-af30-601512a5dfa4-kube-api-access-xx86g\") pod \"cinder-api-0\" (UID: \"3f35fbd6-a7c7-4d44-af30-601512a5dfa4\") " pod="openstack/cinder-api-0" Nov 25 11:57:11 crc kubenswrapper[4706]: I1125 11:57:11.167044 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/3f35fbd6-a7c7-4d44-af30-601512a5dfa4-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"3f35fbd6-a7c7-4d44-af30-601512a5dfa4\") " pod="openstack/cinder-api-0" Nov 25 11:57:11 crc kubenswrapper[4706]: I1125 11:57:11.167145 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3f35fbd6-a7c7-4d44-af30-601512a5dfa4-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"3f35fbd6-a7c7-4d44-af30-601512a5dfa4\") " pod="openstack/cinder-api-0" Nov 25 11:57:11 crc kubenswrapper[4706]: I1125 11:57:11.167238 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/3f35fbd6-a7c7-4d44-af30-601512a5dfa4-etc-machine-id\") pod \"cinder-api-0\" (UID: \"3f35fbd6-a7c7-4d44-af30-601512a5dfa4\") " pod="openstack/cinder-api-0" Nov 25 11:57:11 crc kubenswrapper[4706]: I1125 11:57:11.167356 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3f35fbd6-a7c7-4d44-af30-601512a5dfa4-config-data-custom\") pod \"cinder-api-0\" (UID: \"3f35fbd6-a7c7-4d44-af30-601512a5dfa4\") " pod="openstack/cinder-api-0" Nov 25 11:57:11 crc kubenswrapper[4706]: I1125 11:57:11.167830 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3f35fbd6-a7c7-4d44-af30-601512a5dfa4-public-tls-certs\") pod \"cinder-api-0\" (UID: \"3f35fbd6-a7c7-4d44-af30-601512a5dfa4\") " pod="openstack/cinder-api-0" Nov 25 11:57:11 crc kubenswrapper[4706]: I1125 11:57:11.167956 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3f35fbd6-a7c7-4d44-af30-601512a5dfa4-config-data\") pod \"cinder-api-0\" (UID: \"3f35fbd6-a7c7-4d44-af30-601512a5dfa4\") " pod="openstack/cinder-api-0" Nov 25 11:57:11 crc 
kubenswrapper[4706]: I1125 11:57:11.170837 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3f35fbd6-a7c7-4d44-af30-601512a5dfa4-scripts\") pod \"cinder-api-0\" (UID: \"3f35fbd6-a7c7-4d44-af30-601512a5dfa4\") " pod="openstack/cinder-api-0" Nov 25 11:57:11 crc kubenswrapper[4706]: I1125 11:57:11.167247 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3f35fbd6-a7c7-4d44-af30-601512a5dfa4-logs\") pod \"cinder-api-0\" (UID: \"3f35fbd6-a7c7-4d44-af30-601512a5dfa4\") " pod="openstack/cinder-api-0" Nov 25 11:57:11 crc kubenswrapper[4706]: I1125 11:57:11.167442 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/3f35fbd6-a7c7-4d44-af30-601512a5dfa4-etc-machine-id\") pod \"cinder-api-0\" (UID: \"3f35fbd6-a7c7-4d44-af30-601512a5dfa4\") " pod="openstack/cinder-api-0" Nov 25 11:57:11 crc kubenswrapper[4706]: I1125 11:57:11.175757 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3f35fbd6-a7c7-4d44-af30-601512a5dfa4-public-tls-certs\") pod \"cinder-api-0\" (UID: \"3f35fbd6-a7c7-4d44-af30-601512a5dfa4\") " pod="openstack/cinder-api-0" Nov 25 11:57:11 crc kubenswrapper[4706]: I1125 11:57:11.175812 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3f35fbd6-a7c7-4d44-af30-601512a5dfa4-config-data-custom\") pod \"cinder-api-0\" (UID: \"3f35fbd6-a7c7-4d44-af30-601512a5dfa4\") " pod="openstack/cinder-api-0" Nov 25 11:57:11 crc kubenswrapper[4706]: I1125 11:57:11.176901 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3f35fbd6-a7c7-4d44-af30-601512a5dfa4-scripts\") pod \"cinder-api-0\" (UID: 
\"3f35fbd6-a7c7-4d44-af30-601512a5dfa4\") " pod="openstack/cinder-api-0" Nov 25 11:57:11 crc kubenswrapper[4706]: I1125 11:57:11.186293 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3f35fbd6-a7c7-4d44-af30-601512a5dfa4-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"3f35fbd6-a7c7-4d44-af30-601512a5dfa4\") " pod="openstack/cinder-api-0" Nov 25 11:57:11 crc kubenswrapper[4706]: I1125 11:57:11.186776 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3f35fbd6-a7c7-4d44-af30-601512a5dfa4-config-data\") pod \"cinder-api-0\" (UID: \"3f35fbd6-a7c7-4d44-af30-601512a5dfa4\") " pod="openstack/cinder-api-0" Nov 25 11:57:11 crc kubenswrapper[4706]: I1125 11:57:11.194133 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3f35fbd6-a7c7-4d44-af30-601512a5dfa4-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"3f35fbd6-a7c7-4d44-af30-601512a5dfa4\") " pod="openstack/cinder-api-0" Nov 25 11:57:11 crc kubenswrapper[4706]: I1125 11:57:11.194692 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xx86g\" (UniqueName: \"kubernetes.io/projected/3f35fbd6-a7c7-4d44-af30-601512a5dfa4-kube-api-access-xx86g\") pod \"cinder-api-0\" (UID: \"3f35fbd6-a7c7-4d44-af30-601512a5dfa4\") " pod="openstack/cinder-api-0" Nov 25 11:57:11 crc kubenswrapper[4706]: I1125 11:57:11.308098 4706 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Nov 25 11:57:11 crc kubenswrapper[4706]: I1125 11:57:11.795593 4706 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Nov 25 11:57:11 crc kubenswrapper[4706]: I1125 11:57:11.891563 4706 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-5d6465f55b-zdrth" Nov 25 11:57:11 crc kubenswrapper[4706]: I1125 11:57:11.950381 4706 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="60e3d8af-641e-4c2c-b105-3d1b4b98904f" path="/var/lib/kubelet/pods/60e3d8af-641e-4c2c-b105-3d1b4b98904f/volumes" Nov 25 11:57:11 crc kubenswrapper[4706]: I1125 11:57:11.977404 4706 generic.go:334] "Generic (PLEG): container finished" podID="74b33eb1-0020-4037-918c-9e747dcfd61f" containerID="779cce40cf4cc4947bddf2063a31d045574d3997800d880ef7c40c01c42a4f70" exitCode=137 Nov 25 11:57:11 crc kubenswrapper[4706]: I1125 11:57:11.977673 4706 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-5d6465f55b-zdrth" Nov 25 11:57:11 crc kubenswrapper[4706]: I1125 11:57:11.977680 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5d6465f55b-zdrth" event={"ID":"74b33eb1-0020-4037-918c-9e747dcfd61f","Type":"ContainerDied","Data":"779cce40cf4cc4947bddf2063a31d045574d3997800d880ef7c40c01c42a4f70"} Nov 25 11:57:11 crc kubenswrapper[4706]: I1125 11:57:11.977916 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5d6465f55b-zdrth" event={"ID":"74b33eb1-0020-4037-918c-9e747dcfd61f","Type":"ContainerDied","Data":"a47feaa85e40c474876dd46428ea160b0a82ec7f94cc77f9a69dd0cfe0b98dcd"} Nov 25 11:57:11 crc kubenswrapper[4706]: I1125 11:57:11.977983 4706 scope.go:117] "RemoveContainer" containerID="5f702a091e203894b9c68bd117079bc8a175269c6b226c33e9f95d472f2849bf" Nov 25 11:57:11 crc kubenswrapper[4706]: I1125 11:57:11.986953 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/74b33eb1-0020-4037-918c-9e747dcfd61f-horizon-tls-certs\") pod \"74b33eb1-0020-4037-918c-9e747dcfd61f\" (UID: \"74b33eb1-0020-4037-918c-9e747dcfd61f\") " Nov 25 11:57:11 crc kubenswrapper[4706]: I1125 11:57:11.987218 4706 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/74b33eb1-0020-4037-918c-9e747dcfd61f-horizon-secret-key\") pod \"74b33eb1-0020-4037-918c-9e747dcfd61f\" (UID: \"74b33eb1-0020-4037-918c-9e747dcfd61f\") " Nov 25 11:57:11 crc kubenswrapper[4706]: I1125 11:57:11.987373 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/74b33eb1-0020-4037-918c-9e747dcfd61f-logs\") pod \"74b33eb1-0020-4037-918c-9e747dcfd61f\" (UID: \"74b33eb1-0020-4037-918c-9e747dcfd61f\") " Nov 25 11:57:11 crc kubenswrapper[4706]: I1125 11:57:11.987497 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/74b33eb1-0020-4037-918c-9e747dcfd61f-config-data\") pod \"74b33eb1-0020-4037-918c-9e747dcfd61f\" (UID: \"74b33eb1-0020-4037-918c-9e747dcfd61f\") " Nov 25 11:57:11 crc kubenswrapper[4706]: I1125 11:57:11.987671 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/74b33eb1-0020-4037-918c-9e747dcfd61f-combined-ca-bundle\") pod \"74b33eb1-0020-4037-918c-9e747dcfd61f\" (UID: \"74b33eb1-0020-4037-918c-9e747dcfd61f\") " Nov 25 11:57:11 crc kubenswrapper[4706]: I1125 11:57:11.987810 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/74b33eb1-0020-4037-918c-9e747dcfd61f-scripts\") pod \"74b33eb1-0020-4037-918c-9e747dcfd61f\" (UID: \"74b33eb1-0020-4037-918c-9e747dcfd61f\") " Nov 25 11:57:11 crc kubenswrapper[4706]: I1125 11:57:11.987983 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2gp4v\" (UniqueName: \"kubernetes.io/projected/74b33eb1-0020-4037-918c-9e747dcfd61f-kube-api-access-2gp4v\") pod \"74b33eb1-0020-4037-918c-9e747dcfd61f\" (UID: 
\"74b33eb1-0020-4037-918c-9e747dcfd61f\") " Nov 25 11:57:11 crc kubenswrapper[4706]: I1125 11:57:11.993481 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/74b33eb1-0020-4037-918c-9e747dcfd61f-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "74b33eb1-0020-4037-918c-9e747dcfd61f" (UID: "74b33eb1-0020-4037-918c-9e747dcfd61f"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 11:57:11 crc kubenswrapper[4706]: I1125 11:57:11.993683 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/74b33eb1-0020-4037-918c-9e747dcfd61f-logs" (OuterVolumeSpecName: "logs") pod "74b33eb1-0020-4037-918c-9e747dcfd61f" (UID: "74b33eb1-0020-4037-918c-9e747dcfd61f"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 11:57:11 crc kubenswrapper[4706]: I1125 11:57:11.994000 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"601bd00e-ad4b-4952-aa81-5dd731ac2ca9","Type":"ContainerStarted","Data":"d251c706b762c92dcb8e2ba62471e7b54ae10947ac1468c5131412316ce5fcd4"} Nov 25 11:57:11 crc kubenswrapper[4706]: I1125 11:57:11.997877 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"3f35fbd6-a7c7-4d44-af30-601512a5dfa4","Type":"ContainerStarted","Data":"878a71f6251dd6cdae36b5b7c3a55fb5bbdd9640bed090be26172031e4aff47d"} Nov 25 11:57:11 crc kubenswrapper[4706]: I1125 11:57:11.999750 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/74b33eb1-0020-4037-918c-9e747dcfd61f-kube-api-access-2gp4v" (OuterVolumeSpecName: "kube-api-access-2gp4v") pod "74b33eb1-0020-4037-918c-9e747dcfd61f" (UID: "74b33eb1-0020-4037-918c-9e747dcfd61f"). InnerVolumeSpecName "kube-api-access-2gp4v". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 11:57:12 crc kubenswrapper[4706]: I1125 11:57:12.009332 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"b8a85f10-0dcd-42f8-a4bc-f0b25f59cfe8","Type":"ContainerStarted","Data":"eb9a3ddfccd6487bbdd6cfc1c95c04b3c643734c6e314a5dc203a731450ffdde"} Nov 25 11:57:12 crc kubenswrapper[4706]: I1125 11:57:12.030121 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/74b33eb1-0020-4037-918c-9e747dcfd61f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "74b33eb1-0020-4037-918c-9e747dcfd61f" (UID: "74b33eb1-0020-4037-918c-9e747dcfd61f"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 11:57:12 crc kubenswrapper[4706]: I1125 11:57:12.041754 4706 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstackclient" podStartSLOduration=3.539547251 podStartE2EDuration="15.041733783s" podCreationTimestamp="2025-11-25 11:56:57 +0000 UTC" firstStartedPulling="2025-11-25 11:56:58.760566586 +0000 UTC m=+1227.675123967" lastFinishedPulling="2025-11-25 11:57:10.262753118 +0000 UTC m=+1239.177310499" observedRunningTime="2025-11-25 11:57:12.030984102 +0000 UTC m=+1240.945541483" watchObservedRunningTime="2025-11-25 11:57:12.041733783 +0000 UTC m=+1240.956291154" Nov 25 11:57:12 crc kubenswrapper[4706]: I1125 11:57:12.059591 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/74b33eb1-0020-4037-918c-9e747dcfd61f-config-data" (OuterVolumeSpecName: "config-data") pod "74b33eb1-0020-4037-918c-9e747dcfd61f" (UID: "74b33eb1-0020-4037-918c-9e747dcfd61f"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 11:57:12 crc kubenswrapper[4706]: I1125 11:57:12.077100 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/74b33eb1-0020-4037-918c-9e747dcfd61f-scripts" (OuterVolumeSpecName: "scripts") pod "74b33eb1-0020-4037-918c-9e747dcfd61f" (UID: "74b33eb1-0020-4037-918c-9e747dcfd61f"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 11:57:12 crc kubenswrapper[4706]: I1125 11:57:12.090436 4706 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2gp4v\" (UniqueName: \"kubernetes.io/projected/74b33eb1-0020-4037-918c-9e747dcfd61f-kube-api-access-2gp4v\") on node \"crc\" DevicePath \"\"" Nov 25 11:57:12 crc kubenswrapper[4706]: I1125 11:57:12.090482 4706 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/74b33eb1-0020-4037-918c-9e747dcfd61f-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Nov 25 11:57:12 crc kubenswrapper[4706]: I1125 11:57:12.090496 4706 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/74b33eb1-0020-4037-918c-9e747dcfd61f-logs\") on node \"crc\" DevicePath \"\"" Nov 25 11:57:12 crc kubenswrapper[4706]: I1125 11:57:12.090509 4706 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/74b33eb1-0020-4037-918c-9e747dcfd61f-config-data\") on node \"crc\" DevicePath \"\"" Nov 25 11:57:12 crc kubenswrapper[4706]: I1125 11:57:12.090520 4706 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/74b33eb1-0020-4037-918c-9e747dcfd61f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 11:57:12 crc kubenswrapper[4706]: I1125 11:57:12.090531 4706 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: 
\"kubernetes.io/configmap/74b33eb1-0020-4037-918c-9e747dcfd61f-scripts\") on node \"crc\" DevicePath \"\"" Nov 25 11:57:12 crc kubenswrapper[4706]: I1125 11:57:12.092696 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/74b33eb1-0020-4037-918c-9e747dcfd61f-horizon-tls-certs" (OuterVolumeSpecName: "horizon-tls-certs") pod "74b33eb1-0020-4037-918c-9e747dcfd61f" (UID: "74b33eb1-0020-4037-918c-9e747dcfd61f"). InnerVolumeSpecName "horizon-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 11:57:12 crc kubenswrapper[4706]: I1125 11:57:12.192334 4706 reconciler_common.go:293] "Volume detached for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/74b33eb1-0020-4037-918c-9e747dcfd61f-horizon-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 25 11:57:12 crc kubenswrapper[4706]: I1125 11:57:12.232704 4706 scope.go:117] "RemoveContainer" containerID="779cce40cf4cc4947bddf2063a31d045574d3997800d880ef7c40c01c42a4f70" Nov 25 11:57:12 crc kubenswrapper[4706]: I1125 11:57:12.311195 4706 scope.go:117] "RemoveContainer" containerID="5f702a091e203894b9c68bd117079bc8a175269c6b226c33e9f95d472f2849bf" Nov 25 11:57:12 crc kubenswrapper[4706]: E1125 11:57:12.314865 4706 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5f702a091e203894b9c68bd117079bc8a175269c6b226c33e9f95d472f2849bf\": container with ID starting with 5f702a091e203894b9c68bd117079bc8a175269c6b226c33e9f95d472f2849bf not found: ID does not exist" containerID="5f702a091e203894b9c68bd117079bc8a175269c6b226c33e9f95d472f2849bf" Nov 25 11:57:12 crc kubenswrapper[4706]: I1125 11:57:12.314909 4706 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5f702a091e203894b9c68bd117079bc8a175269c6b226c33e9f95d472f2849bf"} err="failed to get container status \"5f702a091e203894b9c68bd117079bc8a175269c6b226c33e9f95d472f2849bf\": rpc error: code 
= NotFound desc = could not find container \"5f702a091e203894b9c68bd117079bc8a175269c6b226c33e9f95d472f2849bf\": container with ID starting with 5f702a091e203894b9c68bd117079bc8a175269c6b226c33e9f95d472f2849bf not found: ID does not exist" Nov 25 11:57:12 crc kubenswrapper[4706]: I1125 11:57:12.314938 4706 scope.go:117] "RemoveContainer" containerID="779cce40cf4cc4947bddf2063a31d045574d3997800d880ef7c40c01c42a4f70" Nov 25 11:57:12 crc kubenswrapper[4706]: E1125 11:57:12.315409 4706 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"779cce40cf4cc4947bddf2063a31d045574d3997800d880ef7c40c01c42a4f70\": container with ID starting with 779cce40cf4cc4947bddf2063a31d045574d3997800d880ef7c40c01c42a4f70 not found: ID does not exist" containerID="779cce40cf4cc4947bddf2063a31d045574d3997800d880ef7c40c01c42a4f70" Nov 25 11:57:12 crc kubenswrapper[4706]: I1125 11:57:12.315457 4706 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"779cce40cf4cc4947bddf2063a31d045574d3997800d880ef7c40c01c42a4f70"} err="failed to get container status \"779cce40cf4cc4947bddf2063a31d045574d3997800d880ef7c40c01c42a4f70\": rpc error: code = NotFound desc = could not find container \"779cce40cf4cc4947bddf2063a31d045574d3997800d880ef7c40c01c42a4f70\": container with ID starting with 779cce40cf4cc4947bddf2063a31d045574d3997800d880ef7c40c01c42a4f70 not found: ID does not exist" Nov 25 11:57:12 crc kubenswrapper[4706]: I1125 11:57:12.321131 4706 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-5d6465f55b-zdrth"] Nov 25 11:57:12 crc kubenswrapper[4706]: I1125 11:57:12.331601 4706 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-5d6465f55b-zdrth"] Nov 25 11:57:13 crc kubenswrapper[4706]: I1125 11:57:13.026203 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" 
event={"ID":"3f35fbd6-a7c7-4d44-af30-601512a5dfa4","Type":"ContainerStarted","Data":"3a532cc615605728c08b57668f0f177276a01ede8cc642afb405e9f0e1f5f50b"} Nov 25 11:57:13 crc kubenswrapper[4706]: I1125 11:57:13.939594 4706 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="74b33eb1-0020-4037-918c-9e747dcfd61f" path="/var/lib/kubelet/pods/74b33eb1-0020-4037-918c-9e747dcfd61f/volumes" Nov 25 11:57:14 crc kubenswrapper[4706]: I1125 11:57:14.035573 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"601bd00e-ad4b-4952-aa81-5dd731ac2ca9","Type":"ContainerStarted","Data":"651d6534dd298a0cff064a96f5d62a052c27d714416ac3950dcbf5499b5da76b"} Nov 25 11:57:14 crc kubenswrapper[4706]: I1125 11:57:14.037210 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"3f35fbd6-a7c7-4d44-af30-601512a5dfa4","Type":"ContainerStarted","Data":"b7417fe79c3d57aef9d465ad5cf7a6d869fd7facca547fb66b106e0b2c64414d"} Nov 25 11:57:14 crc kubenswrapper[4706]: I1125 11:57:14.037364 4706 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Nov 25 11:57:14 crc kubenswrapper[4706]: I1125 11:57:14.076664 4706 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=4.076645864 podStartE2EDuration="4.076645864s" podCreationTimestamp="2025-11-25 11:57:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 11:57:14.066683493 +0000 UTC m=+1242.981240874" watchObservedRunningTime="2025-11-25 11:57:14.076645864 +0000 UTC m=+1242.991203245" Nov 25 11:57:15 crc kubenswrapper[4706]: I1125 11:57:15.048246 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"601bd00e-ad4b-4952-aa81-5dd731ac2ca9","Type":"ContainerStarted","Data":"5bcd86547380f09e59549796b1acc7b613fa143098c098f4462546839652fcef"} Nov 25 11:57:17 crc kubenswrapper[4706]: I1125 11:57:17.075273 4706 generic.go:334] "Generic (PLEG): container finished" podID="6d2de783-5f62-4740-87d8-cef1b4941953" containerID="90ed7f1fe46c3e584ef27ec512a9e5f7978715acab3cc385b2aa03d78bbad7f5" exitCode=0 Nov 25 11:57:17 crc kubenswrapper[4706]: I1125 11:57:17.075378 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-779dc76bb8-fwppw" event={"ID":"6d2de783-5f62-4740-87d8-cef1b4941953","Type":"ContainerDied","Data":"90ed7f1fe46c3e584ef27ec512a9e5f7978715acab3cc385b2aa03d78bbad7f5"} Nov 25 11:57:17 crc kubenswrapper[4706]: I1125 11:57:17.752051 4706 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-779dc76bb8-fwppw" Nov 25 11:57:17 crc kubenswrapper[4706]: I1125 11:57:17.888516 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6d2de783-5f62-4740-87d8-cef1b4941953-combined-ca-bundle\") pod \"6d2de783-5f62-4740-87d8-cef1b4941953\" (UID: \"6d2de783-5f62-4740-87d8-cef1b4941953\") " Nov 25 11:57:17 crc kubenswrapper[4706]: I1125 11:57:17.888596 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4b4x8\" (UniqueName: \"kubernetes.io/projected/6d2de783-5f62-4740-87d8-cef1b4941953-kube-api-access-4b4x8\") pod \"6d2de783-5f62-4740-87d8-cef1b4941953\" (UID: \"6d2de783-5f62-4740-87d8-cef1b4941953\") " Nov 25 11:57:17 crc kubenswrapper[4706]: I1125 11:57:17.888915 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/6d2de783-5f62-4740-87d8-cef1b4941953-httpd-config\") pod \"6d2de783-5f62-4740-87d8-cef1b4941953\" (UID: \"6d2de783-5f62-4740-87d8-cef1b4941953\") " Nov 25 11:57:17 crc 
kubenswrapper[4706]: I1125 11:57:17.889027 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/6d2de783-5f62-4740-87d8-cef1b4941953-ovndb-tls-certs\") pod \"6d2de783-5f62-4740-87d8-cef1b4941953\" (UID: \"6d2de783-5f62-4740-87d8-cef1b4941953\") " Nov 25 11:57:17 crc kubenswrapper[4706]: I1125 11:57:17.889092 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/6d2de783-5f62-4740-87d8-cef1b4941953-config\") pod \"6d2de783-5f62-4740-87d8-cef1b4941953\" (UID: \"6d2de783-5f62-4740-87d8-cef1b4941953\") " Nov 25 11:57:17 crc kubenswrapper[4706]: I1125 11:57:17.896863 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6d2de783-5f62-4740-87d8-cef1b4941953-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "6d2de783-5f62-4740-87d8-cef1b4941953" (UID: "6d2de783-5f62-4740-87d8-cef1b4941953"). InnerVolumeSpecName "httpd-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 11:57:17 crc kubenswrapper[4706]: I1125 11:57:17.911244 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6d2de783-5f62-4740-87d8-cef1b4941953-kube-api-access-4b4x8" (OuterVolumeSpecName: "kube-api-access-4b4x8") pod "6d2de783-5f62-4740-87d8-cef1b4941953" (UID: "6d2de783-5f62-4740-87d8-cef1b4941953"). InnerVolumeSpecName "kube-api-access-4b4x8". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 11:57:17 crc kubenswrapper[4706]: I1125 11:57:17.942742 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6d2de783-5f62-4740-87d8-cef1b4941953-config" (OuterVolumeSpecName: "config") pod "6d2de783-5f62-4740-87d8-cef1b4941953" (UID: "6d2de783-5f62-4740-87d8-cef1b4941953"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 11:57:17 crc kubenswrapper[4706]: I1125 11:57:17.942917 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6d2de783-5f62-4740-87d8-cef1b4941953-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6d2de783-5f62-4740-87d8-cef1b4941953" (UID: "6d2de783-5f62-4740-87d8-cef1b4941953"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 11:57:17 crc kubenswrapper[4706]: I1125 11:57:17.986166 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6d2de783-5f62-4740-87d8-cef1b4941953-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "6d2de783-5f62-4740-87d8-cef1b4941953" (UID: "6d2de783-5f62-4740-87d8-cef1b4941953"). InnerVolumeSpecName "ovndb-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 11:57:17 crc kubenswrapper[4706]: I1125 11:57:17.991265 4706 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/6d2de783-5f62-4740-87d8-cef1b4941953-httpd-config\") on node \"crc\" DevicePath \"\"" Nov 25 11:57:17 crc kubenswrapper[4706]: I1125 11:57:17.991329 4706 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/6d2de783-5f62-4740-87d8-cef1b4941953-ovndb-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 25 11:57:17 crc kubenswrapper[4706]: I1125 11:57:17.991341 4706 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/6d2de783-5f62-4740-87d8-cef1b4941953-config\") on node \"crc\" DevicePath \"\"" Nov 25 11:57:17 crc kubenswrapper[4706]: I1125 11:57:17.991350 4706 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6d2de783-5f62-4740-87d8-cef1b4941953-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 
11:57:17 crc kubenswrapper[4706]: I1125 11:57:17.991361 4706 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4b4x8\" (UniqueName: \"kubernetes.io/projected/6d2de783-5f62-4740-87d8-cef1b4941953-kube-api-access-4b4x8\") on node \"crc\" DevicePath \"\"" Nov 25 11:57:18 crc kubenswrapper[4706]: I1125 11:57:18.097496 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-779dc76bb8-fwppw" event={"ID":"6d2de783-5f62-4740-87d8-cef1b4941953","Type":"ContainerDied","Data":"f016b2f47a82468b9ae3115f6bcfea425e1d701710857e6fc3451b60b8096f52"} Nov 25 11:57:18 crc kubenswrapper[4706]: I1125 11:57:18.097550 4706 scope.go:117] "RemoveContainer" containerID="02b48970b5c92dfb6a9103f7137e53df7dd178574e3611a855155f1b079a9a9e" Nov 25 11:57:18 crc kubenswrapper[4706]: I1125 11:57:18.097703 4706 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-779dc76bb8-fwppw" Nov 25 11:57:18 crc kubenswrapper[4706]: I1125 11:57:18.134744 4706 scope.go:117] "RemoveContainer" containerID="90ed7f1fe46c3e584ef27ec512a9e5f7978715acab3cc385b2aa03d78bbad7f5" Nov 25 11:57:18 crc kubenswrapper[4706]: I1125 11:57:18.148586 4706 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-779dc76bb8-fwppw"] Nov 25 11:57:18 crc kubenswrapper[4706]: I1125 11:57:18.163744 4706 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-779dc76bb8-fwppw"] Nov 25 11:57:19 crc kubenswrapper[4706]: I1125 11:57:19.938447 4706 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6d2de783-5f62-4740-87d8-cef1b4941953" path="/var/lib/kubelet/pods/6d2de783-5f62-4740-87d8-cef1b4941953/volumes" Nov 25 11:57:19 crc kubenswrapper[4706]: I1125 11:57:19.975213 4706 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 25 11:57:19 crc kubenswrapper[4706]: I1125 11:57:19.975507 4706 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openstack/glance-default-external-api-0" podUID="6fb9e8f3-e03d-40bd-ba5c-8ce7715af21f" containerName="glance-log" containerID="cri-o://b5e1097ae896ce3cc97fa565106e38e6095eb00fc75f3d3d729b4dea2824be11" gracePeriod=30 Nov 25 11:57:19 crc kubenswrapper[4706]: I1125 11:57:19.975988 4706 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="6fb9e8f3-e03d-40bd-ba5c-8ce7715af21f" containerName="glance-httpd" containerID="cri-o://cea2a1a48ebbafa7abdc43558125cc84b06d937577b4fc75c50451664c420801" gracePeriod=30 Nov 25 11:57:20 crc kubenswrapper[4706]: I1125 11:57:20.125262 4706 generic.go:334] "Generic (PLEG): container finished" podID="6fb9e8f3-e03d-40bd-ba5c-8ce7715af21f" containerID="b5e1097ae896ce3cc97fa565106e38e6095eb00fc75f3d3d729b4dea2824be11" exitCode=143 Nov 25 11:57:20 crc kubenswrapper[4706]: I1125 11:57:20.125769 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"6fb9e8f3-e03d-40bd-ba5c-8ce7715af21f","Type":"ContainerDied","Data":"b5e1097ae896ce3cc97fa565106e38e6095eb00fc75f3d3d729b4dea2824be11"} Nov 25 11:57:20 crc kubenswrapper[4706]: E1125 11:57:20.334273 4706 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6fb9e8f3_e03d_40bd_ba5c_8ce7715af21f.slice/crio-b5e1097ae896ce3cc97fa565106e38e6095eb00fc75f3d3d729b4dea2824be11.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6fb9e8f3_e03d_40bd_ba5c_8ce7715af21f.slice/crio-conmon-b5e1097ae896ce3cc97fa565106e38e6095eb00fc75f3d3d729b4dea2824be11.scope\": RecentStats: unable to find data in memory cache]" Nov 25 11:57:20 crc kubenswrapper[4706]: I1125 11:57:20.855893 4706 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 25 11:57:20 
crc kubenswrapper[4706]: I1125 11:57:20.856433 4706 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="9392449e-c392-4d77-b36a-67b6d8c716c7" containerName="glance-log" containerID="cri-o://17525079762a657aaaa7ddedbe78c41ea63e1654951381a5ee6b864ec29cb169" gracePeriod=30 Nov 25 11:57:20 crc kubenswrapper[4706]: I1125 11:57:20.856519 4706 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="9392449e-c392-4d77-b36a-67b6d8c716c7" containerName="glance-httpd" containerID="cri-o://ad219d52a5cb7380348da742495450a2737dd6d4946c87d7529be684c28d8619" gracePeriod=30 Nov 25 11:57:21 crc kubenswrapper[4706]: I1125 11:57:21.137965 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"601bd00e-ad4b-4952-aa81-5dd731ac2ca9","Type":"ContainerStarted","Data":"d7ce006fb12802230fb8535e664ca122b9a00b6e7c50ef2c6747512e7f75a1f6"} Nov 25 11:57:21 crc kubenswrapper[4706]: I1125 11:57:21.138147 4706 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="601bd00e-ad4b-4952-aa81-5dd731ac2ca9" containerName="ceilometer-central-agent" containerID="cri-o://d251c706b762c92dcb8e2ba62471e7b54ae10947ac1468c5131412316ce5fcd4" gracePeriod=30 Nov 25 11:57:21 crc kubenswrapper[4706]: I1125 11:57:21.138162 4706 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="601bd00e-ad4b-4952-aa81-5dd731ac2ca9" containerName="sg-core" containerID="cri-o://5bcd86547380f09e59549796b1acc7b613fa143098c098f4462546839652fcef" gracePeriod=30 Nov 25 11:57:21 crc kubenswrapper[4706]: I1125 11:57:21.138219 4706 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="601bd00e-ad4b-4952-aa81-5dd731ac2ca9" containerName="ceilometer-notification-agent" 
containerID="cri-o://651d6534dd298a0cff064a96f5d62a052c27d714416ac3950dcbf5499b5da76b" gracePeriod=30 Nov 25 11:57:21 crc kubenswrapper[4706]: I1125 11:57:21.138162 4706 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="601bd00e-ad4b-4952-aa81-5dd731ac2ca9" containerName="proxy-httpd" containerID="cri-o://d7ce006fb12802230fb8535e664ca122b9a00b6e7c50ef2c6747512e7f75a1f6" gracePeriod=30 Nov 25 11:57:21 crc kubenswrapper[4706]: I1125 11:57:21.138164 4706 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Nov 25 11:57:21 crc kubenswrapper[4706]: I1125 11:57:21.140503 4706 generic.go:334] "Generic (PLEG): container finished" podID="9392449e-c392-4d77-b36a-67b6d8c716c7" containerID="17525079762a657aaaa7ddedbe78c41ea63e1654951381a5ee6b864ec29cb169" exitCode=143 Nov 25 11:57:21 crc kubenswrapper[4706]: I1125 11:57:21.140542 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"9392449e-c392-4d77-b36a-67b6d8c716c7","Type":"ContainerDied","Data":"17525079762a657aaaa7ddedbe78c41ea63e1654951381a5ee6b864ec29cb169"} Nov 25 11:57:21 crc kubenswrapper[4706]: I1125 11:57:21.171279 4706 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=7.99002941 podStartE2EDuration="17.171256351s" podCreationTimestamp="2025-11-25 11:57:04 +0000 UTC" firstStartedPulling="2025-11-25 11:57:10.814500243 +0000 UTC m=+1239.729057624" lastFinishedPulling="2025-11-25 11:57:19.995727184 +0000 UTC m=+1248.910284565" observedRunningTime="2025-11-25 11:57:21.163489306 +0000 UTC m=+1250.078046697" watchObservedRunningTime="2025-11-25 11:57:21.171256351 +0000 UTC m=+1250.085813742" Nov 25 11:57:21 crc kubenswrapper[4706]: I1125 11:57:21.574818 4706 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-c6da-account-create-p9tnk"] Nov 25 11:57:21 crc kubenswrapper[4706]: 
E1125 11:57:21.575468 4706 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6d2de783-5f62-4740-87d8-cef1b4941953" containerName="neutron-httpd" Nov 25 11:57:21 crc kubenswrapper[4706]: I1125 11:57:21.575483 4706 state_mem.go:107] "Deleted CPUSet assignment" podUID="6d2de783-5f62-4740-87d8-cef1b4941953" containerName="neutron-httpd" Nov 25 11:57:21 crc kubenswrapper[4706]: E1125 11:57:21.575504 4706 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="74b33eb1-0020-4037-918c-9e747dcfd61f" containerName="horizon-log" Nov 25 11:57:21 crc kubenswrapper[4706]: I1125 11:57:21.575510 4706 state_mem.go:107] "Deleted CPUSet assignment" podUID="74b33eb1-0020-4037-918c-9e747dcfd61f" containerName="horizon-log" Nov 25 11:57:21 crc kubenswrapper[4706]: E1125 11:57:21.575536 4706 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6d2de783-5f62-4740-87d8-cef1b4941953" containerName="neutron-api" Nov 25 11:57:21 crc kubenswrapper[4706]: I1125 11:57:21.575542 4706 state_mem.go:107] "Deleted CPUSet assignment" podUID="6d2de783-5f62-4740-87d8-cef1b4941953" containerName="neutron-api" Nov 25 11:57:21 crc kubenswrapper[4706]: E1125 11:57:21.575551 4706 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="74b33eb1-0020-4037-918c-9e747dcfd61f" containerName="horizon" Nov 25 11:57:21 crc kubenswrapper[4706]: I1125 11:57:21.575556 4706 state_mem.go:107] "Deleted CPUSet assignment" podUID="74b33eb1-0020-4037-918c-9e747dcfd61f" containerName="horizon" Nov 25 11:57:21 crc kubenswrapper[4706]: I1125 11:57:21.575717 4706 memory_manager.go:354] "RemoveStaleState removing state" podUID="74b33eb1-0020-4037-918c-9e747dcfd61f" containerName="horizon-log" Nov 25 11:57:21 crc kubenswrapper[4706]: I1125 11:57:21.575734 4706 memory_manager.go:354] "RemoveStaleState removing state" podUID="6d2de783-5f62-4740-87d8-cef1b4941953" containerName="neutron-httpd" Nov 25 11:57:21 crc kubenswrapper[4706]: I1125 11:57:21.575743 4706 
memory_manager.go:354] "RemoveStaleState removing state" podUID="6d2de783-5f62-4740-87d8-cef1b4941953" containerName="neutron-api" Nov 25 11:57:21 crc kubenswrapper[4706]: I1125 11:57:21.575759 4706 memory_manager.go:354] "RemoveStaleState removing state" podUID="74b33eb1-0020-4037-918c-9e747dcfd61f" containerName="horizon" Nov 25 11:57:21 crc kubenswrapper[4706]: I1125 11:57:21.576294 4706 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-c6da-account-create-p9tnk" Nov 25 11:57:21 crc kubenswrapper[4706]: I1125 11:57:21.578599 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-db-secret" Nov 25 11:57:21 crc kubenswrapper[4706]: I1125 11:57:21.595491 4706 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-db-create-ctmr9"] Nov 25 11:57:21 crc kubenswrapper[4706]: I1125 11:57:21.596887 4706 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-ctmr9" Nov 25 11:57:21 crc kubenswrapper[4706]: I1125 11:57:21.600842 4706 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-c6da-account-create-p9tnk"] Nov 25 11:57:21 crc kubenswrapper[4706]: I1125 11:57:21.613345 4706 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-ctmr9"] Nov 25 11:57:21 crc kubenswrapper[4706]: I1125 11:57:21.776620 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/64c51a3f-220f-4d41-a8ae-996c5d65da6a-operator-scripts\") pod \"nova-api-c6da-account-create-p9tnk\" (UID: \"64c51a3f-220f-4d41-a8ae-996c5d65da6a\") " pod="openstack/nova-api-c6da-account-create-p9tnk" Nov 25 11:57:21 crc kubenswrapper[4706]: I1125 11:57:21.776727 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-chxj2\" (UniqueName: 
\"kubernetes.io/projected/ed5f6b7c-b239-4aba-8c85-0ffdd29622da-kube-api-access-chxj2\") pod \"nova-api-db-create-ctmr9\" (UID: \"ed5f6b7c-b239-4aba-8c85-0ffdd29622da\") " pod="openstack/nova-api-db-create-ctmr9" Nov 25 11:57:21 crc kubenswrapper[4706]: I1125 11:57:21.776763 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lqvgw\" (UniqueName: \"kubernetes.io/projected/64c51a3f-220f-4d41-a8ae-996c5d65da6a-kube-api-access-lqvgw\") pod \"nova-api-c6da-account-create-p9tnk\" (UID: \"64c51a3f-220f-4d41-a8ae-996c5d65da6a\") " pod="openstack/nova-api-c6da-account-create-p9tnk" Nov 25 11:57:21 crc kubenswrapper[4706]: I1125 11:57:21.776815 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ed5f6b7c-b239-4aba-8c85-0ffdd29622da-operator-scripts\") pod \"nova-api-db-create-ctmr9\" (UID: \"ed5f6b7c-b239-4aba-8c85-0ffdd29622da\") " pod="openstack/nova-api-db-create-ctmr9" Nov 25 11:57:21 crc kubenswrapper[4706]: I1125 11:57:21.779409 4706 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-db-create-j8qcn"] Nov 25 11:57:21 crc kubenswrapper[4706]: I1125 11:57:21.780521 4706 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-j8qcn" Nov 25 11:57:21 crc kubenswrapper[4706]: I1125 11:57:21.817407 4706 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-j8qcn"] Nov 25 11:57:21 crc kubenswrapper[4706]: I1125 11:57:21.830385 4706 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-7393-account-create-9cnk4"] Nov 25 11:57:21 crc kubenswrapper[4706]: I1125 11:57:21.832007 4706 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-7393-account-create-9cnk4" Nov 25 11:57:21 crc kubenswrapper[4706]: I1125 11:57:21.839729 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-db-secret" Nov 25 11:57:21 crc kubenswrapper[4706]: I1125 11:57:21.856555 4706 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-db-create-p4np9"] Nov 25 11:57:21 crc kubenswrapper[4706]: I1125 11:57:21.858870 4706 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-p4np9" Nov 25 11:57:21 crc kubenswrapper[4706]: I1125 11:57:21.878107 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/64c51a3f-220f-4d41-a8ae-996c5d65da6a-operator-scripts\") pod \"nova-api-c6da-account-create-p9tnk\" (UID: \"64c51a3f-220f-4d41-a8ae-996c5d65da6a\") " pod="openstack/nova-api-c6da-account-create-p9tnk" Nov 25 11:57:21 crc kubenswrapper[4706]: I1125 11:57:21.878191 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c65sg\" (UniqueName: \"kubernetes.io/projected/5cf51224-9407-44c8-805f-fcf18fa531a3-kube-api-access-c65sg\") pod \"nova-cell0-db-create-j8qcn\" (UID: \"5cf51224-9407-44c8-805f-fcf18fa531a3\") " pod="openstack/nova-cell0-db-create-j8qcn" Nov 25 11:57:21 crc kubenswrapper[4706]: I1125 11:57:21.878263 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-chxj2\" (UniqueName: \"kubernetes.io/projected/ed5f6b7c-b239-4aba-8c85-0ffdd29622da-kube-api-access-chxj2\") pod \"nova-api-db-create-ctmr9\" (UID: \"ed5f6b7c-b239-4aba-8c85-0ffdd29622da\") " pod="openstack/nova-api-db-create-ctmr9" Nov 25 11:57:21 crc kubenswrapper[4706]: I1125 11:57:21.878299 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lqvgw\" (UniqueName: 
\"kubernetes.io/projected/64c51a3f-220f-4d41-a8ae-996c5d65da6a-kube-api-access-lqvgw\") pod \"nova-api-c6da-account-create-p9tnk\" (UID: \"64c51a3f-220f-4d41-a8ae-996c5d65da6a\") " pod="openstack/nova-api-c6da-account-create-p9tnk" Nov 25 11:57:21 crc kubenswrapper[4706]: I1125 11:57:21.878379 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ed5f6b7c-b239-4aba-8c85-0ffdd29622da-operator-scripts\") pod \"nova-api-db-create-ctmr9\" (UID: \"ed5f6b7c-b239-4aba-8c85-0ffdd29622da\") " pod="openstack/nova-api-db-create-ctmr9" Nov 25 11:57:21 crc kubenswrapper[4706]: I1125 11:57:21.878421 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5cf51224-9407-44c8-805f-fcf18fa531a3-operator-scripts\") pod \"nova-cell0-db-create-j8qcn\" (UID: \"5cf51224-9407-44c8-805f-fcf18fa531a3\") " pod="openstack/nova-cell0-db-create-j8qcn" Nov 25 11:57:21 crc kubenswrapper[4706]: I1125 11:57:21.880868 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ed5f6b7c-b239-4aba-8c85-0ffdd29622da-operator-scripts\") pod \"nova-api-db-create-ctmr9\" (UID: \"ed5f6b7c-b239-4aba-8c85-0ffdd29622da\") " pod="openstack/nova-api-db-create-ctmr9" Nov 25 11:57:21 crc kubenswrapper[4706]: I1125 11:57:21.881457 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/64c51a3f-220f-4d41-a8ae-996c5d65da6a-operator-scripts\") pod \"nova-api-c6da-account-create-p9tnk\" (UID: \"64c51a3f-220f-4d41-a8ae-996c5d65da6a\") " pod="openstack/nova-api-c6da-account-create-p9tnk" Nov 25 11:57:21 crc kubenswrapper[4706]: I1125 11:57:21.905035 4706 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-7393-account-create-9cnk4"] Nov 25 11:57:21 crc 
kubenswrapper[4706]: I1125 11:57:21.906908 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-chxj2\" (UniqueName: \"kubernetes.io/projected/ed5f6b7c-b239-4aba-8c85-0ffdd29622da-kube-api-access-chxj2\") pod \"nova-api-db-create-ctmr9\" (UID: \"ed5f6b7c-b239-4aba-8c85-0ffdd29622da\") " pod="openstack/nova-api-db-create-ctmr9" Nov 25 11:57:21 crc kubenswrapper[4706]: I1125 11:57:21.907268 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lqvgw\" (UniqueName: \"kubernetes.io/projected/64c51a3f-220f-4d41-a8ae-996c5d65da6a-kube-api-access-lqvgw\") pod \"nova-api-c6da-account-create-p9tnk\" (UID: \"64c51a3f-220f-4d41-a8ae-996c5d65da6a\") " pod="openstack/nova-api-c6da-account-create-p9tnk" Nov 25 11:57:21 crc kubenswrapper[4706]: I1125 11:57:21.913202 4706 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-c6da-account-create-p9tnk" Nov 25 11:57:21 crc kubenswrapper[4706]: I1125 11:57:21.923275 4706 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-db-create-ctmr9" Nov 25 11:57:21 crc kubenswrapper[4706]: I1125 11:57:21.936918 4706 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-p4np9"] Nov 25 11:57:21 crc kubenswrapper[4706]: I1125 11:57:21.990050 4706 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-c017-account-create-lsfhl"] Nov 25 11:57:21 crc kubenswrapper[4706]: I1125 11:57:21.994877 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fnm2c\" (UniqueName: \"kubernetes.io/projected/030673ef-ec79-4f19-8f0e-765d6918cfc4-kube-api-access-fnm2c\") pod \"nova-cell0-7393-account-create-9cnk4\" (UID: \"030673ef-ec79-4f19-8f0e-765d6918cfc4\") " pod="openstack/nova-cell0-7393-account-create-9cnk4" Nov 25 11:57:21 crc kubenswrapper[4706]: I1125 11:57:21.994968 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5cf51224-9407-44c8-805f-fcf18fa531a3-operator-scripts\") pod \"nova-cell0-db-create-j8qcn\" (UID: \"5cf51224-9407-44c8-805f-fcf18fa531a3\") " pod="openstack/nova-cell0-db-create-j8qcn" Nov 25 11:57:21 crc kubenswrapper[4706]: I1125 11:57:21.995079 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/030673ef-ec79-4f19-8f0e-765d6918cfc4-operator-scripts\") pod \"nova-cell0-7393-account-create-9cnk4\" (UID: \"030673ef-ec79-4f19-8f0e-765d6918cfc4\") " pod="openstack/nova-cell0-7393-account-create-9cnk4" Nov 25 11:57:21 crc kubenswrapper[4706]: I1125 11:57:21.995128 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2b85308a-ef27-494f-9bd3-b06c25118779-operator-scripts\") pod \"nova-cell1-db-create-p4np9\" (UID: 
\"2b85308a-ef27-494f-9bd3-b06c25118779\") " pod="openstack/nova-cell1-db-create-p4np9" Nov 25 11:57:21 crc kubenswrapper[4706]: I1125 11:57:21.995174 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c65sg\" (UniqueName: \"kubernetes.io/projected/5cf51224-9407-44c8-805f-fcf18fa531a3-kube-api-access-c65sg\") pod \"nova-cell0-db-create-j8qcn\" (UID: \"5cf51224-9407-44c8-805f-fcf18fa531a3\") " pod="openstack/nova-cell0-db-create-j8qcn" Nov 25 11:57:21 crc kubenswrapper[4706]: I1125 11:57:21.995205 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qsxk9\" (UniqueName: \"kubernetes.io/projected/2b85308a-ef27-494f-9bd3-b06c25118779-kube-api-access-qsxk9\") pod \"nova-cell1-db-create-p4np9\" (UID: \"2b85308a-ef27-494f-9bd3-b06c25118779\") " pod="openstack/nova-cell1-db-create-p4np9" Nov 25 11:57:21 crc kubenswrapper[4706]: I1125 11:57:21.995664 4706 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-c017-account-create-lsfhl" Nov 25 11:57:21 crc kubenswrapper[4706]: I1125 11:57:21.996053 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5cf51224-9407-44c8-805f-fcf18fa531a3-operator-scripts\") pod \"nova-cell0-db-create-j8qcn\" (UID: \"5cf51224-9407-44c8-805f-fcf18fa531a3\") " pod="openstack/nova-cell0-db-create-j8qcn" Nov 25 11:57:21 crc kubenswrapper[4706]: I1125 11:57:21.999041 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-db-secret" Nov 25 11:57:22 crc kubenswrapper[4706]: I1125 11:57:22.035942 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c65sg\" (UniqueName: \"kubernetes.io/projected/5cf51224-9407-44c8-805f-fcf18fa531a3-kube-api-access-c65sg\") pod \"nova-cell0-db-create-j8qcn\" (UID: \"5cf51224-9407-44c8-805f-fcf18fa531a3\") " pod="openstack/nova-cell0-db-create-j8qcn" Nov 25 11:57:22 crc kubenswrapper[4706]: I1125 11:57:22.039358 4706 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-c017-account-create-lsfhl"] Nov 25 11:57:22 crc kubenswrapper[4706]: I1125 11:57:22.097479 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2b85308a-ef27-494f-9bd3-b06c25118779-operator-scripts\") pod \"nova-cell1-db-create-p4np9\" (UID: \"2b85308a-ef27-494f-9bd3-b06c25118779\") " pod="openstack/nova-cell1-db-create-p4np9" Nov 25 11:57:22 crc kubenswrapper[4706]: I1125 11:57:22.097539 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rxhft\" (UniqueName: \"kubernetes.io/projected/acb4725a-1a34-4a3a-b578-7bcf44ff0bef-kube-api-access-rxhft\") pod \"nova-cell1-c017-account-create-lsfhl\" (UID: \"acb4725a-1a34-4a3a-b578-7bcf44ff0bef\") " 
pod="openstack/nova-cell1-c017-account-create-lsfhl" Nov 25 11:57:22 crc kubenswrapper[4706]: I1125 11:57:22.097608 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qsxk9\" (UniqueName: \"kubernetes.io/projected/2b85308a-ef27-494f-9bd3-b06c25118779-kube-api-access-qsxk9\") pod \"nova-cell1-db-create-p4np9\" (UID: \"2b85308a-ef27-494f-9bd3-b06c25118779\") " pod="openstack/nova-cell1-db-create-p4np9" Nov 25 11:57:22 crc kubenswrapper[4706]: I1125 11:57:22.097735 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fnm2c\" (UniqueName: \"kubernetes.io/projected/030673ef-ec79-4f19-8f0e-765d6918cfc4-kube-api-access-fnm2c\") pod \"nova-cell0-7393-account-create-9cnk4\" (UID: \"030673ef-ec79-4f19-8f0e-765d6918cfc4\") " pod="openstack/nova-cell0-7393-account-create-9cnk4" Nov 25 11:57:22 crc kubenswrapper[4706]: I1125 11:57:22.097813 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/acb4725a-1a34-4a3a-b578-7bcf44ff0bef-operator-scripts\") pod \"nova-cell1-c017-account-create-lsfhl\" (UID: \"acb4725a-1a34-4a3a-b578-7bcf44ff0bef\") " pod="openstack/nova-cell1-c017-account-create-lsfhl" Nov 25 11:57:22 crc kubenswrapper[4706]: I1125 11:57:22.097924 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/030673ef-ec79-4f19-8f0e-765d6918cfc4-operator-scripts\") pod \"nova-cell0-7393-account-create-9cnk4\" (UID: \"030673ef-ec79-4f19-8f0e-765d6918cfc4\") " pod="openstack/nova-cell0-7393-account-create-9cnk4" Nov 25 11:57:22 crc kubenswrapper[4706]: I1125 11:57:22.099713 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/030673ef-ec79-4f19-8f0e-765d6918cfc4-operator-scripts\") pod 
\"nova-cell0-7393-account-create-9cnk4\" (UID: \"030673ef-ec79-4f19-8f0e-765d6918cfc4\") " pod="openstack/nova-cell0-7393-account-create-9cnk4" Nov 25 11:57:22 crc kubenswrapper[4706]: I1125 11:57:22.106600 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2b85308a-ef27-494f-9bd3-b06c25118779-operator-scripts\") pod \"nova-cell1-db-create-p4np9\" (UID: \"2b85308a-ef27-494f-9bd3-b06c25118779\") " pod="openstack/nova-cell1-db-create-p4np9" Nov 25 11:57:22 crc kubenswrapper[4706]: I1125 11:57:22.114081 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fnm2c\" (UniqueName: \"kubernetes.io/projected/030673ef-ec79-4f19-8f0e-765d6918cfc4-kube-api-access-fnm2c\") pod \"nova-cell0-7393-account-create-9cnk4\" (UID: \"030673ef-ec79-4f19-8f0e-765d6918cfc4\") " pod="openstack/nova-cell0-7393-account-create-9cnk4" Nov 25 11:57:22 crc kubenswrapper[4706]: I1125 11:57:22.114829 4706 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-j8qcn" Nov 25 11:57:22 crc kubenswrapper[4706]: I1125 11:57:22.129431 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qsxk9\" (UniqueName: \"kubernetes.io/projected/2b85308a-ef27-494f-9bd3-b06c25118779-kube-api-access-qsxk9\") pod \"nova-cell1-db-create-p4np9\" (UID: \"2b85308a-ef27-494f-9bd3-b06c25118779\") " pod="openstack/nova-cell1-db-create-p4np9" Nov 25 11:57:22 crc kubenswrapper[4706]: I1125 11:57:22.173181 4706 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-7393-account-create-9cnk4" Nov 25 11:57:22 crc kubenswrapper[4706]: I1125 11:57:22.195009 4706 generic.go:334] "Generic (PLEG): container finished" podID="601bd00e-ad4b-4952-aa81-5dd731ac2ca9" containerID="d7ce006fb12802230fb8535e664ca122b9a00b6e7c50ef2c6747512e7f75a1f6" exitCode=0 Nov 25 11:57:22 crc kubenswrapper[4706]: I1125 11:57:22.195352 4706 generic.go:334] "Generic (PLEG): container finished" podID="601bd00e-ad4b-4952-aa81-5dd731ac2ca9" containerID="5bcd86547380f09e59549796b1acc7b613fa143098c098f4462546839652fcef" exitCode=2 Nov 25 11:57:22 crc kubenswrapper[4706]: I1125 11:57:22.195364 4706 generic.go:334] "Generic (PLEG): container finished" podID="601bd00e-ad4b-4952-aa81-5dd731ac2ca9" containerID="651d6534dd298a0cff064a96f5d62a052c27d714416ac3950dcbf5499b5da76b" exitCode=0 Nov 25 11:57:22 crc kubenswrapper[4706]: I1125 11:57:22.195385 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"601bd00e-ad4b-4952-aa81-5dd731ac2ca9","Type":"ContainerDied","Data":"d7ce006fb12802230fb8535e664ca122b9a00b6e7c50ef2c6747512e7f75a1f6"} Nov 25 11:57:22 crc kubenswrapper[4706]: I1125 11:57:22.195454 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"601bd00e-ad4b-4952-aa81-5dd731ac2ca9","Type":"ContainerDied","Data":"5bcd86547380f09e59549796b1acc7b613fa143098c098f4462546839652fcef"} Nov 25 11:57:22 crc kubenswrapper[4706]: I1125 11:57:22.195466 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"601bd00e-ad4b-4952-aa81-5dd731ac2ca9","Type":"ContainerDied","Data":"651d6534dd298a0cff064a96f5d62a052c27d714416ac3950dcbf5499b5da76b"} Nov 25 11:57:22 crc kubenswrapper[4706]: I1125 11:57:22.204419 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/acb4725a-1a34-4a3a-b578-7bcf44ff0bef-operator-scripts\") pod 
\"nova-cell1-c017-account-create-lsfhl\" (UID: \"acb4725a-1a34-4a3a-b578-7bcf44ff0bef\") " pod="openstack/nova-cell1-c017-account-create-lsfhl" Nov 25 11:57:22 crc kubenswrapper[4706]: I1125 11:57:22.205618 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/acb4725a-1a34-4a3a-b578-7bcf44ff0bef-operator-scripts\") pod \"nova-cell1-c017-account-create-lsfhl\" (UID: \"acb4725a-1a34-4a3a-b578-7bcf44ff0bef\") " pod="openstack/nova-cell1-c017-account-create-lsfhl" Nov 25 11:57:22 crc kubenswrapper[4706]: I1125 11:57:22.205935 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rxhft\" (UniqueName: \"kubernetes.io/projected/acb4725a-1a34-4a3a-b578-7bcf44ff0bef-kube-api-access-rxhft\") pod \"nova-cell1-c017-account-create-lsfhl\" (UID: \"acb4725a-1a34-4a3a-b578-7bcf44ff0bef\") " pod="openstack/nova-cell1-c017-account-create-lsfhl" Nov 25 11:57:22 crc kubenswrapper[4706]: I1125 11:57:22.207242 4706 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-p4np9" Nov 25 11:57:22 crc kubenswrapper[4706]: I1125 11:57:22.224691 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rxhft\" (UniqueName: \"kubernetes.io/projected/acb4725a-1a34-4a3a-b578-7bcf44ff0bef-kube-api-access-rxhft\") pod \"nova-cell1-c017-account-create-lsfhl\" (UID: \"acb4725a-1a34-4a3a-b578-7bcf44ff0bef\") " pod="openstack/nova-cell1-c017-account-create-lsfhl" Nov 25 11:57:22 crc kubenswrapper[4706]: I1125 11:57:22.374779 4706 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-c017-account-create-lsfhl" Nov 25 11:57:22 crc kubenswrapper[4706]: I1125 11:57:22.497074 4706 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-ctmr9"] Nov 25 11:57:22 crc kubenswrapper[4706]: I1125 11:57:22.732095 4706 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-c6da-account-create-p9tnk"] Nov 25 11:57:22 crc kubenswrapper[4706]: I1125 11:57:22.830695 4706 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-7393-account-create-9cnk4"] Nov 25 11:57:22 crc kubenswrapper[4706]: W1125 11:57:22.848150 4706 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod030673ef_ec79_4f19_8f0e_765d6918cfc4.slice/crio-aa95a4182f5f76f3031ea00acccdb10f5b8af61a3854cfce9ac1c5c3c9b25c87 WatchSource:0}: Error finding container aa95a4182f5f76f3031ea00acccdb10f5b8af61a3854cfce9ac1c5c3c9b25c87: Status 404 returned error can't find the container with id aa95a4182f5f76f3031ea00acccdb10f5b8af61a3854cfce9ac1c5c3c9b25c87 Nov 25 11:57:22 crc kubenswrapper[4706]: I1125 11:57:22.854211 4706 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-j8qcn"] Nov 25 11:57:23 crc kubenswrapper[4706]: I1125 11:57:23.070041 4706 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-p4np9"] Nov 25 11:57:23 crc kubenswrapper[4706]: W1125 11:57:23.082654 4706 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2b85308a_ef27_494f_9bd3_b06c25118779.slice/crio-2ba79e75ec3714c05c831a3abd4b4ae371b4a47879fbf4a11e3ebb58503eb047 WatchSource:0}: Error finding container 2ba79e75ec3714c05c831a3abd4b4ae371b4a47879fbf4a11e3ebb58503eb047: Status 404 returned error can't find the container with id 2ba79e75ec3714c05c831a3abd4b4ae371b4a47879fbf4a11e3ebb58503eb047 Nov 25 
11:57:23 crc kubenswrapper[4706]: I1125 11:57:23.229378 4706 generic.go:334] "Generic (PLEG): container finished" podID="ed5f6b7c-b239-4aba-8c85-0ffdd29622da" containerID="219047ea03fefd7c8435a03c86efcef55b6d92b6b896bfc95e1ef026d7e2a4a4" exitCode=0 Nov 25 11:57:23 crc kubenswrapper[4706]: I1125 11:57:23.229487 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-ctmr9" event={"ID":"ed5f6b7c-b239-4aba-8c85-0ffdd29622da","Type":"ContainerDied","Data":"219047ea03fefd7c8435a03c86efcef55b6d92b6b896bfc95e1ef026d7e2a4a4"} Nov 25 11:57:23 crc kubenswrapper[4706]: I1125 11:57:23.229521 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-ctmr9" event={"ID":"ed5f6b7c-b239-4aba-8c85-0ffdd29622da","Type":"ContainerStarted","Data":"6fa4f14457a04ef7f2f5a592dd7717111c818021b929e47af5e2657c599fb947"} Nov 25 11:57:23 crc kubenswrapper[4706]: I1125 11:57:23.230154 4706 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-c017-account-create-lsfhl"] Nov 25 11:57:23 crc kubenswrapper[4706]: I1125 11:57:23.243673 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-7393-account-create-9cnk4" event={"ID":"030673ef-ec79-4f19-8f0e-765d6918cfc4","Type":"ContainerStarted","Data":"a34f9431fa22b2dc3c7b7f13ce3cbec17941009dd68dc7fea7df7ae915f18e01"} Nov 25 11:57:23 crc kubenswrapper[4706]: I1125 11:57:23.243725 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-7393-account-create-9cnk4" event={"ID":"030673ef-ec79-4f19-8f0e-765d6918cfc4","Type":"ContainerStarted","Data":"aa95a4182f5f76f3031ea00acccdb10f5b8af61a3854cfce9ac1c5c3c9b25c87"} Nov 25 11:57:23 crc kubenswrapper[4706]: I1125 11:57:23.276814 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-j8qcn" event={"ID":"5cf51224-9407-44c8-805f-fcf18fa531a3","Type":"ContainerStarted","Data":"88f1e76352714ce8c872235ff5a399be70da0ef7ea1a185268b10a6a9af56bf5"} 
Nov 25 11:57:23 crc kubenswrapper[4706]: I1125 11:57:23.276858 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-j8qcn" event={"ID":"5cf51224-9407-44c8-805f-fcf18fa531a3","Type":"ContainerStarted","Data":"51537ed36294d7a025299f7d48f4cc5fd6d4e8a727966005361d81a6bfae99f9"} Nov 25 11:57:23 crc kubenswrapper[4706]: I1125 11:57:23.296196 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-c6da-account-create-p9tnk" event={"ID":"64c51a3f-220f-4d41-a8ae-996c5d65da6a","Type":"ContainerStarted","Data":"0016a1025cca850a91bb34fc6f50a9212f3a65a4f5be1bbde437a244faffa0de"} Nov 25 11:57:23 crc kubenswrapper[4706]: I1125 11:57:23.296242 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-c6da-account-create-p9tnk" event={"ID":"64c51a3f-220f-4d41-a8ae-996c5d65da6a","Type":"ContainerStarted","Data":"779ea79ee840b49931c0f5a604966317079a9c46ed33881f3a3d930613ba07e4"} Nov 25 11:57:23 crc kubenswrapper[4706]: I1125 11:57:23.304096 4706 generic.go:334] "Generic (PLEG): container finished" podID="601bd00e-ad4b-4952-aa81-5dd731ac2ca9" containerID="d251c706b762c92dcb8e2ba62471e7b54ae10947ac1468c5131412316ce5fcd4" exitCode=0 Nov 25 11:57:23 crc kubenswrapper[4706]: I1125 11:57:23.304176 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"601bd00e-ad4b-4952-aa81-5dd731ac2ca9","Type":"ContainerDied","Data":"d251c706b762c92dcb8e2ba62471e7b54ae10947ac1468c5131412316ce5fcd4"} Nov 25 11:57:23 crc kubenswrapper[4706]: I1125 11:57:23.304471 4706 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-7393-account-create-9cnk4" podStartSLOduration=2.304457541 podStartE2EDuration="2.304457541s" podCreationTimestamp="2025-11-25 11:57:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 11:57:23.257134477 +0000 UTC m=+1252.171691858" 
watchObservedRunningTime="2025-11-25 11:57:23.304457541 +0000 UTC m=+1252.219014922" Nov 25 11:57:23 crc kubenswrapper[4706]: I1125 11:57:23.305596 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-p4np9" event={"ID":"2b85308a-ef27-494f-9bd3-b06c25118779","Type":"ContainerStarted","Data":"2ba79e75ec3714c05c831a3abd4b4ae371b4a47879fbf4a11e3ebb58503eb047"} Nov 25 11:57:23 crc kubenswrapper[4706]: I1125 11:57:23.314274 4706 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-db-create-j8qcn" podStartSLOduration=2.314249708 podStartE2EDuration="2.314249708s" podCreationTimestamp="2025-11-25 11:57:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 11:57:23.296446289 +0000 UTC m=+1252.211003670" watchObservedRunningTime="2025-11-25 11:57:23.314249708 +0000 UTC m=+1252.228807089" Nov 25 11:57:23 crc kubenswrapper[4706]: I1125 11:57:23.335349 4706 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-c6da-account-create-p9tnk" podStartSLOduration=2.3353270090000002 podStartE2EDuration="2.335327009s" podCreationTimestamp="2025-11-25 11:57:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 11:57:23.334655532 +0000 UTC m=+1252.249212913" watchObservedRunningTime="2025-11-25 11:57:23.335327009 +0000 UTC m=+1252.249884390" Nov 25 11:57:23 crc kubenswrapper[4706]: I1125 11:57:23.553268 4706 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 25 11:57:23 crc kubenswrapper[4706]: I1125 11:57:23.645563 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/601bd00e-ad4b-4952-aa81-5dd731ac2ca9-scripts\") pod \"601bd00e-ad4b-4952-aa81-5dd731ac2ca9\" (UID: \"601bd00e-ad4b-4952-aa81-5dd731ac2ca9\") " Nov 25 11:57:23 crc kubenswrapper[4706]: I1125 11:57:23.645924 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/601bd00e-ad4b-4952-aa81-5dd731ac2ca9-log-httpd\") pod \"601bd00e-ad4b-4952-aa81-5dd731ac2ca9\" (UID: \"601bd00e-ad4b-4952-aa81-5dd731ac2ca9\") " Nov 25 11:57:23 crc kubenswrapper[4706]: I1125 11:57:23.645962 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/601bd00e-ad4b-4952-aa81-5dd731ac2ca9-sg-core-conf-yaml\") pod \"601bd00e-ad4b-4952-aa81-5dd731ac2ca9\" (UID: \"601bd00e-ad4b-4952-aa81-5dd731ac2ca9\") " Nov 25 11:57:23 crc kubenswrapper[4706]: I1125 11:57:23.646035 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/601bd00e-ad4b-4952-aa81-5dd731ac2ca9-config-data\") pod \"601bd00e-ad4b-4952-aa81-5dd731ac2ca9\" (UID: \"601bd00e-ad4b-4952-aa81-5dd731ac2ca9\") " Nov 25 11:57:23 crc kubenswrapper[4706]: I1125 11:57:23.646062 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wqqb9\" (UniqueName: \"kubernetes.io/projected/601bd00e-ad4b-4952-aa81-5dd731ac2ca9-kube-api-access-wqqb9\") pod \"601bd00e-ad4b-4952-aa81-5dd731ac2ca9\" (UID: \"601bd00e-ad4b-4952-aa81-5dd731ac2ca9\") " Nov 25 11:57:23 crc kubenswrapper[4706]: I1125 11:57:23.646097 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/601bd00e-ad4b-4952-aa81-5dd731ac2ca9-combined-ca-bundle\") pod \"601bd00e-ad4b-4952-aa81-5dd731ac2ca9\" (UID: \"601bd00e-ad4b-4952-aa81-5dd731ac2ca9\") " Nov 25 11:57:23 crc kubenswrapper[4706]: I1125 11:57:23.646139 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/601bd00e-ad4b-4952-aa81-5dd731ac2ca9-run-httpd\") pod \"601bd00e-ad4b-4952-aa81-5dd731ac2ca9\" (UID: \"601bd00e-ad4b-4952-aa81-5dd731ac2ca9\") " Nov 25 11:57:23 crc kubenswrapper[4706]: I1125 11:57:23.647320 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/601bd00e-ad4b-4952-aa81-5dd731ac2ca9-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "601bd00e-ad4b-4952-aa81-5dd731ac2ca9" (UID: "601bd00e-ad4b-4952-aa81-5dd731ac2ca9"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 11:57:23 crc kubenswrapper[4706]: I1125 11:57:23.648731 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/601bd00e-ad4b-4952-aa81-5dd731ac2ca9-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "601bd00e-ad4b-4952-aa81-5dd731ac2ca9" (UID: "601bd00e-ad4b-4952-aa81-5dd731ac2ca9"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 11:57:23 crc kubenswrapper[4706]: I1125 11:57:23.657657 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/601bd00e-ad4b-4952-aa81-5dd731ac2ca9-kube-api-access-wqqb9" (OuterVolumeSpecName: "kube-api-access-wqqb9") pod "601bd00e-ad4b-4952-aa81-5dd731ac2ca9" (UID: "601bd00e-ad4b-4952-aa81-5dd731ac2ca9"). InnerVolumeSpecName "kube-api-access-wqqb9". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 11:57:23 crc kubenswrapper[4706]: I1125 11:57:23.701550 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/601bd00e-ad4b-4952-aa81-5dd731ac2ca9-scripts" (OuterVolumeSpecName: "scripts") pod "601bd00e-ad4b-4952-aa81-5dd731ac2ca9" (UID: "601bd00e-ad4b-4952-aa81-5dd731ac2ca9"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 11:57:23 crc kubenswrapper[4706]: I1125 11:57:23.714479 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/601bd00e-ad4b-4952-aa81-5dd731ac2ca9-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "601bd00e-ad4b-4952-aa81-5dd731ac2ca9" (UID: "601bd00e-ad4b-4952-aa81-5dd731ac2ca9"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 11:57:23 crc kubenswrapper[4706]: I1125 11:57:23.753384 4706 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/601bd00e-ad4b-4952-aa81-5dd731ac2ca9-log-httpd\") on node \"crc\" DevicePath \"\"" Nov 25 11:57:23 crc kubenswrapper[4706]: I1125 11:57:23.754569 4706 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/601bd00e-ad4b-4952-aa81-5dd731ac2ca9-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Nov 25 11:57:23 crc kubenswrapper[4706]: I1125 11:57:23.754606 4706 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wqqb9\" (UniqueName: \"kubernetes.io/projected/601bd00e-ad4b-4952-aa81-5dd731ac2ca9-kube-api-access-wqqb9\") on node \"crc\" DevicePath \"\"" Nov 25 11:57:23 crc kubenswrapper[4706]: I1125 11:57:23.754629 4706 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/601bd00e-ad4b-4952-aa81-5dd731ac2ca9-run-httpd\") on node \"crc\" DevicePath \"\"" Nov 25 
11:57:23 crc kubenswrapper[4706]: I1125 11:57:23.754640 4706 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/601bd00e-ad4b-4952-aa81-5dd731ac2ca9-scripts\") on node \"crc\" DevicePath \"\"" Nov 25 11:57:23 crc kubenswrapper[4706]: I1125 11:57:23.801394 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/601bd00e-ad4b-4952-aa81-5dd731ac2ca9-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "601bd00e-ad4b-4952-aa81-5dd731ac2ca9" (UID: "601bd00e-ad4b-4952-aa81-5dd731ac2ca9"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 11:57:23 crc kubenswrapper[4706]: I1125 11:57:23.856109 4706 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/601bd00e-ad4b-4952-aa81-5dd731ac2ca9-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 11:57:23 crc kubenswrapper[4706]: I1125 11:57:23.877908 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/601bd00e-ad4b-4952-aa81-5dd731ac2ca9-config-data" (OuterVolumeSpecName: "config-data") pod "601bd00e-ad4b-4952-aa81-5dd731ac2ca9" (UID: "601bd00e-ad4b-4952-aa81-5dd731ac2ca9"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 11:57:23 crc kubenswrapper[4706]: I1125 11:57:23.913630 4706 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Nov 25 11:57:23 crc kubenswrapper[4706]: I1125 11:57:23.958607 4706 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/601bd00e-ad4b-4952-aa81-5dd731ac2ca9-config-data\") on node \"crc\" DevicePath \"\"" Nov 25 11:57:24 crc kubenswrapper[4706]: I1125 11:57:24.059281 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/6fb9e8f3-e03d-40bd-ba5c-8ce7715af21f-public-tls-certs\") pod \"6fb9e8f3-e03d-40bd-ba5c-8ce7715af21f\" (UID: \"6fb9e8f3-e03d-40bd-ba5c-8ce7715af21f\") " Nov 25 11:57:24 crc kubenswrapper[4706]: I1125 11:57:24.059382 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-snl6k\" (UniqueName: \"kubernetes.io/projected/6fb9e8f3-e03d-40bd-ba5c-8ce7715af21f-kube-api-access-snl6k\") pod \"6fb9e8f3-e03d-40bd-ba5c-8ce7715af21f\" (UID: \"6fb9e8f3-e03d-40bd-ba5c-8ce7715af21f\") " Nov 25 11:57:24 crc kubenswrapper[4706]: I1125 11:57:24.059471 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6fb9e8f3-e03d-40bd-ba5c-8ce7715af21f-combined-ca-bundle\") pod \"6fb9e8f3-e03d-40bd-ba5c-8ce7715af21f\" (UID: \"6fb9e8f3-e03d-40bd-ba5c-8ce7715af21f\") " Nov 25 11:57:24 crc kubenswrapper[4706]: I1125 11:57:24.059599 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6fb9e8f3-e03d-40bd-ba5c-8ce7715af21f-scripts\") pod \"6fb9e8f3-e03d-40bd-ba5c-8ce7715af21f\" (UID: \"6fb9e8f3-e03d-40bd-ba5c-8ce7715af21f\") " Nov 25 11:57:24 crc kubenswrapper[4706]: I1125 11:57:24.059643 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6fb9e8f3-e03d-40bd-ba5c-8ce7715af21f-logs\") pod 
\"6fb9e8f3-e03d-40bd-ba5c-8ce7715af21f\" (UID: \"6fb9e8f3-e03d-40bd-ba5c-8ce7715af21f\") " Nov 25 11:57:24 crc kubenswrapper[4706]: I1125 11:57:24.059724 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"6fb9e8f3-e03d-40bd-ba5c-8ce7715af21f\" (UID: \"6fb9e8f3-e03d-40bd-ba5c-8ce7715af21f\") " Nov 25 11:57:24 crc kubenswrapper[4706]: I1125 11:57:24.059783 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/6fb9e8f3-e03d-40bd-ba5c-8ce7715af21f-httpd-run\") pod \"6fb9e8f3-e03d-40bd-ba5c-8ce7715af21f\" (UID: \"6fb9e8f3-e03d-40bd-ba5c-8ce7715af21f\") " Nov 25 11:57:24 crc kubenswrapper[4706]: I1125 11:57:24.059813 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6fb9e8f3-e03d-40bd-ba5c-8ce7715af21f-config-data\") pod \"6fb9e8f3-e03d-40bd-ba5c-8ce7715af21f\" (UID: \"6fb9e8f3-e03d-40bd-ba5c-8ce7715af21f\") " Nov 25 11:57:24 crc kubenswrapper[4706]: I1125 11:57:24.060318 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6fb9e8f3-e03d-40bd-ba5c-8ce7715af21f-logs" (OuterVolumeSpecName: "logs") pod "6fb9e8f3-e03d-40bd-ba5c-8ce7715af21f" (UID: "6fb9e8f3-e03d-40bd-ba5c-8ce7715af21f"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 11:57:24 crc kubenswrapper[4706]: I1125 11:57:24.060509 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6fb9e8f3-e03d-40bd-ba5c-8ce7715af21f-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "6fb9e8f3-e03d-40bd-ba5c-8ce7715af21f" (UID: "6fb9e8f3-e03d-40bd-ba5c-8ce7715af21f"). InnerVolumeSpecName "httpd-run". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 11:57:24 crc kubenswrapper[4706]: I1125 11:57:24.060581 4706 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/6fb9e8f3-e03d-40bd-ba5c-8ce7715af21f-httpd-run\") on node \"crc\" DevicePath \"\"" Nov 25 11:57:24 crc kubenswrapper[4706]: I1125 11:57:24.060601 4706 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6fb9e8f3-e03d-40bd-ba5c-8ce7715af21f-logs\") on node \"crc\" DevicePath \"\"" Nov 25 11:57:24 crc kubenswrapper[4706]: I1125 11:57:24.066533 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6fb9e8f3-e03d-40bd-ba5c-8ce7715af21f-scripts" (OuterVolumeSpecName: "scripts") pod "6fb9e8f3-e03d-40bd-ba5c-8ce7715af21f" (UID: "6fb9e8f3-e03d-40bd-ba5c-8ce7715af21f"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 11:57:24 crc kubenswrapper[4706]: I1125 11:57:24.067003 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6fb9e8f3-e03d-40bd-ba5c-8ce7715af21f-kube-api-access-snl6k" (OuterVolumeSpecName: "kube-api-access-snl6k") pod "6fb9e8f3-e03d-40bd-ba5c-8ce7715af21f" (UID: "6fb9e8f3-e03d-40bd-ba5c-8ce7715af21f"). InnerVolumeSpecName "kube-api-access-snl6k". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 11:57:24 crc kubenswrapper[4706]: I1125 11:57:24.070461 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage01-crc" (OuterVolumeSpecName: "glance") pod "6fb9e8f3-e03d-40bd-ba5c-8ce7715af21f" (UID: "6fb9e8f3-e03d-40bd-ba5c-8ce7715af21f"). InnerVolumeSpecName "local-storage01-crc". 
PluginName "kubernetes.io/local-volume", VolumeGidValue "" Nov 25 11:57:24 crc kubenswrapper[4706]: I1125 11:57:24.107433 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6fb9e8f3-e03d-40bd-ba5c-8ce7715af21f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6fb9e8f3-e03d-40bd-ba5c-8ce7715af21f" (UID: "6fb9e8f3-e03d-40bd-ba5c-8ce7715af21f"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 11:57:24 crc kubenswrapper[4706]: I1125 11:57:24.146004 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6fb9e8f3-e03d-40bd-ba5c-8ce7715af21f-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "6fb9e8f3-e03d-40bd-ba5c-8ce7715af21f" (UID: "6fb9e8f3-e03d-40bd-ba5c-8ce7715af21f"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 11:57:24 crc kubenswrapper[4706]: I1125 11:57:24.161566 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6fb9e8f3-e03d-40bd-ba5c-8ce7715af21f-config-data" (OuterVolumeSpecName: "config-data") pod "6fb9e8f3-e03d-40bd-ba5c-8ce7715af21f" (UID: "6fb9e8f3-e03d-40bd-ba5c-8ce7715af21f"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 11:57:24 crc kubenswrapper[4706]: I1125 11:57:24.162873 4706 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-snl6k\" (UniqueName: \"kubernetes.io/projected/6fb9e8f3-e03d-40bd-ba5c-8ce7715af21f-kube-api-access-snl6k\") on node \"crc\" DevicePath \"\"" Nov 25 11:57:24 crc kubenswrapper[4706]: I1125 11:57:24.162919 4706 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6fb9e8f3-e03d-40bd-ba5c-8ce7715af21f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 11:57:24 crc kubenswrapper[4706]: I1125 11:57:24.162931 4706 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6fb9e8f3-e03d-40bd-ba5c-8ce7715af21f-scripts\") on node \"crc\" DevicePath \"\"" Nov 25 11:57:24 crc kubenswrapper[4706]: I1125 11:57:24.162967 4706 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") on node \"crc\" " Nov 25 11:57:24 crc kubenswrapper[4706]: I1125 11:57:24.162977 4706 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6fb9e8f3-e03d-40bd-ba5c-8ce7715af21f-config-data\") on node \"crc\" DevicePath \"\"" Nov 25 11:57:24 crc kubenswrapper[4706]: I1125 11:57:24.162988 4706 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/6fb9e8f3-e03d-40bd-ba5c-8ce7715af21f-public-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 25 11:57:24 crc kubenswrapper[4706]: I1125 11:57:24.206112 4706 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage01-crc" (UniqueName: "kubernetes.io/local-volume/local-storage01-crc") on node "crc" Nov 25 11:57:24 crc kubenswrapper[4706]: I1125 11:57:24.264628 4706 reconciler_common.go:293] "Volume detached for volume 
\"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") on node \"crc\" DevicePath \"\"" Nov 25 11:57:24 crc kubenswrapper[4706]: I1125 11:57:24.331671 4706 generic.go:334] "Generic (PLEG): container finished" podID="6fb9e8f3-e03d-40bd-ba5c-8ce7715af21f" containerID="cea2a1a48ebbafa7abdc43558125cc84b06d937577b4fc75c50451664c420801" exitCode=0 Nov 25 11:57:24 crc kubenswrapper[4706]: I1125 11:57:24.331726 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"6fb9e8f3-e03d-40bd-ba5c-8ce7715af21f","Type":"ContainerDied","Data":"cea2a1a48ebbafa7abdc43558125cc84b06d937577b4fc75c50451664c420801"} Nov 25 11:57:24 crc kubenswrapper[4706]: I1125 11:57:24.331752 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"6fb9e8f3-e03d-40bd-ba5c-8ce7715af21f","Type":"ContainerDied","Data":"15dbd546f22d882eda5dae12c0821ef606a618ec396a6d36996fb7875d89239d"} Nov 25 11:57:24 crc kubenswrapper[4706]: I1125 11:57:24.331769 4706 scope.go:117] "RemoveContainer" containerID="cea2a1a48ebbafa7abdc43558125cc84b06d937577b4fc75c50451664c420801" Nov 25 11:57:24 crc kubenswrapper[4706]: I1125 11:57:24.331876 4706 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Nov 25 11:57:24 crc kubenswrapper[4706]: I1125 11:57:24.344563 4706 generic.go:334] "Generic (PLEG): container finished" podID="64c51a3f-220f-4d41-a8ae-996c5d65da6a" containerID="0016a1025cca850a91bb34fc6f50a9212f3a65a4f5be1bbde437a244faffa0de" exitCode=0 Nov 25 11:57:24 crc kubenswrapper[4706]: I1125 11:57:24.344732 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-c6da-account-create-p9tnk" event={"ID":"64c51a3f-220f-4d41-a8ae-996c5d65da6a","Type":"ContainerDied","Data":"0016a1025cca850a91bb34fc6f50a9212f3a65a4f5be1bbde437a244faffa0de"} Nov 25 11:57:24 crc kubenswrapper[4706]: I1125 11:57:24.354199 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"601bd00e-ad4b-4952-aa81-5dd731ac2ca9","Type":"ContainerDied","Data":"b51bb13bfa7a754bfc8c98946382ebc3ee3c10c764054a96baf0404194151d1f"} Nov 25 11:57:24 crc kubenswrapper[4706]: I1125 11:57:24.354351 4706 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 25 11:57:24 crc kubenswrapper[4706]: I1125 11:57:24.359251 4706 generic.go:334] "Generic (PLEG): container finished" podID="2b85308a-ef27-494f-9bd3-b06c25118779" containerID="e7d3108737da713897d8ab0532f1849a9ad5b4268db2f845f4aa68e039fae815" exitCode=0 Nov 25 11:57:24 crc kubenswrapper[4706]: I1125 11:57:24.359347 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-p4np9" event={"ID":"2b85308a-ef27-494f-9bd3-b06c25118779","Type":"ContainerDied","Data":"e7d3108737da713897d8ab0532f1849a9ad5b4268db2f845f4aa68e039fae815"} Nov 25 11:57:24 crc kubenswrapper[4706]: I1125 11:57:24.373651 4706 generic.go:334] "Generic (PLEG): container finished" podID="9392449e-c392-4d77-b36a-67b6d8c716c7" containerID="ad219d52a5cb7380348da742495450a2737dd6d4946c87d7529be684c28d8619" exitCode=0 Nov 25 11:57:24 crc kubenswrapper[4706]: I1125 11:57:24.373760 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"9392449e-c392-4d77-b36a-67b6d8c716c7","Type":"ContainerDied","Data":"ad219d52a5cb7380348da742495450a2737dd6d4946c87d7529be684c28d8619"} Nov 25 11:57:24 crc kubenswrapper[4706]: I1125 11:57:24.384180 4706 generic.go:334] "Generic (PLEG): container finished" podID="030673ef-ec79-4f19-8f0e-765d6918cfc4" containerID="a34f9431fa22b2dc3c7b7f13ce3cbec17941009dd68dc7fea7df7ae915f18e01" exitCode=0 Nov 25 11:57:24 crc kubenswrapper[4706]: I1125 11:57:24.384246 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-7393-account-create-9cnk4" event={"ID":"030673ef-ec79-4f19-8f0e-765d6918cfc4","Type":"ContainerDied","Data":"a34f9431fa22b2dc3c7b7f13ce3cbec17941009dd68dc7fea7df7ae915f18e01"} Nov 25 11:57:24 crc kubenswrapper[4706]: I1125 11:57:24.387511 4706 scope.go:117] "RemoveContainer" containerID="b5e1097ae896ce3cc97fa565106e38e6095eb00fc75f3d3d729b4dea2824be11" Nov 25 11:57:24 crc kubenswrapper[4706]: I1125 
11:57:24.396246 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-c017-account-create-lsfhl" event={"ID":"acb4725a-1a34-4a3a-b578-7bcf44ff0bef","Type":"ContainerStarted","Data":"e901939ebf66885634d91216cfaa95a1b9d4c974734e90d8c89c16138110de14"} Nov 25 11:57:24 crc kubenswrapper[4706]: I1125 11:57:24.396440 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-c017-account-create-lsfhl" event={"ID":"acb4725a-1a34-4a3a-b578-7bcf44ff0bef","Type":"ContainerStarted","Data":"48f438a0257671d1ed1ae3ae4dea94b5bfd329e723fdfe5bdb4a38814784e782"} Nov 25 11:57:24 crc kubenswrapper[4706]: I1125 11:57:24.411244 4706 generic.go:334] "Generic (PLEG): container finished" podID="5cf51224-9407-44c8-805f-fcf18fa531a3" containerID="88f1e76352714ce8c872235ff5a399be70da0ef7ea1a185268b10a6a9af56bf5" exitCode=0 Nov 25 11:57:24 crc kubenswrapper[4706]: I1125 11:57:24.411520 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-j8qcn" event={"ID":"5cf51224-9407-44c8-805f-fcf18fa531a3","Type":"ContainerDied","Data":"88f1e76352714ce8c872235ff5a399be70da0ef7ea1a185268b10a6a9af56bf5"} Nov 25 11:57:24 crc kubenswrapper[4706]: I1125 11:57:24.443699 4706 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 25 11:57:24 crc kubenswrapper[4706]: I1125 11:57:24.456030 4706 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 25 11:57:24 crc kubenswrapper[4706]: I1125 11:57:24.470051 4706 scope.go:117] "RemoveContainer" containerID="cea2a1a48ebbafa7abdc43558125cc84b06d937577b4fc75c50451664c420801" Nov 25 11:57:24 crc kubenswrapper[4706]: E1125 11:57:24.471801 4706 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cea2a1a48ebbafa7abdc43558125cc84b06d937577b4fc75c50451664c420801\": container with ID starting with 
cea2a1a48ebbafa7abdc43558125cc84b06d937577b4fc75c50451664c420801 not found: ID does not exist" containerID="cea2a1a48ebbafa7abdc43558125cc84b06d937577b4fc75c50451664c420801" Nov 25 11:57:24 crc kubenswrapper[4706]: I1125 11:57:24.471831 4706 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cea2a1a48ebbafa7abdc43558125cc84b06d937577b4fc75c50451664c420801"} err="failed to get container status \"cea2a1a48ebbafa7abdc43558125cc84b06d937577b4fc75c50451664c420801\": rpc error: code = NotFound desc = could not find container \"cea2a1a48ebbafa7abdc43558125cc84b06d937577b4fc75c50451664c420801\": container with ID starting with cea2a1a48ebbafa7abdc43558125cc84b06d937577b4fc75c50451664c420801 not found: ID does not exist" Nov 25 11:57:24 crc kubenswrapper[4706]: I1125 11:57:24.471852 4706 scope.go:117] "RemoveContainer" containerID="b5e1097ae896ce3cc97fa565106e38e6095eb00fc75f3d3d729b4dea2824be11" Nov 25 11:57:24 crc kubenswrapper[4706]: E1125 11:57:24.479061 4706 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b5e1097ae896ce3cc97fa565106e38e6095eb00fc75f3d3d729b4dea2824be11\": container with ID starting with b5e1097ae896ce3cc97fa565106e38e6095eb00fc75f3d3d729b4dea2824be11 not found: ID does not exist" containerID="b5e1097ae896ce3cc97fa565106e38e6095eb00fc75f3d3d729b4dea2824be11" Nov 25 11:57:24 crc kubenswrapper[4706]: I1125 11:57:24.479106 4706 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b5e1097ae896ce3cc97fa565106e38e6095eb00fc75f3d3d729b4dea2824be11"} err="failed to get container status \"b5e1097ae896ce3cc97fa565106e38e6095eb00fc75f3d3d729b4dea2824be11\": rpc error: code = NotFound desc = could not find container \"b5e1097ae896ce3cc97fa565106e38e6095eb00fc75f3d3d729b4dea2824be11\": container with ID starting with b5e1097ae896ce3cc97fa565106e38e6095eb00fc75f3d3d729b4dea2824be11 not found: ID does not 
exist" Nov 25 11:57:24 crc kubenswrapper[4706]: I1125 11:57:24.479127 4706 scope.go:117] "RemoveContainer" containerID="d7ce006fb12802230fb8535e664ca122b9a00b6e7c50ef2c6747512e7f75a1f6" Nov 25 11:57:24 crc kubenswrapper[4706]: I1125 11:57:24.480087 4706 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Nov 25 11:57:24 crc kubenswrapper[4706]: E1125 11:57:24.480492 4706 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="601bd00e-ad4b-4952-aa81-5dd731ac2ca9" containerName="proxy-httpd" Nov 25 11:57:24 crc kubenswrapper[4706]: I1125 11:57:24.480508 4706 state_mem.go:107] "Deleted CPUSet assignment" podUID="601bd00e-ad4b-4952-aa81-5dd731ac2ca9" containerName="proxy-httpd" Nov 25 11:57:24 crc kubenswrapper[4706]: E1125 11:57:24.480535 4706 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6fb9e8f3-e03d-40bd-ba5c-8ce7715af21f" containerName="glance-httpd" Nov 25 11:57:24 crc kubenswrapper[4706]: I1125 11:57:24.480543 4706 state_mem.go:107] "Deleted CPUSet assignment" podUID="6fb9e8f3-e03d-40bd-ba5c-8ce7715af21f" containerName="glance-httpd" Nov 25 11:57:24 crc kubenswrapper[4706]: E1125 11:57:24.480556 4706 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="601bd00e-ad4b-4952-aa81-5dd731ac2ca9" containerName="ceilometer-notification-agent" Nov 25 11:57:24 crc kubenswrapper[4706]: I1125 11:57:24.480564 4706 state_mem.go:107] "Deleted CPUSet assignment" podUID="601bd00e-ad4b-4952-aa81-5dd731ac2ca9" containerName="ceilometer-notification-agent" Nov 25 11:57:24 crc kubenswrapper[4706]: E1125 11:57:24.480573 4706 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="601bd00e-ad4b-4952-aa81-5dd731ac2ca9" containerName="sg-core" Nov 25 11:57:24 crc kubenswrapper[4706]: I1125 11:57:24.480578 4706 state_mem.go:107] "Deleted CPUSet assignment" podUID="601bd00e-ad4b-4952-aa81-5dd731ac2ca9" containerName="sg-core" Nov 25 11:57:24 crc kubenswrapper[4706]: E1125 11:57:24.480592 
4706 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="601bd00e-ad4b-4952-aa81-5dd731ac2ca9" containerName="ceilometer-central-agent" Nov 25 11:57:24 crc kubenswrapper[4706]: I1125 11:57:24.480599 4706 state_mem.go:107] "Deleted CPUSet assignment" podUID="601bd00e-ad4b-4952-aa81-5dd731ac2ca9" containerName="ceilometer-central-agent" Nov 25 11:57:24 crc kubenswrapper[4706]: E1125 11:57:24.480608 4706 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6fb9e8f3-e03d-40bd-ba5c-8ce7715af21f" containerName="glance-log" Nov 25 11:57:24 crc kubenswrapper[4706]: I1125 11:57:24.480613 4706 state_mem.go:107] "Deleted CPUSet assignment" podUID="6fb9e8f3-e03d-40bd-ba5c-8ce7715af21f" containerName="glance-log" Nov 25 11:57:24 crc kubenswrapper[4706]: I1125 11:57:24.480787 4706 memory_manager.go:354] "RemoveStaleState removing state" podUID="6fb9e8f3-e03d-40bd-ba5c-8ce7715af21f" containerName="glance-log" Nov 25 11:57:24 crc kubenswrapper[4706]: I1125 11:57:24.480799 4706 memory_manager.go:354] "RemoveStaleState removing state" podUID="601bd00e-ad4b-4952-aa81-5dd731ac2ca9" containerName="ceilometer-central-agent" Nov 25 11:57:24 crc kubenswrapper[4706]: I1125 11:57:24.480811 4706 memory_manager.go:354] "RemoveStaleState removing state" podUID="6fb9e8f3-e03d-40bd-ba5c-8ce7715af21f" containerName="glance-httpd" Nov 25 11:57:24 crc kubenswrapper[4706]: I1125 11:57:24.480821 4706 memory_manager.go:354] "RemoveStaleState removing state" podUID="601bd00e-ad4b-4952-aa81-5dd731ac2ca9" containerName="proxy-httpd" Nov 25 11:57:24 crc kubenswrapper[4706]: I1125 11:57:24.480828 4706 memory_manager.go:354] "RemoveStaleState removing state" podUID="601bd00e-ad4b-4952-aa81-5dd731ac2ca9" containerName="sg-core" Nov 25 11:57:24 crc kubenswrapper[4706]: I1125 11:57:24.480836 4706 memory_manager.go:354] "RemoveStaleState removing state" podUID="601bd00e-ad4b-4952-aa81-5dd731ac2ca9" containerName="ceilometer-notification-agent" Nov 25 11:57:24 crc 
kubenswrapper[4706]: I1125 11:57:24.481784 4706 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Nov 25 11:57:24 crc kubenswrapper[4706]: I1125 11:57:24.483861 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Nov 25 11:57:24 crc kubenswrapper[4706]: I1125 11:57:24.485617 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Nov 25 11:57:24 crc kubenswrapper[4706]: I1125 11:57:24.498117 4706 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 25 11:57:24 crc kubenswrapper[4706]: I1125 11:57:24.509286 4706 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Nov 25 11:57:24 crc kubenswrapper[4706]: I1125 11:57:24.524774 4706 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 25 11:57:24 crc kubenswrapper[4706]: I1125 11:57:24.549370 4706 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Nov 25 11:57:24 crc kubenswrapper[4706]: I1125 11:57:24.552145 4706 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 25 11:57:24 crc kubenswrapper[4706]: I1125 11:57:24.554729 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Nov 25 11:57:24 crc kubenswrapper[4706]: I1125 11:57:24.554917 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Nov 25 11:57:24 crc kubenswrapper[4706]: I1125 11:57:24.565918 4706 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 25 11:57:24 crc kubenswrapper[4706]: I1125 11:57:24.573187 4706 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-c017-account-create-lsfhl" podStartSLOduration=3.573171628 podStartE2EDuration="3.573171628s" podCreationTimestamp="2025-11-25 11:57:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 11:57:24.517642888 +0000 UTC m=+1253.432200269" watchObservedRunningTime="2025-11-25 11:57:24.573171628 +0000 UTC m=+1253.487729009" Nov 25 11:57:24 crc kubenswrapper[4706]: I1125 11:57:24.604765 4706 scope.go:117] "RemoveContainer" containerID="5bcd86547380f09e59549796b1acc7b613fa143098c098f4462546839652fcef" Nov 25 11:57:24 crc kubenswrapper[4706]: I1125 11:57:24.690022 4706 scope.go:117] "RemoveContainer" containerID="651d6534dd298a0cff064a96f5d62a052c27d714416ac3950dcbf5499b5da76b" Nov 25 11:57:24 crc kubenswrapper[4706]: I1125 11:57:24.692169 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d0c5bfae-397f-432d-bdb6-8bb27d43f68c-logs\") pod \"glance-default-external-api-0\" (UID: \"d0c5bfae-397f-432d-bdb6-8bb27d43f68c\") " pod="openstack/glance-default-external-api-0" Nov 25 11:57:24 crc kubenswrapper[4706]: I1125 11:57:24.692222 4706 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-md2p6\" (UniqueName: \"kubernetes.io/projected/d0c5bfae-397f-432d-bdb6-8bb27d43f68c-kube-api-access-md2p6\") pod \"glance-default-external-api-0\" (UID: \"d0c5bfae-397f-432d-bdb6-8bb27d43f68c\") " pod="openstack/glance-default-external-api-0" Nov 25 11:57:24 crc kubenswrapper[4706]: I1125 11:57:24.692262 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gzgxs\" (UniqueName: \"kubernetes.io/projected/8d9dc228-d004-4180-9f22-bebb77ae0fe1-kube-api-access-gzgxs\") pod \"ceilometer-0\" (UID: \"8d9dc228-d004-4180-9f22-bebb77ae0fe1\") " pod="openstack/ceilometer-0" Nov 25 11:57:24 crc kubenswrapper[4706]: I1125 11:57:24.692373 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/8d9dc228-d004-4180-9f22-bebb77ae0fe1-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"8d9dc228-d004-4180-9f22-bebb77ae0fe1\") " pod="openstack/ceilometer-0" Nov 25 11:57:24 crc kubenswrapper[4706]: I1125 11:57:24.692410 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8d9dc228-d004-4180-9f22-bebb77ae0fe1-run-httpd\") pod \"ceilometer-0\" (UID: \"8d9dc228-d004-4180-9f22-bebb77ae0fe1\") " pod="openstack/ceilometer-0" Nov 25 11:57:24 crc kubenswrapper[4706]: I1125 11:57:24.692447 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"glance-default-external-api-0\" (UID: \"d0c5bfae-397f-432d-bdb6-8bb27d43f68c\") " pod="openstack/glance-default-external-api-0" Nov 25 11:57:24 crc kubenswrapper[4706]: I1125 11:57:24.692467 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8d9dc228-d004-4180-9f22-bebb77ae0fe1-config-data\") pod \"ceilometer-0\" (UID: \"8d9dc228-d004-4180-9f22-bebb77ae0fe1\") " pod="openstack/ceilometer-0" Nov 25 11:57:24 crc kubenswrapper[4706]: I1125 11:57:24.692484 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8d9dc228-d004-4180-9f22-bebb77ae0fe1-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"8d9dc228-d004-4180-9f22-bebb77ae0fe1\") " pod="openstack/ceilometer-0" Nov 25 11:57:24 crc kubenswrapper[4706]: I1125 11:57:24.694169 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/d0c5bfae-397f-432d-bdb6-8bb27d43f68c-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"d0c5bfae-397f-432d-bdb6-8bb27d43f68c\") " pod="openstack/glance-default-external-api-0" Nov 25 11:57:24 crc kubenswrapper[4706]: I1125 11:57:24.694232 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d0c5bfae-397f-432d-bdb6-8bb27d43f68c-config-data\") pod \"glance-default-external-api-0\" (UID: \"d0c5bfae-397f-432d-bdb6-8bb27d43f68c\") " pod="openstack/glance-default-external-api-0" Nov 25 11:57:24 crc kubenswrapper[4706]: I1125 11:57:24.694309 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8d9dc228-d004-4180-9f22-bebb77ae0fe1-scripts\") pod \"ceilometer-0\" (UID: \"8d9dc228-d004-4180-9f22-bebb77ae0fe1\") " pod="openstack/ceilometer-0" Nov 25 11:57:24 crc kubenswrapper[4706]: I1125 11:57:24.694345 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/8d9dc228-d004-4180-9f22-bebb77ae0fe1-log-httpd\") pod \"ceilometer-0\" (UID: \"8d9dc228-d004-4180-9f22-bebb77ae0fe1\") " pod="openstack/ceilometer-0" Nov 25 11:57:24 crc kubenswrapper[4706]: I1125 11:57:24.694407 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d0c5bfae-397f-432d-bdb6-8bb27d43f68c-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"d0c5bfae-397f-432d-bdb6-8bb27d43f68c\") " pod="openstack/glance-default-external-api-0" Nov 25 11:57:24 crc kubenswrapper[4706]: I1125 11:57:24.694478 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d0c5bfae-397f-432d-bdb6-8bb27d43f68c-scripts\") pod \"glance-default-external-api-0\" (UID: \"d0c5bfae-397f-432d-bdb6-8bb27d43f68c\") " pod="openstack/glance-default-external-api-0" Nov 25 11:57:24 crc kubenswrapper[4706]: I1125 11:57:24.694498 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d0c5bfae-397f-432d-bdb6-8bb27d43f68c-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"d0c5bfae-397f-432d-bdb6-8bb27d43f68c\") " pod="openstack/glance-default-external-api-0" Nov 25 11:57:24 crc kubenswrapper[4706]: I1125 11:57:24.796800 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"glance-default-external-api-0\" (UID: \"d0c5bfae-397f-432d-bdb6-8bb27d43f68c\") " pod="openstack/glance-default-external-api-0" Nov 25 11:57:24 crc kubenswrapper[4706]: I1125 11:57:24.797107 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/8d9dc228-d004-4180-9f22-bebb77ae0fe1-config-data\") pod \"ceilometer-0\" (UID: \"8d9dc228-d004-4180-9f22-bebb77ae0fe1\") " pod="openstack/ceilometer-0" Nov 25 11:57:24 crc kubenswrapper[4706]: I1125 11:57:24.797129 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8d9dc228-d004-4180-9f22-bebb77ae0fe1-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"8d9dc228-d004-4180-9f22-bebb77ae0fe1\") " pod="openstack/ceilometer-0" Nov 25 11:57:24 crc kubenswrapper[4706]: I1125 11:57:24.797150 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/d0c5bfae-397f-432d-bdb6-8bb27d43f68c-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"d0c5bfae-397f-432d-bdb6-8bb27d43f68c\") " pod="openstack/glance-default-external-api-0" Nov 25 11:57:24 crc kubenswrapper[4706]: I1125 11:57:24.797176 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d0c5bfae-397f-432d-bdb6-8bb27d43f68c-config-data\") pod \"glance-default-external-api-0\" (UID: \"d0c5bfae-397f-432d-bdb6-8bb27d43f68c\") " pod="openstack/glance-default-external-api-0" Nov 25 11:57:24 crc kubenswrapper[4706]: I1125 11:57:24.797209 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8d9dc228-d004-4180-9f22-bebb77ae0fe1-scripts\") pod \"ceilometer-0\" (UID: \"8d9dc228-d004-4180-9f22-bebb77ae0fe1\") " pod="openstack/ceilometer-0" Nov 25 11:57:24 crc kubenswrapper[4706]: I1125 11:57:24.797226 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8d9dc228-d004-4180-9f22-bebb77ae0fe1-log-httpd\") pod \"ceilometer-0\" (UID: \"8d9dc228-d004-4180-9f22-bebb77ae0fe1\") " pod="openstack/ceilometer-0" Nov 25 
11:57:24 crc kubenswrapper[4706]: I1125 11:57:24.797254 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d0c5bfae-397f-432d-bdb6-8bb27d43f68c-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"d0c5bfae-397f-432d-bdb6-8bb27d43f68c\") " pod="openstack/glance-default-external-api-0" Nov 25 11:57:24 crc kubenswrapper[4706]: I1125 11:57:24.797286 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d0c5bfae-397f-432d-bdb6-8bb27d43f68c-scripts\") pod \"glance-default-external-api-0\" (UID: \"d0c5bfae-397f-432d-bdb6-8bb27d43f68c\") " pod="openstack/glance-default-external-api-0" Nov 25 11:57:24 crc kubenswrapper[4706]: I1125 11:57:24.797317 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d0c5bfae-397f-432d-bdb6-8bb27d43f68c-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"d0c5bfae-397f-432d-bdb6-8bb27d43f68c\") " pod="openstack/glance-default-external-api-0" Nov 25 11:57:24 crc kubenswrapper[4706]: I1125 11:57:24.797332 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d0c5bfae-397f-432d-bdb6-8bb27d43f68c-logs\") pod \"glance-default-external-api-0\" (UID: \"d0c5bfae-397f-432d-bdb6-8bb27d43f68c\") " pod="openstack/glance-default-external-api-0" Nov 25 11:57:24 crc kubenswrapper[4706]: I1125 11:57:24.797363 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-md2p6\" (UniqueName: \"kubernetes.io/projected/d0c5bfae-397f-432d-bdb6-8bb27d43f68c-kube-api-access-md2p6\") pod \"glance-default-external-api-0\" (UID: \"d0c5bfae-397f-432d-bdb6-8bb27d43f68c\") " pod="openstack/glance-default-external-api-0" Nov 25 11:57:24 crc kubenswrapper[4706]: I1125 11:57:24.797393 4706 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gzgxs\" (UniqueName: \"kubernetes.io/projected/8d9dc228-d004-4180-9f22-bebb77ae0fe1-kube-api-access-gzgxs\") pod \"ceilometer-0\" (UID: \"8d9dc228-d004-4180-9f22-bebb77ae0fe1\") " pod="openstack/ceilometer-0" Nov 25 11:57:24 crc kubenswrapper[4706]: I1125 11:57:24.797432 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/8d9dc228-d004-4180-9f22-bebb77ae0fe1-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"8d9dc228-d004-4180-9f22-bebb77ae0fe1\") " pod="openstack/ceilometer-0" Nov 25 11:57:24 crc kubenswrapper[4706]: I1125 11:57:24.797463 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8d9dc228-d004-4180-9f22-bebb77ae0fe1-run-httpd\") pod \"ceilometer-0\" (UID: \"8d9dc228-d004-4180-9f22-bebb77ae0fe1\") " pod="openstack/ceilometer-0" Nov 25 11:57:24 crc kubenswrapper[4706]: I1125 11:57:24.798161 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8d9dc228-d004-4180-9f22-bebb77ae0fe1-run-httpd\") pod \"ceilometer-0\" (UID: \"8d9dc228-d004-4180-9f22-bebb77ae0fe1\") " pod="openstack/ceilometer-0" Nov 25 11:57:24 crc kubenswrapper[4706]: I1125 11:57:24.798269 4706 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"glance-default-external-api-0\" (UID: \"d0c5bfae-397f-432d-bdb6-8bb27d43f68c\") device mount path \"/mnt/openstack/pv01\"" pod="openstack/glance-default-external-api-0" Nov 25 11:57:24 crc kubenswrapper[4706]: I1125 11:57:24.813077 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d0c5bfae-397f-432d-bdb6-8bb27d43f68c-logs\") pod 
\"glance-default-external-api-0\" (UID: \"d0c5bfae-397f-432d-bdb6-8bb27d43f68c\") " pod="openstack/glance-default-external-api-0" Nov 25 11:57:24 crc kubenswrapper[4706]: I1125 11:57:24.821977 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/d0c5bfae-397f-432d-bdb6-8bb27d43f68c-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"d0c5bfae-397f-432d-bdb6-8bb27d43f68c\") " pod="openstack/glance-default-external-api-0" Nov 25 11:57:24 crc kubenswrapper[4706]: I1125 11:57:24.828246 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8d9dc228-d004-4180-9f22-bebb77ae0fe1-log-httpd\") pod \"ceilometer-0\" (UID: \"8d9dc228-d004-4180-9f22-bebb77ae0fe1\") " pod="openstack/ceilometer-0" Nov 25 11:57:24 crc kubenswrapper[4706]: I1125 11:57:24.838660 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d0c5bfae-397f-432d-bdb6-8bb27d43f68c-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"d0c5bfae-397f-432d-bdb6-8bb27d43f68c\") " pod="openstack/glance-default-external-api-0" Nov 25 11:57:24 crc kubenswrapper[4706]: I1125 11:57:24.850955 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d0c5bfae-397f-432d-bdb6-8bb27d43f68c-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"d0c5bfae-397f-432d-bdb6-8bb27d43f68c\") " pod="openstack/glance-default-external-api-0" Nov 25 11:57:24 crc kubenswrapper[4706]: I1125 11:57:24.853323 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-md2p6\" (UniqueName: \"kubernetes.io/projected/d0c5bfae-397f-432d-bdb6-8bb27d43f68c-kube-api-access-md2p6\") pod \"glance-default-external-api-0\" (UID: \"d0c5bfae-397f-432d-bdb6-8bb27d43f68c\") " 
pod="openstack/glance-default-external-api-0" Nov 25 11:57:24 crc kubenswrapper[4706]: I1125 11:57:24.855510 4706 scope.go:117] "RemoveContainer" containerID="d251c706b762c92dcb8e2ba62471e7b54ae10947ac1468c5131412316ce5fcd4" Nov 25 11:57:24 crc kubenswrapper[4706]: I1125 11:57:24.856005 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/8d9dc228-d004-4180-9f22-bebb77ae0fe1-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"8d9dc228-d004-4180-9f22-bebb77ae0fe1\") " pod="openstack/ceilometer-0" Nov 25 11:57:24 crc kubenswrapper[4706]: I1125 11:57:24.856270 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8d9dc228-d004-4180-9f22-bebb77ae0fe1-scripts\") pod \"ceilometer-0\" (UID: \"8d9dc228-d004-4180-9f22-bebb77ae0fe1\") " pod="openstack/ceilometer-0" Nov 25 11:57:24 crc kubenswrapper[4706]: I1125 11:57:24.856431 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8d9dc228-d004-4180-9f22-bebb77ae0fe1-config-data\") pod \"ceilometer-0\" (UID: \"8d9dc228-d004-4180-9f22-bebb77ae0fe1\") " pod="openstack/ceilometer-0" Nov 25 11:57:24 crc kubenswrapper[4706]: I1125 11:57:24.856441 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d0c5bfae-397f-432d-bdb6-8bb27d43f68c-scripts\") pod \"glance-default-external-api-0\" (UID: \"d0c5bfae-397f-432d-bdb6-8bb27d43f68c\") " pod="openstack/glance-default-external-api-0" Nov 25 11:57:24 crc kubenswrapper[4706]: I1125 11:57:24.864007 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"glance-default-external-api-0\" (UID: \"d0c5bfae-397f-432d-bdb6-8bb27d43f68c\") " pod="openstack/glance-default-external-api-0" Nov 25 11:57:24 crc 
kubenswrapper[4706]: I1125 11:57:24.864829 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gzgxs\" (UniqueName: \"kubernetes.io/projected/8d9dc228-d004-4180-9f22-bebb77ae0fe1-kube-api-access-gzgxs\") pod \"ceilometer-0\" (UID: \"8d9dc228-d004-4180-9f22-bebb77ae0fe1\") " pod="openstack/ceilometer-0" Nov 25 11:57:24 crc kubenswrapper[4706]: I1125 11:57:24.865160 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8d9dc228-d004-4180-9f22-bebb77ae0fe1-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"8d9dc228-d004-4180-9f22-bebb77ae0fe1\") " pod="openstack/ceilometer-0" Nov 25 11:57:24 crc kubenswrapper[4706]: I1125 11:57:24.886765 4706 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 25 11:57:24 crc kubenswrapper[4706]: I1125 11:57:24.909821 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d0c5bfae-397f-432d-bdb6-8bb27d43f68c-config-data\") pod \"glance-default-external-api-0\" (UID: \"d0c5bfae-397f-432d-bdb6-8bb27d43f68c\") " pod="openstack/glance-default-external-api-0" Nov 25 11:57:24 crc kubenswrapper[4706]: I1125 11:57:24.910407 4706 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Nov 25 11:57:25 crc kubenswrapper[4706]: I1125 11:57:25.002020 4706 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-db-create-ctmr9" Nov 25 11:57:25 crc kubenswrapper[4706]: I1125 11:57:25.005949 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7rn8k\" (UniqueName: \"kubernetes.io/projected/9392449e-c392-4d77-b36a-67b6d8c716c7-kube-api-access-7rn8k\") pod \"9392449e-c392-4d77-b36a-67b6d8c716c7\" (UID: \"9392449e-c392-4d77-b36a-67b6d8c716c7\") " Nov 25 11:57:25 crc kubenswrapper[4706]: I1125 11:57:25.006056 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/9392449e-c392-4d77-b36a-67b6d8c716c7-internal-tls-certs\") pod \"9392449e-c392-4d77-b36a-67b6d8c716c7\" (UID: \"9392449e-c392-4d77-b36a-67b6d8c716c7\") " Nov 25 11:57:25 crc kubenswrapper[4706]: I1125 11:57:25.006105 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"9392449e-c392-4d77-b36a-67b6d8c716c7\" (UID: \"9392449e-c392-4d77-b36a-67b6d8c716c7\") " Nov 25 11:57:25 crc kubenswrapper[4706]: I1125 11:57:25.006190 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9392449e-c392-4d77-b36a-67b6d8c716c7-logs\") pod \"9392449e-c392-4d77-b36a-67b6d8c716c7\" (UID: \"9392449e-c392-4d77-b36a-67b6d8c716c7\") " Nov 25 11:57:25 crc kubenswrapper[4706]: I1125 11:57:25.006227 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9392449e-c392-4d77-b36a-67b6d8c716c7-config-data\") pod \"9392449e-c392-4d77-b36a-67b6d8c716c7\" (UID: \"9392449e-c392-4d77-b36a-67b6d8c716c7\") " Nov 25 11:57:25 crc kubenswrapper[4706]: I1125 11:57:25.006349 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/9392449e-c392-4d77-b36a-67b6d8c716c7-combined-ca-bundle\") pod \"9392449e-c392-4d77-b36a-67b6d8c716c7\" (UID: \"9392449e-c392-4d77-b36a-67b6d8c716c7\") " Nov 25 11:57:25 crc kubenswrapper[4706]: I1125 11:57:25.006400 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9392449e-c392-4d77-b36a-67b6d8c716c7-scripts\") pod \"9392449e-c392-4d77-b36a-67b6d8c716c7\" (UID: \"9392449e-c392-4d77-b36a-67b6d8c716c7\") " Nov 25 11:57:25 crc kubenswrapper[4706]: I1125 11:57:25.006435 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/9392449e-c392-4d77-b36a-67b6d8c716c7-httpd-run\") pod \"9392449e-c392-4d77-b36a-67b6d8c716c7\" (UID: \"9392449e-c392-4d77-b36a-67b6d8c716c7\") " Nov 25 11:57:25 crc kubenswrapper[4706]: I1125 11:57:25.007273 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9392449e-c392-4d77-b36a-67b6d8c716c7-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "9392449e-c392-4d77-b36a-67b6d8c716c7" (UID: "9392449e-c392-4d77-b36a-67b6d8c716c7"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 11:57:25 crc kubenswrapper[4706]: I1125 11:57:25.009255 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9392449e-c392-4d77-b36a-67b6d8c716c7-logs" (OuterVolumeSpecName: "logs") pod "9392449e-c392-4d77-b36a-67b6d8c716c7" (UID: "9392449e-c392-4d77-b36a-67b6d8c716c7"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 11:57:25 crc kubenswrapper[4706]: I1125 11:57:25.013340 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9392449e-c392-4d77-b36a-67b6d8c716c7-kube-api-access-7rn8k" (OuterVolumeSpecName: "kube-api-access-7rn8k") pod "9392449e-c392-4d77-b36a-67b6d8c716c7" (UID: "9392449e-c392-4d77-b36a-67b6d8c716c7"). InnerVolumeSpecName "kube-api-access-7rn8k". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 11:57:25 crc kubenswrapper[4706]: I1125 11:57:25.022452 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage02-crc" (OuterVolumeSpecName: "glance") pod "9392449e-c392-4d77-b36a-67b6d8c716c7" (UID: "9392449e-c392-4d77-b36a-67b6d8c716c7"). InnerVolumeSpecName "local-storage02-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Nov 25 11:57:25 crc kubenswrapper[4706]: I1125 11:57:25.024623 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9392449e-c392-4d77-b36a-67b6d8c716c7-scripts" (OuterVolumeSpecName: "scripts") pod "9392449e-c392-4d77-b36a-67b6d8c716c7" (UID: "9392449e-c392-4d77-b36a-67b6d8c716c7"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 11:57:25 crc kubenswrapper[4706]: I1125 11:57:25.102130 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9392449e-c392-4d77-b36a-67b6d8c716c7-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9392449e-c392-4d77-b36a-67b6d8c716c7" (UID: "9392449e-c392-4d77-b36a-67b6d8c716c7"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 11:57:25 crc kubenswrapper[4706]: I1125 11:57:25.108988 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ed5f6b7c-b239-4aba-8c85-0ffdd29622da-operator-scripts\") pod \"ed5f6b7c-b239-4aba-8c85-0ffdd29622da\" (UID: \"ed5f6b7c-b239-4aba-8c85-0ffdd29622da\") " Nov 25 11:57:25 crc kubenswrapper[4706]: I1125 11:57:25.109122 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-chxj2\" (UniqueName: \"kubernetes.io/projected/ed5f6b7c-b239-4aba-8c85-0ffdd29622da-kube-api-access-chxj2\") pod \"ed5f6b7c-b239-4aba-8c85-0ffdd29622da\" (UID: \"ed5f6b7c-b239-4aba-8c85-0ffdd29622da\") " Nov 25 11:57:25 crc kubenswrapper[4706]: I1125 11:57:25.111057 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ed5f6b7c-b239-4aba-8c85-0ffdd29622da-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "ed5f6b7c-b239-4aba-8c85-0ffdd29622da" (UID: "ed5f6b7c-b239-4aba-8c85-0ffdd29622da"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 11:57:25 crc kubenswrapper[4706]: I1125 11:57:25.112115 4706 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ed5f6b7c-b239-4aba-8c85-0ffdd29622da-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 25 11:57:25 crc kubenswrapper[4706]: I1125 11:57:25.112429 4706 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9392449e-c392-4d77-b36a-67b6d8c716c7-logs\") on node \"crc\" DevicePath \"\"" Nov 25 11:57:25 crc kubenswrapper[4706]: I1125 11:57:25.112448 4706 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9392449e-c392-4d77-b36a-67b6d8c716c7-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 11:57:25 crc kubenswrapper[4706]: I1125 11:57:25.112459 4706 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9392449e-c392-4d77-b36a-67b6d8c716c7-scripts\") on node \"crc\" DevicePath \"\"" Nov 25 11:57:25 crc kubenswrapper[4706]: I1125 11:57:25.112470 4706 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/9392449e-c392-4d77-b36a-67b6d8c716c7-httpd-run\") on node \"crc\" DevicePath \"\"" Nov 25 11:57:25 crc kubenswrapper[4706]: I1125 11:57:25.112482 4706 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7rn8k\" (UniqueName: \"kubernetes.io/projected/9392449e-c392-4d77-b36a-67b6d8c716c7-kube-api-access-7rn8k\") on node \"crc\" DevicePath \"\"" Nov 25 11:57:25 crc kubenswrapper[4706]: I1125 11:57:25.115492 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ed5f6b7c-b239-4aba-8c85-0ffdd29622da-kube-api-access-chxj2" (OuterVolumeSpecName: "kube-api-access-chxj2") pod "ed5f6b7c-b239-4aba-8c85-0ffdd29622da" (UID: 
"ed5f6b7c-b239-4aba-8c85-0ffdd29622da"). InnerVolumeSpecName "kube-api-access-chxj2". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 11:57:25 crc kubenswrapper[4706]: I1125 11:57:25.125518 4706 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") on node \"crc\" " Nov 25 11:57:25 crc kubenswrapper[4706]: I1125 11:57:25.133553 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9392449e-c392-4d77-b36a-67b6d8c716c7-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "9392449e-c392-4d77-b36a-67b6d8c716c7" (UID: "9392449e-c392-4d77-b36a-67b6d8c716c7"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 11:57:25 crc kubenswrapper[4706]: I1125 11:57:25.141768 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9392449e-c392-4d77-b36a-67b6d8c716c7-config-data" (OuterVolumeSpecName: "config-data") pod "9392449e-c392-4d77-b36a-67b6d8c716c7" (UID: "9392449e-c392-4d77-b36a-67b6d8c716c7"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 11:57:25 crc kubenswrapper[4706]: I1125 11:57:25.144325 4706 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Nov 25 11:57:25 crc kubenswrapper[4706]: I1125 11:57:25.148460 4706 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage02-crc" (UniqueName: "kubernetes.io/local-volume/local-storage02-crc") on node "crc" Nov 25 11:57:25 crc kubenswrapper[4706]: I1125 11:57:25.229475 4706 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-chxj2\" (UniqueName: \"kubernetes.io/projected/ed5f6b7c-b239-4aba-8c85-0ffdd29622da-kube-api-access-chxj2\") on node \"crc\" DevicePath \"\"" Nov 25 11:57:25 crc kubenswrapper[4706]: I1125 11:57:25.229505 4706 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/9392449e-c392-4d77-b36a-67b6d8c716c7-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 25 11:57:25 crc kubenswrapper[4706]: I1125 11:57:25.229518 4706 reconciler_common.go:293] "Volume detached for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") on node \"crc\" DevicePath \"\"" Nov 25 11:57:25 crc kubenswrapper[4706]: I1125 11:57:25.229527 4706 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9392449e-c392-4d77-b36a-67b6d8c716c7-config-data\") on node \"crc\" DevicePath \"\"" Nov 25 11:57:25 crc kubenswrapper[4706]: I1125 11:57:25.387535 4706 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cinder-api-0" Nov 25 11:57:25 crc kubenswrapper[4706]: I1125 11:57:25.437812 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"9392449e-c392-4d77-b36a-67b6d8c716c7","Type":"ContainerDied","Data":"0b63784f4e9d790670ac0533e443398bfd97f89108e44b97598fc8eedd2ed3a0"} Nov 25 11:57:25 crc kubenswrapper[4706]: I1125 11:57:25.437864 4706 scope.go:117] "RemoveContainer" 
containerID="ad219d52a5cb7380348da742495450a2737dd6d4946c87d7529be684c28d8619" Nov 25 11:57:25 crc kubenswrapper[4706]: I1125 11:57:25.437824 4706 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Nov 25 11:57:25 crc kubenswrapper[4706]: I1125 11:57:25.447368 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-ctmr9" event={"ID":"ed5f6b7c-b239-4aba-8c85-0ffdd29622da","Type":"ContainerDied","Data":"6fa4f14457a04ef7f2f5a592dd7717111c818021b929e47af5e2657c599fb947"} Nov 25 11:57:25 crc kubenswrapper[4706]: I1125 11:57:25.447439 4706 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6fa4f14457a04ef7f2f5a592dd7717111c818021b929e47af5e2657c599fb947" Nov 25 11:57:25 crc kubenswrapper[4706]: I1125 11:57:25.447616 4706 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-ctmr9" Nov 25 11:57:25 crc kubenswrapper[4706]: I1125 11:57:25.457254 4706 generic.go:334] "Generic (PLEG): container finished" podID="acb4725a-1a34-4a3a-b578-7bcf44ff0bef" containerID="e901939ebf66885634d91216cfaa95a1b9d4c974734e90d8c89c16138110de14" exitCode=0 Nov 25 11:57:25 crc kubenswrapper[4706]: I1125 11:57:25.457383 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-c017-account-create-lsfhl" event={"ID":"acb4725a-1a34-4a3a-b578-7bcf44ff0bef","Type":"ContainerDied","Data":"e901939ebf66885634d91216cfaa95a1b9d4c974734e90d8c89c16138110de14"} Nov 25 11:57:25 crc kubenswrapper[4706]: I1125 11:57:25.482481 4706 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 25 11:57:25 crc kubenswrapper[4706]: I1125 11:57:25.512181 4706 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 25 11:57:25 crc kubenswrapper[4706]: I1125 11:57:25.531858 4706 kubelet.go:2421] "SyncLoop ADD" 
source="api" pods=["openstack/glance-default-internal-api-0"] Nov 25 11:57:25 crc kubenswrapper[4706]: E1125 11:57:25.532437 4706 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9392449e-c392-4d77-b36a-67b6d8c716c7" containerName="glance-log" Nov 25 11:57:25 crc kubenswrapper[4706]: I1125 11:57:25.532454 4706 state_mem.go:107] "Deleted CPUSet assignment" podUID="9392449e-c392-4d77-b36a-67b6d8c716c7" containerName="glance-log" Nov 25 11:57:25 crc kubenswrapper[4706]: E1125 11:57:25.532489 4706 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ed5f6b7c-b239-4aba-8c85-0ffdd29622da" containerName="mariadb-database-create" Nov 25 11:57:25 crc kubenswrapper[4706]: I1125 11:57:25.532497 4706 state_mem.go:107] "Deleted CPUSet assignment" podUID="ed5f6b7c-b239-4aba-8c85-0ffdd29622da" containerName="mariadb-database-create" Nov 25 11:57:25 crc kubenswrapper[4706]: E1125 11:57:25.532516 4706 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9392449e-c392-4d77-b36a-67b6d8c716c7" containerName="glance-httpd" Nov 25 11:57:25 crc kubenswrapper[4706]: I1125 11:57:25.532524 4706 state_mem.go:107] "Deleted CPUSet assignment" podUID="9392449e-c392-4d77-b36a-67b6d8c716c7" containerName="glance-httpd" Nov 25 11:57:25 crc kubenswrapper[4706]: I1125 11:57:25.532740 4706 memory_manager.go:354] "RemoveStaleState removing state" podUID="ed5f6b7c-b239-4aba-8c85-0ffdd29622da" containerName="mariadb-database-create" Nov 25 11:57:25 crc kubenswrapper[4706]: I1125 11:57:25.532783 4706 memory_manager.go:354] "RemoveStaleState removing state" podUID="9392449e-c392-4d77-b36a-67b6d8c716c7" containerName="glance-log" Nov 25 11:57:25 crc kubenswrapper[4706]: I1125 11:57:25.532794 4706 memory_manager.go:354] "RemoveStaleState removing state" podUID="9392449e-c392-4d77-b36a-67b6d8c716c7" containerName="glance-httpd" Nov 25 11:57:25 crc kubenswrapper[4706]: I1125 11:57:25.538186 4706 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Nov 25 11:57:25 crc kubenswrapper[4706]: I1125 11:57:25.539888 4706 scope.go:117] "RemoveContainer" containerID="17525079762a657aaaa7ddedbe78c41ea63e1654951381a5ee6b864ec29cb169" Nov 25 11:57:25 crc kubenswrapper[4706]: I1125 11:57:25.544090 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Nov 25 11:57:25 crc kubenswrapper[4706]: I1125 11:57:25.544691 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Nov 25 11:57:25 crc kubenswrapper[4706]: I1125 11:57:25.567154 4706 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 25 11:57:25 crc kubenswrapper[4706]: I1125 11:57:25.602754 4706 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 25 11:57:25 crc kubenswrapper[4706]: I1125 11:57:25.657742 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/56ae92e0-a5ff-4b66-b471-6e38781e51da-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"56ae92e0-a5ff-4b66-b471-6e38781e51da\") " pod="openstack/glance-default-internal-api-0" Nov 25 11:57:25 crc kubenswrapper[4706]: I1125 11:57:25.657796 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/56ae92e0-a5ff-4b66-b471-6e38781e51da-scripts\") pod \"glance-default-internal-api-0\" (UID: \"56ae92e0-a5ff-4b66-b471-6e38781e51da\") " pod="openstack/glance-default-internal-api-0" Nov 25 11:57:25 crc kubenswrapper[4706]: I1125 11:57:25.657837 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h2l9s\" (UniqueName: 
\"kubernetes.io/projected/56ae92e0-a5ff-4b66-b471-6e38781e51da-kube-api-access-h2l9s\") pod \"glance-default-internal-api-0\" (UID: \"56ae92e0-a5ff-4b66-b471-6e38781e51da\") " pod="openstack/glance-default-internal-api-0" Nov 25 11:57:25 crc kubenswrapper[4706]: I1125 11:57:25.657860 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/56ae92e0-a5ff-4b66-b471-6e38781e51da-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"56ae92e0-a5ff-4b66-b471-6e38781e51da\") " pod="openstack/glance-default-internal-api-0" Nov 25 11:57:25 crc kubenswrapper[4706]: I1125 11:57:25.657914 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/56ae92e0-a5ff-4b66-b471-6e38781e51da-logs\") pod \"glance-default-internal-api-0\" (UID: \"56ae92e0-a5ff-4b66-b471-6e38781e51da\") " pod="openstack/glance-default-internal-api-0" Nov 25 11:57:25 crc kubenswrapper[4706]: I1125 11:57:25.657963 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"glance-default-internal-api-0\" (UID: \"56ae92e0-a5ff-4b66-b471-6e38781e51da\") " pod="openstack/glance-default-internal-api-0" Nov 25 11:57:25 crc kubenswrapper[4706]: I1125 11:57:25.658010 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/56ae92e0-a5ff-4b66-b471-6e38781e51da-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"56ae92e0-a5ff-4b66-b471-6e38781e51da\") " pod="openstack/glance-default-internal-api-0" Nov 25 11:57:25 crc kubenswrapper[4706]: I1125 11:57:25.658055 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"config-data\" (UniqueName: \"kubernetes.io/secret/56ae92e0-a5ff-4b66-b471-6e38781e51da-config-data\") pod \"glance-default-internal-api-0\" (UID: \"56ae92e0-a5ff-4b66-b471-6e38781e51da\") " pod="openstack/glance-default-internal-api-0" Nov 25 11:57:25 crc kubenswrapper[4706]: I1125 11:57:25.759225 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/56ae92e0-a5ff-4b66-b471-6e38781e51da-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"56ae92e0-a5ff-4b66-b471-6e38781e51da\") " pod="openstack/glance-default-internal-api-0" Nov 25 11:57:25 crc kubenswrapper[4706]: I1125 11:57:25.759367 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/56ae92e0-a5ff-4b66-b471-6e38781e51da-config-data\") pod \"glance-default-internal-api-0\" (UID: \"56ae92e0-a5ff-4b66-b471-6e38781e51da\") " pod="openstack/glance-default-internal-api-0" Nov 25 11:57:25 crc kubenswrapper[4706]: I1125 11:57:25.759463 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/56ae92e0-a5ff-4b66-b471-6e38781e51da-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"56ae92e0-a5ff-4b66-b471-6e38781e51da\") " pod="openstack/glance-default-internal-api-0" Nov 25 11:57:25 crc kubenswrapper[4706]: I1125 11:57:25.759497 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/56ae92e0-a5ff-4b66-b471-6e38781e51da-scripts\") pod \"glance-default-internal-api-0\" (UID: \"56ae92e0-a5ff-4b66-b471-6e38781e51da\") " pod="openstack/glance-default-internal-api-0" Nov 25 11:57:25 crc kubenswrapper[4706]: I1125 11:57:25.759544 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h2l9s\" (UniqueName: 
\"kubernetes.io/projected/56ae92e0-a5ff-4b66-b471-6e38781e51da-kube-api-access-h2l9s\") pod \"glance-default-internal-api-0\" (UID: \"56ae92e0-a5ff-4b66-b471-6e38781e51da\") " pod="openstack/glance-default-internal-api-0" Nov 25 11:57:25 crc kubenswrapper[4706]: I1125 11:57:25.759576 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/56ae92e0-a5ff-4b66-b471-6e38781e51da-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"56ae92e0-a5ff-4b66-b471-6e38781e51da\") " pod="openstack/glance-default-internal-api-0" Nov 25 11:57:25 crc kubenswrapper[4706]: I1125 11:57:25.759645 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/56ae92e0-a5ff-4b66-b471-6e38781e51da-logs\") pod \"glance-default-internal-api-0\" (UID: \"56ae92e0-a5ff-4b66-b471-6e38781e51da\") " pod="openstack/glance-default-internal-api-0" Nov 25 11:57:25 crc kubenswrapper[4706]: I1125 11:57:25.759709 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"glance-default-internal-api-0\" (UID: \"56ae92e0-a5ff-4b66-b471-6e38781e51da\") " pod="openstack/glance-default-internal-api-0" Nov 25 11:57:25 crc kubenswrapper[4706]: I1125 11:57:25.759906 4706 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"glance-default-internal-api-0\" (UID: \"56ae92e0-a5ff-4b66-b471-6e38781e51da\") device mount path \"/mnt/openstack/pv02\"" pod="openstack/glance-default-internal-api-0" Nov 25 11:57:25 crc kubenswrapper[4706]: I1125 11:57:25.760513 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: 
\"kubernetes.io/empty-dir/56ae92e0-a5ff-4b66-b471-6e38781e51da-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"56ae92e0-a5ff-4b66-b471-6e38781e51da\") " pod="openstack/glance-default-internal-api-0" Nov 25 11:57:25 crc kubenswrapper[4706]: I1125 11:57:25.760569 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/56ae92e0-a5ff-4b66-b471-6e38781e51da-logs\") pod \"glance-default-internal-api-0\" (UID: \"56ae92e0-a5ff-4b66-b471-6e38781e51da\") " pod="openstack/glance-default-internal-api-0" Nov 25 11:57:25 crc kubenswrapper[4706]: I1125 11:57:25.769072 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/56ae92e0-a5ff-4b66-b471-6e38781e51da-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"56ae92e0-a5ff-4b66-b471-6e38781e51da\") " pod="openstack/glance-default-internal-api-0" Nov 25 11:57:25 crc kubenswrapper[4706]: I1125 11:57:25.769209 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/56ae92e0-a5ff-4b66-b471-6e38781e51da-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"56ae92e0-a5ff-4b66-b471-6e38781e51da\") " pod="openstack/glance-default-internal-api-0" Nov 25 11:57:25 crc kubenswrapper[4706]: I1125 11:57:25.769812 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/56ae92e0-a5ff-4b66-b471-6e38781e51da-scripts\") pod \"glance-default-internal-api-0\" (UID: \"56ae92e0-a5ff-4b66-b471-6e38781e51da\") " pod="openstack/glance-default-internal-api-0" Nov 25 11:57:25 crc kubenswrapper[4706]: I1125 11:57:25.784768 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h2l9s\" (UniqueName: \"kubernetes.io/projected/56ae92e0-a5ff-4b66-b471-6e38781e51da-kube-api-access-h2l9s\") pod 
\"glance-default-internal-api-0\" (UID: \"56ae92e0-a5ff-4b66-b471-6e38781e51da\") " pod="openstack/glance-default-internal-api-0" Nov 25 11:57:25 crc kubenswrapper[4706]: I1125 11:57:25.788193 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/56ae92e0-a5ff-4b66-b471-6e38781e51da-config-data\") pod \"glance-default-internal-api-0\" (UID: \"56ae92e0-a5ff-4b66-b471-6e38781e51da\") " pod="openstack/glance-default-internal-api-0" Nov 25 11:57:25 crc kubenswrapper[4706]: I1125 11:57:25.803535 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"glance-default-internal-api-0\" (UID: \"56ae92e0-a5ff-4b66-b471-6e38781e51da\") " pod="openstack/glance-default-internal-api-0" Nov 25 11:57:25 crc kubenswrapper[4706]: I1125 11:57:25.880592 4706 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Nov 25 11:57:25 crc kubenswrapper[4706]: I1125 11:57:25.965509 4706 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="601bd00e-ad4b-4952-aa81-5dd731ac2ca9" path="/var/lib/kubelet/pods/601bd00e-ad4b-4952-aa81-5dd731ac2ca9/volumes" Nov 25 11:57:25 crc kubenswrapper[4706]: I1125 11:57:25.966633 4706 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6fb9e8f3-e03d-40bd-ba5c-8ce7715af21f" path="/var/lib/kubelet/pods/6fb9e8f3-e03d-40bd-ba5c-8ce7715af21f/volumes" Nov 25 11:57:25 crc kubenswrapper[4706]: I1125 11:57:25.968266 4706 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9392449e-c392-4d77-b36a-67b6d8c716c7" path="/var/lib/kubelet/pods/9392449e-c392-4d77-b36a-67b6d8c716c7/volumes" Nov 25 11:57:25 crc kubenswrapper[4706]: I1125 11:57:25.973503 4706 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 25 11:57:26 crc 
kubenswrapper[4706]: I1125 11:57:26.095891 4706 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-7393-account-create-9cnk4" Nov 25 11:57:26 crc kubenswrapper[4706]: I1125 11:57:26.171584 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fnm2c\" (UniqueName: \"kubernetes.io/projected/030673ef-ec79-4f19-8f0e-765d6918cfc4-kube-api-access-fnm2c\") pod \"030673ef-ec79-4f19-8f0e-765d6918cfc4\" (UID: \"030673ef-ec79-4f19-8f0e-765d6918cfc4\") " Nov 25 11:57:26 crc kubenswrapper[4706]: I1125 11:57:26.171941 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/030673ef-ec79-4f19-8f0e-765d6918cfc4-operator-scripts\") pod \"030673ef-ec79-4f19-8f0e-765d6918cfc4\" (UID: \"030673ef-ec79-4f19-8f0e-765d6918cfc4\") " Nov 25 11:57:26 crc kubenswrapper[4706]: I1125 11:57:26.172866 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/030673ef-ec79-4f19-8f0e-765d6918cfc4-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "030673ef-ec79-4f19-8f0e-765d6918cfc4" (UID: "030673ef-ec79-4f19-8f0e-765d6918cfc4"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 11:57:26 crc kubenswrapper[4706]: I1125 11:57:26.185351 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/030673ef-ec79-4f19-8f0e-765d6918cfc4-kube-api-access-fnm2c" (OuterVolumeSpecName: "kube-api-access-fnm2c") pod "030673ef-ec79-4f19-8f0e-765d6918cfc4" (UID: "030673ef-ec79-4f19-8f0e-765d6918cfc4"). InnerVolumeSpecName "kube-api-access-fnm2c". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 11:57:26 crc kubenswrapper[4706]: I1125 11:57:26.192481 4706 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-c6da-account-create-p9tnk" Nov 25 11:57:26 crc kubenswrapper[4706]: I1125 11:57:26.212115 4706 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-p4np9" Nov 25 11:57:26 crc kubenswrapper[4706]: I1125 11:57:26.257961 4706 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-j8qcn" Nov 25 11:57:26 crc kubenswrapper[4706]: I1125 11:57:26.273409 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lqvgw\" (UniqueName: \"kubernetes.io/projected/64c51a3f-220f-4d41-a8ae-996c5d65da6a-kube-api-access-lqvgw\") pod \"64c51a3f-220f-4d41-a8ae-996c5d65da6a\" (UID: \"64c51a3f-220f-4d41-a8ae-996c5d65da6a\") " Nov 25 11:57:26 crc kubenswrapper[4706]: I1125 11:57:26.273501 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/64c51a3f-220f-4d41-a8ae-996c5d65da6a-operator-scripts\") pod \"64c51a3f-220f-4d41-a8ae-996c5d65da6a\" (UID: \"64c51a3f-220f-4d41-a8ae-996c5d65da6a\") " Nov 25 11:57:26 crc kubenswrapper[4706]: I1125 11:57:26.273949 4706 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fnm2c\" (UniqueName: \"kubernetes.io/projected/030673ef-ec79-4f19-8f0e-765d6918cfc4-kube-api-access-fnm2c\") on node \"crc\" DevicePath \"\"" Nov 25 11:57:26 crc kubenswrapper[4706]: I1125 11:57:26.273962 4706 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/030673ef-ec79-4f19-8f0e-765d6918cfc4-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 25 11:57:26 crc kubenswrapper[4706]: I1125 11:57:26.274546 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/64c51a3f-220f-4d41-a8ae-996c5d65da6a-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod 
"64c51a3f-220f-4d41-a8ae-996c5d65da6a" (UID: "64c51a3f-220f-4d41-a8ae-996c5d65da6a"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 11:57:26 crc kubenswrapper[4706]: I1125 11:57:26.303531 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/64c51a3f-220f-4d41-a8ae-996c5d65da6a-kube-api-access-lqvgw" (OuterVolumeSpecName: "kube-api-access-lqvgw") pod "64c51a3f-220f-4d41-a8ae-996c5d65da6a" (UID: "64c51a3f-220f-4d41-a8ae-996c5d65da6a"). InnerVolumeSpecName "kube-api-access-lqvgw". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 11:57:26 crc kubenswrapper[4706]: I1125 11:57:26.375212 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c65sg\" (UniqueName: \"kubernetes.io/projected/5cf51224-9407-44c8-805f-fcf18fa531a3-kube-api-access-c65sg\") pod \"5cf51224-9407-44c8-805f-fcf18fa531a3\" (UID: \"5cf51224-9407-44c8-805f-fcf18fa531a3\") " Nov 25 11:57:26 crc kubenswrapper[4706]: I1125 11:57:26.375278 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qsxk9\" (UniqueName: \"kubernetes.io/projected/2b85308a-ef27-494f-9bd3-b06c25118779-kube-api-access-qsxk9\") pod \"2b85308a-ef27-494f-9bd3-b06c25118779\" (UID: \"2b85308a-ef27-494f-9bd3-b06c25118779\") " Nov 25 11:57:26 crc kubenswrapper[4706]: I1125 11:57:26.375354 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5cf51224-9407-44c8-805f-fcf18fa531a3-operator-scripts\") pod \"5cf51224-9407-44c8-805f-fcf18fa531a3\" (UID: \"5cf51224-9407-44c8-805f-fcf18fa531a3\") " Nov 25 11:57:26 crc kubenswrapper[4706]: I1125 11:57:26.375819 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/2b85308a-ef27-494f-9bd3-b06c25118779-operator-scripts\") pod \"2b85308a-ef27-494f-9bd3-b06c25118779\" (UID: \"2b85308a-ef27-494f-9bd3-b06c25118779\") " Nov 25 11:57:26 crc kubenswrapper[4706]: I1125 11:57:26.376696 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2b85308a-ef27-494f-9bd3-b06c25118779-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "2b85308a-ef27-494f-9bd3-b06c25118779" (UID: "2b85308a-ef27-494f-9bd3-b06c25118779"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 11:57:26 crc kubenswrapper[4706]: I1125 11:57:26.376696 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5cf51224-9407-44c8-805f-fcf18fa531a3-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "5cf51224-9407-44c8-805f-fcf18fa531a3" (UID: "5cf51224-9407-44c8-805f-fcf18fa531a3"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 11:57:26 crc kubenswrapper[4706]: I1125 11:57:26.377212 4706 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5cf51224-9407-44c8-805f-fcf18fa531a3-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 25 11:57:26 crc kubenswrapper[4706]: I1125 11:57:26.377225 4706 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lqvgw\" (UniqueName: \"kubernetes.io/projected/64c51a3f-220f-4d41-a8ae-996c5d65da6a-kube-api-access-lqvgw\") on node \"crc\" DevicePath \"\"" Nov 25 11:57:26 crc kubenswrapper[4706]: I1125 11:57:26.377235 4706 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/64c51a3f-220f-4d41-a8ae-996c5d65da6a-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 25 11:57:26 crc kubenswrapper[4706]: I1125 11:57:26.377243 4706 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2b85308a-ef27-494f-9bd3-b06c25118779-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 25 11:57:26 crc kubenswrapper[4706]: I1125 11:57:26.379839 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5cf51224-9407-44c8-805f-fcf18fa531a3-kube-api-access-c65sg" (OuterVolumeSpecName: "kube-api-access-c65sg") pod "5cf51224-9407-44c8-805f-fcf18fa531a3" (UID: "5cf51224-9407-44c8-805f-fcf18fa531a3"). InnerVolumeSpecName "kube-api-access-c65sg". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 11:57:26 crc kubenswrapper[4706]: I1125 11:57:26.391689 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2b85308a-ef27-494f-9bd3-b06c25118779-kube-api-access-qsxk9" (OuterVolumeSpecName: "kube-api-access-qsxk9") pod "2b85308a-ef27-494f-9bd3-b06c25118779" (UID: "2b85308a-ef27-494f-9bd3-b06c25118779"). 
InnerVolumeSpecName "kube-api-access-qsxk9". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 11:57:26 crc kubenswrapper[4706]: I1125 11:57:26.479464 4706 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c65sg\" (UniqueName: \"kubernetes.io/projected/5cf51224-9407-44c8-805f-fcf18fa531a3-kube-api-access-c65sg\") on node \"crc\" DevicePath \"\"" Nov 25 11:57:26 crc kubenswrapper[4706]: I1125 11:57:26.479505 4706 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qsxk9\" (UniqueName: \"kubernetes.io/projected/2b85308a-ef27-494f-9bd3-b06c25118779-kube-api-access-qsxk9\") on node \"crc\" DevicePath \"\"" Nov 25 11:57:26 crc kubenswrapper[4706]: I1125 11:57:26.509672 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"d0c5bfae-397f-432d-bdb6-8bb27d43f68c","Type":"ContainerStarted","Data":"58f0e2eadb2fca2b56261abd0a9b6bb73c1e2c01da772e3afc0f3d4e981a974d"} Nov 25 11:57:26 crc kubenswrapper[4706]: I1125 11:57:26.520102 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-j8qcn" event={"ID":"5cf51224-9407-44c8-805f-fcf18fa531a3","Type":"ContainerDied","Data":"51537ed36294d7a025299f7d48f4cc5fd6d4e8a727966005361d81a6bfae99f9"} Nov 25 11:57:26 crc kubenswrapper[4706]: I1125 11:57:26.520147 4706 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="51537ed36294d7a025299f7d48f4cc5fd6d4e8a727966005361d81a6bfae99f9" Nov 25 11:57:26 crc kubenswrapper[4706]: I1125 11:57:26.520202 4706 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-db-create-j8qcn" Nov 25 11:57:26 crc kubenswrapper[4706]: I1125 11:57:26.531706 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8d9dc228-d004-4180-9f22-bebb77ae0fe1","Type":"ContainerStarted","Data":"e61f9e4f89863b20d370bef4b9ab2909b4a65403485828b0c2e104998df4a394"} Nov 25 11:57:26 crc kubenswrapper[4706]: I1125 11:57:26.534107 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-c6da-account-create-p9tnk" event={"ID":"64c51a3f-220f-4d41-a8ae-996c5d65da6a","Type":"ContainerDied","Data":"779ea79ee840b49931c0f5a604966317079a9c46ed33881f3a3d930613ba07e4"} Nov 25 11:57:26 crc kubenswrapper[4706]: I1125 11:57:26.534186 4706 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="779ea79ee840b49931c0f5a604966317079a9c46ed33881f3a3d930613ba07e4" Nov 25 11:57:26 crc kubenswrapper[4706]: I1125 11:57:26.534290 4706 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-c6da-account-create-p9tnk" Nov 25 11:57:26 crc kubenswrapper[4706]: I1125 11:57:26.542892 4706 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-p4np9" Nov 25 11:57:26 crc kubenswrapper[4706]: I1125 11:57:26.543138 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-p4np9" event={"ID":"2b85308a-ef27-494f-9bd3-b06c25118779","Type":"ContainerDied","Data":"2ba79e75ec3714c05c831a3abd4b4ae371b4a47879fbf4a11e3ebb58503eb047"} Nov 25 11:57:26 crc kubenswrapper[4706]: I1125 11:57:26.543230 4706 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2ba79e75ec3714c05c831a3abd4b4ae371b4a47879fbf4a11e3ebb58503eb047" Nov 25 11:57:26 crc kubenswrapper[4706]: I1125 11:57:26.564833 4706 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-7393-account-create-9cnk4" Nov 25 11:57:26 crc kubenswrapper[4706]: I1125 11:57:26.565404 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-7393-account-create-9cnk4" event={"ID":"030673ef-ec79-4f19-8f0e-765d6918cfc4","Type":"ContainerDied","Data":"aa95a4182f5f76f3031ea00acccdb10f5b8af61a3854cfce9ac1c5c3c9b25c87"} Nov 25 11:57:26 crc kubenswrapper[4706]: I1125 11:57:26.565455 4706 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="aa95a4182f5f76f3031ea00acccdb10f5b8af61a3854cfce9ac1c5c3c9b25c87" Nov 25 11:57:26 crc kubenswrapper[4706]: I1125 11:57:26.699850 4706 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 25 11:57:27 crc kubenswrapper[4706]: I1125 11:57:27.035001 4706 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-c017-account-create-lsfhl" Nov 25 11:57:27 crc kubenswrapper[4706]: I1125 11:57:27.108404 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/acb4725a-1a34-4a3a-b578-7bcf44ff0bef-operator-scripts\") pod \"acb4725a-1a34-4a3a-b578-7bcf44ff0bef\" (UID: \"acb4725a-1a34-4a3a-b578-7bcf44ff0bef\") " Nov 25 11:57:27 crc kubenswrapper[4706]: I1125 11:57:27.108525 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rxhft\" (UniqueName: \"kubernetes.io/projected/acb4725a-1a34-4a3a-b578-7bcf44ff0bef-kube-api-access-rxhft\") pod \"acb4725a-1a34-4a3a-b578-7bcf44ff0bef\" (UID: \"acb4725a-1a34-4a3a-b578-7bcf44ff0bef\") " Nov 25 11:57:27 crc kubenswrapper[4706]: I1125 11:57:27.109955 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/acb4725a-1a34-4a3a-b578-7bcf44ff0bef-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod 
"acb4725a-1a34-4a3a-b578-7bcf44ff0bef" (UID: "acb4725a-1a34-4a3a-b578-7bcf44ff0bef"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 11:57:27 crc kubenswrapper[4706]: I1125 11:57:27.114162 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/acb4725a-1a34-4a3a-b578-7bcf44ff0bef-kube-api-access-rxhft" (OuterVolumeSpecName: "kube-api-access-rxhft") pod "acb4725a-1a34-4a3a-b578-7bcf44ff0bef" (UID: "acb4725a-1a34-4a3a-b578-7bcf44ff0bef"). InnerVolumeSpecName "kube-api-access-rxhft". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 11:57:27 crc kubenswrapper[4706]: I1125 11:57:27.210506 4706 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rxhft\" (UniqueName: \"kubernetes.io/projected/acb4725a-1a34-4a3a-b578-7bcf44ff0bef-kube-api-access-rxhft\") on node \"crc\" DevicePath \"\"" Nov 25 11:57:27 crc kubenswrapper[4706]: I1125 11:57:27.210540 4706 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/acb4725a-1a34-4a3a-b578-7bcf44ff0bef-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 25 11:57:27 crc kubenswrapper[4706]: I1125 11:57:27.636924 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"d0c5bfae-397f-432d-bdb6-8bb27d43f68c","Type":"ContainerStarted","Data":"107a6addd618113491736ce8509e9f39334f1140eff334073396d31ed2a44678"} Nov 25 11:57:27 crc kubenswrapper[4706]: I1125 11:57:27.642619 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8d9dc228-d004-4180-9f22-bebb77ae0fe1","Type":"ContainerStarted","Data":"e35310652f4053b6fe77dedabf180ea9448383c0e7d03bbed95cfb6ed019be40"} Nov 25 11:57:27 crc kubenswrapper[4706]: I1125 11:57:27.645533 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" 
event={"ID":"56ae92e0-a5ff-4b66-b471-6e38781e51da","Type":"ContainerStarted","Data":"8ee73a99843428c73b7c2d6560ece548bb9f41663b68454aee5de9a92dab9180"} Nov 25 11:57:27 crc kubenswrapper[4706]: I1125 11:57:27.647916 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-c017-account-create-lsfhl" event={"ID":"acb4725a-1a34-4a3a-b578-7bcf44ff0bef","Type":"ContainerDied","Data":"48f438a0257671d1ed1ae3ae4dea94b5bfd329e723fdfe5bdb4a38814784e782"} Nov 25 11:57:27 crc kubenswrapper[4706]: I1125 11:57:27.647948 4706 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="48f438a0257671d1ed1ae3ae4dea94b5bfd329e723fdfe5bdb4a38814784e782" Nov 25 11:57:27 crc kubenswrapper[4706]: I1125 11:57:27.648005 4706 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-c017-account-create-lsfhl" Nov 25 11:57:28 crc kubenswrapper[4706]: I1125 11:57:28.657562 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8d9dc228-d004-4180-9f22-bebb77ae0fe1","Type":"ContainerStarted","Data":"ef9769588038c1c936d63816d99652f94f1c783c65da928f66429e8d6299b492"} Nov 25 11:57:28 crc kubenswrapper[4706]: I1125 11:57:28.658039 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8d9dc228-d004-4180-9f22-bebb77ae0fe1","Type":"ContainerStarted","Data":"0a56068717bcf1cde4e34a2e1ea54dc27da266858a5aaa304ef5590be68c322e"} Nov 25 11:57:28 crc kubenswrapper[4706]: I1125 11:57:28.661065 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"56ae92e0-a5ff-4b66-b471-6e38781e51da","Type":"ContainerStarted","Data":"f9f8cb07f3f526b585f61fe308f0e9ad82b896a234806cd243f2c465924126ac"} Nov 25 11:57:28 crc kubenswrapper[4706]: I1125 11:57:28.661110 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" 
event={"ID":"56ae92e0-a5ff-4b66-b471-6e38781e51da","Type":"ContainerStarted","Data":"ef6a4b3dac87aa4bc4107dd1315589c4b1d59f31e1c83794996e6a66b94be750"} Nov 25 11:57:28 crc kubenswrapper[4706]: I1125 11:57:28.663971 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"d0c5bfae-397f-432d-bdb6-8bb27d43f68c","Type":"ContainerStarted","Data":"6a072bb32197494aac5ebc96196a0dbfa3152d6fb8fdcce29137ccb101bba4f1"} Nov 25 11:57:28 crc kubenswrapper[4706]: I1125 11:57:28.682486 4706 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=3.682466816 podStartE2EDuration="3.682466816s" podCreationTimestamp="2025-11-25 11:57:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 11:57:28.677244254 +0000 UTC m=+1257.591801645" watchObservedRunningTime="2025-11-25 11:57:28.682466816 +0000 UTC m=+1257.597024197" Nov 25 11:57:28 crc kubenswrapper[4706]: I1125 11:57:28.704246 4706 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=4.704225465 podStartE2EDuration="4.704225465s" podCreationTimestamp="2025-11-25 11:57:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 11:57:28.701332312 +0000 UTC m=+1257.615889693" watchObservedRunningTime="2025-11-25 11:57:28.704225465 +0000 UTC m=+1257.618782846" Nov 25 11:57:30 crc kubenswrapper[4706]: I1125 11:57:30.732550 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8d9dc228-d004-4180-9f22-bebb77ae0fe1","Type":"ContainerStarted","Data":"e48933db382906082f980537a9a6f0f49ffdb4ada5626134724344f294d24d0d"} Nov 25 11:57:30 crc kubenswrapper[4706]: I1125 11:57:30.733420 4706 kubelet.go:2542] 
"SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Nov 25 11:57:30 crc kubenswrapper[4706]: I1125 11:57:30.772273 4706 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.444941296 podStartE2EDuration="6.7722459s" podCreationTimestamp="2025-11-25 11:57:24 +0000 UTC" firstStartedPulling="2025-11-25 11:57:25.611586928 +0000 UTC m=+1254.526144309" lastFinishedPulling="2025-11-25 11:57:29.938891512 +0000 UTC m=+1258.853448913" observedRunningTime="2025-11-25 11:57:30.764219458 +0000 UTC m=+1259.678776849" watchObservedRunningTime="2025-11-25 11:57:30.7722459 +0000 UTC m=+1259.686803281" Nov 25 11:57:30 crc kubenswrapper[4706]: I1125 11:57:30.989997 4706 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 25 11:57:32 crc kubenswrapper[4706]: I1125 11:57:32.114602 4706 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-db-sync-zbtll"] Nov 25 11:57:32 crc kubenswrapper[4706]: E1125 11:57:32.115044 4706 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="acb4725a-1a34-4a3a-b578-7bcf44ff0bef" containerName="mariadb-account-create" Nov 25 11:57:32 crc kubenswrapper[4706]: I1125 11:57:32.115059 4706 state_mem.go:107] "Deleted CPUSet assignment" podUID="acb4725a-1a34-4a3a-b578-7bcf44ff0bef" containerName="mariadb-account-create" Nov 25 11:57:32 crc kubenswrapper[4706]: E1125 11:57:32.115091 4706 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="64c51a3f-220f-4d41-a8ae-996c5d65da6a" containerName="mariadb-account-create" Nov 25 11:57:32 crc kubenswrapper[4706]: I1125 11:57:32.115100 4706 state_mem.go:107] "Deleted CPUSet assignment" podUID="64c51a3f-220f-4d41-a8ae-996c5d65da6a" containerName="mariadb-account-create" Nov 25 11:57:32 crc kubenswrapper[4706]: E1125 11:57:32.115109 4706 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5cf51224-9407-44c8-805f-fcf18fa531a3" 
containerName="mariadb-database-create" Nov 25 11:57:32 crc kubenswrapper[4706]: I1125 11:57:32.115117 4706 state_mem.go:107] "Deleted CPUSet assignment" podUID="5cf51224-9407-44c8-805f-fcf18fa531a3" containerName="mariadb-database-create" Nov 25 11:57:32 crc kubenswrapper[4706]: E1125 11:57:32.115128 4706 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="030673ef-ec79-4f19-8f0e-765d6918cfc4" containerName="mariadb-account-create" Nov 25 11:57:32 crc kubenswrapper[4706]: I1125 11:57:32.115135 4706 state_mem.go:107] "Deleted CPUSet assignment" podUID="030673ef-ec79-4f19-8f0e-765d6918cfc4" containerName="mariadb-account-create" Nov 25 11:57:32 crc kubenswrapper[4706]: E1125 11:57:32.115153 4706 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2b85308a-ef27-494f-9bd3-b06c25118779" containerName="mariadb-database-create" Nov 25 11:57:32 crc kubenswrapper[4706]: I1125 11:57:32.115160 4706 state_mem.go:107] "Deleted CPUSet assignment" podUID="2b85308a-ef27-494f-9bd3-b06c25118779" containerName="mariadb-database-create" Nov 25 11:57:32 crc kubenswrapper[4706]: I1125 11:57:32.115436 4706 memory_manager.go:354] "RemoveStaleState removing state" podUID="5cf51224-9407-44c8-805f-fcf18fa531a3" containerName="mariadb-database-create" Nov 25 11:57:32 crc kubenswrapper[4706]: I1125 11:57:32.115456 4706 memory_manager.go:354] "RemoveStaleState removing state" podUID="acb4725a-1a34-4a3a-b578-7bcf44ff0bef" containerName="mariadb-account-create" Nov 25 11:57:32 crc kubenswrapper[4706]: I1125 11:57:32.115471 4706 memory_manager.go:354] "RemoveStaleState removing state" podUID="64c51a3f-220f-4d41-a8ae-996c5d65da6a" containerName="mariadb-account-create" Nov 25 11:57:32 crc kubenswrapper[4706]: I1125 11:57:32.115497 4706 memory_manager.go:354] "RemoveStaleState removing state" podUID="030673ef-ec79-4f19-8f0e-765d6918cfc4" containerName="mariadb-account-create" Nov 25 11:57:32 crc kubenswrapper[4706]: I1125 11:57:32.115508 4706 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="2b85308a-ef27-494f-9bd3-b06c25118779" containerName="mariadb-database-create" Nov 25 11:57:32 crc kubenswrapper[4706]: I1125 11:57:32.116225 4706 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-zbtll" Nov 25 11:57:32 crc kubenswrapper[4706]: I1125 11:57:32.119097 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Nov 25 11:57:32 crc kubenswrapper[4706]: I1125 11:57:32.122148 4706 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-zbtll"] Nov 25 11:57:32 crc kubenswrapper[4706]: I1125 11:57:32.123025 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-scripts" Nov 25 11:57:32 crc kubenswrapper[4706]: I1125 11:57:32.123273 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-dxkkg" Nov 25 11:57:32 crc kubenswrapper[4706]: I1125 11:57:32.218284 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/560816f0-4040-43a0-8a73-84500a0aad9c-scripts\") pod \"nova-cell0-conductor-db-sync-zbtll\" (UID: \"560816f0-4040-43a0-8a73-84500a0aad9c\") " pod="openstack/nova-cell0-conductor-db-sync-zbtll" Nov 25 11:57:32 crc kubenswrapper[4706]: I1125 11:57:32.218494 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w75bv\" (UniqueName: \"kubernetes.io/projected/560816f0-4040-43a0-8a73-84500a0aad9c-kube-api-access-w75bv\") pod \"nova-cell0-conductor-db-sync-zbtll\" (UID: \"560816f0-4040-43a0-8a73-84500a0aad9c\") " pod="openstack/nova-cell0-conductor-db-sync-zbtll" Nov 25 11:57:32 crc kubenswrapper[4706]: I1125 11:57:32.218541 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/560816f0-4040-43a0-8a73-84500a0aad9c-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-zbtll\" (UID: \"560816f0-4040-43a0-8a73-84500a0aad9c\") " pod="openstack/nova-cell0-conductor-db-sync-zbtll" Nov 25 11:57:32 crc kubenswrapper[4706]: I1125 11:57:32.218607 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/560816f0-4040-43a0-8a73-84500a0aad9c-config-data\") pod \"nova-cell0-conductor-db-sync-zbtll\" (UID: \"560816f0-4040-43a0-8a73-84500a0aad9c\") " pod="openstack/nova-cell0-conductor-db-sync-zbtll" Nov 25 11:57:32 crc kubenswrapper[4706]: I1125 11:57:32.320707 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w75bv\" (UniqueName: \"kubernetes.io/projected/560816f0-4040-43a0-8a73-84500a0aad9c-kube-api-access-w75bv\") pod \"nova-cell0-conductor-db-sync-zbtll\" (UID: \"560816f0-4040-43a0-8a73-84500a0aad9c\") " pod="openstack/nova-cell0-conductor-db-sync-zbtll" Nov 25 11:57:32 crc kubenswrapper[4706]: I1125 11:57:32.320799 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/560816f0-4040-43a0-8a73-84500a0aad9c-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-zbtll\" (UID: \"560816f0-4040-43a0-8a73-84500a0aad9c\") " pod="openstack/nova-cell0-conductor-db-sync-zbtll" Nov 25 11:57:32 crc kubenswrapper[4706]: I1125 11:57:32.320844 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/560816f0-4040-43a0-8a73-84500a0aad9c-config-data\") pod \"nova-cell0-conductor-db-sync-zbtll\" (UID: \"560816f0-4040-43a0-8a73-84500a0aad9c\") " pod="openstack/nova-cell0-conductor-db-sync-zbtll" Nov 25 11:57:32 crc kubenswrapper[4706]: I1125 11:57:32.320937 4706 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/560816f0-4040-43a0-8a73-84500a0aad9c-scripts\") pod \"nova-cell0-conductor-db-sync-zbtll\" (UID: \"560816f0-4040-43a0-8a73-84500a0aad9c\") " pod="openstack/nova-cell0-conductor-db-sync-zbtll" Nov 25 11:57:32 crc kubenswrapper[4706]: I1125 11:57:32.328092 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/560816f0-4040-43a0-8a73-84500a0aad9c-scripts\") pod \"nova-cell0-conductor-db-sync-zbtll\" (UID: \"560816f0-4040-43a0-8a73-84500a0aad9c\") " pod="openstack/nova-cell0-conductor-db-sync-zbtll" Nov 25 11:57:32 crc kubenswrapper[4706]: I1125 11:57:32.328190 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/560816f0-4040-43a0-8a73-84500a0aad9c-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-zbtll\" (UID: \"560816f0-4040-43a0-8a73-84500a0aad9c\") " pod="openstack/nova-cell0-conductor-db-sync-zbtll" Nov 25 11:57:32 crc kubenswrapper[4706]: I1125 11:57:32.328217 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/560816f0-4040-43a0-8a73-84500a0aad9c-config-data\") pod \"nova-cell0-conductor-db-sync-zbtll\" (UID: \"560816f0-4040-43a0-8a73-84500a0aad9c\") " pod="openstack/nova-cell0-conductor-db-sync-zbtll" Nov 25 11:57:32 crc kubenswrapper[4706]: I1125 11:57:32.342438 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w75bv\" (UniqueName: \"kubernetes.io/projected/560816f0-4040-43a0-8a73-84500a0aad9c-kube-api-access-w75bv\") pod \"nova-cell0-conductor-db-sync-zbtll\" (UID: \"560816f0-4040-43a0-8a73-84500a0aad9c\") " pod="openstack/nova-cell0-conductor-db-sync-zbtll" Nov 25 11:57:32 crc kubenswrapper[4706]: I1125 11:57:32.438409 4706 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-zbtll" Nov 25 11:57:32 crc kubenswrapper[4706]: I1125 11:57:32.755264 4706 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="8d9dc228-d004-4180-9f22-bebb77ae0fe1" containerName="ceilometer-central-agent" containerID="cri-o://e35310652f4053b6fe77dedabf180ea9448383c0e7d03bbed95cfb6ed019be40" gracePeriod=30 Nov 25 11:57:32 crc kubenswrapper[4706]: I1125 11:57:32.756644 4706 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="8d9dc228-d004-4180-9f22-bebb77ae0fe1" containerName="proxy-httpd" containerID="cri-o://e48933db382906082f980537a9a6f0f49ffdb4ada5626134724344f294d24d0d" gracePeriod=30 Nov 25 11:57:32 crc kubenswrapper[4706]: I1125 11:57:32.756813 4706 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="8d9dc228-d004-4180-9f22-bebb77ae0fe1" containerName="sg-core" containerID="cri-o://ef9769588038c1c936d63816d99652f94f1c783c65da928f66429e8d6299b492" gracePeriod=30 Nov 25 11:57:32 crc kubenswrapper[4706]: I1125 11:57:32.756941 4706 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="8d9dc228-d004-4180-9f22-bebb77ae0fe1" containerName="ceilometer-notification-agent" containerID="cri-o://0a56068717bcf1cde4e34a2e1ea54dc27da266858a5aaa304ef5590be68c322e" gracePeriod=30 Nov 25 11:57:32 crc kubenswrapper[4706]: I1125 11:57:32.989625 4706 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-zbtll"] Nov 25 11:57:32 crc kubenswrapper[4706]: W1125 11:57:32.992661 4706 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod560816f0_4040_43a0_8a73_84500a0aad9c.slice/crio-7037619fd9b8c4df32b299c4499e01a5c68f141eedd99724d6dc413e5c35c1d8 WatchSource:0}: Error finding container 
7037619fd9b8c4df32b299c4499e01a5c68f141eedd99724d6dc413e5c35c1d8: Status 404 returned error can't find the container with id 7037619fd9b8c4df32b299c4499e01a5c68f141eedd99724d6dc413e5c35c1d8 Nov 25 11:57:33 crc kubenswrapper[4706]: I1125 11:57:33.770336 4706 generic.go:334] "Generic (PLEG): container finished" podID="8d9dc228-d004-4180-9f22-bebb77ae0fe1" containerID="e48933db382906082f980537a9a6f0f49ffdb4ada5626134724344f294d24d0d" exitCode=0 Nov 25 11:57:33 crc kubenswrapper[4706]: I1125 11:57:33.770698 4706 generic.go:334] "Generic (PLEG): container finished" podID="8d9dc228-d004-4180-9f22-bebb77ae0fe1" containerID="ef9769588038c1c936d63816d99652f94f1c783c65da928f66429e8d6299b492" exitCode=2 Nov 25 11:57:33 crc kubenswrapper[4706]: I1125 11:57:33.770713 4706 generic.go:334] "Generic (PLEG): container finished" podID="8d9dc228-d004-4180-9f22-bebb77ae0fe1" containerID="0a56068717bcf1cde4e34a2e1ea54dc27da266858a5aaa304ef5590be68c322e" exitCode=0 Nov 25 11:57:33 crc kubenswrapper[4706]: I1125 11:57:33.770436 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8d9dc228-d004-4180-9f22-bebb77ae0fe1","Type":"ContainerDied","Data":"e48933db382906082f980537a9a6f0f49ffdb4ada5626134724344f294d24d0d"} Nov 25 11:57:33 crc kubenswrapper[4706]: I1125 11:57:33.770824 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8d9dc228-d004-4180-9f22-bebb77ae0fe1","Type":"ContainerDied","Data":"ef9769588038c1c936d63816d99652f94f1c783c65da928f66429e8d6299b492"} Nov 25 11:57:33 crc kubenswrapper[4706]: I1125 11:57:33.770841 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8d9dc228-d004-4180-9f22-bebb77ae0fe1","Type":"ContainerDied","Data":"0a56068717bcf1cde4e34a2e1ea54dc27da266858a5aaa304ef5590be68c322e"} Nov 25 11:57:33 crc kubenswrapper[4706]: I1125 11:57:33.774442 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/nova-cell0-conductor-db-sync-zbtll" event={"ID":"560816f0-4040-43a0-8a73-84500a0aad9c","Type":"ContainerStarted","Data":"7037619fd9b8c4df32b299c4499e01a5c68f141eedd99724d6dc413e5c35c1d8"} Nov 25 11:57:35 crc kubenswrapper[4706]: I1125 11:57:35.145217 4706 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Nov 25 11:57:35 crc kubenswrapper[4706]: I1125 11:57:35.145551 4706 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Nov 25 11:57:35 crc kubenswrapper[4706]: I1125 11:57:35.182812 4706 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Nov 25 11:57:35 crc kubenswrapper[4706]: I1125 11:57:35.200143 4706 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Nov 25 11:57:35 crc kubenswrapper[4706]: I1125 11:57:35.804403 4706 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Nov 25 11:57:35 crc kubenswrapper[4706]: I1125 11:57:35.804490 4706 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Nov 25 11:57:35 crc kubenswrapper[4706]: I1125 11:57:35.881380 4706 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Nov 25 11:57:35 crc kubenswrapper[4706]: I1125 11:57:35.881438 4706 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Nov 25 11:57:35 crc kubenswrapper[4706]: I1125 11:57:35.931491 4706 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Nov 25 11:57:35 crc kubenswrapper[4706]: I1125 11:57:35.940451 4706 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openstack/glance-default-internal-api-0" Nov 25 11:57:36 crc kubenswrapper[4706]: I1125 11:57:36.815141 4706 generic.go:334] "Generic (PLEG): container finished" podID="8d9dc228-d004-4180-9f22-bebb77ae0fe1" containerID="e35310652f4053b6fe77dedabf180ea9448383c0e7d03bbed95cfb6ed019be40" exitCode=0 Nov 25 11:57:36 crc kubenswrapper[4706]: I1125 11:57:36.815340 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8d9dc228-d004-4180-9f22-bebb77ae0fe1","Type":"ContainerDied","Data":"e35310652f4053b6fe77dedabf180ea9448383c0e7d03bbed95cfb6ed019be40"} Nov 25 11:57:36 crc kubenswrapper[4706]: I1125 11:57:36.815982 4706 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Nov 25 11:57:36 crc kubenswrapper[4706]: I1125 11:57:36.816374 4706 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Nov 25 11:57:37 crc kubenswrapper[4706]: I1125 11:57:37.948806 4706 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Nov 25 11:57:37 crc kubenswrapper[4706]: I1125 11:57:37.949206 4706 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 25 11:57:37 crc kubenswrapper[4706]: I1125 11:57:37.952455 4706 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Nov 25 11:57:38 crc kubenswrapper[4706]: I1125 11:57:38.831604 4706 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 25 11:57:38 crc kubenswrapper[4706]: I1125 11:57:38.831886 4706 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 25 11:57:38 crc kubenswrapper[4706]: I1125 11:57:38.968535 4706 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Nov 25 11:57:38 crc kubenswrapper[4706]: I1125 11:57:38.970567 4706 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Nov 25 11:57:41 crc kubenswrapper[4706]: I1125 11:57:41.486650 4706 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 25 11:57:41 crc kubenswrapper[4706]: I1125 11:57:41.620629 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8d9dc228-d004-4180-9f22-bebb77ae0fe1-log-httpd\") pod \"8d9dc228-d004-4180-9f22-bebb77ae0fe1\" (UID: \"8d9dc228-d004-4180-9f22-bebb77ae0fe1\") " Nov 25 11:57:41 crc kubenswrapper[4706]: I1125 11:57:41.620700 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gzgxs\" (UniqueName: \"kubernetes.io/projected/8d9dc228-d004-4180-9f22-bebb77ae0fe1-kube-api-access-gzgxs\") pod \"8d9dc228-d004-4180-9f22-bebb77ae0fe1\" (UID: \"8d9dc228-d004-4180-9f22-bebb77ae0fe1\") " Nov 25 11:57:41 crc kubenswrapper[4706]: I1125 11:57:41.620721 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8d9dc228-d004-4180-9f22-bebb77ae0fe1-scripts\") pod \"8d9dc228-d004-4180-9f22-bebb77ae0fe1\" (UID: \"8d9dc228-d004-4180-9f22-bebb77ae0fe1\") " Nov 25 11:57:41 crc kubenswrapper[4706]: I1125 11:57:41.620816 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/8d9dc228-d004-4180-9f22-bebb77ae0fe1-sg-core-conf-yaml\") pod \"8d9dc228-d004-4180-9f22-bebb77ae0fe1\" (UID: \"8d9dc228-d004-4180-9f22-bebb77ae0fe1\") " Nov 25 11:57:41 crc kubenswrapper[4706]: I1125 11:57:41.620833 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8d9dc228-d004-4180-9f22-bebb77ae0fe1-config-data\") pod \"8d9dc228-d004-4180-9f22-bebb77ae0fe1\" (UID: 
\"8d9dc228-d004-4180-9f22-bebb77ae0fe1\") " Nov 25 11:57:41 crc kubenswrapper[4706]: I1125 11:57:41.620865 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8d9dc228-d004-4180-9f22-bebb77ae0fe1-run-httpd\") pod \"8d9dc228-d004-4180-9f22-bebb77ae0fe1\" (UID: \"8d9dc228-d004-4180-9f22-bebb77ae0fe1\") " Nov 25 11:57:41 crc kubenswrapper[4706]: I1125 11:57:41.620933 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8d9dc228-d004-4180-9f22-bebb77ae0fe1-combined-ca-bundle\") pod \"8d9dc228-d004-4180-9f22-bebb77ae0fe1\" (UID: \"8d9dc228-d004-4180-9f22-bebb77ae0fe1\") " Nov 25 11:57:41 crc kubenswrapper[4706]: I1125 11:57:41.621451 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8d9dc228-d004-4180-9f22-bebb77ae0fe1-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "8d9dc228-d004-4180-9f22-bebb77ae0fe1" (UID: "8d9dc228-d004-4180-9f22-bebb77ae0fe1"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 11:57:41 crc kubenswrapper[4706]: I1125 11:57:41.622153 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8d9dc228-d004-4180-9f22-bebb77ae0fe1-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "8d9dc228-d004-4180-9f22-bebb77ae0fe1" (UID: "8d9dc228-d004-4180-9f22-bebb77ae0fe1"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 11:57:41 crc kubenswrapper[4706]: I1125 11:57:41.625715 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8d9dc228-d004-4180-9f22-bebb77ae0fe1-scripts" (OuterVolumeSpecName: "scripts") pod "8d9dc228-d004-4180-9f22-bebb77ae0fe1" (UID: "8d9dc228-d004-4180-9f22-bebb77ae0fe1"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 11:57:41 crc kubenswrapper[4706]: I1125 11:57:41.626487 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8d9dc228-d004-4180-9f22-bebb77ae0fe1-kube-api-access-gzgxs" (OuterVolumeSpecName: "kube-api-access-gzgxs") pod "8d9dc228-d004-4180-9f22-bebb77ae0fe1" (UID: "8d9dc228-d004-4180-9f22-bebb77ae0fe1"). InnerVolumeSpecName "kube-api-access-gzgxs". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 11:57:41 crc kubenswrapper[4706]: I1125 11:57:41.654910 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8d9dc228-d004-4180-9f22-bebb77ae0fe1-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "8d9dc228-d004-4180-9f22-bebb77ae0fe1" (UID: "8d9dc228-d004-4180-9f22-bebb77ae0fe1"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 11:57:41 crc kubenswrapper[4706]: I1125 11:57:41.689996 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8d9dc228-d004-4180-9f22-bebb77ae0fe1-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8d9dc228-d004-4180-9f22-bebb77ae0fe1" (UID: "8d9dc228-d004-4180-9f22-bebb77ae0fe1"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 11:57:41 crc kubenswrapper[4706]: I1125 11:57:41.710250 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8d9dc228-d004-4180-9f22-bebb77ae0fe1-config-data" (OuterVolumeSpecName: "config-data") pod "8d9dc228-d004-4180-9f22-bebb77ae0fe1" (UID: "8d9dc228-d004-4180-9f22-bebb77ae0fe1"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 11:57:41 crc kubenswrapper[4706]: I1125 11:57:41.724177 4706 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8d9dc228-d004-4180-9f22-bebb77ae0fe1-log-httpd\") on node \"crc\" DevicePath \"\"" Nov 25 11:57:41 crc kubenswrapper[4706]: I1125 11:57:41.724213 4706 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gzgxs\" (UniqueName: \"kubernetes.io/projected/8d9dc228-d004-4180-9f22-bebb77ae0fe1-kube-api-access-gzgxs\") on node \"crc\" DevicePath \"\"" Nov 25 11:57:41 crc kubenswrapper[4706]: I1125 11:57:41.724225 4706 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8d9dc228-d004-4180-9f22-bebb77ae0fe1-scripts\") on node \"crc\" DevicePath \"\"" Nov 25 11:57:41 crc kubenswrapper[4706]: I1125 11:57:41.724236 4706 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/8d9dc228-d004-4180-9f22-bebb77ae0fe1-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Nov 25 11:57:41 crc kubenswrapper[4706]: I1125 11:57:41.724246 4706 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8d9dc228-d004-4180-9f22-bebb77ae0fe1-config-data\") on node \"crc\" DevicePath \"\"" Nov 25 11:57:41 crc kubenswrapper[4706]: I1125 11:57:41.724256 4706 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8d9dc228-d004-4180-9f22-bebb77ae0fe1-run-httpd\") on node \"crc\" DevicePath \"\"" Nov 25 11:57:41 crc kubenswrapper[4706]: I1125 11:57:41.724268 4706 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8d9dc228-d004-4180-9f22-bebb77ae0fe1-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 11:57:41 crc kubenswrapper[4706]: I1125 11:57:41.858885 4706 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8d9dc228-d004-4180-9f22-bebb77ae0fe1","Type":"ContainerDied","Data":"e61f9e4f89863b20d370bef4b9ab2909b4a65403485828b0c2e104998df4a394"} Nov 25 11:57:41 crc kubenswrapper[4706]: I1125 11:57:41.858956 4706 scope.go:117] "RemoveContainer" containerID="e48933db382906082f980537a9a6f0f49ffdb4ada5626134724344f294d24d0d" Nov 25 11:57:41 crc kubenswrapper[4706]: I1125 11:57:41.858951 4706 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 25 11:57:41 crc kubenswrapper[4706]: I1125 11:57:41.861109 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-zbtll" event={"ID":"560816f0-4040-43a0-8a73-84500a0aad9c","Type":"ContainerStarted","Data":"3d698239778f79ff43be39ff91d4e11623e9e17b73d56d1ddfdf78cc933d6ca5"} Nov 25 11:57:41 crc kubenswrapper[4706]: I1125 11:57:41.883030 4706 scope.go:117] "RemoveContainer" containerID="ef9769588038c1c936d63816d99652f94f1c783c65da928f66429e8d6299b492" Nov 25 11:57:41 crc kubenswrapper[4706]: I1125 11:57:41.887280 4706 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-db-sync-zbtll" podStartSLOduration=1.6147944600000002 podStartE2EDuration="9.887259743s" podCreationTimestamp="2025-11-25 11:57:32 +0000 UTC" firstStartedPulling="2025-11-25 11:57:32.994594309 +0000 UTC m=+1261.909151690" lastFinishedPulling="2025-11-25 11:57:41.267059592 +0000 UTC m=+1270.181616973" observedRunningTime="2025-11-25 11:57:41.881714414 +0000 UTC m=+1270.796271795" watchObservedRunningTime="2025-11-25 11:57:41.887259743 +0000 UTC m=+1270.801817124" Nov 25 11:57:41 crc kubenswrapper[4706]: I1125 11:57:41.909756 4706 scope.go:117] "RemoveContainer" containerID="0a56068717bcf1cde4e34a2e1ea54dc27da266858a5aaa304ef5590be68c322e" Nov 25 11:57:41 crc kubenswrapper[4706]: I1125 11:57:41.915588 4706 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/ceilometer-0"] Nov 25 11:57:41 crc kubenswrapper[4706]: I1125 11:57:41.931159 4706 scope.go:117] "RemoveContainer" containerID="e35310652f4053b6fe77dedabf180ea9448383c0e7d03bbed95cfb6ed019be40" Nov 25 11:57:41 crc kubenswrapper[4706]: I1125 11:57:41.949076 4706 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Nov 25 11:57:41 crc kubenswrapper[4706]: I1125 11:57:41.949123 4706 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Nov 25 11:57:41 crc kubenswrapper[4706]: E1125 11:57:41.961480 4706 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8d9dc228-d004-4180-9f22-bebb77ae0fe1" containerName="ceilometer-central-agent" Nov 25 11:57:41 crc kubenswrapper[4706]: I1125 11:57:41.961522 4706 state_mem.go:107] "Deleted CPUSet assignment" podUID="8d9dc228-d004-4180-9f22-bebb77ae0fe1" containerName="ceilometer-central-agent" Nov 25 11:57:41 crc kubenswrapper[4706]: E1125 11:57:41.961543 4706 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8d9dc228-d004-4180-9f22-bebb77ae0fe1" containerName="sg-core" Nov 25 11:57:41 crc kubenswrapper[4706]: I1125 11:57:41.961554 4706 state_mem.go:107] "Deleted CPUSet assignment" podUID="8d9dc228-d004-4180-9f22-bebb77ae0fe1" containerName="sg-core" Nov 25 11:57:41 crc kubenswrapper[4706]: E1125 11:57:41.961562 4706 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8d9dc228-d004-4180-9f22-bebb77ae0fe1" containerName="proxy-httpd" Nov 25 11:57:41 crc kubenswrapper[4706]: I1125 11:57:41.961568 4706 state_mem.go:107] "Deleted CPUSet assignment" podUID="8d9dc228-d004-4180-9f22-bebb77ae0fe1" containerName="proxy-httpd" Nov 25 11:57:41 crc kubenswrapper[4706]: E1125 11:57:41.961588 4706 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8d9dc228-d004-4180-9f22-bebb77ae0fe1" containerName="ceilometer-notification-agent" Nov 25 11:57:41 crc kubenswrapper[4706]: I1125 11:57:41.961595 4706 state_mem.go:107] 
"Deleted CPUSet assignment" podUID="8d9dc228-d004-4180-9f22-bebb77ae0fe1" containerName="ceilometer-notification-agent" Nov 25 11:57:41 crc kubenswrapper[4706]: I1125 11:57:41.961896 4706 memory_manager.go:354] "RemoveStaleState removing state" podUID="8d9dc228-d004-4180-9f22-bebb77ae0fe1" containerName="ceilometer-notification-agent" Nov 25 11:57:41 crc kubenswrapper[4706]: I1125 11:57:41.961912 4706 memory_manager.go:354] "RemoveStaleState removing state" podUID="8d9dc228-d004-4180-9f22-bebb77ae0fe1" containerName="ceilometer-central-agent" Nov 25 11:57:41 crc kubenswrapper[4706]: I1125 11:57:41.961923 4706 memory_manager.go:354] "RemoveStaleState removing state" podUID="8d9dc228-d004-4180-9f22-bebb77ae0fe1" containerName="sg-core" Nov 25 11:57:41 crc kubenswrapper[4706]: I1125 11:57:41.961937 4706 memory_manager.go:354] "RemoveStaleState removing state" podUID="8d9dc228-d004-4180-9f22-bebb77ae0fe1" containerName="proxy-httpd" Nov 25 11:57:41 crc kubenswrapper[4706]: I1125 11:57:41.965865 4706 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 25 11:57:41 crc kubenswrapper[4706]: I1125 11:57:41.965959 4706 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 25 11:57:41 crc kubenswrapper[4706]: I1125 11:57:41.968468 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Nov 25 11:57:41 crc kubenswrapper[4706]: I1125 11:57:41.968576 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Nov 25 11:57:42 crc kubenswrapper[4706]: I1125 11:57:42.129989 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3df2b1f2-0fee-454e-a77d-8ae5ce76ed9f-scripts\") pod \"ceilometer-0\" (UID: \"3df2b1f2-0fee-454e-a77d-8ae5ce76ed9f\") " pod="openstack/ceilometer-0" Nov 25 11:57:42 crc kubenswrapper[4706]: I1125 11:57:42.130046 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3df2b1f2-0fee-454e-a77d-8ae5ce76ed9f-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"3df2b1f2-0fee-454e-a77d-8ae5ce76ed9f\") " pod="openstack/ceilometer-0" Nov 25 11:57:42 crc kubenswrapper[4706]: I1125 11:57:42.130079 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3df2b1f2-0fee-454e-a77d-8ae5ce76ed9f-run-httpd\") pod \"ceilometer-0\" (UID: \"3df2b1f2-0fee-454e-a77d-8ae5ce76ed9f\") " pod="openstack/ceilometer-0" Nov 25 11:57:42 crc kubenswrapper[4706]: I1125 11:57:42.130712 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3df2b1f2-0fee-454e-a77d-8ae5ce76ed9f-log-httpd\") pod \"ceilometer-0\" (UID: \"3df2b1f2-0fee-454e-a77d-8ae5ce76ed9f\") " pod="openstack/ceilometer-0" Nov 25 11:57:42 crc kubenswrapper[4706]: I1125 11:57:42.130782 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/3df2b1f2-0fee-454e-a77d-8ae5ce76ed9f-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"3df2b1f2-0fee-454e-a77d-8ae5ce76ed9f\") " pod="openstack/ceilometer-0" Nov 25 11:57:42 crc kubenswrapper[4706]: I1125 11:57:42.130856 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g6v6m\" (UniqueName: \"kubernetes.io/projected/3df2b1f2-0fee-454e-a77d-8ae5ce76ed9f-kube-api-access-g6v6m\") pod \"ceilometer-0\" (UID: \"3df2b1f2-0fee-454e-a77d-8ae5ce76ed9f\") " pod="openstack/ceilometer-0" Nov 25 11:57:42 crc kubenswrapper[4706]: I1125 11:57:42.130917 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3df2b1f2-0fee-454e-a77d-8ae5ce76ed9f-config-data\") pod \"ceilometer-0\" (UID: \"3df2b1f2-0fee-454e-a77d-8ae5ce76ed9f\") " pod="openstack/ceilometer-0" Nov 25 11:57:42 crc kubenswrapper[4706]: I1125 11:57:42.232467 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/3df2b1f2-0fee-454e-a77d-8ae5ce76ed9f-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"3df2b1f2-0fee-454e-a77d-8ae5ce76ed9f\") " pod="openstack/ceilometer-0" Nov 25 11:57:42 crc kubenswrapper[4706]: I1125 11:57:42.232558 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g6v6m\" (UniqueName: \"kubernetes.io/projected/3df2b1f2-0fee-454e-a77d-8ae5ce76ed9f-kube-api-access-g6v6m\") pod \"ceilometer-0\" (UID: \"3df2b1f2-0fee-454e-a77d-8ae5ce76ed9f\") " pod="openstack/ceilometer-0" Nov 25 11:57:42 crc kubenswrapper[4706]: I1125 11:57:42.232613 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3df2b1f2-0fee-454e-a77d-8ae5ce76ed9f-config-data\") pod \"ceilometer-0\" (UID: 
\"3df2b1f2-0fee-454e-a77d-8ae5ce76ed9f\") " pod="openstack/ceilometer-0" Nov 25 11:57:42 crc kubenswrapper[4706]: I1125 11:57:42.232645 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3df2b1f2-0fee-454e-a77d-8ae5ce76ed9f-scripts\") pod \"ceilometer-0\" (UID: \"3df2b1f2-0fee-454e-a77d-8ae5ce76ed9f\") " pod="openstack/ceilometer-0" Nov 25 11:57:42 crc kubenswrapper[4706]: I1125 11:57:42.232672 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3df2b1f2-0fee-454e-a77d-8ae5ce76ed9f-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"3df2b1f2-0fee-454e-a77d-8ae5ce76ed9f\") " pod="openstack/ceilometer-0" Nov 25 11:57:42 crc kubenswrapper[4706]: I1125 11:57:42.232702 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3df2b1f2-0fee-454e-a77d-8ae5ce76ed9f-run-httpd\") pod \"ceilometer-0\" (UID: \"3df2b1f2-0fee-454e-a77d-8ae5ce76ed9f\") " pod="openstack/ceilometer-0" Nov 25 11:57:42 crc kubenswrapper[4706]: I1125 11:57:42.232788 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3df2b1f2-0fee-454e-a77d-8ae5ce76ed9f-log-httpd\") pod \"ceilometer-0\" (UID: \"3df2b1f2-0fee-454e-a77d-8ae5ce76ed9f\") " pod="openstack/ceilometer-0" Nov 25 11:57:42 crc kubenswrapper[4706]: I1125 11:57:42.233275 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3df2b1f2-0fee-454e-a77d-8ae5ce76ed9f-log-httpd\") pod \"ceilometer-0\" (UID: \"3df2b1f2-0fee-454e-a77d-8ae5ce76ed9f\") " pod="openstack/ceilometer-0" Nov 25 11:57:42 crc kubenswrapper[4706]: I1125 11:57:42.234699 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/3df2b1f2-0fee-454e-a77d-8ae5ce76ed9f-run-httpd\") pod \"ceilometer-0\" (UID: \"3df2b1f2-0fee-454e-a77d-8ae5ce76ed9f\") " pod="openstack/ceilometer-0" Nov 25 11:57:42 crc kubenswrapper[4706]: I1125 11:57:42.235678 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/3df2b1f2-0fee-454e-a77d-8ae5ce76ed9f-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"3df2b1f2-0fee-454e-a77d-8ae5ce76ed9f\") " pod="openstack/ceilometer-0" Nov 25 11:57:42 crc kubenswrapper[4706]: I1125 11:57:42.236793 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3df2b1f2-0fee-454e-a77d-8ae5ce76ed9f-scripts\") pod \"ceilometer-0\" (UID: \"3df2b1f2-0fee-454e-a77d-8ae5ce76ed9f\") " pod="openstack/ceilometer-0" Nov 25 11:57:42 crc kubenswrapper[4706]: I1125 11:57:42.237852 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3df2b1f2-0fee-454e-a77d-8ae5ce76ed9f-config-data\") pod \"ceilometer-0\" (UID: \"3df2b1f2-0fee-454e-a77d-8ae5ce76ed9f\") " pod="openstack/ceilometer-0" Nov 25 11:57:42 crc kubenswrapper[4706]: I1125 11:57:42.239558 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3df2b1f2-0fee-454e-a77d-8ae5ce76ed9f-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"3df2b1f2-0fee-454e-a77d-8ae5ce76ed9f\") " pod="openstack/ceilometer-0" Nov 25 11:57:42 crc kubenswrapper[4706]: I1125 11:57:42.253643 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g6v6m\" (UniqueName: \"kubernetes.io/projected/3df2b1f2-0fee-454e-a77d-8ae5ce76ed9f-kube-api-access-g6v6m\") pod \"ceilometer-0\" (UID: \"3df2b1f2-0fee-454e-a77d-8ae5ce76ed9f\") " pod="openstack/ceilometer-0" Nov 25 11:57:42 crc kubenswrapper[4706]: I1125 11:57:42.284139 4706 util.go:30] 
"No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 25 11:57:42 crc kubenswrapper[4706]: I1125 11:57:42.721292 4706 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 25 11:57:42 crc kubenswrapper[4706]: W1125 11:57:42.723092 4706 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3df2b1f2_0fee_454e_a77d_8ae5ce76ed9f.slice/crio-c84d267a41fa7548dfe22dc46bedeb33d5ad0d840a3bfa29fed7b6a6cbcd2523 WatchSource:0}: Error finding container c84d267a41fa7548dfe22dc46bedeb33d5ad0d840a3bfa29fed7b6a6cbcd2523: Status 404 returned error can't find the container with id c84d267a41fa7548dfe22dc46bedeb33d5ad0d840a3bfa29fed7b6a6cbcd2523 Nov 25 11:57:42 crc kubenswrapper[4706]: I1125 11:57:42.872874 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3df2b1f2-0fee-454e-a77d-8ae5ce76ed9f","Type":"ContainerStarted","Data":"c84d267a41fa7548dfe22dc46bedeb33d5ad0d840a3bfa29fed7b6a6cbcd2523"} Nov 25 11:57:43 crc kubenswrapper[4706]: I1125 11:57:43.905292 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3df2b1f2-0fee-454e-a77d-8ae5ce76ed9f","Type":"ContainerStarted","Data":"f3f8cd889caa95db731df251888a7c1a3ce9d080796aa96191596b79dd853b9b"} Nov 25 11:57:43 crc kubenswrapper[4706]: I1125 11:57:43.934191 4706 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8d9dc228-d004-4180-9f22-bebb77ae0fe1" path="/var/lib/kubelet/pods/8d9dc228-d004-4180-9f22-bebb77ae0fe1/volumes" Nov 25 11:57:44 crc kubenswrapper[4706]: I1125 11:57:44.916706 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3df2b1f2-0fee-454e-a77d-8ae5ce76ed9f","Type":"ContainerStarted","Data":"9d0124bcc1ee48b4329bb8703782a460504d628f4b5406382971aded6556e60a"} Nov 25 11:57:45 crc kubenswrapper[4706]: I1125 11:57:45.944970 4706 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3df2b1f2-0fee-454e-a77d-8ae5ce76ed9f","Type":"ContainerStarted","Data":"53ab2df770b270d546ef9e435e3a0f4ec580df8b785873c38f798f12f2668394"} Nov 25 11:57:46 crc kubenswrapper[4706]: I1125 11:57:46.941729 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3df2b1f2-0fee-454e-a77d-8ae5ce76ed9f","Type":"ContainerStarted","Data":"67864c33547591b87be529165564a21dc3207d413ee9736f09fce07b61e0f127"} Nov 25 11:57:46 crc kubenswrapper[4706]: I1125 11:57:46.942346 4706 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Nov 25 11:57:46 crc kubenswrapper[4706]: I1125 11:57:46.979109 4706 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.617562433 podStartE2EDuration="5.979087771s" podCreationTimestamp="2025-11-25 11:57:41 +0000 UTC" firstStartedPulling="2025-11-25 11:57:42.725480414 +0000 UTC m=+1271.640037795" lastFinishedPulling="2025-11-25 11:57:46.087005762 +0000 UTC m=+1275.001563133" observedRunningTime="2025-11-25 11:57:46.974175947 +0000 UTC m=+1275.888733328" watchObservedRunningTime="2025-11-25 11:57:46.979087771 +0000 UTC m=+1275.893645152" Nov 25 11:57:55 crc kubenswrapper[4706]: I1125 11:57:55.027994 4706 generic.go:334] "Generic (PLEG): container finished" podID="560816f0-4040-43a0-8a73-84500a0aad9c" containerID="3d698239778f79ff43be39ff91d4e11623e9e17b73d56d1ddfdf78cc933d6ca5" exitCode=0 Nov 25 11:57:55 crc kubenswrapper[4706]: I1125 11:57:55.028127 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-zbtll" event={"ID":"560816f0-4040-43a0-8a73-84500a0aad9c","Type":"ContainerDied","Data":"3d698239778f79ff43be39ff91d4e11623e9e17b73d56d1ddfdf78cc933d6ca5"} Nov 25 11:57:56 crc kubenswrapper[4706]: I1125 11:57:56.403290 4706 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-zbtll" Nov 25 11:57:56 crc kubenswrapper[4706]: I1125 11:57:56.528988 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/560816f0-4040-43a0-8a73-84500a0aad9c-config-data\") pod \"560816f0-4040-43a0-8a73-84500a0aad9c\" (UID: \"560816f0-4040-43a0-8a73-84500a0aad9c\") " Nov 25 11:57:56 crc kubenswrapper[4706]: I1125 11:57:56.529196 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w75bv\" (UniqueName: \"kubernetes.io/projected/560816f0-4040-43a0-8a73-84500a0aad9c-kube-api-access-w75bv\") pod \"560816f0-4040-43a0-8a73-84500a0aad9c\" (UID: \"560816f0-4040-43a0-8a73-84500a0aad9c\") " Nov 25 11:57:56 crc kubenswrapper[4706]: I1125 11:57:56.529896 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/560816f0-4040-43a0-8a73-84500a0aad9c-scripts\") pod \"560816f0-4040-43a0-8a73-84500a0aad9c\" (UID: \"560816f0-4040-43a0-8a73-84500a0aad9c\") " Nov 25 11:57:56 crc kubenswrapper[4706]: I1125 11:57:56.529955 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/560816f0-4040-43a0-8a73-84500a0aad9c-combined-ca-bundle\") pod \"560816f0-4040-43a0-8a73-84500a0aad9c\" (UID: \"560816f0-4040-43a0-8a73-84500a0aad9c\") " Nov 25 11:57:56 crc kubenswrapper[4706]: I1125 11:57:56.535242 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/560816f0-4040-43a0-8a73-84500a0aad9c-kube-api-access-w75bv" (OuterVolumeSpecName: "kube-api-access-w75bv") pod "560816f0-4040-43a0-8a73-84500a0aad9c" (UID: "560816f0-4040-43a0-8a73-84500a0aad9c"). InnerVolumeSpecName "kube-api-access-w75bv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 11:57:56 crc kubenswrapper[4706]: I1125 11:57:56.536414 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/560816f0-4040-43a0-8a73-84500a0aad9c-scripts" (OuterVolumeSpecName: "scripts") pod "560816f0-4040-43a0-8a73-84500a0aad9c" (UID: "560816f0-4040-43a0-8a73-84500a0aad9c"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 11:57:56 crc kubenswrapper[4706]: I1125 11:57:56.555855 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/560816f0-4040-43a0-8a73-84500a0aad9c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "560816f0-4040-43a0-8a73-84500a0aad9c" (UID: "560816f0-4040-43a0-8a73-84500a0aad9c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 11:57:56 crc kubenswrapper[4706]: I1125 11:57:56.563390 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/560816f0-4040-43a0-8a73-84500a0aad9c-config-data" (OuterVolumeSpecName: "config-data") pod "560816f0-4040-43a0-8a73-84500a0aad9c" (UID: "560816f0-4040-43a0-8a73-84500a0aad9c"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 11:57:56 crc kubenswrapper[4706]: I1125 11:57:56.632138 4706 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w75bv\" (UniqueName: \"kubernetes.io/projected/560816f0-4040-43a0-8a73-84500a0aad9c-kube-api-access-w75bv\") on node \"crc\" DevicePath \"\"" Nov 25 11:57:56 crc kubenswrapper[4706]: I1125 11:57:56.632460 4706 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/560816f0-4040-43a0-8a73-84500a0aad9c-scripts\") on node \"crc\" DevicePath \"\"" Nov 25 11:57:56 crc kubenswrapper[4706]: I1125 11:57:56.632548 4706 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/560816f0-4040-43a0-8a73-84500a0aad9c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 11:57:56 crc kubenswrapper[4706]: I1125 11:57:56.632629 4706 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/560816f0-4040-43a0-8a73-84500a0aad9c-config-data\") on node \"crc\" DevicePath \"\"" Nov 25 11:57:57 crc kubenswrapper[4706]: I1125 11:57:57.061541 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-zbtll" event={"ID":"560816f0-4040-43a0-8a73-84500a0aad9c","Type":"ContainerDied","Data":"7037619fd9b8c4df32b299c4499e01a5c68f141eedd99724d6dc413e5c35c1d8"} Nov 25 11:57:57 crc kubenswrapper[4706]: I1125 11:57:57.061945 4706 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7037619fd9b8c4df32b299c4499e01a5c68f141eedd99724d6dc413e5c35c1d8" Nov 25 11:57:57 crc kubenswrapper[4706]: I1125 11:57:57.062246 4706 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-zbtll" Nov 25 11:57:57 crc kubenswrapper[4706]: I1125 11:57:57.168379 4706 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-0"] Nov 25 11:57:57 crc kubenswrapper[4706]: E1125 11:57:57.168839 4706 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="560816f0-4040-43a0-8a73-84500a0aad9c" containerName="nova-cell0-conductor-db-sync" Nov 25 11:57:57 crc kubenswrapper[4706]: I1125 11:57:57.168864 4706 state_mem.go:107] "Deleted CPUSet assignment" podUID="560816f0-4040-43a0-8a73-84500a0aad9c" containerName="nova-cell0-conductor-db-sync" Nov 25 11:57:57 crc kubenswrapper[4706]: I1125 11:57:57.169091 4706 memory_manager.go:354] "RemoveStaleState removing state" podUID="560816f0-4040-43a0-8a73-84500a0aad9c" containerName="nova-cell0-conductor-db-sync" Nov 25 11:57:57 crc kubenswrapper[4706]: I1125 11:57:57.169915 4706 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Nov 25 11:57:57 crc kubenswrapper[4706]: I1125 11:57:57.171960 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-dxkkg" Nov 25 11:57:57 crc kubenswrapper[4706]: I1125 11:57:57.172172 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Nov 25 11:57:57 crc kubenswrapper[4706]: I1125 11:57:57.185731 4706 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Nov 25 11:57:57 crc kubenswrapper[4706]: I1125 11:57:57.345949 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f550fc56-7c91-4ca6-b10e-6394166b34c8-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"f550fc56-7c91-4ca6-b10e-6394166b34c8\") " pod="openstack/nova-cell0-conductor-0" Nov 25 11:57:57 crc kubenswrapper[4706]: I1125 
11:57:57.346161 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f550fc56-7c91-4ca6-b10e-6394166b34c8-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"f550fc56-7c91-4ca6-b10e-6394166b34c8\") " pod="openstack/nova-cell0-conductor-0" Nov 25 11:57:57 crc kubenswrapper[4706]: I1125 11:57:57.346245 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k744j\" (UniqueName: \"kubernetes.io/projected/f550fc56-7c91-4ca6-b10e-6394166b34c8-kube-api-access-k744j\") pod \"nova-cell0-conductor-0\" (UID: \"f550fc56-7c91-4ca6-b10e-6394166b34c8\") " pod="openstack/nova-cell0-conductor-0" Nov 25 11:57:57 crc kubenswrapper[4706]: I1125 11:57:57.447915 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f550fc56-7c91-4ca6-b10e-6394166b34c8-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"f550fc56-7c91-4ca6-b10e-6394166b34c8\") " pod="openstack/nova-cell0-conductor-0" Nov 25 11:57:57 crc kubenswrapper[4706]: I1125 11:57:57.448896 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f550fc56-7c91-4ca6-b10e-6394166b34c8-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"f550fc56-7c91-4ca6-b10e-6394166b34c8\") " pod="openstack/nova-cell0-conductor-0" Nov 25 11:57:57 crc kubenswrapper[4706]: I1125 11:57:57.449045 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k744j\" (UniqueName: \"kubernetes.io/projected/f550fc56-7c91-4ca6-b10e-6394166b34c8-kube-api-access-k744j\") pod \"nova-cell0-conductor-0\" (UID: \"f550fc56-7c91-4ca6-b10e-6394166b34c8\") " pod="openstack/nova-cell0-conductor-0" Nov 25 11:57:57 crc kubenswrapper[4706]: I1125 11:57:57.453204 4706 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f550fc56-7c91-4ca6-b10e-6394166b34c8-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"f550fc56-7c91-4ca6-b10e-6394166b34c8\") " pod="openstack/nova-cell0-conductor-0" Nov 25 11:57:57 crc kubenswrapper[4706]: I1125 11:57:57.453772 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f550fc56-7c91-4ca6-b10e-6394166b34c8-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"f550fc56-7c91-4ca6-b10e-6394166b34c8\") " pod="openstack/nova-cell0-conductor-0" Nov 25 11:57:57 crc kubenswrapper[4706]: I1125 11:57:57.465257 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k744j\" (UniqueName: \"kubernetes.io/projected/f550fc56-7c91-4ca6-b10e-6394166b34c8-kube-api-access-k744j\") pod \"nova-cell0-conductor-0\" (UID: \"f550fc56-7c91-4ca6-b10e-6394166b34c8\") " pod="openstack/nova-cell0-conductor-0" Nov 25 11:57:57 crc kubenswrapper[4706]: I1125 11:57:57.497998 4706 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-0" Nov 25 11:57:57 crc kubenswrapper[4706]: I1125 11:57:57.972426 4706 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Nov 25 11:57:58 crc kubenswrapper[4706]: I1125 11:57:58.072053 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"f550fc56-7c91-4ca6-b10e-6394166b34c8","Type":"ContainerStarted","Data":"149f21586a723d1eea61ecbd5f42f290cc2eb1e7e1752b940e3c3748d3a6478f"} Nov 25 11:57:59 crc kubenswrapper[4706]: I1125 11:57:59.088511 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"f550fc56-7c91-4ca6-b10e-6394166b34c8","Type":"ContainerStarted","Data":"25706ac4f383bfd03bc1e8b7d007cbe3a137f6ce83065f3cabc04c7027a18be2"} Nov 25 11:57:59 crc kubenswrapper[4706]: I1125 11:57:59.088857 4706 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell0-conductor-0" Nov 25 11:57:59 crc kubenswrapper[4706]: I1125 11:57:59.110719 4706 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-0" podStartSLOduration=2.110698811 podStartE2EDuration="2.110698811s" podCreationTimestamp="2025-11-25 11:57:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 11:57:59.104104175 +0000 UTC m=+1288.018661556" watchObservedRunningTime="2025-11-25 11:57:59.110698811 +0000 UTC m=+1288.025256212" Nov 25 11:58:07 crc kubenswrapper[4706]: I1125 11:58:07.533844 4706 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell0-conductor-0" Nov 25 11:58:07 crc kubenswrapper[4706]: I1125 11:58:07.990988 4706 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-cell-mapping-cdzkl"] Nov 25 11:58:07 crc kubenswrapper[4706]: I1125 11:58:07.992752 4706 util.go:30] "No 
sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-cdzkl" Nov 25 11:58:07 crc kubenswrapper[4706]: I1125 11:58:07.995033 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-config-data" Nov 25 11:58:07 crc kubenswrapper[4706]: I1125 11:58:07.995270 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-scripts" Nov 25 11:58:07 crc kubenswrapper[4706]: I1125 11:58:07.998252 4706 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-cdzkl"] Nov 25 11:58:08 crc kubenswrapper[4706]: I1125 11:58:08.107613 4706 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Nov 25 11:58:08 crc kubenswrapper[4706]: I1125 11:58:08.108766 4706 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Nov 25 11:58:08 crc kubenswrapper[4706]: I1125 11:58:08.111538 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Nov 25 11:58:08 crc kubenswrapper[4706]: I1125 11:58:08.136644 4706 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Nov 25 11:58:08 crc kubenswrapper[4706]: I1125 11:58:08.151017 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b100f787-7064-4cac-b5dc-0267ee51f1aa-scripts\") pod \"nova-cell0-cell-mapping-cdzkl\" (UID: \"b100f787-7064-4cac-b5dc-0267ee51f1aa\") " pod="openstack/nova-cell0-cell-mapping-cdzkl" Nov 25 11:58:08 crc kubenswrapper[4706]: I1125 11:58:08.151217 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b100f787-7064-4cac-b5dc-0267ee51f1aa-config-data\") pod \"nova-cell0-cell-mapping-cdzkl\" (UID: 
\"b100f787-7064-4cac-b5dc-0267ee51f1aa\") " pod="openstack/nova-cell0-cell-mapping-cdzkl" Nov 25 11:58:08 crc kubenswrapper[4706]: I1125 11:58:08.151250 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sbzf9\" (UniqueName: \"kubernetes.io/projected/b100f787-7064-4cac-b5dc-0267ee51f1aa-kube-api-access-sbzf9\") pod \"nova-cell0-cell-mapping-cdzkl\" (UID: \"b100f787-7064-4cac-b5dc-0267ee51f1aa\") " pod="openstack/nova-cell0-cell-mapping-cdzkl" Nov 25 11:58:08 crc kubenswrapper[4706]: I1125 11:58:08.151328 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b100f787-7064-4cac-b5dc-0267ee51f1aa-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-cdzkl\" (UID: \"b100f787-7064-4cac-b5dc-0267ee51f1aa\") " pod="openstack/nova-cell0-cell-mapping-cdzkl" Nov 25 11:58:08 crc kubenswrapper[4706]: I1125 11:58:08.207037 4706 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Nov 25 11:58:08 crc kubenswrapper[4706]: I1125 11:58:08.209952 4706 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Nov 25 11:58:08 crc kubenswrapper[4706]: I1125 11:58:08.212382 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Nov 25 11:58:08 crc kubenswrapper[4706]: I1125 11:58:08.229535 4706 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Nov 25 11:58:08 crc kubenswrapper[4706]: I1125 11:58:08.260711 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fgv9d\" (UniqueName: \"kubernetes.io/projected/e8b5e2e3-bd67-476c-a80d-555c402d6b10-kube-api-access-fgv9d\") pod \"nova-cell1-novncproxy-0\" (UID: \"e8b5e2e3-bd67-476c-a80d-555c402d6b10\") " pod="openstack/nova-cell1-novncproxy-0" Nov 25 11:58:08 crc kubenswrapper[4706]: I1125 11:58:08.261036 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e8b5e2e3-bd67-476c-a80d-555c402d6b10-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"e8b5e2e3-bd67-476c-a80d-555c402d6b10\") " pod="openstack/nova-cell1-novncproxy-0" Nov 25 11:58:08 crc kubenswrapper[4706]: I1125 11:58:08.261119 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e8b5e2e3-bd67-476c-a80d-555c402d6b10-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"e8b5e2e3-bd67-476c-a80d-555c402d6b10\") " pod="openstack/nova-cell1-novncproxy-0" Nov 25 11:58:08 crc kubenswrapper[4706]: I1125 11:58:08.261259 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b100f787-7064-4cac-b5dc-0267ee51f1aa-config-data\") pod \"nova-cell0-cell-mapping-cdzkl\" (UID: \"b100f787-7064-4cac-b5dc-0267ee51f1aa\") " pod="openstack/nova-cell0-cell-mapping-cdzkl" Nov 25 11:58:08 crc kubenswrapper[4706]: I1125 
11:58:08.261365 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sbzf9\" (UniqueName: \"kubernetes.io/projected/b100f787-7064-4cac-b5dc-0267ee51f1aa-kube-api-access-sbzf9\") pod \"nova-cell0-cell-mapping-cdzkl\" (UID: \"b100f787-7064-4cac-b5dc-0267ee51f1aa\") " pod="openstack/nova-cell0-cell-mapping-cdzkl" Nov 25 11:58:08 crc kubenswrapper[4706]: I1125 11:58:08.261445 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b100f787-7064-4cac-b5dc-0267ee51f1aa-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-cdzkl\" (UID: \"b100f787-7064-4cac-b5dc-0267ee51f1aa\") " pod="openstack/nova-cell0-cell-mapping-cdzkl" Nov 25 11:58:08 crc kubenswrapper[4706]: I1125 11:58:08.261553 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b100f787-7064-4cac-b5dc-0267ee51f1aa-scripts\") pod \"nova-cell0-cell-mapping-cdzkl\" (UID: \"b100f787-7064-4cac-b5dc-0267ee51f1aa\") " pod="openstack/nova-cell0-cell-mapping-cdzkl" Nov 25 11:58:08 crc kubenswrapper[4706]: I1125 11:58:08.288215 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b100f787-7064-4cac-b5dc-0267ee51f1aa-scripts\") pod \"nova-cell0-cell-mapping-cdzkl\" (UID: \"b100f787-7064-4cac-b5dc-0267ee51f1aa\") " pod="openstack/nova-cell0-cell-mapping-cdzkl" Nov 25 11:58:08 crc kubenswrapper[4706]: I1125 11:58:08.290221 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b100f787-7064-4cac-b5dc-0267ee51f1aa-config-data\") pod \"nova-cell0-cell-mapping-cdzkl\" (UID: \"b100f787-7064-4cac-b5dc-0267ee51f1aa\") " pod="openstack/nova-cell0-cell-mapping-cdzkl" Nov 25 11:58:08 crc kubenswrapper[4706]: I1125 11:58:08.296318 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b100f787-7064-4cac-b5dc-0267ee51f1aa-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-cdzkl\" (UID: \"b100f787-7064-4cac-b5dc-0267ee51f1aa\") " pod="openstack/nova-cell0-cell-mapping-cdzkl" Nov 25 11:58:08 crc kubenswrapper[4706]: I1125 11:58:08.307819 4706 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Nov 25 11:58:08 crc kubenswrapper[4706]: I1125 11:58:08.310446 4706 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Nov 25 11:58:08 crc kubenswrapper[4706]: I1125 11:58:08.312827 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sbzf9\" (UniqueName: \"kubernetes.io/projected/b100f787-7064-4cac-b5dc-0267ee51f1aa-kube-api-access-sbzf9\") pod \"nova-cell0-cell-mapping-cdzkl\" (UID: \"b100f787-7064-4cac-b5dc-0267ee51f1aa\") " pod="openstack/nova-cell0-cell-mapping-cdzkl" Nov 25 11:58:08 crc kubenswrapper[4706]: I1125 11:58:08.323902 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Nov 25 11:58:08 crc kubenswrapper[4706]: I1125 11:58:08.330057 4706 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-cdzkl" Nov 25 11:58:08 crc kubenswrapper[4706]: I1125 11:58:08.358829 4706 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Nov 25 11:58:08 crc kubenswrapper[4706]: I1125 11:58:08.372785 4706 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Nov 25 11:58:08 crc kubenswrapper[4706]: I1125 11:58:08.373960 4706 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Nov 25 11:58:08 crc kubenswrapper[4706]: I1125 11:58:08.374437 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/62968efd-c3bc-4ccb-892f-b1479a5da4cc-config-data\") pod \"nova-api-0\" (UID: \"62968efd-c3bc-4ccb-892f-b1479a5da4cc\") " pod="openstack/nova-api-0" Nov 25 11:58:08 crc kubenswrapper[4706]: I1125 11:58:08.374509 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fgv9d\" (UniqueName: \"kubernetes.io/projected/e8b5e2e3-bd67-476c-a80d-555c402d6b10-kube-api-access-fgv9d\") pod \"nova-cell1-novncproxy-0\" (UID: \"e8b5e2e3-bd67-476c-a80d-555c402d6b10\") " pod="openstack/nova-cell1-novncproxy-0" Nov 25 11:58:08 crc kubenswrapper[4706]: I1125 11:58:08.374536 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hmrgg\" (UniqueName: \"kubernetes.io/projected/62968efd-c3bc-4ccb-892f-b1479a5da4cc-kube-api-access-hmrgg\") pod \"nova-api-0\" (UID: \"62968efd-c3bc-4ccb-892f-b1479a5da4cc\") " pod="openstack/nova-api-0" Nov 25 11:58:08 crc kubenswrapper[4706]: I1125 11:58:08.374569 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e8b5e2e3-bd67-476c-a80d-555c402d6b10-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"e8b5e2e3-bd67-476c-a80d-555c402d6b10\") " pod="openstack/nova-cell1-novncproxy-0" Nov 25 11:58:08 crc kubenswrapper[4706]: I1125 11:58:08.374609 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e8b5e2e3-bd67-476c-a80d-555c402d6b10-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"e8b5e2e3-bd67-476c-a80d-555c402d6b10\") " pod="openstack/nova-cell1-novncproxy-0" Nov 25 11:58:08 crc kubenswrapper[4706]: 
I1125 11:58:08.374636 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/62968efd-c3bc-4ccb-892f-b1479a5da4cc-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"62968efd-c3bc-4ccb-892f-b1479a5da4cc\") " pod="openstack/nova-api-0" Nov 25 11:58:08 crc kubenswrapper[4706]: I1125 11:58:08.374667 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/62968efd-c3bc-4ccb-892f-b1479a5da4cc-logs\") pod \"nova-api-0\" (UID: \"62968efd-c3bc-4ccb-892f-b1479a5da4cc\") " pod="openstack/nova-api-0" Nov 25 11:58:08 crc kubenswrapper[4706]: I1125 11:58:08.378459 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Nov 25 11:58:08 crc kubenswrapper[4706]: I1125 11:58:08.387790 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e8b5e2e3-bd67-476c-a80d-555c402d6b10-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"e8b5e2e3-bd67-476c-a80d-555c402d6b10\") " pod="openstack/nova-cell1-novncproxy-0" Nov 25 11:58:08 crc kubenswrapper[4706]: I1125 11:58:08.394375 4706 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Nov 25 11:58:08 crc kubenswrapper[4706]: I1125 11:58:08.424535 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e8b5e2e3-bd67-476c-a80d-555c402d6b10-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"e8b5e2e3-bd67-476c-a80d-555c402d6b10\") " pod="openstack/nova-cell1-novncproxy-0" Nov 25 11:58:08 crc kubenswrapper[4706]: I1125 11:58:08.429745 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fgv9d\" (UniqueName: 
\"kubernetes.io/projected/e8b5e2e3-bd67-476c-a80d-555c402d6b10-kube-api-access-fgv9d\") pod \"nova-cell1-novncproxy-0\" (UID: \"e8b5e2e3-bd67-476c-a80d-555c402d6b10\") " pod="openstack/nova-cell1-novncproxy-0" Nov 25 11:58:08 crc kubenswrapper[4706]: I1125 11:58:08.435595 4706 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Nov 25 11:58:08 crc kubenswrapper[4706]: I1125 11:58:08.483679 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/12913eec-2986-42df-b213-a4466df3001a-logs\") pod \"nova-metadata-0\" (UID: \"12913eec-2986-42df-b213-a4466df3001a\") " pod="openstack/nova-metadata-0" Nov 25 11:58:08 crc kubenswrapper[4706]: I1125 11:58:08.483756 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hmrgg\" (UniqueName: \"kubernetes.io/projected/62968efd-c3bc-4ccb-892f-b1479a5da4cc-kube-api-access-hmrgg\") pod \"nova-api-0\" (UID: \"62968efd-c3bc-4ccb-892f-b1479a5da4cc\") " pod="openstack/nova-api-0" Nov 25 11:58:08 crc kubenswrapper[4706]: I1125 11:58:08.483822 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/12913eec-2986-42df-b213-a4466df3001a-config-data\") pod \"nova-metadata-0\" (UID: \"12913eec-2986-42df-b213-a4466df3001a\") " pod="openstack/nova-metadata-0" Nov 25 11:58:08 crc kubenswrapper[4706]: I1125 11:58:08.483863 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/12913eec-2986-42df-b213-a4466df3001a-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"12913eec-2986-42df-b213-a4466df3001a\") " pod="openstack/nova-metadata-0" Nov 25 11:58:08 crc kubenswrapper[4706]: I1125 11:58:08.483891 4706 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/62968efd-c3bc-4ccb-892f-b1479a5da4cc-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"62968efd-c3bc-4ccb-892f-b1479a5da4cc\") " pod="openstack/nova-api-0" Nov 25 11:58:08 crc kubenswrapper[4706]: I1125 11:58:08.483924 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/62968efd-c3bc-4ccb-892f-b1479a5da4cc-logs\") pod \"nova-api-0\" (UID: \"62968efd-c3bc-4ccb-892f-b1479a5da4cc\") " pod="openstack/nova-api-0" Nov 25 11:58:08 crc kubenswrapper[4706]: I1125 11:58:08.483973 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1dfcf8c4-dafb-4718-b97d-d0b72e9cff85-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"1dfcf8c4-dafb-4718-b97d-d0b72e9cff85\") " pod="openstack/nova-scheduler-0" Nov 25 11:58:08 crc kubenswrapper[4706]: I1125 11:58:08.483991 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1dfcf8c4-dafb-4718-b97d-d0b72e9cff85-config-data\") pod \"nova-scheduler-0\" (UID: \"1dfcf8c4-dafb-4718-b97d-d0b72e9cff85\") " pod="openstack/nova-scheduler-0" Nov 25 11:58:08 crc kubenswrapper[4706]: I1125 11:58:08.484016 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bzlks\" (UniqueName: \"kubernetes.io/projected/1dfcf8c4-dafb-4718-b97d-d0b72e9cff85-kube-api-access-bzlks\") pod \"nova-scheduler-0\" (UID: \"1dfcf8c4-dafb-4718-b97d-d0b72e9cff85\") " pod="openstack/nova-scheduler-0" Nov 25 11:58:08 crc kubenswrapper[4706]: I1125 11:58:08.484056 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pdkjl\" (UniqueName: 
\"kubernetes.io/projected/12913eec-2986-42df-b213-a4466df3001a-kube-api-access-pdkjl\") pod \"nova-metadata-0\" (UID: \"12913eec-2986-42df-b213-a4466df3001a\") " pod="openstack/nova-metadata-0" Nov 25 11:58:08 crc kubenswrapper[4706]: I1125 11:58:08.484112 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/62968efd-c3bc-4ccb-892f-b1479a5da4cc-config-data\") pod \"nova-api-0\" (UID: \"62968efd-c3bc-4ccb-892f-b1479a5da4cc\") " pod="openstack/nova-api-0" Nov 25 11:58:08 crc kubenswrapper[4706]: I1125 11:58:08.486762 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/62968efd-c3bc-4ccb-892f-b1479a5da4cc-logs\") pod \"nova-api-0\" (UID: \"62968efd-c3bc-4ccb-892f-b1479a5da4cc\") " pod="openstack/nova-api-0" Nov 25 11:58:08 crc kubenswrapper[4706]: I1125 11:58:08.498942 4706 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-757b4f8459-sdx7j"] Nov 25 11:58:08 crc kubenswrapper[4706]: I1125 11:58:08.501018 4706 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-757b4f8459-sdx7j" Nov 25 11:58:08 crc kubenswrapper[4706]: I1125 11:58:08.501095 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/62968efd-c3bc-4ccb-892f-b1479a5da4cc-config-data\") pod \"nova-api-0\" (UID: \"62968efd-c3bc-4ccb-892f-b1479a5da4cc\") " pod="openstack/nova-api-0" Nov 25 11:58:08 crc kubenswrapper[4706]: I1125 11:58:08.508902 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hmrgg\" (UniqueName: \"kubernetes.io/projected/62968efd-c3bc-4ccb-892f-b1479a5da4cc-kube-api-access-hmrgg\") pod \"nova-api-0\" (UID: \"62968efd-c3bc-4ccb-892f-b1479a5da4cc\") " pod="openstack/nova-api-0" Nov 25 11:58:08 crc kubenswrapper[4706]: I1125 11:58:08.513849 4706 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-757b4f8459-sdx7j"] Nov 25 11:58:08 crc kubenswrapper[4706]: I1125 11:58:08.519694 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/62968efd-c3bc-4ccb-892f-b1479a5da4cc-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"62968efd-c3bc-4ccb-892f-b1479a5da4cc\") " pod="openstack/nova-api-0" Nov 25 11:58:08 crc kubenswrapper[4706]: I1125 11:58:08.536495 4706 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Nov 25 11:58:08 crc kubenswrapper[4706]: I1125 11:58:08.591542 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1dfcf8c4-dafb-4718-b97d-d0b72e9cff85-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"1dfcf8c4-dafb-4718-b97d-d0b72e9cff85\") " pod="openstack/nova-scheduler-0" Nov 25 11:58:08 crc kubenswrapper[4706]: I1125 11:58:08.591579 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1dfcf8c4-dafb-4718-b97d-d0b72e9cff85-config-data\") pod \"nova-scheduler-0\" (UID: \"1dfcf8c4-dafb-4718-b97d-d0b72e9cff85\") " pod="openstack/nova-scheduler-0" Nov 25 11:58:08 crc kubenswrapper[4706]: I1125 11:58:08.591601 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bzlks\" (UniqueName: \"kubernetes.io/projected/1dfcf8c4-dafb-4718-b97d-d0b72e9cff85-kube-api-access-bzlks\") pod \"nova-scheduler-0\" (UID: \"1dfcf8c4-dafb-4718-b97d-d0b72e9cff85\") " pod="openstack/nova-scheduler-0" Nov 25 11:58:08 crc kubenswrapper[4706]: I1125 11:58:08.591634 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pdkjl\" (UniqueName: \"kubernetes.io/projected/12913eec-2986-42df-b213-a4466df3001a-kube-api-access-pdkjl\") pod \"nova-metadata-0\" (UID: \"12913eec-2986-42df-b213-a4466df3001a\") " pod="openstack/nova-metadata-0" Nov 25 11:58:08 crc kubenswrapper[4706]: I1125 11:58:08.591717 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/12913eec-2986-42df-b213-a4466df3001a-logs\") pod \"nova-metadata-0\" (UID: \"12913eec-2986-42df-b213-a4466df3001a\") " pod="openstack/nova-metadata-0" Nov 25 11:58:08 crc kubenswrapper[4706]: I1125 11:58:08.591760 4706 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/12913eec-2986-42df-b213-a4466df3001a-config-data\") pod \"nova-metadata-0\" (UID: \"12913eec-2986-42df-b213-a4466df3001a\") " pod="openstack/nova-metadata-0" Nov 25 11:58:08 crc kubenswrapper[4706]: I1125 11:58:08.591777 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/12913eec-2986-42df-b213-a4466df3001a-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"12913eec-2986-42df-b213-a4466df3001a\") " pod="openstack/nova-metadata-0" Nov 25 11:58:08 crc kubenswrapper[4706]: I1125 11:58:08.592563 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/12913eec-2986-42df-b213-a4466df3001a-logs\") pod \"nova-metadata-0\" (UID: \"12913eec-2986-42df-b213-a4466df3001a\") " pod="openstack/nova-metadata-0" Nov 25 11:58:08 crc kubenswrapper[4706]: I1125 11:58:08.596105 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1dfcf8c4-dafb-4718-b97d-d0b72e9cff85-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"1dfcf8c4-dafb-4718-b97d-d0b72e9cff85\") " pod="openstack/nova-scheduler-0" Nov 25 11:58:08 crc kubenswrapper[4706]: I1125 11:58:08.598670 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1dfcf8c4-dafb-4718-b97d-d0b72e9cff85-config-data\") pod \"nova-scheduler-0\" (UID: \"1dfcf8c4-dafb-4718-b97d-d0b72e9cff85\") " pod="openstack/nova-scheduler-0" Nov 25 11:58:08 crc kubenswrapper[4706]: I1125 11:58:08.601626 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/12913eec-2986-42df-b213-a4466df3001a-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"12913eec-2986-42df-b213-a4466df3001a\") 
" pod="openstack/nova-metadata-0" Nov 25 11:58:08 crc kubenswrapper[4706]: I1125 11:58:08.604476 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/12913eec-2986-42df-b213-a4466df3001a-config-data\") pod \"nova-metadata-0\" (UID: \"12913eec-2986-42df-b213-a4466df3001a\") " pod="openstack/nova-metadata-0" Nov 25 11:58:08 crc kubenswrapper[4706]: I1125 11:58:08.614062 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bzlks\" (UniqueName: \"kubernetes.io/projected/1dfcf8c4-dafb-4718-b97d-d0b72e9cff85-kube-api-access-bzlks\") pod \"nova-scheduler-0\" (UID: \"1dfcf8c4-dafb-4718-b97d-d0b72e9cff85\") " pod="openstack/nova-scheduler-0" Nov 25 11:58:08 crc kubenswrapper[4706]: I1125 11:58:08.615748 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pdkjl\" (UniqueName: \"kubernetes.io/projected/12913eec-2986-42df-b213-a4466df3001a-kube-api-access-pdkjl\") pod \"nova-metadata-0\" (UID: \"12913eec-2986-42df-b213-a4466df3001a\") " pod="openstack/nova-metadata-0" Nov 25 11:58:08 crc kubenswrapper[4706]: I1125 11:58:08.693343 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/00683e5c-17fc-450f-b2b4-7366b2c45aa5-ovsdbserver-nb\") pod \"dnsmasq-dns-757b4f8459-sdx7j\" (UID: \"00683e5c-17fc-450f-b2b4-7366b2c45aa5\") " pod="openstack/dnsmasq-dns-757b4f8459-sdx7j" Nov 25 11:58:08 crc kubenswrapper[4706]: I1125 11:58:08.694281 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/00683e5c-17fc-450f-b2b4-7366b2c45aa5-dns-swift-storage-0\") pod \"dnsmasq-dns-757b4f8459-sdx7j\" (UID: \"00683e5c-17fc-450f-b2b4-7366b2c45aa5\") " pod="openstack/dnsmasq-dns-757b4f8459-sdx7j" Nov 25 11:58:08 crc kubenswrapper[4706]: I1125 
11:58:08.694381 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t5bld\" (UniqueName: \"kubernetes.io/projected/00683e5c-17fc-450f-b2b4-7366b2c45aa5-kube-api-access-t5bld\") pod \"dnsmasq-dns-757b4f8459-sdx7j\" (UID: \"00683e5c-17fc-450f-b2b4-7366b2c45aa5\") " pod="openstack/dnsmasq-dns-757b4f8459-sdx7j" Nov 25 11:58:08 crc kubenswrapper[4706]: I1125 11:58:08.694459 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/00683e5c-17fc-450f-b2b4-7366b2c45aa5-dns-svc\") pod \"dnsmasq-dns-757b4f8459-sdx7j\" (UID: \"00683e5c-17fc-450f-b2b4-7366b2c45aa5\") " pod="openstack/dnsmasq-dns-757b4f8459-sdx7j" Nov 25 11:58:08 crc kubenswrapper[4706]: I1125 11:58:08.694654 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/00683e5c-17fc-450f-b2b4-7366b2c45aa5-ovsdbserver-sb\") pod \"dnsmasq-dns-757b4f8459-sdx7j\" (UID: \"00683e5c-17fc-450f-b2b4-7366b2c45aa5\") " pod="openstack/dnsmasq-dns-757b4f8459-sdx7j" Nov 25 11:58:08 crc kubenswrapper[4706]: I1125 11:58:08.694677 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/00683e5c-17fc-450f-b2b4-7366b2c45aa5-config\") pod \"dnsmasq-dns-757b4f8459-sdx7j\" (UID: \"00683e5c-17fc-450f-b2b4-7366b2c45aa5\") " pod="openstack/dnsmasq-dns-757b4f8459-sdx7j" Nov 25 11:58:08 crc kubenswrapper[4706]: I1125 11:58:08.796532 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/00683e5c-17fc-450f-b2b4-7366b2c45aa5-ovsdbserver-sb\") pod \"dnsmasq-dns-757b4f8459-sdx7j\" (UID: \"00683e5c-17fc-450f-b2b4-7366b2c45aa5\") " pod="openstack/dnsmasq-dns-757b4f8459-sdx7j" Nov 25 11:58:08 crc kubenswrapper[4706]: 
I1125 11:58:08.796582 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/00683e5c-17fc-450f-b2b4-7366b2c45aa5-config\") pod \"dnsmasq-dns-757b4f8459-sdx7j\" (UID: \"00683e5c-17fc-450f-b2b4-7366b2c45aa5\") " pod="openstack/dnsmasq-dns-757b4f8459-sdx7j" Nov 25 11:58:08 crc kubenswrapper[4706]: I1125 11:58:08.796620 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/00683e5c-17fc-450f-b2b4-7366b2c45aa5-ovsdbserver-nb\") pod \"dnsmasq-dns-757b4f8459-sdx7j\" (UID: \"00683e5c-17fc-450f-b2b4-7366b2c45aa5\") " pod="openstack/dnsmasq-dns-757b4f8459-sdx7j" Nov 25 11:58:08 crc kubenswrapper[4706]: I1125 11:58:08.796659 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/00683e5c-17fc-450f-b2b4-7366b2c45aa5-dns-swift-storage-0\") pod \"dnsmasq-dns-757b4f8459-sdx7j\" (UID: \"00683e5c-17fc-450f-b2b4-7366b2c45aa5\") " pod="openstack/dnsmasq-dns-757b4f8459-sdx7j" Nov 25 11:58:08 crc kubenswrapper[4706]: I1125 11:58:08.796701 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t5bld\" (UniqueName: \"kubernetes.io/projected/00683e5c-17fc-450f-b2b4-7366b2c45aa5-kube-api-access-t5bld\") pod \"dnsmasq-dns-757b4f8459-sdx7j\" (UID: \"00683e5c-17fc-450f-b2b4-7366b2c45aa5\") " pod="openstack/dnsmasq-dns-757b4f8459-sdx7j" Nov 25 11:58:08 crc kubenswrapper[4706]: I1125 11:58:08.796725 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/00683e5c-17fc-450f-b2b4-7366b2c45aa5-dns-svc\") pod \"dnsmasq-dns-757b4f8459-sdx7j\" (UID: \"00683e5c-17fc-450f-b2b4-7366b2c45aa5\") " pod="openstack/dnsmasq-dns-757b4f8459-sdx7j" Nov 25 11:58:08 crc kubenswrapper[4706]: I1125 11:58:08.797595 4706 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/00683e5c-17fc-450f-b2b4-7366b2c45aa5-ovsdbserver-sb\") pod \"dnsmasq-dns-757b4f8459-sdx7j\" (UID: \"00683e5c-17fc-450f-b2b4-7366b2c45aa5\") " pod="openstack/dnsmasq-dns-757b4f8459-sdx7j" Nov 25 11:58:08 crc kubenswrapper[4706]: I1125 11:58:08.798177 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/00683e5c-17fc-450f-b2b4-7366b2c45aa5-ovsdbserver-nb\") pod \"dnsmasq-dns-757b4f8459-sdx7j\" (UID: \"00683e5c-17fc-450f-b2b4-7366b2c45aa5\") " pod="openstack/dnsmasq-dns-757b4f8459-sdx7j" Nov 25 11:58:08 crc kubenswrapper[4706]: I1125 11:58:08.797605 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/00683e5c-17fc-450f-b2b4-7366b2c45aa5-dns-svc\") pod \"dnsmasq-dns-757b4f8459-sdx7j\" (UID: \"00683e5c-17fc-450f-b2b4-7366b2c45aa5\") " pod="openstack/dnsmasq-dns-757b4f8459-sdx7j" Nov 25 11:58:08 crc kubenswrapper[4706]: I1125 11:58:08.798733 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/00683e5c-17fc-450f-b2b4-7366b2c45aa5-dns-swift-storage-0\") pod \"dnsmasq-dns-757b4f8459-sdx7j\" (UID: \"00683e5c-17fc-450f-b2b4-7366b2c45aa5\") " pod="openstack/dnsmasq-dns-757b4f8459-sdx7j" Nov 25 11:58:08 crc kubenswrapper[4706]: I1125 11:58:08.798841 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/00683e5c-17fc-450f-b2b4-7366b2c45aa5-config\") pod \"dnsmasq-dns-757b4f8459-sdx7j\" (UID: \"00683e5c-17fc-450f-b2b4-7366b2c45aa5\") " pod="openstack/dnsmasq-dns-757b4f8459-sdx7j" Nov 25 11:58:08 crc kubenswrapper[4706]: I1125 11:58:08.816875 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t5bld\" (UniqueName: 
\"kubernetes.io/projected/00683e5c-17fc-450f-b2b4-7366b2c45aa5-kube-api-access-t5bld\") pod \"dnsmasq-dns-757b4f8459-sdx7j\" (UID: \"00683e5c-17fc-450f-b2b4-7366b2c45aa5\") " pod="openstack/dnsmasq-dns-757b4f8459-sdx7j" Nov 25 11:58:08 crc kubenswrapper[4706]: I1125 11:58:08.868566 4706 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Nov 25 11:58:08 crc kubenswrapper[4706]: I1125 11:58:08.886950 4706 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Nov 25 11:58:08 crc kubenswrapper[4706]: I1125 11:58:08.899842 4706 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-757b4f8459-sdx7j" Nov 25 11:58:09 crc kubenswrapper[4706]: I1125 11:58:09.023217 4706 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-cdzkl"] Nov 25 11:58:09 crc kubenswrapper[4706]: W1125 11:58:09.045497 4706 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb100f787_7064_4cac_b5dc_0267ee51f1aa.slice/crio-b70097cf36759439d3ec9656d7bbbb87f9553959adf613fbc9857718ec80cdfb WatchSource:0}: Error finding container b70097cf36759439d3ec9656d7bbbb87f9553959adf613fbc9857718ec80cdfb: Status 404 returned error can't find the container with id b70097cf36759439d3ec9656d7bbbb87f9553959adf613fbc9857718ec80cdfb Nov 25 11:58:09 crc kubenswrapper[4706]: I1125 11:58:09.134200 4706 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-db-sync-87sfg"] Nov 25 11:58:09 crc kubenswrapper[4706]: I1125 11:58:09.135575 4706 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-87sfg" Nov 25 11:58:09 crc kubenswrapper[4706]: I1125 11:58:09.139251 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Nov 25 11:58:09 crc kubenswrapper[4706]: I1125 11:58:09.140082 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-scripts" Nov 25 11:58:09 crc kubenswrapper[4706]: I1125 11:58:09.161661 4706 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-87sfg"] Nov 25 11:58:09 crc kubenswrapper[4706]: I1125 11:58:09.184570 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-cdzkl" event={"ID":"b100f787-7064-4cac-b5dc-0267ee51f1aa","Type":"ContainerStarted","Data":"b70097cf36759439d3ec9656d7bbbb87f9553959adf613fbc9857718ec80cdfb"} Nov 25 11:58:09 crc kubenswrapper[4706]: I1125 11:58:09.205242 4706 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Nov 25 11:58:09 crc kubenswrapper[4706]: I1125 11:58:09.222737 4706 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Nov 25 11:58:09 crc kubenswrapper[4706]: I1125 11:58:09.262152 4706 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Nov 25 11:58:09 crc kubenswrapper[4706]: I1125 11:58:09.309082 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ca66dab3-01b2-4fac-b6c9-c09b2704a670-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-87sfg\" (UID: \"ca66dab3-01b2-4fac-b6c9-c09b2704a670\") " pod="openstack/nova-cell1-conductor-db-sync-87sfg" Nov 25 11:58:09 crc kubenswrapper[4706]: I1125 11:58:09.309602 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/ca66dab3-01b2-4fac-b6c9-c09b2704a670-scripts\") pod \"nova-cell1-conductor-db-sync-87sfg\" (UID: \"ca66dab3-01b2-4fac-b6c9-c09b2704a670\") " pod="openstack/nova-cell1-conductor-db-sync-87sfg"
Nov 25 11:58:09 crc kubenswrapper[4706]: I1125 11:58:09.309651 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dg42n\" (UniqueName: \"kubernetes.io/projected/ca66dab3-01b2-4fac-b6c9-c09b2704a670-kube-api-access-dg42n\") pod \"nova-cell1-conductor-db-sync-87sfg\" (UID: \"ca66dab3-01b2-4fac-b6c9-c09b2704a670\") " pod="openstack/nova-cell1-conductor-db-sync-87sfg"
Nov 25 11:58:09 crc kubenswrapper[4706]: I1125 11:58:09.309714 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ca66dab3-01b2-4fac-b6c9-c09b2704a670-config-data\") pod \"nova-cell1-conductor-db-sync-87sfg\" (UID: \"ca66dab3-01b2-4fac-b6c9-c09b2704a670\") " pod="openstack/nova-cell1-conductor-db-sync-87sfg"
Nov 25 11:58:09 crc kubenswrapper[4706]: I1125 11:58:09.412037 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ca66dab3-01b2-4fac-b6c9-c09b2704a670-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-87sfg\" (UID: \"ca66dab3-01b2-4fac-b6c9-c09b2704a670\") " pod="openstack/nova-cell1-conductor-db-sync-87sfg"
Nov 25 11:58:09 crc kubenswrapper[4706]: I1125 11:58:09.412131 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ca66dab3-01b2-4fac-b6c9-c09b2704a670-scripts\") pod \"nova-cell1-conductor-db-sync-87sfg\" (UID: \"ca66dab3-01b2-4fac-b6c9-c09b2704a670\") " pod="openstack/nova-cell1-conductor-db-sync-87sfg"
Nov 25 11:58:09 crc kubenswrapper[4706]: I1125 11:58:09.412165 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dg42n\" (UniqueName: \"kubernetes.io/projected/ca66dab3-01b2-4fac-b6c9-c09b2704a670-kube-api-access-dg42n\") pod \"nova-cell1-conductor-db-sync-87sfg\" (UID: \"ca66dab3-01b2-4fac-b6c9-c09b2704a670\") " pod="openstack/nova-cell1-conductor-db-sync-87sfg"
Nov 25 11:58:09 crc kubenswrapper[4706]: I1125 11:58:09.412229 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ca66dab3-01b2-4fac-b6c9-c09b2704a670-config-data\") pod \"nova-cell1-conductor-db-sync-87sfg\" (UID: \"ca66dab3-01b2-4fac-b6c9-c09b2704a670\") " pod="openstack/nova-cell1-conductor-db-sync-87sfg"
Nov 25 11:58:09 crc kubenswrapper[4706]: I1125 11:58:09.417875 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ca66dab3-01b2-4fac-b6c9-c09b2704a670-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-87sfg\" (UID: \"ca66dab3-01b2-4fac-b6c9-c09b2704a670\") " pod="openstack/nova-cell1-conductor-db-sync-87sfg"
Nov 25 11:58:09 crc kubenswrapper[4706]: I1125 11:58:09.418025 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ca66dab3-01b2-4fac-b6c9-c09b2704a670-scripts\") pod \"nova-cell1-conductor-db-sync-87sfg\" (UID: \"ca66dab3-01b2-4fac-b6c9-c09b2704a670\") " pod="openstack/nova-cell1-conductor-db-sync-87sfg"
Nov 25 11:58:09 crc kubenswrapper[4706]: I1125 11:58:09.418765 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ca66dab3-01b2-4fac-b6c9-c09b2704a670-config-data\") pod \"nova-cell1-conductor-db-sync-87sfg\" (UID: \"ca66dab3-01b2-4fac-b6c9-c09b2704a670\") " pod="openstack/nova-cell1-conductor-db-sync-87sfg"
Nov 25 11:58:09 crc kubenswrapper[4706]: I1125 11:58:09.431147 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dg42n\" (UniqueName: \"kubernetes.io/projected/ca66dab3-01b2-4fac-b6c9-c09b2704a670-kube-api-access-dg42n\") pod \"nova-cell1-conductor-db-sync-87sfg\" (UID: \"ca66dab3-01b2-4fac-b6c9-c09b2704a670\") " pod="openstack/nova-cell1-conductor-db-sync-87sfg"
Nov 25 11:58:09 crc kubenswrapper[4706]: I1125 11:58:09.483699 4706 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-87sfg"
Nov 25 11:58:09 crc kubenswrapper[4706]: I1125 11:58:09.506185 4706 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"]
Nov 25 11:58:09 crc kubenswrapper[4706]: W1125 11:58:09.554618 4706 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1dfcf8c4_dafb_4718_b97d_d0b72e9cff85.slice/crio-eba3a3e01e82d44d4ba0ebcb7523c34819fca45d346861985dd7f352829acda9 WatchSource:0}: Error finding container eba3a3e01e82d44d4ba0ebcb7523c34819fca45d346861985dd7f352829acda9: Status 404 returned error can't find the container with id eba3a3e01e82d44d4ba0ebcb7523c34819fca45d346861985dd7f352829acda9
Nov 25 11:58:09 crc kubenswrapper[4706]: I1125 11:58:09.587780 4706 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-757b4f8459-sdx7j"]
Nov 25 11:58:09 crc kubenswrapper[4706]: W1125 11:58:09.602066 4706 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod00683e5c_17fc_450f_b2b4_7366b2c45aa5.slice/crio-b3061532f126ece1f7b1e665799c80ff911244947dc65c8d32100828556c70d4 WatchSource:0}: Error finding container b3061532f126ece1f7b1e665799c80ff911244947dc65c8d32100828556c70d4: Status 404 returned error can't find the container with id b3061532f126ece1f7b1e665799c80ff911244947dc65c8d32100828556c70d4
Nov 25 11:58:10 crc kubenswrapper[4706]: I1125 11:58:10.076255 4706 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-87sfg"]
Nov 25 11:58:10 crc kubenswrapper[4706]: I1125 11:58:10.250430 4706 generic.go:334] "Generic (PLEG): container finished" podID="00683e5c-17fc-450f-b2b4-7366b2c45aa5" containerID="f31ca09f5f303f093e6f2ed36404c2e852ba4fa5400ac58ba28965b70763ec99" exitCode=0
Nov 25 11:58:10 crc kubenswrapper[4706]: I1125 11:58:10.250546 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-757b4f8459-sdx7j" event={"ID":"00683e5c-17fc-450f-b2b4-7366b2c45aa5","Type":"ContainerDied","Data":"f31ca09f5f303f093e6f2ed36404c2e852ba4fa5400ac58ba28965b70763ec99"}
Nov 25 11:58:10 crc kubenswrapper[4706]: I1125 11:58:10.250574 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-757b4f8459-sdx7j" event={"ID":"00683e5c-17fc-450f-b2b4-7366b2c45aa5","Type":"ContainerStarted","Data":"b3061532f126ece1f7b1e665799c80ff911244947dc65c8d32100828556c70d4"}
Nov 25 11:58:10 crc kubenswrapper[4706]: I1125 11:58:10.270556 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"1dfcf8c4-dafb-4718-b97d-d0b72e9cff85","Type":"ContainerStarted","Data":"eba3a3e01e82d44d4ba0ebcb7523c34819fca45d346861985dd7f352829acda9"}
Nov 25 11:58:10 crc kubenswrapper[4706]: I1125 11:58:10.279811 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"e8b5e2e3-bd67-476c-a80d-555c402d6b10","Type":"ContainerStarted","Data":"dd5a3bbd64fe6166def0fc32e0155c1dd26ca79adaaa8349ca1a30ffbf9fa094"}
Nov 25 11:58:10 crc kubenswrapper[4706]: I1125 11:58:10.286837 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-cdzkl" event={"ID":"b100f787-7064-4cac-b5dc-0267ee51f1aa","Type":"ContainerStarted","Data":"3a709eace25238e86be74d86326ea1f6b1bf19eb76991c148775350e05599dbd"}
Nov 25 11:58:10 crc kubenswrapper[4706]: I1125 11:58:10.298771 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"12913eec-2986-42df-b213-a4466df3001a","Type":"ContainerStarted","Data":"1c3795b4834dd5dc9524530647557b75689f35142af56ed1bd2a454d3722dcf1"}
Nov 25 11:58:10 crc kubenswrapper[4706]: I1125 11:58:10.302644 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"62968efd-c3bc-4ccb-892f-b1479a5da4cc","Type":"ContainerStarted","Data":"cbef7341c4fcb241e42eb0880f344f532e1ef21053656261dd7fde9b1f0406ac"}
Nov 25 11:58:10 crc kubenswrapper[4706]: I1125 11:58:10.307047 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-87sfg" event={"ID":"ca66dab3-01b2-4fac-b6c9-c09b2704a670","Type":"ContainerStarted","Data":"3e39b9703a7a95bf7a14a6f3c9ccd658d28a126819ce8e3d0000d1eaba584128"}
Nov 25 11:58:10 crc kubenswrapper[4706]: I1125 11:58:10.329452 4706 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-cell-mapping-cdzkl" podStartSLOduration=3.329368569 podStartE2EDuration="3.329368569s" podCreationTimestamp="2025-11-25 11:58:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 11:58:10.32267604 +0000 UTC m=+1299.237233441" watchObservedRunningTime="2025-11-25 11:58:10.329368569 +0000 UTC m=+1299.243925950"
Nov 25 11:58:11 crc kubenswrapper[4706]: I1125 11:58:11.330889 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-87sfg" event={"ID":"ca66dab3-01b2-4fac-b6c9-c09b2704a670","Type":"ContainerStarted","Data":"9b1df5c4ecad9cb3a75eba378c36c215fa265f87ab49ad1f7014d8f2630e77ff"}
Nov 25 11:58:11 crc kubenswrapper[4706]: I1125 11:58:11.335139 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-757b4f8459-sdx7j" event={"ID":"00683e5c-17fc-450f-b2b4-7366b2c45aa5","Type":"ContainerStarted","Data":"90da4e447f7329491aef2e8de9b7d3b2e05711e48c916ae2fc14256e76a9eee3"}
Nov 25 11:58:11 crc kubenswrapper[4706]: I1125 11:58:11.351887 4706 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-db-sync-87sfg" podStartSLOduration=2.351869717 podStartE2EDuration="2.351869717s" podCreationTimestamp="2025-11-25 11:58:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 11:58:11.349899477 +0000 UTC m=+1300.264456878" watchObservedRunningTime="2025-11-25 11:58:11.351869717 +0000 UTC m=+1300.266427098"
Nov 25 11:58:11 crc kubenswrapper[4706]: I1125 11:58:11.405460 4706 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-757b4f8459-sdx7j" podStartSLOduration=3.405438828 podStartE2EDuration="3.405438828s" podCreationTimestamp="2025-11-25 11:58:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 11:58:11.398835351 +0000 UTC m=+1300.313392732" watchObservedRunningTime="2025-11-25 11:58:11.405438828 +0000 UTC m=+1300.319996209"
Nov 25 11:58:12 crc kubenswrapper[4706]: I1125 11:58:12.132054 4706 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"]
Nov 25 11:58:12 crc kubenswrapper[4706]: I1125 11:58:12.148663 4706 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"]
Nov 25 11:58:12 crc kubenswrapper[4706]: I1125 11:58:12.293669 4706 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0"
Nov 25 11:58:12 crc kubenswrapper[4706]: I1125 11:58:12.345030 4706 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-757b4f8459-sdx7j"
Nov 25 11:58:14 crc kubenswrapper[4706]: I1125 11:58:14.361252 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"1dfcf8c4-dafb-4718-b97d-d0b72e9cff85","Type":"ContainerStarted","Data":"ef241260c1cbe817bb94689eae45d934ab69fa96a5ffe387e49137fd360175c1"}
Nov 25 11:58:14 crc kubenswrapper[4706]: I1125 11:58:14.363814 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"e8b5e2e3-bd67-476c-a80d-555c402d6b10","Type":"ContainerStarted","Data":"7fcc2fade0cfd4ac61dc8eb95debe757d544a2b64a5ccc888c4bec81573ba0bc"}
Nov 25 11:58:14 crc kubenswrapper[4706]: I1125 11:58:14.363991 4706 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell1-novncproxy-0" podUID="e8b5e2e3-bd67-476c-a80d-555c402d6b10" containerName="nova-cell1-novncproxy-novncproxy" containerID="cri-o://7fcc2fade0cfd4ac61dc8eb95debe757d544a2b64a5ccc888c4bec81573ba0bc" gracePeriod=30
Nov 25 11:58:14 crc kubenswrapper[4706]: I1125 11:58:14.366590 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"12913eec-2986-42df-b213-a4466df3001a","Type":"ContainerStarted","Data":"89d094427d4d3808804db8f8787f68f404f922caf39d0fd9e2ba76214ed40e27"}
Nov 25 11:58:14 crc kubenswrapper[4706]: I1125 11:58:14.366635 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"12913eec-2986-42df-b213-a4466df3001a","Type":"ContainerStarted","Data":"04b9b25d0c5ac65e7e0540c31539d0fb64bc82aa36c7d0a2451becacef408b19"}
Nov 25 11:58:14 crc kubenswrapper[4706]: I1125 11:58:14.366663 4706 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="12913eec-2986-42df-b213-a4466df3001a" containerName="nova-metadata-log" containerID="cri-o://04b9b25d0c5ac65e7e0540c31539d0fb64bc82aa36c7d0a2451becacef408b19" gracePeriod=30
Nov 25 11:58:14 crc kubenswrapper[4706]: I1125 11:58:14.366683 4706 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="12913eec-2986-42df-b213-a4466df3001a" containerName="nova-metadata-metadata" containerID="cri-o://89d094427d4d3808804db8f8787f68f404f922caf39d0fd9e2ba76214ed40e27" gracePeriod=30
Nov 25 11:58:14 crc kubenswrapper[4706]: I1125 11:58:14.368729 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"62968efd-c3bc-4ccb-892f-b1479a5da4cc","Type":"ContainerStarted","Data":"7446f31b337cd4625add204856cf1a631ec9341af4fe1f59547a39610254999f"}
Nov 25 11:58:14 crc kubenswrapper[4706]: I1125 11:58:14.368770 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"62968efd-c3bc-4ccb-892f-b1479a5da4cc","Type":"ContainerStarted","Data":"a2e980f8ad229edb2c569d7035e08209d34cd0fa079ca7c46fdfe3210380545f"}
Nov 25 11:58:14 crc kubenswrapper[4706]: I1125 11:58:14.379918 4706 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.806586475 podStartE2EDuration="6.379900294s" podCreationTimestamp="2025-11-25 11:58:08 +0000 UTC" firstStartedPulling="2025-11-25 11:58:09.567046883 +0000 UTC m=+1298.481604264" lastFinishedPulling="2025-11-25 11:58:13.140360702 +0000 UTC m=+1302.054918083" observedRunningTime="2025-11-25 11:58:14.377172655 +0000 UTC m=+1303.291730036" watchObservedRunningTime="2025-11-25 11:58:14.379900294 +0000 UTC m=+1303.294457675"
Nov 25 11:58:14 crc kubenswrapper[4706]: I1125 11:58:14.406654 4706 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=2.478662483 podStartE2EDuration="6.406634618s" podCreationTimestamp="2025-11-25 11:58:08 +0000 UTC" firstStartedPulling="2025-11-25 11:58:09.211809983 +0000 UTC m=+1298.126367364" lastFinishedPulling="2025-11-25 11:58:13.139782118 +0000 UTC m=+1302.054339499" observedRunningTime="2025-11-25 11:58:14.396895613 +0000 UTC m=+1303.311453014" watchObservedRunningTime="2025-11-25 11:58:14.406634618 +0000 UTC m=+1303.321191999"
Nov 25 11:58:14 crc kubenswrapper[4706]: I1125 11:58:14.415156 4706 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.521435913 podStartE2EDuration="6.415133753s" podCreationTimestamp="2025-11-25 11:58:08 +0000 UTC" firstStartedPulling="2025-11-25 11:58:09.2461672 +0000 UTC m=+1298.160724581" lastFinishedPulling="2025-11-25 11:58:13.13986505 +0000 UTC m=+1302.054422421" observedRunningTime="2025-11-25 11:58:14.414343353 +0000 UTC m=+1303.328900734" watchObservedRunningTime="2025-11-25 11:58:14.415133753 +0000 UTC m=+1303.329691144"
Nov 25 11:58:14 crc kubenswrapper[4706]: I1125 11:58:14.435433 4706 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.562430436 podStartE2EDuration="6.435416844s" podCreationTimestamp="2025-11-25 11:58:08 +0000 UTC" firstStartedPulling="2025-11-25 11:58:09.271878008 +0000 UTC m=+1298.186435389" lastFinishedPulling="2025-11-25 11:58:13.144864416 +0000 UTC m=+1302.059421797" observedRunningTime="2025-11-25 11:58:14.434343927 +0000 UTC m=+1303.348901308" watchObservedRunningTime="2025-11-25 11:58:14.435416844 +0000 UTC m=+1303.349974225"
Nov 25 11:58:14 crc kubenswrapper[4706]: I1125 11:58:14.906361 4706 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Nov 25 11:58:15 crc kubenswrapper[4706]: I1125 11:58:15.038621 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/12913eec-2986-42df-b213-a4466df3001a-config-data\") pod \"12913eec-2986-42df-b213-a4466df3001a\" (UID: \"12913eec-2986-42df-b213-a4466df3001a\") "
Nov 25 11:58:15 crc kubenswrapper[4706]: I1125 11:58:15.038692 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/12913eec-2986-42df-b213-a4466df3001a-logs\") pod \"12913eec-2986-42df-b213-a4466df3001a\" (UID: \"12913eec-2986-42df-b213-a4466df3001a\") "
Nov 25 11:58:15 crc kubenswrapper[4706]: I1125 11:58:15.038759 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/12913eec-2986-42df-b213-a4466df3001a-combined-ca-bundle\") pod \"12913eec-2986-42df-b213-a4466df3001a\" (UID: \"12913eec-2986-42df-b213-a4466df3001a\") "
Nov 25 11:58:15 crc kubenswrapper[4706]: I1125 11:58:15.038832 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pdkjl\" (UniqueName: \"kubernetes.io/projected/12913eec-2986-42df-b213-a4466df3001a-kube-api-access-pdkjl\") pod \"12913eec-2986-42df-b213-a4466df3001a\" (UID: \"12913eec-2986-42df-b213-a4466df3001a\") "
Nov 25 11:58:15 crc kubenswrapper[4706]: I1125 11:58:15.039098 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/12913eec-2986-42df-b213-a4466df3001a-logs" (OuterVolumeSpecName: "logs") pod "12913eec-2986-42df-b213-a4466df3001a" (UID: "12913eec-2986-42df-b213-a4466df3001a"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 25 11:58:15 crc kubenswrapper[4706]: I1125 11:58:15.039854 4706 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/12913eec-2986-42df-b213-a4466df3001a-logs\") on node \"crc\" DevicePath \"\""
Nov 25 11:58:15 crc kubenswrapper[4706]: I1125 11:58:15.044013 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/12913eec-2986-42df-b213-a4466df3001a-kube-api-access-pdkjl" (OuterVolumeSpecName: "kube-api-access-pdkjl") pod "12913eec-2986-42df-b213-a4466df3001a" (UID: "12913eec-2986-42df-b213-a4466df3001a"). InnerVolumeSpecName "kube-api-access-pdkjl". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 25 11:58:15 crc kubenswrapper[4706]: I1125 11:58:15.081817 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/12913eec-2986-42df-b213-a4466df3001a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "12913eec-2986-42df-b213-a4466df3001a" (UID: "12913eec-2986-42df-b213-a4466df3001a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 25 11:58:15 crc kubenswrapper[4706]: I1125 11:58:15.087739 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/12913eec-2986-42df-b213-a4466df3001a-config-data" (OuterVolumeSpecName: "config-data") pod "12913eec-2986-42df-b213-a4466df3001a" (UID: "12913eec-2986-42df-b213-a4466df3001a"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 25 11:58:15 crc kubenswrapper[4706]: I1125 11:58:15.141485 4706 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/12913eec-2986-42df-b213-a4466df3001a-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Nov 25 11:58:15 crc kubenswrapper[4706]: I1125 11:58:15.141528 4706 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pdkjl\" (UniqueName: \"kubernetes.io/projected/12913eec-2986-42df-b213-a4466df3001a-kube-api-access-pdkjl\") on node \"crc\" DevicePath \"\""
Nov 25 11:58:15 crc kubenswrapper[4706]: I1125 11:58:15.141545 4706 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/12913eec-2986-42df-b213-a4466df3001a-config-data\") on node \"crc\" DevicePath \"\""
Nov 25 11:58:15 crc kubenswrapper[4706]: I1125 11:58:15.379529 4706 generic.go:334] "Generic (PLEG): container finished" podID="12913eec-2986-42df-b213-a4466df3001a" containerID="89d094427d4d3808804db8f8787f68f404f922caf39d0fd9e2ba76214ed40e27" exitCode=0
Nov 25 11:58:15 crc kubenswrapper[4706]: I1125 11:58:15.379559 4706 generic.go:334] "Generic (PLEG): container finished" podID="12913eec-2986-42df-b213-a4466df3001a" containerID="04b9b25d0c5ac65e7e0540c31539d0fb64bc82aa36c7d0a2451becacef408b19" exitCode=143
Nov 25 11:58:15 crc kubenswrapper[4706]: I1125 11:58:15.379581 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"12913eec-2986-42df-b213-a4466df3001a","Type":"ContainerDied","Data":"89d094427d4d3808804db8f8787f68f404f922caf39d0fd9e2ba76214ed40e27"}
Nov 25 11:58:15 crc kubenswrapper[4706]: I1125 11:58:15.379603 4706 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Nov 25 11:58:15 crc kubenswrapper[4706]: I1125 11:58:15.379662 4706 scope.go:117] "RemoveContainer" containerID="89d094427d4d3808804db8f8787f68f404f922caf39d0fd9e2ba76214ed40e27"
Nov 25 11:58:15 crc kubenswrapper[4706]: I1125 11:58:15.379648 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"12913eec-2986-42df-b213-a4466df3001a","Type":"ContainerDied","Data":"04b9b25d0c5ac65e7e0540c31539d0fb64bc82aa36c7d0a2451becacef408b19"}
Nov 25 11:58:15 crc kubenswrapper[4706]: I1125 11:58:15.379696 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"12913eec-2986-42df-b213-a4466df3001a","Type":"ContainerDied","Data":"1c3795b4834dd5dc9524530647557b75689f35142af56ed1bd2a454d3722dcf1"}
Nov 25 11:58:15 crc kubenswrapper[4706]: I1125 11:58:15.416447 4706 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"]
Nov 25 11:58:15 crc kubenswrapper[4706]: I1125 11:58:15.418493 4706 scope.go:117] "RemoveContainer" containerID="04b9b25d0c5ac65e7e0540c31539d0fb64bc82aa36c7d0a2451becacef408b19"
Nov 25 11:58:15 crc kubenswrapper[4706]: I1125 11:58:15.436777 4706 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"]
Nov 25 11:58:15 crc kubenswrapper[4706]: I1125 11:58:15.440577 4706 scope.go:117] "RemoveContainer" containerID="89d094427d4d3808804db8f8787f68f404f922caf39d0fd9e2ba76214ed40e27"
Nov 25 11:58:15 crc kubenswrapper[4706]: E1125 11:58:15.440955 4706 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"89d094427d4d3808804db8f8787f68f404f922caf39d0fd9e2ba76214ed40e27\": container with ID starting with 89d094427d4d3808804db8f8787f68f404f922caf39d0fd9e2ba76214ed40e27 not found: ID does not exist" containerID="89d094427d4d3808804db8f8787f68f404f922caf39d0fd9e2ba76214ed40e27"
Nov 25 11:58:15 crc kubenswrapper[4706]: I1125 11:58:15.440983 4706 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"89d094427d4d3808804db8f8787f68f404f922caf39d0fd9e2ba76214ed40e27"} err="failed to get container status \"89d094427d4d3808804db8f8787f68f404f922caf39d0fd9e2ba76214ed40e27\": rpc error: code = NotFound desc = could not find container \"89d094427d4d3808804db8f8787f68f404f922caf39d0fd9e2ba76214ed40e27\": container with ID starting with 89d094427d4d3808804db8f8787f68f404f922caf39d0fd9e2ba76214ed40e27 not found: ID does not exist"
Nov 25 11:58:15 crc kubenswrapper[4706]: I1125 11:58:15.441004 4706 scope.go:117] "RemoveContainer" containerID="04b9b25d0c5ac65e7e0540c31539d0fb64bc82aa36c7d0a2451becacef408b19"
Nov 25 11:58:15 crc kubenswrapper[4706]: E1125 11:58:15.441169 4706 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"04b9b25d0c5ac65e7e0540c31539d0fb64bc82aa36c7d0a2451becacef408b19\": container with ID starting with 04b9b25d0c5ac65e7e0540c31539d0fb64bc82aa36c7d0a2451becacef408b19 not found: ID does not exist" containerID="04b9b25d0c5ac65e7e0540c31539d0fb64bc82aa36c7d0a2451becacef408b19"
Nov 25 11:58:15 crc kubenswrapper[4706]: I1125 11:58:15.441185 4706 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"04b9b25d0c5ac65e7e0540c31539d0fb64bc82aa36c7d0a2451becacef408b19"} err="failed to get container status \"04b9b25d0c5ac65e7e0540c31539d0fb64bc82aa36c7d0a2451becacef408b19\": rpc error: code = NotFound desc = could not find container \"04b9b25d0c5ac65e7e0540c31539d0fb64bc82aa36c7d0a2451becacef408b19\": container with ID starting with 04b9b25d0c5ac65e7e0540c31539d0fb64bc82aa36c7d0a2451becacef408b19 not found: ID does not exist"
Nov 25 11:58:15 crc kubenswrapper[4706]: I1125 11:58:15.441197 4706 scope.go:117] "RemoveContainer" containerID="89d094427d4d3808804db8f8787f68f404f922caf39d0fd9e2ba76214ed40e27"
Nov 25 11:58:15 crc kubenswrapper[4706]: I1125 11:58:15.441406 4706 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"89d094427d4d3808804db8f8787f68f404f922caf39d0fd9e2ba76214ed40e27"} err="failed to get container status \"89d094427d4d3808804db8f8787f68f404f922caf39d0fd9e2ba76214ed40e27\": rpc error: code = NotFound desc = could not find container \"89d094427d4d3808804db8f8787f68f404f922caf39d0fd9e2ba76214ed40e27\": container with ID starting with 89d094427d4d3808804db8f8787f68f404f922caf39d0fd9e2ba76214ed40e27 not found: ID does not exist"
Nov 25 11:58:15 crc kubenswrapper[4706]: I1125 11:58:15.441420 4706 scope.go:117] "RemoveContainer" containerID="04b9b25d0c5ac65e7e0540c31539d0fb64bc82aa36c7d0a2451becacef408b19"
Nov 25 11:58:15 crc kubenswrapper[4706]: I1125 11:58:15.441593 4706 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"04b9b25d0c5ac65e7e0540c31539d0fb64bc82aa36c7d0a2451becacef408b19"} err="failed to get container status \"04b9b25d0c5ac65e7e0540c31539d0fb64bc82aa36c7d0a2451becacef408b19\": rpc error: code = NotFound desc = could not find container \"04b9b25d0c5ac65e7e0540c31539d0fb64bc82aa36c7d0a2451becacef408b19\": container with ID starting with 04b9b25d0c5ac65e7e0540c31539d0fb64bc82aa36c7d0a2451becacef408b19 not found: ID does not exist"
Nov 25 11:58:15 crc kubenswrapper[4706]: I1125 11:58:15.446907 4706 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"]
Nov 25 11:58:15 crc kubenswrapper[4706]: E1125 11:58:15.447448 4706 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="12913eec-2986-42df-b213-a4466df3001a" containerName="nova-metadata-log"
Nov 25 11:58:15 crc kubenswrapper[4706]: I1125 11:58:15.447474 4706 state_mem.go:107] "Deleted CPUSet assignment" podUID="12913eec-2986-42df-b213-a4466df3001a" containerName="nova-metadata-log"
Nov 25 11:58:15 crc kubenswrapper[4706]: E1125 11:58:15.447503 4706 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="12913eec-2986-42df-b213-a4466df3001a" containerName="nova-metadata-metadata"
Nov 25 11:58:15 crc kubenswrapper[4706]: I1125 11:58:15.447514 4706 state_mem.go:107] "Deleted CPUSet assignment" podUID="12913eec-2986-42df-b213-a4466df3001a" containerName="nova-metadata-metadata"
Nov 25 11:58:15 crc kubenswrapper[4706]: I1125 11:58:15.447795 4706 memory_manager.go:354] "RemoveStaleState removing state" podUID="12913eec-2986-42df-b213-a4466df3001a" containerName="nova-metadata-metadata"
Nov 25 11:58:15 crc kubenswrapper[4706]: I1125 11:58:15.447820 4706 memory_manager.go:354] "RemoveStaleState removing state" podUID="12913eec-2986-42df-b213-a4466df3001a" containerName="nova-metadata-log"
Nov 25 11:58:15 crc kubenswrapper[4706]: I1125 11:58:15.448985 4706 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Nov 25 11:58:15 crc kubenswrapper[4706]: I1125 11:58:15.454267 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data"
Nov 25 11:58:15 crc kubenswrapper[4706]: I1125 11:58:15.454412 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc"
Nov 25 11:58:15 crc kubenswrapper[4706]: I1125 11:58:15.463459 4706 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"]
Nov 25 11:58:15 crc kubenswrapper[4706]: I1125 11:58:15.651083 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7rlrn\" (UniqueName: \"kubernetes.io/projected/4f4468bc-ad45-4f59-8911-b4fc57f942d3-kube-api-access-7rlrn\") pod \"nova-metadata-0\" (UID: \"4f4468bc-ad45-4f59-8911-b4fc57f942d3\") " pod="openstack/nova-metadata-0"
Nov 25 11:58:15 crc kubenswrapper[4706]: I1125 11:58:15.651132 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/4f4468bc-ad45-4f59-8911-b4fc57f942d3-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"4f4468bc-ad45-4f59-8911-b4fc57f942d3\") " pod="openstack/nova-metadata-0"
Nov 25 11:58:15 crc kubenswrapper[4706]: I1125 11:58:15.651316 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4f4468bc-ad45-4f59-8911-b4fc57f942d3-config-data\") pod \"nova-metadata-0\" (UID: \"4f4468bc-ad45-4f59-8911-b4fc57f942d3\") " pod="openstack/nova-metadata-0"
Nov 25 11:58:15 crc kubenswrapper[4706]: I1125 11:58:15.651436 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4f4468bc-ad45-4f59-8911-b4fc57f942d3-logs\") pod \"nova-metadata-0\" (UID: \"4f4468bc-ad45-4f59-8911-b4fc57f942d3\") " pod="openstack/nova-metadata-0"
Nov 25 11:58:15 crc kubenswrapper[4706]: I1125 11:58:15.651599 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4f4468bc-ad45-4f59-8911-b4fc57f942d3-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"4f4468bc-ad45-4f59-8911-b4fc57f942d3\") " pod="openstack/nova-metadata-0"
Nov 25 11:58:15 crc kubenswrapper[4706]: I1125 11:58:15.753903 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7rlrn\" (UniqueName: \"kubernetes.io/projected/4f4468bc-ad45-4f59-8911-b4fc57f942d3-kube-api-access-7rlrn\") pod \"nova-metadata-0\" (UID: \"4f4468bc-ad45-4f59-8911-b4fc57f942d3\") " pod="openstack/nova-metadata-0"
Nov 25 11:58:15 crc kubenswrapper[4706]: I1125 11:58:15.753966 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/4f4468bc-ad45-4f59-8911-b4fc57f942d3-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"4f4468bc-ad45-4f59-8911-b4fc57f942d3\") " pod="openstack/nova-metadata-0"
Nov 25 11:58:15 crc kubenswrapper[4706]: I1125 11:58:15.754031 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4f4468bc-ad45-4f59-8911-b4fc57f942d3-config-data\") pod \"nova-metadata-0\" (UID: \"4f4468bc-ad45-4f59-8911-b4fc57f942d3\") " pod="openstack/nova-metadata-0"
Nov 25 11:58:15 crc kubenswrapper[4706]: I1125 11:58:15.754073 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4f4468bc-ad45-4f59-8911-b4fc57f942d3-logs\") pod \"nova-metadata-0\" (UID: \"4f4468bc-ad45-4f59-8911-b4fc57f942d3\") " pod="openstack/nova-metadata-0"
Nov 25 11:58:15 crc kubenswrapper[4706]: I1125 11:58:15.754142 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4f4468bc-ad45-4f59-8911-b4fc57f942d3-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"4f4468bc-ad45-4f59-8911-b4fc57f942d3\") " pod="openstack/nova-metadata-0"
Nov 25 11:58:15 crc kubenswrapper[4706]: I1125 11:58:15.754804 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4f4468bc-ad45-4f59-8911-b4fc57f942d3-logs\") pod \"nova-metadata-0\" (UID: \"4f4468bc-ad45-4f59-8911-b4fc57f942d3\") " pod="openstack/nova-metadata-0"
Nov 25 11:58:15 crc kubenswrapper[4706]: I1125 11:58:15.759443 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4f4468bc-ad45-4f59-8911-b4fc57f942d3-config-data\") pod \"nova-metadata-0\" (UID: \"4f4468bc-ad45-4f59-8911-b4fc57f942d3\") " pod="openstack/nova-metadata-0"
Nov 25 11:58:15 crc kubenswrapper[4706]: I1125 11:58:15.759289 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4f4468bc-ad45-4f59-8911-b4fc57f942d3-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"4f4468bc-ad45-4f59-8911-b4fc57f942d3\") " pod="openstack/nova-metadata-0"
Nov 25 11:58:15 crc kubenswrapper[4706]: I1125 11:58:15.761959 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/4f4468bc-ad45-4f59-8911-b4fc57f942d3-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"4f4468bc-ad45-4f59-8911-b4fc57f942d3\") " pod="openstack/nova-metadata-0"
Nov 25 11:58:15 crc kubenswrapper[4706]: I1125 11:58:15.774085 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7rlrn\" (UniqueName: \"kubernetes.io/projected/4f4468bc-ad45-4f59-8911-b4fc57f942d3-kube-api-access-7rlrn\") pod \"nova-metadata-0\" (UID: \"4f4468bc-ad45-4f59-8911-b4fc57f942d3\") " pod="openstack/nova-metadata-0"
Nov 25 11:58:15 crc kubenswrapper[4706]: I1125 11:58:15.934131 4706 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="12913eec-2986-42df-b213-a4466df3001a" path="/var/lib/kubelet/pods/12913eec-2986-42df-b213-a4466df3001a/volumes"
Nov 25 11:58:16 crc kubenswrapper[4706]: I1125 11:58:16.067888 4706 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Nov 25 11:58:16 crc kubenswrapper[4706]: I1125 11:58:16.569146 4706 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"]
Nov 25 11:58:16 crc kubenswrapper[4706]: I1125 11:58:16.570634 4706 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/kube-state-metrics-0" podUID="36bf3efe-847b-4896-878f-1f06e582bf01" containerName="kube-state-metrics" containerID="cri-o://e3a2aec33179eda68bbe52b4ebd5be3cb84488f80e0c9546e1dbb54750bc1521" gracePeriod=30
Nov 25 11:58:16 crc kubenswrapper[4706]: I1125 11:58:16.596888 4706 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"]
Nov 25 11:58:16 crc kubenswrapper[4706]: W1125 11:58:16.646995 4706 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4f4468bc_ad45_4f59_8911_b4fc57f942d3.slice/crio-7026a4f851fc8a0df9d2e39c36b952493c64f2eb0aca8563040947c91b31c524 WatchSource:0}: Error finding container 7026a4f851fc8a0df9d2e39c36b952493c64f2eb0aca8563040947c91b31c524: Status 404 returned error can't find the container with id 7026a4f851fc8a0df9d2e39c36b952493c64f2eb0aca8563040947c91b31c524
Nov 25 11:58:17 crc kubenswrapper[4706]: I1125 11:58:17.067937 4706 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0"
Nov 25 11:58:17 crc kubenswrapper[4706]: I1125 11:58:17.220601 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-znrhd\" (UniqueName: \"kubernetes.io/projected/36bf3efe-847b-4896-878f-1f06e582bf01-kube-api-access-znrhd\") pod \"36bf3efe-847b-4896-878f-1f06e582bf01\" (UID: \"36bf3efe-847b-4896-878f-1f06e582bf01\") "
Nov 25 11:58:17 crc kubenswrapper[4706]: I1125 11:58:17.224279 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/36bf3efe-847b-4896-878f-1f06e582bf01-kube-api-access-znrhd" (OuterVolumeSpecName: "kube-api-access-znrhd") pod "36bf3efe-847b-4896-878f-1f06e582bf01" (UID: "36bf3efe-847b-4896-878f-1f06e582bf01"). InnerVolumeSpecName "kube-api-access-znrhd". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 25 11:58:17 crc kubenswrapper[4706]: I1125 11:58:17.322911 4706 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-znrhd\" (UniqueName: \"kubernetes.io/projected/36bf3efe-847b-4896-878f-1f06e582bf01-kube-api-access-znrhd\") on node \"crc\" DevicePath \"\""
Nov 25 11:58:17 crc kubenswrapper[4706]: I1125 11:58:17.400015 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"4f4468bc-ad45-4f59-8911-b4fc57f942d3","Type":"ContainerStarted","Data":"cab34efea932a65c680b98a6e984792d8740bde207155d06296875f67e17d10a"}
Nov 25 11:58:17 crc kubenswrapper[4706]: I1125 11:58:17.400083 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"4f4468bc-ad45-4f59-8911-b4fc57f942d3","Type":"ContainerStarted","Data":"624d229f27af565807b3463cc2c8ccd6f46422b6125fec91409e8b571d65a8ab"}
Nov 25 11:58:17 crc kubenswrapper[4706]: I1125 11:58:17.400105 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"4f4468bc-ad45-4f59-8911-b4fc57f942d3","Type":"ContainerStarted","Data":"7026a4f851fc8a0df9d2e39c36b952493c64f2eb0aca8563040947c91b31c524"}
Nov 25 11:58:17 crc kubenswrapper[4706]: I1125 11:58:17.402246 4706 generic.go:334] "Generic (PLEG): container finished" podID="36bf3efe-847b-4896-878f-1f06e582bf01" containerID="e3a2aec33179eda68bbe52b4ebd5be3cb84488f80e0c9546e1dbb54750bc1521" exitCode=2
Nov 25 11:58:17 crc kubenswrapper[4706]: I1125 11:58:17.402352 4706 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0"
Nov 25 11:58:17 crc kubenswrapper[4706]: I1125 11:58:17.403023 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"36bf3efe-847b-4896-878f-1f06e582bf01","Type":"ContainerDied","Data":"e3a2aec33179eda68bbe52b4ebd5be3cb84488f80e0c9546e1dbb54750bc1521"}
Nov 25 11:58:17 crc kubenswrapper[4706]: I1125 11:58:17.403073 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"36bf3efe-847b-4896-878f-1f06e582bf01","Type":"ContainerDied","Data":"9969127691be8ba0b6f14ea55005e7b6663f2b9d0e14d10df92856e820083c36"}
Nov 25 11:58:17 crc kubenswrapper[4706]: I1125 11:58:17.403089 4706 scope.go:117] "RemoveContainer" containerID="e3a2aec33179eda68bbe52b4ebd5be3cb84488f80e0c9546e1dbb54750bc1521"
Nov 25 11:58:17 crc kubenswrapper[4706]: I1125 11:58:17.405615 4706 generic.go:334] "Generic (PLEG): container finished" podID="b100f787-7064-4cac-b5dc-0267ee51f1aa" containerID="3a709eace25238e86be74d86326ea1f6b1bf19eb76991c148775350e05599dbd" exitCode=0
Nov 25 11:58:17 crc kubenswrapper[4706]: I1125 11:58:17.405662 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-cdzkl" event={"ID":"b100f787-7064-4cac-b5dc-0267ee51f1aa","Type":"ContainerDied","Data":"3a709eace25238e86be74d86326ea1f6b1bf19eb76991c148775350e05599dbd"}
Nov 25 11:58:17 crc kubenswrapper[4706]: I1125
11:58:17.421292 4706 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.421275738 podStartE2EDuration="2.421275738s" podCreationTimestamp="2025-11-25 11:58:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 11:58:17.41899629 +0000 UTC m=+1306.333553691" watchObservedRunningTime="2025-11-25 11:58:17.421275738 +0000 UTC m=+1306.335833119" Nov 25 11:58:17 crc kubenswrapper[4706]: I1125 11:58:17.429324 4706 scope.go:117] "RemoveContainer" containerID="e3a2aec33179eda68bbe52b4ebd5be3cb84488f80e0c9546e1dbb54750bc1521" Nov 25 11:58:17 crc kubenswrapper[4706]: E1125 11:58:17.429850 4706 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e3a2aec33179eda68bbe52b4ebd5be3cb84488f80e0c9546e1dbb54750bc1521\": container with ID starting with e3a2aec33179eda68bbe52b4ebd5be3cb84488f80e0c9546e1dbb54750bc1521 not found: ID does not exist" containerID="e3a2aec33179eda68bbe52b4ebd5be3cb84488f80e0c9546e1dbb54750bc1521" Nov 25 11:58:17 crc kubenswrapper[4706]: I1125 11:58:17.429893 4706 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e3a2aec33179eda68bbe52b4ebd5be3cb84488f80e0c9546e1dbb54750bc1521"} err="failed to get container status \"e3a2aec33179eda68bbe52b4ebd5be3cb84488f80e0c9546e1dbb54750bc1521\": rpc error: code = NotFound desc = could not find container \"e3a2aec33179eda68bbe52b4ebd5be3cb84488f80e0c9546e1dbb54750bc1521\": container with ID starting with e3a2aec33179eda68bbe52b4ebd5be3cb84488f80e0c9546e1dbb54750bc1521 not found: ID does not exist" Nov 25 11:58:17 crc kubenswrapper[4706]: I1125 11:58:17.437923 4706 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Nov 25 11:58:17 crc kubenswrapper[4706]: I1125 11:58:17.450351 4706 kubelet.go:2431] "SyncLoop 
REMOVE" source="api" pods=["openstack/kube-state-metrics-0"] Nov 25 11:58:17 crc kubenswrapper[4706]: I1125 11:58:17.470631 4706 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Nov 25 11:58:17 crc kubenswrapper[4706]: E1125 11:58:17.471189 4706 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="36bf3efe-847b-4896-878f-1f06e582bf01" containerName="kube-state-metrics" Nov 25 11:58:17 crc kubenswrapper[4706]: I1125 11:58:17.471216 4706 state_mem.go:107] "Deleted CPUSet assignment" podUID="36bf3efe-847b-4896-878f-1f06e582bf01" containerName="kube-state-metrics" Nov 25 11:58:17 crc kubenswrapper[4706]: I1125 11:58:17.471471 4706 memory_manager.go:354] "RemoveStaleState removing state" podUID="36bf3efe-847b-4896-878f-1f06e582bf01" containerName="kube-state-metrics" Nov 25 11:58:17 crc kubenswrapper[4706]: I1125 11:58:17.472185 4706 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Nov 25 11:58:17 crc kubenswrapper[4706]: I1125 11:58:17.474211 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-kube-state-metrics-svc" Nov 25 11:58:17 crc kubenswrapper[4706]: I1125 11:58:17.474568 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"kube-state-metrics-tls-config" Nov 25 11:58:17 crc kubenswrapper[4706]: I1125 11:58:17.477822 4706 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Nov 25 11:58:17 crc kubenswrapper[4706]: I1125 11:58:17.628577 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-phqgs\" (UniqueName: \"kubernetes.io/projected/04e7a5d0-b5fe-4a58-b015-339cc1218c6e-kube-api-access-phqgs\") pod \"kube-state-metrics-0\" (UID: \"04e7a5d0-b5fe-4a58-b015-339cc1218c6e\") " pod="openstack/kube-state-metrics-0" Nov 25 11:58:17 crc kubenswrapper[4706]: I1125 11:58:17.628774 4706 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/04e7a5d0-b5fe-4a58-b015-339cc1218c6e-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"04e7a5d0-b5fe-4a58-b015-339cc1218c6e\") " pod="openstack/kube-state-metrics-0" Nov 25 11:58:17 crc kubenswrapper[4706]: I1125 11:58:17.628807 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/04e7a5d0-b5fe-4a58-b015-339cc1218c6e-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"04e7a5d0-b5fe-4a58-b015-339cc1218c6e\") " pod="openstack/kube-state-metrics-0" Nov 25 11:58:17 crc kubenswrapper[4706]: I1125 11:58:17.629096 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/04e7a5d0-b5fe-4a58-b015-339cc1218c6e-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"04e7a5d0-b5fe-4a58-b015-339cc1218c6e\") " pod="openstack/kube-state-metrics-0" Nov 25 11:58:17 crc kubenswrapper[4706]: I1125 11:58:17.731283 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/04e7a5d0-b5fe-4a58-b015-339cc1218c6e-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"04e7a5d0-b5fe-4a58-b015-339cc1218c6e\") " pod="openstack/kube-state-metrics-0" Nov 25 11:58:17 crc kubenswrapper[4706]: I1125 11:58:17.731343 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/04e7a5d0-b5fe-4a58-b015-339cc1218c6e-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"04e7a5d0-b5fe-4a58-b015-339cc1218c6e\") " pod="openstack/kube-state-metrics-0" Nov 25 11:58:17 crc kubenswrapper[4706]: I1125 
11:58:17.731409 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/04e7a5d0-b5fe-4a58-b015-339cc1218c6e-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"04e7a5d0-b5fe-4a58-b015-339cc1218c6e\") " pod="openstack/kube-state-metrics-0" Nov 25 11:58:17 crc kubenswrapper[4706]: I1125 11:58:17.731457 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-phqgs\" (UniqueName: \"kubernetes.io/projected/04e7a5d0-b5fe-4a58-b015-339cc1218c6e-kube-api-access-phqgs\") pod \"kube-state-metrics-0\" (UID: \"04e7a5d0-b5fe-4a58-b015-339cc1218c6e\") " pod="openstack/kube-state-metrics-0" Nov 25 11:58:17 crc kubenswrapper[4706]: I1125 11:58:17.736641 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/04e7a5d0-b5fe-4a58-b015-339cc1218c6e-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"04e7a5d0-b5fe-4a58-b015-339cc1218c6e\") " pod="openstack/kube-state-metrics-0" Nov 25 11:58:17 crc kubenswrapper[4706]: I1125 11:58:17.736918 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/04e7a5d0-b5fe-4a58-b015-339cc1218c6e-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"04e7a5d0-b5fe-4a58-b015-339cc1218c6e\") " pod="openstack/kube-state-metrics-0" Nov 25 11:58:17 crc kubenswrapper[4706]: I1125 11:58:17.737541 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/04e7a5d0-b5fe-4a58-b015-339cc1218c6e-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"04e7a5d0-b5fe-4a58-b015-339cc1218c6e\") " pod="openstack/kube-state-metrics-0" Nov 25 11:58:17 crc kubenswrapper[4706]: I1125 11:58:17.751477 4706 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-phqgs\" (UniqueName: \"kubernetes.io/projected/04e7a5d0-b5fe-4a58-b015-339cc1218c6e-kube-api-access-phqgs\") pod \"kube-state-metrics-0\" (UID: \"04e7a5d0-b5fe-4a58-b015-339cc1218c6e\") " pod="openstack/kube-state-metrics-0" Nov 25 11:58:17 crc kubenswrapper[4706]: I1125 11:58:17.844337 4706 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Nov 25 11:58:17 crc kubenswrapper[4706]: I1125 11:58:17.941559 4706 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="36bf3efe-847b-4896-878f-1f06e582bf01" path="/var/lib/kubelet/pods/36bf3efe-847b-4896-878f-1f06e582bf01/volumes" Nov 25 11:58:18 crc kubenswrapper[4706]: I1125 11:58:18.299144 4706 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Nov 25 11:58:18 crc kubenswrapper[4706]: W1125 11:58:18.306489 4706 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod04e7a5d0_b5fe_4a58_b015_339cc1218c6e.slice/crio-0baffb565d9d50be808aa1ed5034d4fe13109159d37353886291190ebf008dd7 WatchSource:0}: Error finding container 0baffb565d9d50be808aa1ed5034d4fe13109159d37353886291190ebf008dd7: Status 404 returned error can't find the container with id 0baffb565d9d50be808aa1ed5034d4fe13109159d37353886291190ebf008dd7 Nov 25 11:58:18 crc kubenswrapper[4706]: I1125 11:58:18.416780 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"04e7a5d0-b5fe-4a58-b015-339cc1218c6e","Type":"ContainerStarted","Data":"0baffb565d9d50be808aa1ed5034d4fe13109159d37353886291190ebf008dd7"} Nov 25 11:58:18 crc kubenswrapper[4706]: I1125 11:58:18.418931 4706 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 25 11:58:18 crc kubenswrapper[4706]: I1125 11:58:18.419437 4706 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openstack/ceilometer-0" podUID="3df2b1f2-0fee-454e-a77d-8ae5ce76ed9f" containerName="ceilometer-central-agent" containerID="cri-o://f3f8cd889caa95db731df251888a7c1a3ce9d080796aa96191596b79dd853b9b" gracePeriod=30 Nov 25 11:58:18 crc kubenswrapper[4706]: I1125 11:58:18.420117 4706 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="3df2b1f2-0fee-454e-a77d-8ae5ce76ed9f" containerName="proxy-httpd" containerID="cri-o://67864c33547591b87be529165564a21dc3207d413ee9736f09fce07b61e0f127" gracePeriod=30 Nov 25 11:58:18 crc kubenswrapper[4706]: I1125 11:58:18.420264 4706 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="3df2b1f2-0fee-454e-a77d-8ae5ce76ed9f" containerName="sg-core" containerID="cri-o://53ab2df770b270d546ef9e435e3a0f4ec580df8b785873c38f798f12f2668394" gracePeriod=30 Nov 25 11:58:18 crc kubenswrapper[4706]: I1125 11:58:18.420405 4706 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="3df2b1f2-0fee-454e-a77d-8ae5ce76ed9f" containerName="ceilometer-notification-agent" containerID="cri-o://9d0124bcc1ee48b4329bb8703782a460504d628f4b5406382971aded6556e60a" gracePeriod=30 Nov 25 11:58:18 crc kubenswrapper[4706]: I1125 11:58:18.436419 4706 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Nov 25 11:58:18 crc kubenswrapper[4706]: I1125 11:58:18.537590 4706 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Nov 25 11:58:18 crc kubenswrapper[4706]: I1125 11:58:18.537639 4706 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Nov 25 11:58:18 crc kubenswrapper[4706]: I1125 11:58:18.791876 4706 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-cdzkl" Nov 25 11:58:18 crc kubenswrapper[4706]: I1125 11:58:18.852458 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b100f787-7064-4cac-b5dc-0267ee51f1aa-scripts\") pod \"b100f787-7064-4cac-b5dc-0267ee51f1aa\" (UID: \"b100f787-7064-4cac-b5dc-0267ee51f1aa\") " Nov 25 11:58:18 crc kubenswrapper[4706]: I1125 11:58:18.852872 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b100f787-7064-4cac-b5dc-0267ee51f1aa-combined-ca-bundle\") pod \"b100f787-7064-4cac-b5dc-0267ee51f1aa\" (UID: \"b100f787-7064-4cac-b5dc-0267ee51f1aa\") " Nov 25 11:58:18 crc kubenswrapper[4706]: I1125 11:58:18.853350 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sbzf9\" (UniqueName: \"kubernetes.io/projected/b100f787-7064-4cac-b5dc-0267ee51f1aa-kube-api-access-sbzf9\") pod \"b100f787-7064-4cac-b5dc-0267ee51f1aa\" (UID: \"b100f787-7064-4cac-b5dc-0267ee51f1aa\") " Nov 25 11:58:18 crc kubenswrapper[4706]: I1125 11:58:18.853431 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b100f787-7064-4cac-b5dc-0267ee51f1aa-config-data\") pod \"b100f787-7064-4cac-b5dc-0267ee51f1aa\" (UID: \"b100f787-7064-4cac-b5dc-0267ee51f1aa\") " Nov 25 11:58:18 crc kubenswrapper[4706]: I1125 11:58:18.861422 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b100f787-7064-4cac-b5dc-0267ee51f1aa-scripts" (OuterVolumeSpecName: "scripts") pod "b100f787-7064-4cac-b5dc-0267ee51f1aa" (UID: "b100f787-7064-4cac-b5dc-0267ee51f1aa"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 11:58:18 crc kubenswrapper[4706]: I1125 11:58:18.861783 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b100f787-7064-4cac-b5dc-0267ee51f1aa-kube-api-access-sbzf9" (OuterVolumeSpecName: "kube-api-access-sbzf9") pod "b100f787-7064-4cac-b5dc-0267ee51f1aa" (UID: "b100f787-7064-4cac-b5dc-0267ee51f1aa"). InnerVolumeSpecName "kube-api-access-sbzf9". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 11:58:18 crc kubenswrapper[4706]: I1125 11:58:18.888753 4706 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Nov 25 11:58:18 crc kubenswrapper[4706]: I1125 11:58:18.892812 4706 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Nov 25 11:58:18 crc kubenswrapper[4706]: I1125 11:58:18.896905 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b100f787-7064-4cac-b5dc-0267ee51f1aa-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b100f787-7064-4cac-b5dc-0267ee51f1aa" (UID: "b100f787-7064-4cac-b5dc-0267ee51f1aa"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 11:58:18 crc kubenswrapper[4706]: I1125 11:58:18.903460 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b100f787-7064-4cac-b5dc-0267ee51f1aa-config-data" (OuterVolumeSpecName: "config-data") pod "b100f787-7064-4cac-b5dc-0267ee51f1aa" (UID: "b100f787-7064-4cac-b5dc-0267ee51f1aa"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 11:58:18 crc kubenswrapper[4706]: I1125 11:58:18.904467 4706 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-757b4f8459-sdx7j" Nov 25 11:58:18 crc kubenswrapper[4706]: I1125 11:58:18.956504 4706 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sbzf9\" (UniqueName: \"kubernetes.io/projected/b100f787-7064-4cac-b5dc-0267ee51f1aa-kube-api-access-sbzf9\") on node \"crc\" DevicePath \"\"" Nov 25 11:58:18 crc kubenswrapper[4706]: I1125 11:58:18.956548 4706 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b100f787-7064-4cac-b5dc-0267ee51f1aa-config-data\") on node \"crc\" DevicePath \"\"" Nov 25 11:58:18 crc kubenswrapper[4706]: I1125 11:58:18.956561 4706 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b100f787-7064-4cac-b5dc-0267ee51f1aa-scripts\") on node \"crc\" DevicePath \"\"" Nov 25 11:58:18 crc kubenswrapper[4706]: I1125 11:58:18.956576 4706 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b100f787-7064-4cac-b5dc-0267ee51f1aa-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 11:58:18 crc kubenswrapper[4706]: I1125 11:58:18.978638 4706 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Nov 25 11:58:18 crc kubenswrapper[4706]: I1125 11:58:18.989681 4706 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5c9776ccc5-kv96j"] Nov 25 11:58:18 crc kubenswrapper[4706]: I1125 11:58:18.989889 4706 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5c9776ccc5-kv96j" podUID="9d560e53-d5ef-4b6b-af31-d1b5856dbf47" containerName="dnsmasq-dns" containerID="cri-o://7f2e50c7556c207faec757081b15999603dd75cd8b3f0374eb95524e497fdc26" gracePeriod=10 Nov 25 
11:58:19 crc kubenswrapper[4706]: I1125 11:58:19.421606 4706 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5c9776ccc5-kv96j" Nov 25 11:58:19 crc kubenswrapper[4706]: I1125 11:58:19.429326 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-cdzkl" event={"ID":"b100f787-7064-4cac-b5dc-0267ee51f1aa","Type":"ContainerDied","Data":"b70097cf36759439d3ec9656d7bbbb87f9553959adf613fbc9857718ec80cdfb"} Nov 25 11:58:19 crc kubenswrapper[4706]: I1125 11:58:19.429372 4706 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b70097cf36759439d3ec9656d7bbbb87f9553959adf613fbc9857718ec80cdfb" Nov 25 11:58:19 crc kubenswrapper[4706]: I1125 11:58:19.429428 4706 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-cdzkl" Nov 25 11:58:19 crc kubenswrapper[4706]: I1125 11:58:19.433205 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"04e7a5d0-b5fe-4a58-b015-339cc1218c6e","Type":"ContainerStarted","Data":"a43b93079f480147c92a5dbde6cde7fc167fb5a7be0101bce13f968d8af9b936"} Nov 25 11:58:19 crc kubenswrapper[4706]: I1125 11:58:19.433986 4706 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Nov 25 11:58:19 crc kubenswrapper[4706]: I1125 11:58:19.440581 4706 generic.go:334] "Generic (PLEG): container finished" podID="9d560e53-d5ef-4b6b-af31-d1b5856dbf47" containerID="7f2e50c7556c207faec757081b15999603dd75cd8b3f0374eb95524e497fdc26" exitCode=0 Nov 25 11:58:19 crc kubenswrapper[4706]: I1125 11:58:19.440686 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c9776ccc5-kv96j" event={"ID":"9d560e53-d5ef-4b6b-af31-d1b5856dbf47","Type":"ContainerDied","Data":"7f2e50c7556c207faec757081b15999603dd75cd8b3f0374eb95524e497fdc26"} Nov 25 11:58:19 crc kubenswrapper[4706]: I1125 
11:58:19.440713 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c9776ccc5-kv96j" event={"ID":"9d560e53-d5ef-4b6b-af31-d1b5856dbf47","Type":"ContainerDied","Data":"4dcce9be8e09ecea236f24ee7576fed32a1655d2b9e2046d7ffd735c91b0e3a8"} Nov 25 11:58:19 crc kubenswrapper[4706]: I1125 11:58:19.440731 4706 scope.go:117] "RemoveContainer" containerID="7f2e50c7556c207faec757081b15999603dd75cd8b3f0374eb95524e497fdc26" Nov 25 11:58:19 crc kubenswrapper[4706]: I1125 11:58:19.441072 4706 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5c9776ccc5-kv96j" Nov 25 11:58:19 crc kubenswrapper[4706]: I1125 11:58:19.463779 4706 generic.go:334] "Generic (PLEG): container finished" podID="3df2b1f2-0fee-454e-a77d-8ae5ce76ed9f" containerID="67864c33547591b87be529165564a21dc3207d413ee9736f09fce07b61e0f127" exitCode=0 Nov 25 11:58:19 crc kubenswrapper[4706]: I1125 11:58:19.463821 4706 generic.go:334] "Generic (PLEG): container finished" podID="3df2b1f2-0fee-454e-a77d-8ae5ce76ed9f" containerID="53ab2df770b270d546ef9e435e3a0f4ec580df8b785873c38f798f12f2668394" exitCode=2 Nov 25 11:58:19 crc kubenswrapper[4706]: I1125 11:58:19.463831 4706 generic.go:334] "Generic (PLEG): container finished" podID="3df2b1f2-0fee-454e-a77d-8ae5ce76ed9f" containerID="f3f8cd889caa95db731df251888a7c1a3ce9d080796aa96191596b79dd853b9b" exitCode=0 Nov 25 11:58:19 crc kubenswrapper[4706]: I1125 11:58:19.465654 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3df2b1f2-0fee-454e-a77d-8ae5ce76ed9f","Type":"ContainerDied","Data":"67864c33547591b87be529165564a21dc3207d413ee9736f09fce07b61e0f127"} Nov 25 11:58:19 crc kubenswrapper[4706]: I1125 11:58:19.465694 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3df2b1f2-0fee-454e-a77d-8ae5ce76ed9f","Type":"ContainerDied","Data":"53ab2df770b270d546ef9e435e3a0f4ec580df8b785873c38f798f12f2668394"} Nov 
25 11:58:19 crc kubenswrapper[4706]: I1125 11:58:19.465709 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3df2b1f2-0fee-454e-a77d-8ae5ce76ed9f","Type":"ContainerDied","Data":"f3f8cd889caa95db731df251888a7c1a3ce9d080796aa96191596b79dd853b9b"} Nov 25 11:58:19 crc kubenswrapper[4706]: I1125 11:58:19.493124 4706 scope.go:117] "RemoveContainer" containerID="e698be1e556a47e20b0e5192bfed96ae46f7943e750ec588dbcc95dab5a6675f" Nov 25 11:58:19 crc kubenswrapper[4706]: I1125 11:58:19.500987 4706 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Nov 25 11:58:19 crc kubenswrapper[4706]: I1125 11:58:19.515767 4706 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=2.016161841 podStartE2EDuration="2.515742231s" podCreationTimestamp="2025-11-25 11:58:17 +0000 UTC" firstStartedPulling="2025-11-25 11:58:18.308623767 +0000 UTC m=+1307.223181148" lastFinishedPulling="2025-11-25 11:58:18.808204157 +0000 UTC m=+1307.722761538" observedRunningTime="2025-11-25 11:58:19.471718181 +0000 UTC m=+1308.386275572" watchObservedRunningTime="2025-11-25 11:58:19.515742231 +0000 UTC m=+1308.430299622" Nov 25 11:58:19 crc kubenswrapper[4706]: I1125 11:58:19.543046 4706 scope.go:117] "RemoveContainer" containerID="7f2e50c7556c207faec757081b15999603dd75cd8b3f0374eb95524e497fdc26" Nov 25 11:58:19 crc kubenswrapper[4706]: E1125 11:58:19.543996 4706 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7f2e50c7556c207faec757081b15999603dd75cd8b3f0374eb95524e497fdc26\": container with ID starting with 7f2e50c7556c207faec757081b15999603dd75cd8b3f0374eb95524e497fdc26 not found: ID does not exist" containerID="7f2e50c7556c207faec757081b15999603dd75cd8b3f0374eb95524e497fdc26" Nov 25 11:58:19 crc kubenswrapper[4706]: I1125 11:58:19.544106 4706 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7f2e50c7556c207faec757081b15999603dd75cd8b3f0374eb95524e497fdc26"} err="failed to get container status \"7f2e50c7556c207faec757081b15999603dd75cd8b3f0374eb95524e497fdc26\": rpc error: code = NotFound desc = could not find container \"7f2e50c7556c207faec757081b15999603dd75cd8b3f0374eb95524e497fdc26\": container with ID starting with 7f2e50c7556c207faec757081b15999603dd75cd8b3f0374eb95524e497fdc26 not found: ID does not exist" Nov 25 11:58:19 crc kubenswrapper[4706]: I1125 11:58:19.544162 4706 scope.go:117] "RemoveContainer" containerID="e698be1e556a47e20b0e5192bfed96ae46f7943e750ec588dbcc95dab5a6675f" Nov 25 11:58:19 crc kubenswrapper[4706]: E1125 11:58:19.548468 4706 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e698be1e556a47e20b0e5192bfed96ae46f7943e750ec588dbcc95dab5a6675f\": container with ID starting with e698be1e556a47e20b0e5192bfed96ae46f7943e750ec588dbcc95dab5a6675f not found: ID does not exist" containerID="e698be1e556a47e20b0e5192bfed96ae46f7943e750ec588dbcc95dab5a6675f" Nov 25 11:58:19 crc kubenswrapper[4706]: I1125 11:58:19.548531 4706 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e698be1e556a47e20b0e5192bfed96ae46f7943e750ec588dbcc95dab5a6675f"} err="failed to get container status \"e698be1e556a47e20b0e5192bfed96ae46f7943e750ec588dbcc95dab5a6675f\": rpc error: code = NotFound desc = could not find container \"e698be1e556a47e20b0e5192bfed96ae46f7943e750ec588dbcc95dab5a6675f\": container with ID starting with e698be1e556a47e20b0e5192bfed96ae46f7943e750ec588dbcc95dab5a6675f not found: ID does not exist" Nov 25 11:58:19 crc kubenswrapper[4706]: I1125 11:58:19.584956 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9d560e53-d5ef-4b6b-af31-d1b5856dbf47-ovsdbserver-sb\") pod 
\"9d560e53-d5ef-4b6b-af31-d1b5856dbf47\" (UID: \"9d560e53-d5ef-4b6b-af31-d1b5856dbf47\") "
Nov 25 11:58:19 crc kubenswrapper[4706]: I1125 11:58:19.584995 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9d560e53-d5ef-4b6b-af31-d1b5856dbf47-dns-svc\") pod \"9d560e53-d5ef-4b6b-af31-d1b5856dbf47\" (UID: \"9d560e53-d5ef-4b6b-af31-d1b5856dbf47\") "
Nov 25 11:58:19 crc kubenswrapper[4706]: I1125 11:58:19.585046 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d560e53-d5ef-4b6b-af31-d1b5856dbf47-config\") pod \"9d560e53-d5ef-4b6b-af31-d1b5856dbf47\" (UID: \"9d560e53-d5ef-4b6b-af31-d1b5856dbf47\") "
Nov 25 11:58:19 crc kubenswrapper[4706]: I1125 11:58:19.585154 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/9d560e53-d5ef-4b6b-af31-d1b5856dbf47-dns-swift-storage-0\") pod \"9d560e53-d5ef-4b6b-af31-d1b5856dbf47\" (UID: \"9d560e53-d5ef-4b6b-af31-d1b5856dbf47\") "
Nov 25 11:58:19 crc kubenswrapper[4706]: I1125 11:58:19.585226 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hn8xf\" (UniqueName: \"kubernetes.io/projected/9d560e53-d5ef-4b6b-af31-d1b5856dbf47-kube-api-access-hn8xf\") pod \"9d560e53-d5ef-4b6b-af31-d1b5856dbf47\" (UID: \"9d560e53-d5ef-4b6b-af31-d1b5856dbf47\") "
Nov 25 11:58:19 crc kubenswrapper[4706]: I1125 11:58:19.585274 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9d560e53-d5ef-4b6b-af31-d1b5856dbf47-ovsdbserver-nb\") pod \"9d560e53-d5ef-4b6b-af31-d1b5856dbf47\" (UID: \"9d560e53-d5ef-4b6b-af31-d1b5856dbf47\") "
Nov 25 11:58:19 crc kubenswrapper[4706]: I1125 11:58:19.588044 4706 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="62968efd-c3bc-4ccb-892f-b1479a5da4cc" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.186:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Nov 25 11:58:19 crc kubenswrapper[4706]: I1125 11:58:19.588343 4706 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="62968efd-c3bc-4ccb-892f-b1479a5da4cc" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.186:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Nov 25 11:58:19 crc kubenswrapper[4706]: I1125 11:58:19.593223 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d560e53-d5ef-4b6b-af31-d1b5856dbf47-kube-api-access-hn8xf" (OuterVolumeSpecName: "kube-api-access-hn8xf") pod "9d560e53-d5ef-4b6b-af31-d1b5856dbf47" (UID: "9d560e53-d5ef-4b6b-af31-d1b5856dbf47"). InnerVolumeSpecName "kube-api-access-hn8xf". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 25 11:58:19 crc kubenswrapper[4706]: I1125 11:58:19.623445 4706 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"]
Nov 25 11:58:19 crc kubenswrapper[4706]: I1125 11:58:19.623712 4706 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="62968efd-c3bc-4ccb-892f-b1479a5da4cc" containerName="nova-api-log" containerID="cri-o://a2e980f8ad229edb2c569d7035e08209d34cd0fa079ca7c46fdfe3210380545f" gracePeriod=30
Nov 25 11:58:19 crc kubenswrapper[4706]: I1125 11:58:19.624128 4706 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="62968efd-c3bc-4ccb-892f-b1479a5da4cc" containerName="nova-api-api" containerID="cri-o://7446f31b337cd4625add204856cf1a631ec9341af4fe1f59547a39610254999f" gracePeriod=30
Nov 25 11:58:19 crc kubenswrapper[4706]: I1125 11:58:19.650853 4706 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"]
Nov 25 11:58:19 crc kubenswrapper[4706]: I1125 11:58:19.651070 4706 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="4f4468bc-ad45-4f59-8911-b4fc57f942d3" containerName="nova-metadata-log" containerID="cri-o://624d229f27af565807b3463cc2c8ccd6f46422b6125fec91409e8b571d65a8ab" gracePeriod=30
Nov 25 11:58:19 crc kubenswrapper[4706]: I1125 11:58:19.651530 4706 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="4f4468bc-ad45-4f59-8911-b4fc57f942d3" containerName="nova-metadata-metadata" containerID="cri-o://cab34efea932a65c680b98a6e984792d8740bde207155d06296875f67e17d10a" gracePeriod=30
Nov 25 11:58:19 crc kubenswrapper[4706]: I1125 11:58:19.677888 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d560e53-d5ef-4b6b-af31-d1b5856dbf47-config" (OuterVolumeSpecName: "config") pod "9d560e53-d5ef-4b6b-af31-d1b5856dbf47" (UID: "9d560e53-d5ef-4b6b-af31-d1b5856dbf47"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 25 11:58:19 crc kubenswrapper[4706]: I1125 11:58:19.687886 4706 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hn8xf\" (UniqueName: \"kubernetes.io/projected/9d560e53-d5ef-4b6b-af31-d1b5856dbf47-kube-api-access-hn8xf\") on node \"crc\" DevicePath \"\""
Nov 25 11:58:19 crc kubenswrapper[4706]: I1125 11:58:19.687913 4706 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d560e53-d5ef-4b6b-af31-d1b5856dbf47-config\") on node \"crc\" DevicePath \"\""
Nov 25 11:58:19 crc kubenswrapper[4706]: I1125 11:58:19.694875 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d560e53-d5ef-4b6b-af31-d1b5856dbf47-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "9d560e53-d5ef-4b6b-af31-d1b5856dbf47" (UID: "9d560e53-d5ef-4b6b-af31-d1b5856dbf47"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 25 11:58:19 crc kubenswrapper[4706]: I1125 11:58:19.706687 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d560e53-d5ef-4b6b-af31-d1b5856dbf47-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "9d560e53-d5ef-4b6b-af31-d1b5856dbf47" (UID: "9d560e53-d5ef-4b6b-af31-d1b5856dbf47"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 25 11:58:19 crc kubenswrapper[4706]: I1125 11:58:19.710564 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d560e53-d5ef-4b6b-af31-d1b5856dbf47-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "9d560e53-d5ef-4b6b-af31-d1b5856dbf47" (UID: "9d560e53-d5ef-4b6b-af31-d1b5856dbf47"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 25 11:58:19 crc kubenswrapper[4706]: I1125 11:58:19.717051 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d560e53-d5ef-4b6b-af31-d1b5856dbf47-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "9d560e53-d5ef-4b6b-af31-d1b5856dbf47" (UID: "9d560e53-d5ef-4b6b-af31-d1b5856dbf47"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 25 11:58:19 crc kubenswrapper[4706]: I1125 11:58:19.789501 4706 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9d560e53-d5ef-4b6b-af31-d1b5856dbf47-ovsdbserver-sb\") on node \"crc\" DevicePath \"\""
Nov 25 11:58:19 crc kubenswrapper[4706]: I1125 11:58:19.789803 4706 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9d560e53-d5ef-4b6b-af31-d1b5856dbf47-dns-svc\") on node \"crc\" DevicePath \"\""
Nov 25 11:58:19 crc kubenswrapper[4706]: I1125 11:58:19.789816 4706 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/9d560e53-d5ef-4b6b-af31-d1b5856dbf47-dns-swift-storage-0\") on node \"crc\" DevicePath \"\""
Nov 25 11:58:19 crc kubenswrapper[4706]: I1125 11:58:19.789828 4706 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9d560e53-d5ef-4b6b-af31-d1b5856dbf47-ovsdbserver-nb\") on node \"crc\" DevicePath \"\""
Nov 25 11:58:20 crc kubenswrapper[4706]: I1125 11:58:20.033476 4706 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"]
Nov 25 11:58:20 crc kubenswrapper[4706]: I1125 11:58:20.082705 4706 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5c9776ccc5-kv96j"]
Nov 25 11:58:20 crc kubenswrapper[4706]: I1125 11:58:20.090494 4706 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5c9776ccc5-kv96j"]
Nov 25 11:58:20 crc kubenswrapper[4706]: I1125 11:58:20.257717 4706 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Nov 25 11:58:20 crc kubenswrapper[4706]: I1125 11:58:20.404331 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/4f4468bc-ad45-4f59-8911-b4fc57f942d3-nova-metadata-tls-certs\") pod \"4f4468bc-ad45-4f59-8911-b4fc57f942d3\" (UID: \"4f4468bc-ad45-4f59-8911-b4fc57f942d3\") "
Nov 25 11:58:20 crc kubenswrapper[4706]: I1125 11:58:20.404482 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4f4468bc-ad45-4f59-8911-b4fc57f942d3-config-data\") pod \"4f4468bc-ad45-4f59-8911-b4fc57f942d3\" (UID: \"4f4468bc-ad45-4f59-8911-b4fc57f942d3\") "
Nov 25 11:58:20 crc kubenswrapper[4706]: I1125 11:58:20.404521 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4f4468bc-ad45-4f59-8911-b4fc57f942d3-combined-ca-bundle\") pod \"4f4468bc-ad45-4f59-8911-b4fc57f942d3\" (UID: \"4f4468bc-ad45-4f59-8911-b4fc57f942d3\") "
Nov 25 11:58:20 crc kubenswrapper[4706]: I1125 11:58:20.404559 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4f4468bc-ad45-4f59-8911-b4fc57f942d3-logs\") pod \"4f4468bc-ad45-4f59-8911-b4fc57f942d3\" (UID: \"4f4468bc-ad45-4f59-8911-b4fc57f942d3\") "
Nov 25 11:58:20 crc kubenswrapper[4706]: I1125 11:58:20.404707 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7rlrn\" (UniqueName: \"kubernetes.io/projected/4f4468bc-ad45-4f59-8911-b4fc57f942d3-kube-api-access-7rlrn\") pod \"4f4468bc-ad45-4f59-8911-b4fc57f942d3\" (UID: \"4f4468bc-ad45-4f59-8911-b4fc57f942d3\") "
Nov 25 11:58:20 crc kubenswrapper[4706]: I1125 11:58:20.406085 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4f4468bc-ad45-4f59-8911-b4fc57f942d3-logs" (OuterVolumeSpecName: "logs") pod "4f4468bc-ad45-4f59-8911-b4fc57f942d3" (UID: "4f4468bc-ad45-4f59-8911-b4fc57f942d3"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 25 11:58:20 crc kubenswrapper[4706]: I1125 11:58:20.409205 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4f4468bc-ad45-4f59-8911-b4fc57f942d3-kube-api-access-7rlrn" (OuterVolumeSpecName: "kube-api-access-7rlrn") pod "4f4468bc-ad45-4f59-8911-b4fc57f942d3" (UID: "4f4468bc-ad45-4f59-8911-b4fc57f942d3"). InnerVolumeSpecName "kube-api-access-7rlrn". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 25 11:58:20 crc kubenswrapper[4706]: I1125 11:58:20.433567 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4f4468bc-ad45-4f59-8911-b4fc57f942d3-config-data" (OuterVolumeSpecName: "config-data") pod "4f4468bc-ad45-4f59-8911-b4fc57f942d3" (UID: "4f4468bc-ad45-4f59-8911-b4fc57f942d3"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 25 11:58:20 crc kubenswrapper[4706]: I1125 11:58:20.439520 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4f4468bc-ad45-4f59-8911-b4fc57f942d3-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4f4468bc-ad45-4f59-8911-b4fc57f942d3" (UID: "4f4468bc-ad45-4f59-8911-b4fc57f942d3"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 25 11:58:20 crc kubenswrapper[4706]: I1125 11:58:20.474189 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4f4468bc-ad45-4f59-8911-b4fc57f942d3-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "4f4468bc-ad45-4f59-8911-b4fc57f942d3" (UID: "4f4468bc-ad45-4f59-8911-b4fc57f942d3"). InnerVolumeSpecName "nova-metadata-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 25 11:58:20 crc kubenswrapper[4706]: I1125 11:58:20.474313 4706 generic.go:334] "Generic (PLEG): container finished" podID="4f4468bc-ad45-4f59-8911-b4fc57f942d3" containerID="cab34efea932a65c680b98a6e984792d8740bde207155d06296875f67e17d10a" exitCode=0
Nov 25 11:58:20 crc kubenswrapper[4706]: I1125 11:58:20.474349 4706 generic.go:334] "Generic (PLEG): container finished" podID="4f4468bc-ad45-4f59-8911-b4fc57f942d3" containerID="624d229f27af565807b3463cc2c8ccd6f46422b6125fec91409e8b571d65a8ab" exitCode=143
Nov 25 11:58:20 crc kubenswrapper[4706]: I1125 11:58:20.474406 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"4f4468bc-ad45-4f59-8911-b4fc57f942d3","Type":"ContainerDied","Data":"cab34efea932a65c680b98a6e984792d8740bde207155d06296875f67e17d10a"}
Nov 25 11:58:20 crc kubenswrapper[4706]: I1125 11:58:20.474439 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"4f4468bc-ad45-4f59-8911-b4fc57f942d3","Type":"ContainerDied","Data":"624d229f27af565807b3463cc2c8ccd6f46422b6125fec91409e8b571d65a8ab"}
Nov 25 11:58:20 crc kubenswrapper[4706]: I1125 11:58:20.474453 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"4f4468bc-ad45-4f59-8911-b4fc57f942d3","Type":"ContainerDied","Data":"7026a4f851fc8a0df9d2e39c36b952493c64f2eb0aca8563040947c91b31c524"}
Nov 25 11:58:20 crc kubenswrapper[4706]: I1125 11:58:20.474471 4706 scope.go:117] "RemoveContainer" containerID="cab34efea932a65c680b98a6e984792d8740bde207155d06296875f67e17d10a"
Nov 25 11:58:20 crc kubenswrapper[4706]: I1125 11:58:20.474471 4706 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Nov 25 11:58:20 crc kubenswrapper[4706]: I1125 11:58:20.476605 4706 generic.go:334] "Generic (PLEG): container finished" podID="ca66dab3-01b2-4fac-b6c9-c09b2704a670" containerID="9b1df5c4ecad9cb3a75eba378c36c215fa265f87ab49ad1f7014d8f2630e77ff" exitCode=0
Nov 25 11:58:20 crc kubenswrapper[4706]: I1125 11:58:20.476659 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-87sfg" event={"ID":"ca66dab3-01b2-4fac-b6c9-c09b2704a670","Type":"ContainerDied","Data":"9b1df5c4ecad9cb3a75eba378c36c215fa265f87ab49ad1f7014d8f2630e77ff"}
Nov 25 11:58:20 crc kubenswrapper[4706]: I1125 11:58:20.479891 4706 generic.go:334] "Generic (PLEG): container finished" podID="3df2b1f2-0fee-454e-a77d-8ae5ce76ed9f" containerID="9d0124bcc1ee48b4329bb8703782a460504d628f4b5406382971aded6556e60a" exitCode=0
Nov 25 11:58:20 crc kubenswrapper[4706]: I1125 11:58:20.480009 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3df2b1f2-0fee-454e-a77d-8ae5ce76ed9f","Type":"ContainerDied","Data":"9d0124bcc1ee48b4329bb8703782a460504d628f4b5406382971aded6556e60a"}
Nov 25 11:58:20 crc kubenswrapper[4706]: I1125 11:58:20.485809 4706 generic.go:334] "Generic (PLEG): container finished" podID="62968efd-c3bc-4ccb-892f-b1479a5da4cc" containerID="a2e980f8ad229edb2c569d7035e08209d34cd0fa079ca7c46fdfe3210380545f" exitCode=143
Nov 25 11:58:20 crc kubenswrapper[4706]: I1125 11:58:20.486068 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"62968efd-c3bc-4ccb-892f-b1479a5da4cc","Type":"ContainerDied","Data":"a2e980f8ad229edb2c569d7035e08209d34cd0fa079ca7c46fdfe3210380545f"}
Nov 25 11:58:20 crc kubenswrapper[4706]: I1125 11:58:20.503087 4706 scope.go:117] "RemoveContainer" containerID="624d229f27af565807b3463cc2c8ccd6f46422b6125fec91409e8b571d65a8ab"
Nov 25 11:58:20 crc kubenswrapper[4706]: I1125 11:58:20.509033 4706 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7rlrn\" (UniqueName: \"kubernetes.io/projected/4f4468bc-ad45-4f59-8911-b4fc57f942d3-kube-api-access-7rlrn\") on node \"crc\" DevicePath \"\""
Nov 25 11:58:20 crc kubenswrapper[4706]: I1125 11:58:20.512494 4706 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/4f4468bc-ad45-4f59-8911-b4fc57f942d3-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\""
Nov 25 11:58:20 crc kubenswrapper[4706]: I1125 11:58:20.512545 4706 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4f4468bc-ad45-4f59-8911-b4fc57f942d3-config-data\") on node \"crc\" DevicePath \"\""
Nov 25 11:58:20 crc kubenswrapper[4706]: I1125 11:58:20.512560 4706 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4f4468bc-ad45-4f59-8911-b4fc57f942d3-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Nov 25 11:58:20 crc kubenswrapper[4706]: I1125 11:58:20.512577 4706 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4f4468bc-ad45-4f59-8911-b4fc57f942d3-logs\") on node \"crc\" DevicePath \"\""
Nov 25 11:58:20 crc kubenswrapper[4706]: I1125 11:58:20.512613 4706 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"]
Nov 25 11:58:20 crc kubenswrapper[4706]: I1125 11:58:20.519650 4706 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"]
Nov 25 11:58:20 crc kubenswrapper[4706]: I1125 11:58:20.536997 4706 scope.go:117] "RemoveContainer" containerID="cab34efea932a65c680b98a6e984792d8740bde207155d06296875f67e17d10a"
Nov 25 11:58:20 crc kubenswrapper[4706]: E1125 11:58:20.537564 4706 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cab34efea932a65c680b98a6e984792d8740bde207155d06296875f67e17d10a\": container with ID starting with cab34efea932a65c680b98a6e984792d8740bde207155d06296875f67e17d10a not found: ID does not exist" containerID="cab34efea932a65c680b98a6e984792d8740bde207155d06296875f67e17d10a"
Nov 25 11:58:20 crc kubenswrapper[4706]: I1125 11:58:20.537602 4706 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cab34efea932a65c680b98a6e984792d8740bde207155d06296875f67e17d10a"} err="failed to get container status \"cab34efea932a65c680b98a6e984792d8740bde207155d06296875f67e17d10a\": rpc error: code = NotFound desc = could not find container \"cab34efea932a65c680b98a6e984792d8740bde207155d06296875f67e17d10a\": container with ID starting with cab34efea932a65c680b98a6e984792d8740bde207155d06296875f67e17d10a not found: ID does not exist"
Nov 25 11:58:20 crc kubenswrapper[4706]: I1125 11:58:20.537629 4706 scope.go:117] "RemoveContainer" containerID="624d229f27af565807b3463cc2c8ccd6f46422b6125fec91409e8b571d65a8ab"
Nov 25 11:58:20 crc kubenswrapper[4706]: I1125 11:58:20.538589 4706 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"]
Nov 25 11:58:20 crc kubenswrapper[4706]: E1125 11:58:20.538971 4706 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4f4468bc-ad45-4f59-8911-b4fc57f942d3" containerName="nova-metadata-log"
Nov 25 11:58:20 crc kubenswrapper[4706]: I1125 11:58:20.538988 4706 state_mem.go:107] "Deleted CPUSet assignment" podUID="4f4468bc-ad45-4f59-8911-b4fc57f942d3" containerName="nova-metadata-log"
Nov 25 11:58:20 crc kubenswrapper[4706]: E1125 11:58:20.539004 4706 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4f4468bc-ad45-4f59-8911-b4fc57f942d3" containerName="nova-metadata-metadata"
Nov 25 11:58:20 crc kubenswrapper[4706]: I1125 11:58:20.539010 4706 state_mem.go:107] "Deleted CPUSet assignment" podUID="4f4468bc-ad45-4f59-8911-b4fc57f942d3" containerName="nova-metadata-metadata"
Nov 25 11:58:20 crc kubenswrapper[4706]: E1125 11:58:20.539025 4706 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9d560e53-d5ef-4b6b-af31-d1b5856dbf47" containerName="dnsmasq-dns"
Nov 25 11:58:20 crc kubenswrapper[4706]: I1125 11:58:20.539031 4706 state_mem.go:107] "Deleted CPUSet assignment" podUID="9d560e53-d5ef-4b6b-af31-d1b5856dbf47" containerName="dnsmasq-dns"
Nov 25 11:58:20 crc kubenswrapper[4706]: E1125 11:58:20.539048 4706 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b100f787-7064-4cac-b5dc-0267ee51f1aa" containerName="nova-manage"
Nov 25 11:58:20 crc kubenswrapper[4706]: I1125 11:58:20.539053 4706 state_mem.go:107] "Deleted CPUSet assignment" podUID="b100f787-7064-4cac-b5dc-0267ee51f1aa" containerName="nova-manage"
Nov 25 11:58:20 crc kubenswrapper[4706]: E1125 11:58:20.539067 4706 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9d560e53-d5ef-4b6b-af31-d1b5856dbf47" containerName="init"
Nov 25 11:58:20 crc kubenswrapper[4706]: I1125 11:58:20.539075 4706 state_mem.go:107] "Deleted CPUSet assignment" podUID="9d560e53-d5ef-4b6b-af31-d1b5856dbf47" containerName="init"
Nov 25 11:58:20 crc kubenswrapper[4706]: I1125 11:58:20.539257 4706 memory_manager.go:354] "RemoveStaleState removing state" podUID="4f4468bc-ad45-4f59-8911-b4fc57f942d3" containerName="nova-metadata-log"
Nov 25 11:58:20 crc kubenswrapper[4706]: I1125 11:58:20.539273 4706 memory_manager.go:354] "RemoveStaleState removing state" podUID="4f4468bc-ad45-4f59-8911-b4fc57f942d3" containerName="nova-metadata-metadata"
Nov 25 11:58:20 crc kubenswrapper[4706]: I1125 11:58:20.539291 4706 memory_manager.go:354] "RemoveStaleState removing state" podUID="9d560e53-d5ef-4b6b-af31-d1b5856dbf47" containerName="dnsmasq-dns"
Nov 25 11:58:20 crc kubenswrapper[4706]: I1125 11:58:20.539316 4706 memory_manager.go:354] "RemoveStaleState removing state" podUID="b100f787-7064-4cac-b5dc-0267ee51f1aa" containerName="nova-manage"
Nov 25 11:58:20 crc kubenswrapper[4706]: I1125 11:58:20.540285 4706 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Nov 25 11:58:20 crc kubenswrapper[4706]: E1125 11:58:20.540538 4706 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"624d229f27af565807b3463cc2c8ccd6f46422b6125fec91409e8b571d65a8ab\": container with ID starting with 624d229f27af565807b3463cc2c8ccd6f46422b6125fec91409e8b571d65a8ab not found: ID does not exist" containerID="624d229f27af565807b3463cc2c8ccd6f46422b6125fec91409e8b571d65a8ab"
Nov 25 11:58:20 crc kubenswrapper[4706]: I1125 11:58:20.540589 4706 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"624d229f27af565807b3463cc2c8ccd6f46422b6125fec91409e8b571d65a8ab"} err="failed to get container status \"624d229f27af565807b3463cc2c8ccd6f46422b6125fec91409e8b571d65a8ab\": rpc error: code = NotFound desc = could not find container \"624d229f27af565807b3463cc2c8ccd6f46422b6125fec91409e8b571d65a8ab\": container with ID starting with 624d229f27af565807b3463cc2c8ccd6f46422b6125fec91409e8b571d65a8ab not found: ID does not exist"
Nov 25 11:58:20 crc kubenswrapper[4706]: I1125 11:58:20.540622 4706 scope.go:117] "RemoveContainer" containerID="cab34efea932a65c680b98a6e984792d8740bde207155d06296875f67e17d10a"
Nov 25 11:58:20 crc kubenswrapper[4706]: I1125 11:58:20.540997 4706 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cab34efea932a65c680b98a6e984792d8740bde207155d06296875f67e17d10a"} err="failed to get container status \"cab34efea932a65c680b98a6e984792d8740bde207155d06296875f67e17d10a\": rpc error: code = NotFound desc = could not find container \"cab34efea932a65c680b98a6e984792d8740bde207155d06296875f67e17d10a\": container with ID starting with cab34efea932a65c680b98a6e984792d8740bde207155d06296875f67e17d10a not found: ID does not exist"
Nov 25 11:58:20 crc kubenswrapper[4706]: I1125 11:58:20.541016 4706 scope.go:117] "RemoveContainer" containerID="624d229f27af565807b3463cc2c8ccd6f46422b6125fec91409e8b571d65a8ab"
Nov 25 11:58:20 crc kubenswrapper[4706]: I1125 11:58:20.541233 4706 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"624d229f27af565807b3463cc2c8ccd6f46422b6125fec91409e8b571d65a8ab"} err="failed to get container status \"624d229f27af565807b3463cc2c8ccd6f46422b6125fec91409e8b571d65a8ab\": rpc error: code = NotFound desc = could not find container \"624d229f27af565807b3463cc2c8ccd6f46422b6125fec91409e8b571d65a8ab\": container with ID starting with 624d229f27af565807b3463cc2c8ccd6f46422b6125fec91409e8b571d65a8ab not found: ID does not exist"
Nov 25 11:58:20 crc kubenswrapper[4706]: I1125 11:58:20.549051 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc"
Nov 25 11:58:20 crc kubenswrapper[4706]: I1125 11:58:20.549175 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data"
Nov 25 11:58:20 crc kubenswrapper[4706]: I1125 11:58:20.561643 4706 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"]
Nov 25 11:58:20 crc kubenswrapper[4706]: I1125 11:58:20.613666 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ab5ba648-4cd1-4304-9470-e10ea703d56d-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"ab5ba648-4cd1-4304-9470-e10ea703d56d\") " pod="openstack/nova-metadata-0"
Nov 25 11:58:20 crc kubenswrapper[4706]: I1125 11:58:20.613844 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ab5ba648-4cd1-4304-9470-e10ea703d56d-logs\") pod \"nova-metadata-0\" (UID: \"ab5ba648-4cd1-4304-9470-e10ea703d56d\") " pod="openstack/nova-metadata-0"
Nov 25 11:58:20 crc kubenswrapper[4706]: I1125 11:58:20.613890 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-82jws\" (UniqueName: \"kubernetes.io/projected/ab5ba648-4cd1-4304-9470-e10ea703d56d-kube-api-access-82jws\") pod \"nova-metadata-0\" (UID: \"ab5ba648-4cd1-4304-9470-e10ea703d56d\") " pod="openstack/nova-metadata-0"
Nov 25 11:58:20 crc kubenswrapper[4706]: I1125 11:58:20.613983 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ab5ba648-4cd1-4304-9470-e10ea703d56d-config-data\") pod \"nova-metadata-0\" (UID: \"ab5ba648-4cd1-4304-9470-e10ea703d56d\") " pod="openstack/nova-metadata-0"
Nov 25 11:58:20 crc kubenswrapper[4706]: I1125 11:58:20.614022 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/ab5ba648-4cd1-4304-9470-e10ea703d56d-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"ab5ba648-4cd1-4304-9470-e10ea703d56d\") " pod="openstack/nova-metadata-0"
Nov 25 11:58:20 crc kubenswrapper[4706]: I1125 11:58:20.715665 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-82jws\" (UniqueName: \"kubernetes.io/projected/ab5ba648-4cd1-4304-9470-e10ea703d56d-kube-api-access-82jws\") pod \"nova-metadata-0\" (UID: \"ab5ba648-4cd1-4304-9470-e10ea703d56d\") " pod="openstack/nova-metadata-0"
Nov 25 11:58:20 crc kubenswrapper[4706]: I1125 11:58:20.715788 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ab5ba648-4cd1-4304-9470-e10ea703d56d-config-data\") pod \"nova-metadata-0\" (UID: \"ab5ba648-4cd1-4304-9470-e10ea703d56d\") " pod="openstack/nova-metadata-0"
Nov 25 11:58:20 crc kubenswrapper[4706]: I1125 11:58:20.715864 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/ab5ba648-4cd1-4304-9470-e10ea703d56d-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"ab5ba648-4cd1-4304-9470-e10ea703d56d\") " pod="openstack/nova-metadata-0"
Nov 25 11:58:20 crc kubenswrapper[4706]: I1125 11:58:20.715912 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ab5ba648-4cd1-4304-9470-e10ea703d56d-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"ab5ba648-4cd1-4304-9470-e10ea703d56d\") " pod="openstack/nova-metadata-0"
Nov 25 11:58:20 crc kubenswrapper[4706]: I1125 11:58:20.715986 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ab5ba648-4cd1-4304-9470-e10ea703d56d-logs\") pod \"nova-metadata-0\" (UID: \"ab5ba648-4cd1-4304-9470-e10ea703d56d\") " pod="openstack/nova-metadata-0"
Nov 25 11:58:20 crc kubenswrapper[4706]: I1125 11:58:20.716563 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ab5ba648-4cd1-4304-9470-e10ea703d56d-logs\") pod \"nova-metadata-0\" (UID: \"ab5ba648-4cd1-4304-9470-e10ea703d56d\") " pod="openstack/nova-metadata-0"
Nov 25 11:58:20 crc kubenswrapper[4706]: I1125 11:58:20.720376 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ab5ba648-4cd1-4304-9470-e10ea703d56d-config-data\") pod \"nova-metadata-0\" (UID: \"ab5ba648-4cd1-4304-9470-e10ea703d56d\") " pod="openstack/nova-metadata-0"
Nov 25 11:58:20 crc kubenswrapper[4706]: I1125 11:58:20.724820 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ab5ba648-4cd1-4304-9470-e10ea703d56d-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"ab5ba648-4cd1-4304-9470-e10ea703d56d\") " pod="openstack/nova-metadata-0"
Nov 25 11:58:20 crc kubenswrapper[4706]: I1125 11:58:20.728880 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/ab5ba648-4cd1-4304-9470-e10ea703d56d-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"ab5ba648-4cd1-4304-9470-e10ea703d56d\") " pod="openstack/nova-metadata-0"
Nov 25 11:58:20 crc kubenswrapper[4706]: I1125 11:58:20.732719 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-82jws\" (UniqueName: \"kubernetes.io/projected/ab5ba648-4cd1-4304-9470-e10ea703d56d-kube-api-access-82jws\") pod \"nova-metadata-0\" (UID: \"ab5ba648-4cd1-4304-9470-e10ea703d56d\") " pod="openstack/nova-metadata-0"
Nov 25 11:58:20 crc kubenswrapper[4706]: I1125 11:58:20.872544 4706 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Nov 25 11:58:21 crc kubenswrapper[4706]: I1125 11:58:21.187608 4706 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Nov 25 11:58:21 crc kubenswrapper[4706]: I1125 11:58:21.328976 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3df2b1f2-0fee-454e-a77d-8ae5ce76ed9f-run-httpd\") pod \"3df2b1f2-0fee-454e-a77d-8ae5ce76ed9f\" (UID: \"3df2b1f2-0fee-454e-a77d-8ae5ce76ed9f\") "
Nov 25 11:58:21 crc kubenswrapper[4706]: I1125 11:58:21.329067 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3df2b1f2-0fee-454e-a77d-8ae5ce76ed9f-config-data\") pod \"3df2b1f2-0fee-454e-a77d-8ae5ce76ed9f\" (UID: \"3df2b1f2-0fee-454e-a77d-8ae5ce76ed9f\") "
Nov 25 11:58:21 crc kubenswrapper[4706]: I1125 11:58:21.329166 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3df2b1f2-0fee-454e-a77d-8ae5ce76ed9f-combined-ca-bundle\") pod \"3df2b1f2-0fee-454e-a77d-8ae5ce76ed9f\" (UID: \"3df2b1f2-0fee-454e-a77d-8ae5ce76ed9f\") "
Nov 25 11:58:21 crc kubenswrapper[4706]: I1125 11:58:21.329227 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3df2b1f2-0fee-454e-a77d-8ae5ce76ed9f-log-httpd\") pod \"3df2b1f2-0fee-454e-a77d-8ae5ce76ed9f\" (UID: \"3df2b1f2-0fee-454e-a77d-8ae5ce76ed9f\") "
Nov 25 11:58:21 crc kubenswrapper[4706]: I1125 11:58:21.329265 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g6v6m\" (UniqueName: \"kubernetes.io/projected/3df2b1f2-0fee-454e-a77d-8ae5ce76ed9f-kube-api-access-g6v6m\") pod \"3df2b1f2-0fee-454e-a77d-8ae5ce76ed9f\" (UID: \"3df2b1f2-0fee-454e-a77d-8ae5ce76ed9f\") "
Nov 25 11:58:21 crc kubenswrapper[4706]: I1125 11:58:21.329355 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/3df2b1f2-0fee-454e-a77d-8ae5ce76ed9f-sg-core-conf-yaml\") pod \"3df2b1f2-0fee-454e-a77d-8ae5ce76ed9f\" (UID: \"3df2b1f2-0fee-454e-a77d-8ae5ce76ed9f\") "
Nov 25 11:58:21 crc kubenswrapper[4706]: I1125 11:58:21.329392 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3df2b1f2-0fee-454e-a77d-8ae5ce76ed9f-scripts\") pod \"3df2b1f2-0fee-454e-a77d-8ae5ce76ed9f\" (UID: \"3df2b1f2-0fee-454e-a77d-8ae5ce76ed9f\") "
Nov 25 11:58:21 crc kubenswrapper[4706]: I1125 11:58:21.330098 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3df2b1f2-0fee-454e-a77d-8ae5ce76ed9f-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "3df2b1f2-0fee-454e-a77d-8ae5ce76ed9f" (UID: "3df2b1f2-0fee-454e-a77d-8ae5ce76ed9f"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 25 11:58:21 crc kubenswrapper[4706]: I1125 11:58:21.330682 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3df2b1f2-0fee-454e-a77d-8ae5ce76ed9f-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "3df2b1f2-0fee-454e-a77d-8ae5ce76ed9f" (UID: "3df2b1f2-0fee-454e-a77d-8ae5ce76ed9f"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 25 11:58:21 crc kubenswrapper[4706]: I1125 11:58:21.334456 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3df2b1f2-0fee-454e-a77d-8ae5ce76ed9f-scripts" (OuterVolumeSpecName: "scripts") pod "3df2b1f2-0fee-454e-a77d-8ae5ce76ed9f" (UID: "3df2b1f2-0fee-454e-a77d-8ae5ce76ed9f"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 25 11:58:21 crc kubenswrapper[4706]: I1125 11:58:21.334574 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3df2b1f2-0fee-454e-a77d-8ae5ce76ed9f-kube-api-access-g6v6m" (OuterVolumeSpecName: "kube-api-access-g6v6m") pod "3df2b1f2-0fee-454e-a77d-8ae5ce76ed9f" (UID: "3df2b1f2-0fee-454e-a77d-8ae5ce76ed9f"). InnerVolumeSpecName "kube-api-access-g6v6m". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 25 11:58:21 crc kubenswrapper[4706]: I1125 11:58:21.358920 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3df2b1f2-0fee-454e-a77d-8ae5ce76ed9f-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "3df2b1f2-0fee-454e-a77d-8ae5ce76ed9f" (UID: "3df2b1f2-0fee-454e-a77d-8ae5ce76ed9f"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 25 11:58:21 crc kubenswrapper[4706]: I1125 11:58:21.426536 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3df2b1f2-0fee-454e-a77d-8ae5ce76ed9f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3df2b1f2-0fee-454e-a77d-8ae5ce76ed9f" (UID: "3df2b1f2-0fee-454e-a77d-8ae5ce76ed9f"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 25 11:58:21 crc kubenswrapper[4706]: I1125 11:58:21.427470 4706 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"]
Nov 25 11:58:21 crc kubenswrapper[4706]: W1125 11:58:21.431705 4706 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podab5ba648_4cd1_4304_9470_e10ea703d56d.slice/crio-1df6673ccc1706ccff1218e8430a218c9e04c69d5c815562f157b9e2d8d10f33 WatchSource:0}: Error finding container 1df6673ccc1706ccff1218e8430a218c9e04c69d5c815562f157b9e2d8d10f33: Status 404 returned error can't find the container with id 1df6673ccc1706ccff1218e8430a218c9e04c69d5c815562f157b9e2d8d10f33
Nov 25 11:58:21 crc kubenswrapper[4706]: I1125 11:58:21.432280 4706 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3df2b1f2-0fee-454e-a77d-8ae5ce76ed9f-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Nov 25 11:58:21 crc kubenswrapper[4706]: I1125 11:58:21.432556 4706 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3df2b1f2-0fee-454e-a77d-8ae5ce76ed9f-log-httpd\") on node \"crc\" DevicePath \"\""
Nov 25 11:58:21 crc kubenswrapper[4706]: I1125 11:58:21.432640 4706 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g6v6m\" (UniqueName: \"kubernetes.io/projected/3df2b1f2-0fee-454e-a77d-8ae5ce76ed9f-kube-api-access-g6v6m\") on node \"crc\" DevicePath \"\""
Nov 25 11:58:21 crc kubenswrapper[4706]: I1125 11:58:21.432726 4706 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/3df2b1f2-0fee-454e-a77d-8ae5ce76ed9f-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\""
Nov 25 11:58:21 crc kubenswrapper[4706]: I1125 11:58:21.432809 4706 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3df2b1f2-0fee-454e-a77d-8ae5ce76ed9f-scripts\") on node \"crc\" DevicePath \"\""
Nov 25 11:58:21 crc kubenswrapper[4706]: I1125 11:58:21.432892 4706 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3df2b1f2-0fee-454e-a77d-8ae5ce76ed9f-run-httpd\") on node \"crc\" DevicePath \"\""
Nov 25 11:58:21 crc kubenswrapper[4706]: I1125 11:58:21.457016 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3df2b1f2-0fee-454e-a77d-8ae5ce76ed9f-config-data" (OuterVolumeSpecName: "config-data") pod "3df2b1f2-0fee-454e-a77d-8ae5ce76ed9f" (UID: "3df2b1f2-0fee-454e-a77d-8ae5ce76ed9f"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 25 11:58:21 crc kubenswrapper[4706]: I1125 11:58:21.500846 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"ab5ba648-4cd1-4304-9470-e10ea703d56d","Type":"ContainerStarted","Data":"1df6673ccc1706ccff1218e8430a218c9e04c69d5c815562f157b9e2d8d10f33"}
Nov 25 11:58:21 crc kubenswrapper[4706]: I1125 11:58:21.505692 4706 util.go:48] "No ready sandbox for pod can be found.
Need to start a new one" pod="openstack/ceilometer-0" Nov 25 11:58:21 crc kubenswrapper[4706]: I1125 11:58:21.506658 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3df2b1f2-0fee-454e-a77d-8ae5ce76ed9f","Type":"ContainerDied","Data":"c84d267a41fa7548dfe22dc46bedeb33d5ad0d840a3bfa29fed7b6a6cbcd2523"} Nov 25 11:58:21 crc kubenswrapper[4706]: I1125 11:58:21.506697 4706 scope.go:117] "RemoveContainer" containerID="67864c33547591b87be529165564a21dc3207d413ee9736f09fce07b61e0f127" Nov 25 11:58:21 crc kubenswrapper[4706]: I1125 11:58:21.506891 4706 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="1dfcf8c4-dafb-4718-b97d-d0b72e9cff85" containerName="nova-scheduler-scheduler" containerID="cri-o://ef241260c1cbe817bb94689eae45d934ab69fa96a5ffe387e49137fd360175c1" gracePeriod=30 Nov 25 11:58:21 crc kubenswrapper[4706]: I1125 11:58:21.535913 4706 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3df2b1f2-0fee-454e-a77d-8ae5ce76ed9f-config-data\") on node \"crc\" DevicePath \"\"" Nov 25 11:58:21 crc kubenswrapper[4706]: I1125 11:58:21.551535 4706 scope.go:117] "RemoveContainer" containerID="53ab2df770b270d546ef9e435e3a0f4ec580df8b785873c38f798f12f2668394" Nov 25 11:58:21 crc kubenswrapper[4706]: I1125 11:58:21.551809 4706 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 25 11:58:21 crc kubenswrapper[4706]: I1125 11:58:21.574905 4706 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Nov 25 11:58:21 crc kubenswrapper[4706]: I1125 11:58:21.592699 4706 scope.go:117] "RemoveContainer" containerID="9d0124bcc1ee48b4329bb8703782a460504d628f4b5406382971aded6556e60a" Nov 25 11:58:21 crc kubenswrapper[4706]: I1125 11:58:21.594930 4706 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Nov 25 11:58:21 crc kubenswrapper[4706]: E1125 
11:58:21.595322 4706 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3df2b1f2-0fee-454e-a77d-8ae5ce76ed9f" containerName="proxy-httpd" Nov 25 11:58:21 crc kubenswrapper[4706]: I1125 11:58:21.595333 4706 state_mem.go:107] "Deleted CPUSet assignment" podUID="3df2b1f2-0fee-454e-a77d-8ae5ce76ed9f" containerName="proxy-httpd" Nov 25 11:58:21 crc kubenswrapper[4706]: E1125 11:58:21.595354 4706 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3df2b1f2-0fee-454e-a77d-8ae5ce76ed9f" containerName="sg-core" Nov 25 11:58:21 crc kubenswrapper[4706]: I1125 11:58:21.595360 4706 state_mem.go:107] "Deleted CPUSet assignment" podUID="3df2b1f2-0fee-454e-a77d-8ae5ce76ed9f" containerName="sg-core" Nov 25 11:58:21 crc kubenswrapper[4706]: E1125 11:58:21.595372 4706 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3df2b1f2-0fee-454e-a77d-8ae5ce76ed9f" containerName="ceilometer-central-agent" Nov 25 11:58:21 crc kubenswrapper[4706]: I1125 11:58:21.595378 4706 state_mem.go:107] "Deleted CPUSet assignment" podUID="3df2b1f2-0fee-454e-a77d-8ae5ce76ed9f" containerName="ceilometer-central-agent" Nov 25 11:58:21 crc kubenswrapper[4706]: E1125 11:58:21.595400 4706 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3df2b1f2-0fee-454e-a77d-8ae5ce76ed9f" containerName="ceilometer-notification-agent" Nov 25 11:58:21 crc kubenswrapper[4706]: I1125 11:58:21.595407 4706 state_mem.go:107] "Deleted CPUSet assignment" podUID="3df2b1f2-0fee-454e-a77d-8ae5ce76ed9f" containerName="ceilometer-notification-agent" Nov 25 11:58:21 crc kubenswrapper[4706]: I1125 11:58:21.595574 4706 memory_manager.go:354] "RemoveStaleState removing state" podUID="3df2b1f2-0fee-454e-a77d-8ae5ce76ed9f" containerName="ceilometer-central-agent" Nov 25 11:58:21 crc kubenswrapper[4706]: I1125 11:58:21.595590 4706 memory_manager.go:354] "RemoveStaleState removing state" podUID="3df2b1f2-0fee-454e-a77d-8ae5ce76ed9f" containerName="sg-core" Nov 25 11:58:21 crc 
kubenswrapper[4706]: I1125 11:58:21.595603 4706 memory_manager.go:354] "RemoveStaleState removing state" podUID="3df2b1f2-0fee-454e-a77d-8ae5ce76ed9f" containerName="proxy-httpd" Nov 25 11:58:21 crc kubenswrapper[4706]: I1125 11:58:21.595615 4706 memory_manager.go:354] "RemoveStaleState removing state" podUID="3df2b1f2-0fee-454e-a77d-8ae5ce76ed9f" containerName="ceilometer-notification-agent" Nov 25 11:58:21 crc kubenswrapper[4706]: I1125 11:58:21.597487 4706 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 25 11:58:21 crc kubenswrapper[4706]: I1125 11:58:21.610982 4706 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 25 11:58:21 crc kubenswrapper[4706]: I1125 11:58:21.611718 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Nov 25 11:58:21 crc kubenswrapper[4706]: I1125 11:58:21.611960 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Nov 25 11:58:21 crc kubenswrapper[4706]: I1125 11:58:21.612103 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Nov 25 11:58:21 crc kubenswrapper[4706]: I1125 11:58:21.634439 4706 scope.go:117] "RemoveContainer" containerID="f3f8cd889caa95db731df251888a7c1a3ce9d080796aa96191596b79dd853b9b" Nov 25 11:58:21 crc kubenswrapper[4706]: I1125 11:58:21.762533 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c29287a1-7481-405e-8641-8300768eb2cb-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"c29287a1-7481-405e-8641-8300768eb2cb\") " pod="openstack/ceilometer-0" Nov 25 11:58:21 crc kubenswrapper[4706]: I1125 11:58:21.762569 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/c29287a1-7481-405e-8641-8300768eb2cb-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"c29287a1-7481-405e-8641-8300768eb2cb\") " pod="openstack/ceilometer-0" Nov 25 11:58:21 crc kubenswrapper[4706]: I1125 11:58:21.762667 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c29287a1-7481-405e-8641-8300768eb2cb-config-data\") pod \"ceilometer-0\" (UID: \"c29287a1-7481-405e-8641-8300768eb2cb\") " pod="openstack/ceilometer-0" Nov 25 11:58:21 crc kubenswrapper[4706]: I1125 11:58:21.762799 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m4rzk\" (UniqueName: \"kubernetes.io/projected/c29287a1-7481-405e-8641-8300768eb2cb-kube-api-access-m4rzk\") pod \"ceilometer-0\" (UID: \"c29287a1-7481-405e-8641-8300768eb2cb\") " pod="openstack/ceilometer-0" Nov 25 11:58:21 crc kubenswrapper[4706]: I1125 11:58:21.762848 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c29287a1-7481-405e-8641-8300768eb2cb-log-httpd\") pod \"ceilometer-0\" (UID: \"c29287a1-7481-405e-8641-8300768eb2cb\") " pod="openstack/ceilometer-0" Nov 25 11:58:21 crc kubenswrapper[4706]: I1125 11:58:21.762870 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c29287a1-7481-405e-8641-8300768eb2cb-run-httpd\") pod \"ceilometer-0\" (UID: \"c29287a1-7481-405e-8641-8300768eb2cb\") " pod="openstack/ceilometer-0" Nov 25 11:58:21 crc kubenswrapper[4706]: I1125 11:58:21.762912 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/c29287a1-7481-405e-8641-8300768eb2cb-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: 
\"c29287a1-7481-405e-8641-8300768eb2cb\") " pod="openstack/ceilometer-0" Nov 25 11:58:21 crc kubenswrapper[4706]: I1125 11:58:21.762973 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c29287a1-7481-405e-8641-8300768eb2cb-scripts\") pod \"ceilometer-0\" (UID: \"c29287a1-7481-405e-8641-8300768eb2cb\") " pod="openstack/ceilometer-0" Nov 25 11:58:21 crc kubenswrapper[4706]: I1125 11:58:21.864932 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m4rzk\" (UniqueName: \"kubernetes.io/projected/c29287a1-7481-405e-8641-8300768eb2cb-kube-api-access-m4rzk\") pod \"ceilometer-0\" (UID: \"c29287a1-7481-405e-8641-8300768eb2cb\") " pod="openstack/ceilometer-0" Nov 25 11:58:21 crc kubenswrapper[4706]: I1125 11:58:21.865016 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c29287a1-7481-405e-8641-8300768eb2cb-log-httpd\") pod \"ceilometer-0\" (UID: \"c29287a1-7481-405e-8641-8300768eb2cb\") " pod="openstack/ceilometer-0" Nov 25 11:58:21 crc kubenswrapper[4706]: I1125 11:58:21.865044 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c29287a1-7481-405e-8641-8300768eb2cb-run-httpd\") pod \"ceilometer-0\" (UID: \"c29287a1-7481-405e-8641-8300768eb2cb\") " pod="openstack/ceilometer-0" Nov 25 11:58:21 crc kubenswrapper[4706]: I1125 11:58:21.865093 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/c29287a1-7481-405e-8641-8300768eb2cb-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"c29287a1-7481-405e-8641-8300768eb2cb\") " pod="openstack/ceilometer-0" Nov 25 11:58:21 crc kubenswrapper[4706]: I1125 11:58:21.865119 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c29287a1-7481-405e-8641-8300768eb2cb-scripts\") pod \"ceilometer-0\" (UID: \"c29287a1-7481-405e-8641-8300768eb2cb\") " pod="openstack/ceilometer-0" Nov 25 11:58:21 crc kubenswrapper[4706]: I1125 11:58:21.865237 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c29287a1-7481-405e-8641-8300768eb2cb-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"c29287a1-7481-405e-8641-8300768eb2cb\") " pod="openstack/ceilometer-0" Nov 25 11:58:21 crc kubenswrapper[4706]: I1125 11:58:21.865262 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c29287a1-7481-405e-8641-8300768eb2cb-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"c29287a1-7481-405e-8641-8300768eb2cb\") " pod="openstack/ceilometer-0" Nov 25 11:58:21 crc kubenswrapper[4706]: I1125 11:58:21.865282 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c29287a1-7481-405e-8641-8300768eb2cb-config-data\") pod \"ceilometer-0\" (UID: \"c29287a1-7481-405e-8641-8300768eb2cb\") " pod="openstack/ceilometer-0" Nov 25 11:58:21 crc kubenswrapper[4706]: I1125 11:58:21.868389 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c29287a1-7481-405e-8641-8300768eb2cb-log-httpd\") pod \"ceilometer-0\" (UID: \"c29287a1-7481-405e-8641-8300768eb2cb\") " pod="openstack/ceilometer-0" Nov 25 11:58:21 crc kubenswrapper[4706]: I1125 11:58:21.868443 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c29287a1-7481-405e-8641-8300768eb2cb-run-httpd\") pod \"ceilometer-0\" (UID: \"c29287a1-7481-405e-8641-8300768eb2cb\") " pod="openstack/ceilometer-0" Nov 25 11:58:21 crc kubenswrapper[4706]: I1125 
11:58:21.871858 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c29287a1-7481-405e-8641-8300768eb2cb-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"c29287a1-7481-405e-8641-8300768eb2cb\") " pod="openstack/ceilometer-0" Nov 25 11:58:21 crc kubenswrapper[4706]: I1125 11:58:21.872551 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c29287a1-7481-405e-8641-8300768eb2cb-scripts\") pod \"ceilometer-0\" (UID: \"c29287a1-7481-405e-8641-8300768eb2cb\") " pod="openstack/ceilometer-0" Nov 25 11:58:21 crc kubenswrapper[4706]: I1125 11:58:21.875367 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/c29287a1-7481-405e-8641-8300768eb2cb-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"c29287a1-7481-405e-8641-8300768eb2cb\") " pod="openstack/ceilometer-0" Nov 25 11:58:21 crc kubenswrapper[4706]: I1125 11:58:21.875783 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c29287a1-7481-405e-8641-8300768eb2cb-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"c29287a1-7481-405e-8641-8300768eb2cb\") " pod="openstack/ceilometer-0" Nov 25 11:58:21 crc kubenswrapper[4706]: I1125 11:58:21.879560 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c29287a1-7481-405e-8641-8300768eb2cb-config-data\") pod \"ceilometer-0\" (UID: \"c29287a1-7481-405e-8641-8300768eb2cb\") " pod="openstack/ceilometer-0" Nov 25 11:58:21 crc kubenswrapper[4706]: I1125 11:58:21.890432 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m4rzk\" (UniqueName: \"kubernetes.io/projected/c29287a1-7481-405e-8641-8300768eb2cb-kube-api-access-m4rzk\") pod \"ceilometer-0\" (UID: 
\"c29287a1-7481-405e-8641-8300768eb2cb\") " pod="openstack/ceilometer-0" Nov 25 11:58:21 crc kubenswrapper[4706]: I1125 11:58:21.921141 4706 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 25 11:58:21 crc kubenswrapper[4706]: I1125 11:58:21.948480 4706 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3df2b1f2-0fee-454e-a77d-8ae5ce76ed9f" path="/var/lib/kubelet/pods/3df2b1f2-0fee-454e-a77d-8ae5ce76ed9f/volumes" Nov 25 11:58:21 crc kubenswrapper[4706]: I1125 11:58:21.949405 4706 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4f4468bc-ad45-4f59-8911-b4fc57f942d3" path="/var/lib/kubelet/pods/4f4468bc-ad45-4f59-8911-b4fc57f942d3/volumes" Nov 25 11:58:21 crc kubenswrapper[4706]: I1125 11:58:21.950093 4706 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d560e53-d5ef-4b6b-af31-d1b5856dbf47" path="/var/lib/kubelet/pods/9d560e53-d5ef-4b6b-af31-d1b5856dbf47/volumes" Nov 25 11:58:21 crc kubenswrapper[4706]: I1125 11:58:21.961168 4706 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-87sfg" Nov 25 11:58:22 crc kubenswrapper[4706]: I1125 11:58:22.070006 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ca66dab3-01b2-4fac-b6c9-c09b2704a670-scripts\") pod \"ca66dab3-01b2-4fac-b6c9-c09b2704a670\" (UID: \"ca66dab3-01b2-4fac-b6c9-c09b2704a670\") " Nov 25 11:58:22 crc kubenswrapper[4706]: I1125 11:58:22.070080 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ca66dab3-01b2-4fac-b6c9-c09b2704a670-config-data\") pod \"ca66dab3-01b2-4fac-b6c9-c09b2704a670\" (UID: \"ca66dab3-01b2-4fac-b6c9-c09b2704a670\") " Nov 25 11:58:22 crc kubenswrapper[4706]: I1125 11:58:22.070159 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ca66dab3-01b2-4fac-b6c9-c09b2704a670-combined-ca-bundle\") pod \"ca66dab3-01b2-4fac-b6c9-c09b2704a670\" (UID: \"ca66dab3-01b2-4fac-b6c9-c09b2704a670\") " Nov 25 11:58:22 crc kubenswrapper[4706]: I1125 11:58:22.070199 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dg42n\" (UniqueName: \"kubernetes.io/projected/ca66dab3-01b2-4fac-b6c9-c09b2704a670-kube-api-access-dg42n\") pod \"ca66dab3-01b2-4fac-b6c9-c09b2704a670\" (UID: \"ca66dab3-01b2-4fac-b6c9-c09b2704a670\") " Nov 25 11:58:22 crc kubenswrapper[4706]: I1125 11:58:22.074390 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ca66dab3-01b2-4fac-b6c9-c09b2704a670-scripts" (OuterVolumeSpecName: "scripts") pod "ca66dab3-01b2-4fac-b6c9-c09b2704a670" (UID: "ca66dab3-01b2-4fac-b6c9-c09b2704a670"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 11:58:22 crc kubenswrapper[4706]: I1125 11:58:22.077491 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ca66dab3-01b2-4fac-b6c9-c09b2704a670-kube-api-access-dg42n" (OuterVolumeSpecName: "kube-api-access-dg42n") pod "ca66dab3-01b2-4fac-b6c9-c09b2704a670" (UID: "ca66dab3-01b2-4fac-b6c9-c09b2704a670"). InnerVolumeSpecName "kube-api-access-dg42n". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 11:58:22 crc kubenswrapper[4706]: I1125 11:58:22.102764 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ca66dab3-01b2-4fac-b6c9-c09b2704a670-config-data" (OuterVolumeSpecName: "config-data") pod "ca66dab3-01b2-4fac-b6c9-c09b2704a670" (UID: "ca66dab3-01b2-4fac-b6c9-c09b2704a670"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 11:58:22 crc kubenswrapper[4706]: I1125 11:58:22.111315 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ca66dab3-01b2-4fac-b6c9-c09b2704a670-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ca66dab3-01b2-4fac-b6c9-c09b2704a670" (UID: "ca66dab3-01b2-4fac-b6c9-c09b2704a670"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 11:58:22 crc kubenswrapper[4706]: I1125 11:58:22.174795 4706 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ca66dab3-01b2-4fac-b6c9-c09b2704a670-scripts\") on node \"crc\" DevicePath \"\"" Nov 25 11:58:22 crc kubenswrapper[4706]: I1125 11:58:22.174836 4706 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ca66dab3-01b2-4fac-b6c9-c09b2704a670-config-data\") on node \"crc\" DevicePath \"\"" Nov 25 11:58:22 crc kubenswrapper[4706]: I1125 11:58:22.174848 4706 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ca66dab3-01b2-4fac-b6c9-c09b2704a670-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 11:58:22 crc kubenswrapper[4706]: I1125 11:58:22.174864 4706 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dg42n\" (UniqueName: \"kubernetes.io/projected/ca66dab3-01b2-4fac-b6c9-c09b2704a670-kube-api-access-dg42n\") on node \"crc\" DevicePath \"\"" Nov 25 11:58:22 crc kubenswrapper[4706]: I1125 11:58:22.262438 4706 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 25 11:58:22 crc kubenswrapper[4706]: W1125 11:58:22.268786 4706 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc29287a1_7481_405e_8641_8300768eb2cb.slice/crio-10b00a82d464de25b8381007a1d1f44a6ebf889cf25d9ff5fef250e5065be2c5 WatchSource:0}: Error finding container 10b00a82d464de25b8381007a1d1f44a6ebf889cf25d9ff5fef250e5065be2c5: Status 404 returned error can't find the container with id 10b00a82d464de25b8381007a1d1f44a6ebf889cf25d9ff5fef250e5065be2c5 Nov 25 11:58:22 crc kubenswrapper[4706]: I1125 11:58:22.517124 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"c29287a1-7481-405e-8641-8300768eb2cb","Type":"ContainerStarted","Data":"10b00a82d464de25b8381007a1d1f44a6ebf889cf25d9ff5fef250e5065be2c5"} Nov 25 11:58:22 crc kubenswrapper[4706]: I1125 11:58:22.519551 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"ab5ba648-4cd1-4304-9470-e10ea703d56d","Type":"ContainerStarted","Data":"2d9dbeb66fdecd423ec896129d3be8705b4645c81b637763068ba0d500828586"} Nov 25 11:58:22 crc kubenswrapper[4706]: I1125 11:58:22.519579 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"ab5ba648-4cd1-4304-9470-e10ea703d56d","Type":"ContainerStarted","Data":"981d8cccc856fff1da7933bb683dbbe98131d72f363f703346716b8cc851fab0"} Nov 25 11:58:22 crc kubenswrapper[4706]: I1125 11:58:22.523933 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-87sfg" event={"ID":"ca66dab3-01b2-4fac-b6c9-c09b2704a670","Type":"ContainerDied","Data":"3e39b9703a7a95bf7a14a6f3c9ccd658d28a126819ce8e3d0000d1eaba584128"} Nov 25 11:58:22 crc kubenswrapper[4706]: I1125 11:58:22.523954 4706 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3e39b9703a7a95bf7a14a6f3c9ccd658d28a126819ce8e3d0000d1eaba584128" Nov 25 11:58:22 crc kubenswrapper[4706]: I1125 11:58:22.523991 4706 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-87sfg" Nov 25 11:58:22 crc kubenswrapper[4706]: I1125 11:58:22.551501 4706 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.5514850730000003 podStartE2EDuration="2.551485073s" podCreationTimestamp="2025-11-25 11:58:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 11:58:22.535913291 +0000 UTC m=+1311.450470682" watchObservedRunningTime="2025-11-25 11:58:22.551485073 +0000 UTC m=+1311.466042454" Nov 25 11:58:22 crc kubenswrapper[4706]: I1125 11:58:22.581416 4706 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-0"] Nov 25 11:58:22 crc kubenswrapper[4706]: E1125 11:58:22.581963 4706 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ca66dab3-01b2-4fac-b6c9-c09b2704a670" containerName="nova-cell1-conductor-db-sync" Nov 25 11:58:22 crc kubenswrapper[4706]: I1125 11:58:22.581985 4706 state_mem.go:107] "Deleted CPUSet assignment" podUID="ca66dab3-01b2-4fac-b6c9-c09b2704a670" containerName="nova-cell1-conductor-db-sync" Nov 25 11:58:22 crc kubenswrapper[4706]: I1125 11:58:22.582183 4706 memory_manager.go:354] "RemoveStaleState removing state" podUID="ca66dab3-01b2-4fac-b6c9-c09b2704a670" containerName="nova-cell1-conductor-db-sync" Nov 25 11:58:22 crc kubenswrapper[4706]: I1125 11:58:22.582763 4706 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-0" Nov 25 11:58:22 crc kubenswrapper[4706]: I1125 11:58:22.585111 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Nov 25 11:58:22 crc kubenswrapper[4706]: I1125 11:58:22.590774 4706 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Nov 25 11:58:22 crc kubenswrapper[4706]: I1125 11:58:22.685765 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/125dfab1-ad73-40ed-bd12-3e061e6b0ec2-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"125dfab1-ad73-40ed-bd12-3e061e6b0ec2\") " pod="openstack/nova-cell1-conductor-0" Nov 25 11:58:22 crc kubenswrapper[4706]: I1125 11:58:22.685842 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/125dfab1-ad73-40ed-bd12-3e061e6b0ec2-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"125dfab1-ad73-40ed-bd12-3e061e6b0ec2\") " pod="openstack/nova-cell1-conductor-0" Nov 25 11:58:22 crc kubenswrapper[4706]: I1125 11:58:22.685927 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sbrxx\" (UniqueName: \"kubernetes.io/projected/125dfab1-ad73-40ed-bd12-3e061e6b0ec2-kube-api-access-sbrxx\") pod \"nova-cell1-conductor-0\" (UID: \"125dfab1-ad73-40ed-bd12-3e061e6b0ec2\") " pod="openstack/nova-cell1-conductor-0" Nov 25 11:58:22 crc kubenswrapper[4706]: I1125 11:58:22.788254 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/125dfab1-ad73-40ed-bd12-3e061e6b0ec2-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"125dfab1-ad73-40ed-bd12-3e061e6b0ec2\") " pod="openstack/nova-cell1-conductor-0" Nov 25 11:58:22 crc 
kubenswrapper[4706]: I1125 11:58:22.788608 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/125dfab1-ad73-40ed-bd12-3e061e6b0ec2-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"125dfab1-ad73-40ed-bd12-3e061e6b0ec2\") " pod="openstack/nova-cell1-conductor-0" Nov 25 11:58:22 crc kubenswrapper[4706]: I1125 11:58:22.788705 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sbrxx\" (UniqueName: \"kubernetes.io/projected/125dfab1-ad73-40ed-bd12-3e061e6b0ec2-kube-api-access-sbrxx\") pod \"nova-cell1-conductor-0\" (UID: \"125dfab1-ad73-40ed-bd12-3e061e6b0ec2\") " pod="openstack/nova-cell1-conductor-0" Nov 25 11:58:22 crc kubenswrapper[4706]: I1125 11:58:22.797249 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/125dfab1-ad73-40ed-bd12-3e061e6b0ec2-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"125dfab1-ad73-40ed-bd12-3e061e6b0ec2\") " pod="openstack/nova-cell1-conductor-0" Nov 25 11:58:22 crc kubenswrapper[4706]: I1125 11:58:22.804859 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/125dfab1-ad73-40ed-bd12-3e061e6b0ec2-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"125dfab1-ad73-40ed-bd12-3e061e6b0ec2\") " pod="openstack/nova-cell1-conductor-0" Nov 25 11:58:22 crc kubenswrapper[4706]: I1125 11:58:22.808093 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sbrxx\" (UniqueName: \"kubernetes.io/projected/125dfab1-ad73-40ed-bd12-3e061e6b0ec2-kube-api-access-sbrxx\") pod \"nova-cell1-conductor-0\" (UID: \"125dfab1-ad73-40ed-bd12-3e061e6b0ec2\") " pod="openstack/nova-cell1-conductor-0" Nov 25 11:58:22 crc kubenswrapper[4706]: I1125 11:58:22.924754 4706 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-0" Nov 25 11:58:23 crc kubenswrapper[4706]: I1125 11:58:23.536081 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c29287a1-7481-405e-8641-8300768eb2cb","Type":"ContainerStarted","Data":"88b1fa76bc4a05b1d800094737e0d8450adb0cdde2f2103ccfe40dd18350602f"} Nov 25 11:58:23 crc kubenswrapper[4706]: I1125 11:58:23.620057 4706 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Nov 25 11:58:23 crc kubenswrapper[4706]: E1125 11:58:23.889980 4706 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="ef241260c1cbe817bb94689eae45d934ab69fa96a5ffe387e49137fd360175c1" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Nov 25 11:58:23 crc kubenswrapper[4706]: E1125 11:58:23.891492 4706 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="ef241260c1cbe817bb94689eae45d934ab69fa96a5ffe387e49137fd360175c1" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Nov 25 11:58:23 crc kubenswrapper[4706]: E1125 11:58:23.892471 4706 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="ef241260c1cbe817bb94689eae45d934ab69fa96a5ffe387e49137fd360175c1" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Nov 25 11:58:23 crc kubenswrapper[4706]: E1125 11:58:23.892515 4706 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-scheduler-0" 
podUID="1dfcf8c4-dafb-4718-b97d-d0b72e9cff85" containerName="nova-scheduler-scheduler" Nov 25 11:58:24 crc kubenswrapper[4706]: I1125 11:58:24.549643 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"125dfab1-ad73-40ed-bd12-3e061e6b0ec2","Type":"ContainerStarted","Data":"60a9cf13043fe8f8c2b5f0354d9e27ce95422b8a25aca23fc5a052af5f37233d"} Nov 25 11:58:24 crc kubenswrapper[4706]: I1125 11:58:24.549943 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"125dfab1-ad73-40ed-bd12-3e061e6b0ec2","Type":"ContainerStarted","Data":"a73785d91d2fff6d3e80ff89fadba6b79f3973f3a43519728fb489bc07884ab5"} Nov 25 11:58:24 crc kubenswrapper[4706]: I1125 11:58:24.549989 4706 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-conductor-0" Nov 25 11:58:24 crc kubenswrapper[4706]: I1125 11:58:24.552162 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c29287a1-7481-405e-8641-8300768eb2cb","Type":"ContainerStarted","Data":"d02ccd3ede20522a7a1b48fd7ce7fa9ce2ad19d4f049fac66027a2ac47f8d096"} Nov 25 11:58:24 crc kubenswrapper[4706]: I1125 11:58:24.571928 4706 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-0" podStartSLOduration=2.571912359 podStartE2EDuration="2.571912359s" podCreationTimestamp="2025-11-25 11:58:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 11:58:24.568737029 +0000 UTC m=+1313.483294410" watchObservedRunningTime="2025-11-25 11:58:24.571912359 +0000 UTC m=+1313.486469740" Nov 25 11:58:25 crc kubenswrapper[4706]: I1125 11:58:25.566125 4706 generic.go:334] "Generic (PLEG): container finished" podID="62968efd-c3bc-4ccb-892f-b1479a5da4cc" containerID="7446f31b337cd4625add204856cf1a631ec9341af4fe1f59547a39610254999f" exitCode=0 Nov 
25 11:58:25 crc kubenswrapper[4706]: I1125 11:58:25.566201 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"62968efd-c3bc-4ccb-892f-b1479a5da4cc","Type":"ContainerDied","Data":"7446f31b337cd4625add204856cf1a631ec9341af4fe1f59547a39610254999f"} Nov 25 11:58:25 crc kubenswrapper[4706]: I1125 11:58:25.568767 4706 generic.go:334] "Generic (PLEG): container finished" podID="1dfcf8c4-dafb-4718-b97d-d0b72e9cff85" containerID="ef241260c1cbe817bb94689eae45d934ab69fa96a5ffe387e49137fd360175c1" exitCode=0 Nov 25 11:58:25 crc kubenswrapper[4706]: I1125 11:58:25.568830 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"1dfcf8c4-dafb-4718-b97d-d0b72e9cff85","Type":"ContainerDied","Data":"ef241260c1cbe817bb94689eae45d934ab69fa96a5ffe387e49137fd360175c1"} Nov 25 11:58:25 crc kubenswrapper[4706]: I1125 11:58:25.571074 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c29287a1-7481-405e-8641-8300768eb2cb","Type":"ContainerStarted","Data":"b6e967141e0e69251d543ef222085c21a8d97a814100faa261dd97704a4004e4"} Nov 25 11:58:25 crc kubenswrapper[4706]: I1125 11:58:25.625465 4706 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Nov 25 11:58:25 crc kubenswrapper[4706]: I1125 11:58:25.749541 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/62968efd-c3bc-4ccb-892f-b1479a5da4cc-combined-ca-bundle\") pod \"62968efd-c3bc-4ccb-892f-b1479a5da4cc\" (UID: \"62968efd-c3bc-4ccb-892f-b1479a5da4cc\") " Nov 25 11:58:25 crc kubenswrapper[4706]: I1125 11:58:25.749693 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/62968efd-c3bc-4ccb-892f-b1479a5da4cc-config-data\") pod \"62968efd-c3bc-4ccb-892f-b1479a5da4cc\" (UID: \"62968efd-c3bc-4ccb-892f-b1479a5da4cc\") " Nov 25 11:58:25 crc kubenswrapper[4706]: I1125 11:58:25.749729 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/62968efd-c3bc-4ccb-892f-b1479a5da4cc-logs\") pod \"62968efd-c3bc-4ccb-892f-b1479a5da4cc\" (UID: \"62968efd-c3bc-4ccb-892f-b1479a5da4cc\") " Nov 25 11:58:25 crc kubenswrapper[4706]: I1125 11:58:25.749858 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hmrgg\" (UniqueName: \"kubernetes.io/projected/62968efd-c3bc-4ccb-892f-b1479a5da4cc-kube-api-access-hmrgg\") pod \"62968efd-c3bc-4ccb-892f-b1479a5da4cc\" (UID: \"62968efd-c3bc-4ccb-892f-b1479a5da4cc\") " Nov 25 11:58:25 crc kubenswrapper[4706]: I1125 11:58:25.752284 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/62968efd-c3bc-4ccb-892f-b1479a5da4cc-logs" (OuterVolumeSpecName: "logs") pod "62968efd-c3bc-4ccb-892f-b1479a5da4cc" (UID: "62968efd-c3bc-4ccb-892f-b1479a5da4cc"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 11:58:25 crc kubenswrapper[4706]: I1125 11:58:25.756014 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/62968efd-c3bc-4ccb-892f-b1479a5da4cc-kube-api-access-hmrgg" (OuterVolumeSpecName: "kube-api-access-hmrgg") pod "62968efd-c3bc-4ccb-892f-b1479a5da4cc" (UID: "62968efd-c3bc-4ccb-892f-b1479a5da4cc"). InnerVolumeSpecName "kube-api-access-hmrgg". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 11:58:25 crc kubenswrapper[4706]: E1125 11:58:25.781219 4706 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/62968efd-c3bc-4ccb-892f-b1479a5da4cc-config-data podName:62968efd-c3bc-4ccb-892f-b1479a5da4cc nodeName:}" failed. No retries permitted until 2025-11-25 11:58:26.281187277 +0000 UTC m=+1315.195744668 (durationBeforeRetry 500ms). Error: error cleaning subPath mounts for volume "config-data" (UniqueName: "kubernetes.io/secret/62968efd-c3bc-4ccb-892f-b1479a5da4cc-config-data") pod "62968efd-c3bc-4ccb-892f-b1479a5da4cc" (UID: "62968efd-c3bc-4ccb-892f-b1479a5da4cc") : error deleting /var/lib/kubelet/pods/62968efd-c3bc-4ccb-892f-b1479a5da4cc/volume-subpaths: remove /var/lib/kubelet/pods/62968efd-c3bc-4ccb-892f-b1479a5da4cc/volume-subpaths: no such file or directory Nov 25 11:58:25 crc kubenswrapper[4706]: I1125 11:58:25.785682 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/62968efd-c3bc-4ccb-892f-b1479a5da4cc-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "62968efd-c3bc-4ccb-892f-b1479a5da4cc" (UID: "62968efd-c3bc-4ccb-892f-b1479a5da4cc"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 11:58:25 crc kubenswrapper[4706]: I1125 11:58:25.851552 4706 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hmrgg\" (UniqueName: \"kubernetes.io/projected/62968efd-c3bc-4ccb-892f-b1479a5da4cc-kube-api-access-hmrgg\") on node \"crc\" DevicePath \"\"" Nov 25 11:58:25 crc kubenswrapper[4706]: I1125 11:58:25.851592 4706 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/62968efd-c3bc-4ccb-892f-b1479a5da4cc-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 11:58:25 crc kubenswrapper[4706]: I1125 11:58:25.851602 4706 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/62968efd-c3bc-4ccb-892f-b1479a5da4cc-logs\") on node \"crc\" DevicePath \"\"" Nov 25 11:58:25 crc kubenswrapper[4706]: I1125 11:58:25.873292 4706 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Nov 25 11:58:25 crc kubenswrapper[4706]: I1125 11:58:25.873426 4706 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Nov 25 11:58:26 crc kubenswrapper[4706]: I1125 11:58:26.098783 4706 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Nov 25 11:58:26 crc kubenswrapper[4706]: I1125 11:58:26.259568 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bzlks\" (UniqueName: \"kubernetes.io/projected/1dfcf8c4-dafb-4718-b97d-d0b72e9cff85-kube-api-access-bzlks\") pod \"1dfcf8c4-dafb-4718-b97d-d0b72e9cff85\" (UID: \"1dfcf8c4-dafb-4718-b97d-d0b72e9cff85\") " Nov 25 11:58:26 crc kubenswrapper[4706]: I1125 11:58:26.259633 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1dfcf8c4-dafb-4718-b97d-d0b72e9cff85-config-data\") pod \"1dfcf8c4-dafb-4718-b97d-d0b72e9cff85\" (UID: \"1dfcf8c4-dafb-4718-b97d-d0b72e9cff85\") " Nov 25 11:58:26 crc kubenswrapper[4706]: I1125 11:58:26.259657 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1dfcf8c4-dafb-4718-b97d-d0b72e9cff85-combined-ca-bundle\") pod \"1dfcf8c4-dafb-4718-b97d-d0b72e9cff85\" (UID: \"1dfcf8c4-dafb-4718-b97d-d0b72e9cff85\") " Nov 25 11:58:26 crc kubenswrapper[4706]: I1125 11:58:26.271495 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1dfcf8c4-dafb-4718-b97d-d0b72e9cff85-kube-api-access-bzlks" (OuterVolumeSpecName: "kube-api-access-bzlks") pod "1dfcf8c4-dafb-4718-b97d-d0b72e9cff85" (UID: "1dfcf8c4-dafb-4718-b97d-d0b72e9cff85"). InnerVolumeSpecName "kube-api-access-bzlks". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 11:58:26 crc kubenswrapper[4706]: I1125 11:58:26.302368 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1dfcf8c4-dafb-4718-b97d-d0b72e9cff85-config-data" (OuterVolumeSpecName: "config-data") pod "1dfcf8c4-dafb-4718-b97d-d0b72e9cff85" (UID: "1dfcf8c4-dafb-4718-b97d-d0b72e9cff85"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 11:58:26 crc kubenswrapper[4706]: I1125 11:58:26.305363 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1dfcf8c4-dafb-4718-b97d-d0b72e9cff85-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1dfcf8c4-dafb-4718-b97d-d0b72e9cff85" (UID: "1dfcf8c4-dafb-4718-b97d-d0b72e9cff85"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 11:58:26 crc kubenswrapper[4706]: I1125 11:58:26.361994 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/62968efd-c3bc-4ccb-892f-b1479a5da4cc-config-data\") pod \"62968efd-c3bc-4ccb-892f-b1479a5da4cc\" (UID: \"62968efd-c3bc-4ccb-892f-b1479a5da4cc\") " Nov 25 11:58:26 crc kubenswrapper[4706]: I1125 11:58:26.362521 4706 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bzlks\" (UniqueName: \"kubernetes.io/projected/1dfcf8c4-dafb-4718-b97d-d0b72e9cff85-kube-api-access-bzlks\") on node \"crc\" DevicePath \"\"" Nov 25 11:58:26 crc kubenswrapper[4706]: I1125 11:58:26.362538 4706 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1dfcf8c4-dafb-4718-b97d-d0b72e9cff85-config-data\") on node \"crc\" DevicePath \"\"" Nov 25 11:58:26 crc kubenswrapper[4706]: I1125 11:58:26.362548 4706 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1dfcf8c4-dafb-4718-b97d-d0b72e9cff85-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 11:58:26 crc kubenswrapper[4706]: I1125 11:58:26.365378 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/62968efd-c3bc-4ccb-892f-b1479a5da4cc-config-data" (OuterVolumeSpecName: "config-data") pod "62968efd-c3bc-4ccb-892f-b1479a5da4cc" (UID: "62968efd-c3bc-4ccb-892f-b1479a5da4cc"). 
InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 11:58:26 crc kubenswrapper[4706]: I1125 11:58:26.464188 4706 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/62968efd-c3bc-4ccb-892f-b1479a5da4cc-config-data\") on node \"crc\" DevicePath \"\"" Nov 25 11:58:26 crc kubenswrapper[4706]: I1125 11:58:26.588776 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"62968efd-c3bc-4ccb-892f-b1479a5da4cc","Type":"ContainerDied","Data":"cbef7341c4fcb241e42eb0880f344f532e1ef21053656261dd7fde9b1f0406ac"} Nov 25 11:58:26 crc kubenswrapper[4706]: I1125 11:58:26.588804 4706 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Nov 25 11:58:26 crc kubenswrapper[4706]: I1125 11:58:26.588845 4706 scope.go:117] "RemoveContainer" containerID="7446f31b337cd4625add204856cf1a631ec9341af4fe1f59547a39610254999f" Nov 25 11:58:26 crc kubenswrapper[4706]: I1125 11:58:26.590567 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"1dfcf8c4-dafb-4718-b97d-d0b72e9cff85","Type":"ContainerDied","Data":"eba3a3e01e82d44d4ba0ebcb7523c34819fca45d346861985dd7f352829acda9"} Nov 25 11:58:26 crc kubenswrapper[4706]: I1125 11:58:26.590624 4706 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Nov 25 11:58:26 crc kubenswrapper[4706]: I1125 11:58:26.627133 4706 scope.go:117] "RemoveContainer" containerID="a2e980f8ad229edb2c569d7035e08209d34cd0fa079ca7c46fdfe3210380545f" Nov 25 11:58:26 crc kubenswrapper[4706]: I1125 11:58:26.667275 4706 scope.go:117] "RemoveContainer" containerID="ef241260c1cbe817bb94689eae45d934ab69fa96a5ffe387e49137fd360175c1" Nov 25 11:58:26 crc kubenswrapper[4706]: I1125 11:58:26.669530 4706 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Nov 25 11:58:26 crc kubenswrapper[4706]: I1125 11:58:26.684832 4706 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Nov 25 11:58:26 crc kubenswrapper[4706]: I1125 11:58:26.703322 4706 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Nov 25 11:58:26 crc kubenswrapper[4706]: E1125 11:58:26.703810 4706 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="62968efd-c3bc-4ccb-892f-b1479a5da4cc" containerName="nova-api-api" Nov 25 11:58:26 crc kubenswrapper[4706]: I1125 11:58:26.703831 4706 state_mem.go:107] "Deleted CPUSet assignment" podUID="62968efd-c3bc-4ccb-892f-b1479a5da4cc" containerName="nova-api-api" Nov 25 11:58:26 crc kubenswrapper[4706]: E1125 11:58:26.703867 4706 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="62968efd-c3bc-4ccb-892f-b1479a5da4cc" containerName="nova-api-log" Nov 25 11:58:26 crc kubenswrapper[4706]: I1125 11:58:26.703876 4706 state_mem.go:107] "Deleted CPUSet assignment" podUID="62968efd-c3bc-4ccb-892f-b1479a5da4cc" containerName="nova-api-log" Nov 25 11:58:26 crc kubenswrapper[4706]: E1125 11:58:26.703897 4706 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1dfcf8c4-dafb-4718-b97d-d0b72e9cff85" containerName="nova-scheduler-scheduler" Nov 25 11:58:26 crc kubenswrapper[4706]: I1125 11:58:26.703905 4706 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="1dfcf8c4-dafb-4718-b97d-d0b72e9cff85" containerName="nova-scheduler-scheduler" Nov 25 11:58:26 crc kubenswrapper[4706]: I1125 11:58:26.704161 4706 memory_manager.go:354] "RemoveStaleState removing state" podUID="62968efd-c3bc-4ccb-892f-b1479a5da4cc" containerName="nova-api-log" Nov 25 11:58:26 crc kubenswrapper[4706]: I1125 11:58:26.704184 4706 memory_manager.go:354] "RemoveStaleState removing state" podUID="62968efd-c3bc-4ccb-892f-b1479a5da4cc" containerName="nova-api-api" Nov 25 11:58:26 crc kubenswrapper[4706]: I1125 11:58:26.704202 4706 memory_manager.go:354] "RemoveStaleState removing state" podUID="1dfcf8c4-dafb-4718-b97d-d0b72e9cff85" containerName="nova-scheduler-scheduler" Nov 25 11:58:26 crc kubenswrapper[4706]: I1125 11:58:26.704980 4706 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Nov 25 11:58:26 crc kubenswrapper[4706]: I1125 11:58:26.707516 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Nov 25 11:58:26 crc kubenswrapper[4706]: I1125 11:58:26.712899 4706 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Nov 25 11:58:26 crc kubenswrapper[4706]: I1125 11:58:26.722438 4706 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Nov 25 11:58:26 crc kubenswrapper[4706]: I1125 11:58:26.730783 4706 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Nov 25 11:58:26 crc kubenswrapper[4706]: I1125 11:58:26.738971 4706 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Nov 25 11:58:26 crc kubenswrapper[4706]: I1125 11:58:26.740573 4706 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Nov 25 11:58:26 crc kubenswrapper[4706]: I1125 11:58:26.743082 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Nov 25 11:58:26 crc kubenswrapper[4706]: I1125 11:58:26.749650 4706 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Nov 25 11:58:26 crc kubenswrapper[4706]: I1125 11:58:26.871507 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/36357458-7aac-49fa-a118-5208a484df3d-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"36357458-7aac-49fa-a118-5208a484df3d\") " pod="openstack/nova-scheduler-0" Nov 25 11:58:26 crc kubenswrapper[4706]: I1125 11:58:26.871574 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7684ae52-10e0-4b84-a8aa-9f5e744b681c-config-data\") pod \"nova-api-0\" (UID: \"7684ae52-10e0-4b84-a8aa-9f5e744b681c\") " pod="openstack/nova-api-0" Nov 25 11:58:26 crc kubenswrapper[4706]: I1125 11:58:26.871628 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/36357458-7aac-49fa-a118-5208a484df3d-config-data\") pod \"nova-scheduler-0\" (UID: \"36357458-7aac-49fa-a118-5208a484df3d\") " pod="openstack/nova-scheduler-0" Nov 25 11:58:26 crc kubenswrapper[4706]: I1125 11:58:26.871686 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7684ae52-10e0-4b84-a8aa-9f5e744b681c-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"7684ae52-10e0-4b84-a8aa-9f5e744b681c\") " pod="openstack/nova-api-0" Nov 25 11:58:26 crc kubenswrapper[4706]: I1125 11:58:26.871773 4706 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-btxs9\" (UniqueName: \"kubernetes.io/projected/36357458-7aac-49fa-a118-5208a484df3d-kube-api-access-btxs9\") pod \"nova-scheduler-0\" (UID: \"36357458-7aac-49fa-a118-5208a484df3d\") " pod="openstack/nova-scheduler-0" Nov 25 11:58:26 crc kubenswrapper[4706]: I1125 11:58:26.871815 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7684ae52-10e0-4b84-a8aa-9f5e744b681c-logs\") pod \"nova-api-0\" (UID: \"7684ae52-10e0-4b84-a8aa-9f5e744b681c\") " pod="openstack/nova-api-0" Nov 25 11:58:26 crc kubenswrapper[4706]: I1125 11:58:26.871854 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sb7p9\" (UniqueName: \"kubernetes.io/projected/7684ae52-10e0-4b84-a8aa-9f5e744b681c-kube-api-access-sb7p9\") pod \"nova-api-0\" (UID: \"7684ae52-10e0-4b84-a8aa-9f5e744b681c\") " pod="openstack/nova-api-0" Nov 25 11:58:26 crc kubenswrapper[4706]: I1125 11:58:26.973246 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/36357458-7aac-49fa-a118-5208a484df3d-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"36357458-7aac-49fa-a118-5208a484df3d\") " pod="openstack/nova-scheduler-0" Nov 25 11:58:26 crc kubenswrapper[4706]: I1125 11:58:26.973309 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7684ae52-10e0-4b84-a8aa-9f5e744b681c-config-data\") pod \"nova-api-0\" (UID: \"7684ae52-10e0-4b84-a8aa-9f5e744b681c\") " pod="openstack/nova-api-0" Nov 25 11:58:26 crc kubenswrapper[4706]: I1125 11:58:26.973343 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/36357458-7aac-49fa-a118-5208a484df3d-config-data\") pod \"nova-scheduler-0\" (UID: \"36357458-7aac-49fa-a118-5208a484df3d\") " pod="openstack/nova-scheduler-0" Nov 25 11:58:26 crc kubenswrapper[4706]: I1125 11:58:26.973367 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7684ae52-10e0-4b84-a8aa-9f5e744b681c-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"7684ae52-10e0-4b84-a8aa-9f5e744b681c\") " pod="openstack/nova-api-0" Nov 25 11:58:26 crc kubenswrapper[4706]: I1125 11:58:26.973419 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-btxs9\" (UniqueName: \"kubernetes.io/projected/36357458-7aac-49fa-a118-5208a484df3d-kube-api-access-btxs9\") pod \"nova-scheduler-0\" (UID: \"36357458-7aac-49fa-a118-5208a484df3d\") " pod="openstack/nova-scheduler-0" Nov 25 11:58:26 crc kubenswrapper[4706]: I1125 11:58:26.973445 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7684ae52-10e0-4b84-a8aa-9f5e744b681c-logs\") pod \"nova-api-0\" (UID: \"7684ae52-10e0-4b84-a8aa-9f5e744b681c\") " pod="openstack/nova-api-0" Nov 25 11:58:26 crc kubenswrapper[4706]: I1125 11:58:26.973489 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sb7p9\" (UniqueName: \"kubernetes.io/projected/7684ae52-10e0-4b84-a8aa-9f5e744b681c-kube-api-access-sb7p9\") pod \"nova-api-0\" (UID: \"7684ae52-10e0-4b84-a8aa-9f5e744b681c\") " pod="openstack/nova-api-0" Nov 25 11:58:26 crc kubenswrapper[4706]: I1125 11:58:26.974111 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7684ae52-10e0-4b84-a8aa-9f5e744b681c-logs\") pod \"nova-api-0\" (UID: \"7684ae52-10e0-4b84-a8aa-9f5e744b681c\") " pod="openstack/nova-api-0" Nov 25 11:58:26 crc kubenswrapper[4706]: I1125 
11:58:26.977739 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7684ae52-10e0-4b84-a8aa-9f5e744b681c-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"7684ae52-10e0-4b84-a8aa-9f5e744b681c\") " pod="openstack/nova-api-0" Nov 25 11:58:26 crc kubenswrapper[4706]: I1125 11:58:26.978087 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/36357458-7aac-49fa-a118-5208a484df3d-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"36357458-7aac-49fa-a118-5208a484df3d\") " pod="openstack/nova-scheduler-0" Nov 25 11:58:26 crc kubenswrapper[4706]: I1125 11:58:26.985845 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/36357458-7aac-49fa-a118-5208a484df3d-config-data\") pod \"nova-scheduler-0\" (UID: \"36357458-7aac-49fa-a118-5208a484df3d\") " pod="openstack/nova-scheduler-0" Nov 25 11:58:26 crc kubenswrapper[4706]: I1125 11:58:26.987107 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7684ae52-10e0-4b84-a8aa-9f5e744b681c-config-data\") pod \"nova-api-0\" (UID: \"7684ae52-10e0-4b84-a8aa-9f5e744b681c\") " pod="openstack/nova-api-0" Nov 25 11:58:26 crc kubenswrapper[4706]: I1125 11:58:26.991248 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sb7p9\" (UniqueName: \"kubernetes.io/projected/7684ae52-10e0-4b84-a8aa-9f5e744b681c-kube-api-access-sb7p9\") pod \"nova-api-0\" (UID: \"7684ae52-10e0-4b84-a8aa-9f5e744b681c\") " pod="openstack/nova-api-0" Nov 25 11:58:26 crc kubenswrapper[4706]: I1125 11:58:26.995523 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-btxs9\" (UniqueName: \"kubernetes.io/projected/36357458-7aac-49fa-a118-5208a484df3d-kube-api-access-btxs9\") pod \"nova-scheduler-0\" 
(UID: \"36357458-7aac-49fa-a118-5208a484df3d\") " pod="openstack/nova-scheduler-0" Nov 25 11:58:27 crc kubenswrapper[4706]: I1125 11:58:27.023326 4706 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Nov 25 11:58:27 crc kubenswrapper[4706]: I1125 11:58:27.060994 4706 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Nov 25 11:58:27 crc kubenswrapper[4706]: I1125 11:58:27.606418 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c29287a1-7481-405e-8641-8300768eb2cb","Type":"ContainerStarted","Data":"a0ed0180e0bbb373b25e70abfbd1001a1ef3b5e5ef924cb4b8e0cd29801a4c53"} Nov 25 11:58:27 crc kubenswrapper[4706]: W1125 11:58:27.623820 4706 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod36357458_7aac_49fa_a118_5208a484df3d.slice/crio-624f26fbfed55adfade49fca430ca002458ee5268e96c9b60398f9da6196a70f WatchSource:0}: Error finding container 624f26fbfed55adfade49fca430ca002458ee5268e96c9b60398f9da6196a70f: Status 404 returned error can't find the container with id 624f26fbfed55adfade49fca430ca002458ee5268e96c9b60398f9da6196a70f Nov 25 11:58:27 crc kubenswrapper[4706]: I1125 11:58:27.638423 4706 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Nov 25 11:58:27 crc kubenswrapper[4706]: W1125 11:58:27.787087 4706 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7684ae52_10e0_4b84_a8aa_9f5e744b681c.slice/crio-d82bf15982a99a1967dd99c9bee1e0087ba032281bdc5257f76b0c09a9769364 WatchSource:0}: Error finding container d82bf15982a99a1967dd99c9bee1e0087ba032281bdc5257f76b0c09a9769364: Status 404 returned error can't find the container with id d82bf15982a99a1967dd99c9bee1e0087ba032281bdc5257f76b0c09a9769364 Nov 25 11:58:27 crc kubenswrapper[4706]: 
I1125 11:58:27.789159 4706 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Nov 25 11:58:27 crc kubenswrapper[4706]: I1125 11:58:27.865006 4706 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Nov 25 11:58:27 crc kubenswrapper[4706]: I1125 11:58:27.940241 4706 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1dfcf8c4-dafb-4718-b97d-d0b72e9cff85" path="/var/lib/kubelet/pods/1dfcf8c4-dafb-4718-b97d-d0b72e9cff85/volumes" Nov 25 11:58:27 crc kubenswrapper[4706]: I1125 11:58:27.941255 4706 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="62968efd-c3bc-4ccb-892f-b1479a5da4cc" path="/var/lib/kubelet/pods/62968efd-c3bc-4ccb-892f-b1479a5da4cc/volumes" Nov 25 11:58:28 crc kubenswrapper[4706]: I1125 11:58:28.639100 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"36357458-7aac-49fa-a118-5208a484df3d","Type":"ContainerStarted","Data":"31478ca1a61cba5f2518fb62a72364d9502dd4ae830a575e2b25aee1cd2d8a43"} Nov 25 11:58:28 crc kubenswrapper[4706]: I1125 11:58:28.639151 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"36357458-7aac-49fa-a118-5208a484df3d","Type":"ContainerStarted","Data":"624f26fbfed55adfade49fca430ca002458ee5268e96c9b60398f9da6196a70f"} Nov 25 11:58:28 crc kubenswrapper[4706]: I1125 11:58:28.647591 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"7684ae52-10e0-4b84-a8aa-9f5e744b681c","Type":"ContainerStarted","Data":"d768e616411dcdb6bd2fc471582c1976a7fac18d1247eba3676c8623b8d1ec65"} Nov 25 11:58:28 crc kubenswrapper[4706]: I1125 11:58:28.647636 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"7684ae52-10e0-4b84-a8aa-9f5e744b681c","Type":"ContainerStarted","Data":"99efbe8098bf623b67e50576be8330f62829843e5249a1ba174bb70397214b69"} Nov 25 11:58:28 crc 
kubenswrapper[4706]: I1125 11:58:28.647655 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"7684ae52-10e0-4b84-a8aa-9f5e744b681c","Type":"ContainerStarted","Data":"d82bf15982a99a1967dd99c9bee1e0087ba032281bdc5257f76b0c09a9769364"} Nov 25 11:58:28 crc kubenswrapper[4706]: I1125 11:58:28.647689 4706 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Nov 25 11:58:28 crc kubenswrapper[4706]: I1125 11:58:28.670019 4706 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.670002034 podStartE2EDuration="2.670002034s" podCreationTimestamp="2025-11-25 11:58:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 11:58:28.659417407 +0000 UTC m=+1317.573974828" watchObservedRunningTime="2025-11-25 11:58:28.670002034 +0000 UTC m=+1317.584559405" Nov 25 11:58:28 crc kubenswrapper[4706]: I1125 11:58:28.687818 4706 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.687790912 podStartE2EDuration="2.687790912s" podCreationTimestamp="2025-11-25 11:58:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 11:58:28.676933098 +0000 UTC m=+1317.591490489" watchObservedRunningTime="2025-11-25 11:58:28.687790912 +0000 UTC m=+1317.602348303" Nov 25 11:58:28 crc kubenswrapper[4706]: I1125 11:58:28.704011 4706 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.667741196 podStartE2EDuration="7.703988521s" podCreationTimestamp="2025-11-25 11:58:21 +0000 UTC" firstStartedPulling="2025-11-25 11:58:22.271287387 +0000 UTC m=+1311.185844768" lastFinishedPulling="2025-11-25 11:58:27.307534712 +0000 UTC m=+1316.222092093" 
observedRunningTime="2025-11-25 11:58:28.696673616 +0000 UTC m=+1317.611231007" watchObservedRunningTime="2025-11-25 11:58:28.703988521 +0000 UTC m=+1317.618545902" Nov 25 11:58:30 crc kubenswrapper[4706]: I1125 11:58:30.873200 4706 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Nov 25 11:58:30 crc kubenswrapper[4706]: I1125 11:58:30.873600 4706 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Nov 25 11:58:31 crc kubenswrapper[4706]: I1125 11:58:31.124624 4706 patch_prober.go:28] interesting pod/machine-config-daemon-dhfpm container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 25 11:58:31 crc kubenswrapper[4706]: I1125 11:58:31.124982 4706 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dhfpm" podUID="0930887a-320c-4506-8c9c-f94d6d64516a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 25 11:58:31 crc kubenswrapper[4706]: I1125 11:58:31.884559 4706 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="ab5ba648-4cd1-4304-9470-e10ea703d56d" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.193:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Nov 25 11:58:31 crc kubenswrapper[4706]: I1125 11:58:31.884607 4706 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="ab5ba648-4cd1-4304-9470-e10ea703d56d" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.193:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting 
headers)" Nov 25 11:58:32 crc kubenswrapper[4706]: I1125 11:58:32.024518 4706 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Nov 25 11:58:32 crc kubenswrapper[4706]: I1125 11:58:32.971537 4706 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-conductor-0" Nov 25 11:58:37 crc kubenswrapper[4706]: I1125 11:58:37.024635 4706 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Nov 25 11:58:37 crc kubenswrapper[4706]: I1125 11:58:37.053440 4706 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Nov 25 11:58:37 crc kubenswrapper[4706]: I1125 11:58:37.063214 4706 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Nov 25 11:58:37 crc kubenswrapper[4706]: I1125 11:58:37.063282 4706 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Nov 25 11:58:37 crc kubenswrapper[4706]: I1125 11:58:37.762236 4706 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Nov 25 11:58:38 crc kubenswrapper[4706]: I1125 11:58:38.103609 4706 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="7684ae52-10e0-4b84-a8aa-9f5e744b681c" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.197:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Nov 25 11:58:38 crc kubenswrapper[4706]: I1125 11:58:38.145594 4706 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="7684ae52-10e0-4b84-a8aa-9f5e744b681c" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.197:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Nov 25 11:58:40 crc kubenswrapper[4706]: I1125 11:58:40.878415 4706 
kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Nov 25 11:58:40 crc kubenswrapper[4706]: I1125 11:58:40.879786 4706 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Nov 25 11:58:40 crc kubenswrapper[4706]: I1125 11:58:40.885355 4706 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Nov 25 11:58:41 crc kubenswrapper[4706]: I1125 11:58:41.773257 4706 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Nov 25 11:58:44 crc kubenswrapper[4706]: I1125 11:58:44.793950 4706 generic.go:334] "Generic (PLEG): container finished" podID="e8b5e2e3-bd67-476c-a80d-555c402d6b10" containerID="7fcc2fade0cfd4ac61dc8eb95debe757d544a2b64a5ccc888c4bec81573ba0bc" exitCode=137 Nov 25 11:58:44 crc kubenswrapper[4706]: I1125 11:58:44.794171 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"e8b5e2e3-bd67-476c-a80d-555c402d6b10","Type":"ContainerDied","Data":"7fcc2fade0cfd4ac61dc8eb95debe757d544a2b64a5ccc888c4bec81573ba0bc"} Nov 25 11:58:44 crc kubenswrapper[4706]: I1125 11:58:44.794247 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"e8b5e2e3-bd67-476c-a80d-555c402d6b10","Type":"ContainerDied","Data":"dd5a3bbd64fe6166def0fc32e0155c1dd26ca79adaaa8349ca1a30ffbf9fa094"} Nov 25 11:58:44 crc kubenswrapper[4706]: I1125 11:58:44.794266 4706 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="dd5a3bbd64fe6166def0fc32e0155c1dd26ca79adaaa8349ca1a30ffbf9fa094" Nov 25 11:58:44 crc kubenswrapper[4706]: I1125 11:58:44.794754 4706 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Nov 25 11:58:44 crc kubenswrapper[4706]: I1125 11:58:44.940498 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e8b5e2e3-bd67-476c-a80d-555c402d6b10-config-data\") pod \"e8b5e2e3-bd67-476c-a80d-555c402d6b10\" (UID: \"e8b5e2e3-bd67-476c-a80d-555c402d6b10\") " Nov 25 11:58:44 crc kubenswrapper[4706]: I1125 11:58:44.940685 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e8b5e2e3-bd67-476c-a80d-555c402d6b10-combined-ca-bundle\") pod \"e8b5e2e3-bd67-476c-a80d-555c402d6b10\" (UID: \"e8b5e2e3-bd67-476c-a80d-555c402d6b10\") " Nov 25 11:58:44 crc kubenswrapper[4706]: I1125 11:58:44.940715 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fgv9d\" (UniqueName: \"kubernetes.io/projected/e8b5e2e3-bd67-476c-a80d-555c402d6b10-kube-api-access-fgv9d\") pod \"e8b5e2e3-bd67-476c-a80d-555c402d6b10\" (UID: \"e8b5e2e3-bd67-476c-a80d-555c402d6b10\") " Nov 25 11:58:44 crc kubenswrapper[4706]: I1125 11:58:44.949533 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e8b5e2e3-bd67-476c-a80d-555c402d6b10-kube-api-access-fgv9d" (OuterVolumeSpecName: "kube-api-access-fgv9d") pod "e8b5e2e3-bd67-476c-a80d-555c402d6b10" (UID: "e8b5e2e3-bd67-476c-a80d-555c402d6b10"). InnerVolumeSpecName "kube-api-access-fgv9d". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 11:58:44 crc kubenswrapper[4706]: I1125 11:58:44.975188 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e8b5e2e3-bd67-476c-a80d-555c402d6b10-config-data" (OuterVolumeSpecName: "config-data") pod "e8b5e2e3-bd67-476c-a80d-555c402d6b10" (UID: "e8b5e2e3-bd67-476c-a80d-555c402d6b10"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 11:58:44 crc kubenswrapper[4706]: I1125 11:58:44.981421 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e8b5e2e3-bd67-476c-a80d-555c402d6b10-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e8b5e2e3-bd67-476c-a80d-555c402d6b10" (UID: "e8b5e2e3-bd67-476c-a80d-555c402d6b10"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 11:58:45 crc kubenswrapper[4706]: I1125 11:58:45.042928 4706 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e8b5e2e3-bd67-476c-a80d-555c402d6b10-config-data\") on node \"crc\" DevicePath \"\"" Nov 25 11:58:45 crc kubenswrapper[4706]: I1125 11:58:45.042977 4706 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e8b5e2e3-bd67-476c-a80d-555c402d6b10-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 11:58:45 crc kubenswrapper[4706]: I1125 11:58:45.042994 4706 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fgv9d\" (UniqueName: \"kubernetes.io/projected/e8b5e2e3-bd67-476c-a80d-555c402d6b10-kube-api-access-fgv9d\") on node \"crc\" DevicePath \"\"" Nov 25 11:58:45 crc kubenswrapper[4706]: I1125 11:58:45.805384 4706 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Nov 25 11:58:45 crc kubenswrapper[4706]: I1125 11:58:45.843557 4706 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Nov 25 11:58:45 crc kubenswrapper[4706]: I1125 11:58:45.850038 4706 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Nov 25 11:58:45 crc kubenswrapper[4706]: I1125 11:58:45.871818 4706 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Nov 25 11:58:45 crc kubenswrapper[4706]: E1125 11:58:45.873606 4706 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e8b5e2e3-bd67-476c-a80d-555c402d6b10" containerName="nova-cell1-novncproxy-novncproxy" Nov 25 11:58:45 crc kubenswrapper[4706]: I1125 11:58:45.874006 4706 state_mem.go:107] "Deleted CPUSet assignment" podUID="e8b5e2e3-bd67-476c-a80d-555c402d6b10" containerName="nova-cell1-novncproxy-novncproxy" Nov 25 11:58:45 crc kubenswrapper[4706]: I1125 11:58:45.874737 4706 memory_manager.go:354] "RemoveStaleState removing state" podUID="e8b5e2e3-bd67-476c-a80d-555c402d6b10" containerName="nova-cell1-novncproxy-novncproxy" Nov 25 11:58:45 crc kubenswrapper[4706]: I1125 11:58:45.875689 4706 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Nov 25 11:58:45 crc kubenswrapper[4706]: I1125 11:58:45.878364 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-public-svc" Nov 25 11:58:45 crc kubenswrapper[4706]: I1125 11:58:45.878619 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Nov 25 11:58:45 crc kubenswrapper[4706]: I1125 11:58:45.879762 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-vencrypt" Nov 25 11:58:45 crc kubenswrapper[4706]: I1125 11:58:45.886980 4706 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Nov 25 11:58:45 crc kubenswrapper[4706]: I1125 11:58:45.942440 4706 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e8b5e2e3-bd67-476c-a80d-555c402d6b10" path="/var/lib/kubelet/pods/e8b5e2e3-bd67-476c-a80d-555c402d6b10/volumes" Nov 25 11:58:46 crc kubenswrapper[4706]: I1125 11:58:46.059244 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/562e456e-a719-47cb-b220-06ccb6fc06cc-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"562e456e-a719-47cb-b220-06ccb6fc06cc\") " pod="openstack/nova-cell1-novncproxy-0" Nov 25 11:58:46 crc kubenswrapper[4706]: I1125 11:58:46.059400 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m4x6c\" (UniqueName: \"kubernetes.io/projected/562e456e-a719-47cb-b220-06ccb6fc06cc-kube-api-access-m4x6c\") pod \"nova-cell1-novncproxy-0\" (UID: \"562e456e-a719-47cb-b220-06ccb6fc06cc\") " pod="openstack/nova-cell1-novncproxy-0" Nov 25 11:58:46 crc kubenswrapper[4706]: I1125 11:58:46.059545 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/562e456e-a719-47cb-b220-06ccb6fc06cc-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"562e456e-a719-47cb-b220-06ccb6fc06cc\") " pod="openstack/nova-cell1-novncproxy-0" Nov 25 11:58:46 crc kubenswrapper[4706]: I1125 11:58:46.059638 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/562e456e-a719-47cb-b220-06ccb6fc06cc-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"562e456e-a719-47cb-b220-06ccb6fc06cc\") " pod="openstack/nova-cell1-novncproxy-0" Nov 25 11:58:46 crc kubenswrapper[4706]: I1125 11:58:46.059671 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/562e456e-a719-47cb-b220-06ccb6fc06cc-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"562e456e-a719-47cb-b220-06ccb6fc06cc\") " pod="openstack/nova-cell1-novncproxy-0" Nov 25 11:58:46 crc kubenswrapper[4706]: I1125 11:58:46.160955 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/562e456e-a719-47cb-b220-06ccb6fc06cc-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"562e456e-a719-47cb-b220-06ccb6fc06cc\") " pod="openstack/nova-cell1-novncproxy-0" Nov 25 11:58:46 crc kubenswrapper[4706]: I1125 11:58:46.161014 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/562e456e-a719-47cb-b220-06ccb6fc06cc-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"562e456e-a719-47cb-b220-06ccb6fc06cc\") " pod="openstack/nova-cell1-novncproxy-0" Nov 25 11:58:46 crc kubenswrapper[4706]: I1125 11:58:46.161100 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"config-data\" (UniqueName: \"kubernetes.io/secret/562e456e-a719-47cb-b220-06ccb6fc06cc-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"562e456e-a719-47cb-b220-06ccb6fc06cc\") " pod="openstack/nova-cell1-novncproxy-0" Nov 25 11:58:46 crc kubenswrapper[4706]: I1125 11:58:46.161258 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m4x6c\" (UniqueName: \"kubernetes.io/projected/562e456e-a719-47cb-b220-06ccb6fc06cc-kube-api-access-m4x6c\") pod \"nova-cell1-novncproxy-0\" (UID: \"562e456e-a719-47cb-b220-06ccb6fc06cc\") " pod="openstack/nova-cell1-novncproxy-0" Nov 25 11:58:46 crc kubenswrapper[4706]: I1125 11:58:46.161406 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/562e456e-a719-47cb-b220-06ccb6fc06cc-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"562e456e-a719-47cb-b220-06ccb6fc06cc\") " pod="openstack/nova-cell1-novncproxy-0" Nov 25 11:58:46 crc kubenswrapper[4706]: I1125 11:58:46.165974 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/562e456e-a719-47cb-b220-06ccb6fc06cc-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"562e456e-a719-47cb-b220-06ccb6fc06cc\") " pod="openstack/nova-cell1-novncproxy-0" Nov 25 11:58:46 crc kubenswrapper[4706]: I1125 11:58:46.166765 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/562e456e-a719-47cb-b220-06ccb6fc06cc-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"562e456e-a719-47cb-b220-06ccb6fc06cc\") " pod="openstack/nova-cell1-novncproxy-0" Nov 25 11:58:46 crc kubenswrapper[4706]: I1125 11:58:46.167523 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"vencrypt-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/562e456e-a719-47cb-b220-06ccb6fc06cc-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"562e456e-a719-47cb-b220-06ccb6fc06cc\") " pod="openstack/nova-cell1-novncproxy-0" Nov 25 11:58:46 crc kubenswrapper[4706]: I1125 11:58:46.169109 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/562e456e-a719-47cb-b220-06ccb6fc06cc-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"562e456e-a719-47cb-b220-06ccb6fc06cc\") " pod="openstack/nova-cell1-novncproxy-0" Nov 25 11:58:46 crc kubenswrapper[4706]: I1125 11:58:46.178636 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m4x6c\" (UniqueName: \"kubernetes.io/projected/562e456e-a719-47cb-b220-06ccb6fc06cc-kube-api-access-m4x6c\") pod \"nova-cell1-novncproxy-0\" (UID: \"562e456e-a719-47cb-b220-06ccb6fc06cc\") " pod="openstack/nova-cell1-novncproxy-0" Nov 25 11:58:46 crc kubenswrapper[4706]: I1125 11:58:46.197791 4706 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Nov 25 11:58:46 crc kubenswrapper[4706]: I1125 11:58:46.653128 4706 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Nov 25 11:58:46 crc kubenswrapper[4706]: W1125 11:58:46.661770 4706 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod562e456e_a719_47cb_b220_06ccb6fc06cc.slice/crio-6666cf47a9eaf9eb2035ad863e3e3b953abb24adc1a9277ebd2f986796d663d5 WatchSource:0}: Error finding container 6666cf47a9eaf9eb2035ad863e3e3b953abb24adc1a9277ebd2f986796d663d5: Status 404 returned error can't find the container with id 6666cf47a9eaf9eb2035ad863e3e3b953abb24adc1a9277ebd2f986796d663d5 Nov 25 11:58:46 crc kubenswrapper[4706]: I1125 11:58:46.821030 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"562e456e-a719-47cb-b220-06ccb6fc06cc","Type":"ContainerStarted","Data":"6666cf47a9eaf9eb2035ad863e3e3b953abb24adc1a9277ebd2f986796d663d5"} Nov 25 11:58:47 crc kubenswrapper[4706]: I1125 11:58:47.065219 4706 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Nov 25 11:58:47 crc kubenswrapper[4706]: I1125 11:58:47.065387 4706 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Nov 25 11:58:47 crc kubenswrapper[4706]: I1125 11:58:47.065799 4706 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Nov 25 11:58:47 crc kubenswrapper[4706]: I1125 11:58:47.065817 4706 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Nov 25 11:58:47 crc kubenswrapper[4706]: I1125 11:58:47.068660 4706 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Nov 25 11:58:47 crc kubenswrapper[4706]: I1125 11:58:47.069453 4706 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="ready" pod="openstack/nova-api-0" Nov 25 11:58:47 crc kubenswrapper[4706]: I1125 11:58:47.278127 4706 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-89c5cd4d5-d789x"] Nov 25 11:58:47 crc kubenswrapper[4706]: I1125 11:58:47.282202 4706 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-89c5cd4d5-d789x" Nov 25 11:58:47 crc kubenswrapper[4706]: I1125 11:58:47.300866 4706 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-89c5cd4d5-d789x"] Nov 25 11:58:47 crc kubenswrapper[4706]: I1125 11:58:47.385741 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2fa42f2c-560b-4494-9cce-6389eae6be11-ovsdbserver-nb\") pod \"dnsmasq-dns-89c5cd4d5-d789x\" (UID: \"2fa42f2c-560b-4494-9cce-6389eae6be11\") " pod="openstack/dnsmasq-dns-89c5cd4d5-d789x" Nov 25 11:58:47 crc kubenswrapper[4706]: I1125 11:58:47.386851 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2fa42f2c-560b-4494-9cce-6389eae6be11-config\") pod \"dnsmasq-dns-89c5cd4d5-d789x\" (UID: \"2fa42f2c-560b-4494-9cce-6389eae6be11\") " pod="openstack/dnsmasq-dns-89c5cd4d5-d789x" Nov 25 11:58:47 crc kubenswrapper[4706]: I1125 11:58:47.386900 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/2fa42f2c-560b-4494-9cce-6389eae6be11-dns-swift-storage-0\") pod \"dnsmasq-dns-89c5cd4d5-d789x\" (UID: \"2fa42f2c-560b-4494-9cce-6389eae6be11\") " pod="openstack/dnsmasq-dns-89c5cd4d5-d789x" Nov 25 11:58:47 crc kubenswrapper[4706]: I1125 11:58:47.387037 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: 
\"kubernetes.io/configmap/2fa42f2c-560b-4494-9cce-6389eae6be11-ovsdbserver-sb\") pod \"dnsmasq-dns-89c5cd4d5-d789x\" (UID: \"2fa42f2c-560b-4494-9cce-6389eae6be11\") " pod="openstack/dnsmasq-dns-89c5cd4d5-d789x" Nov 25 11:58:47 crc kubenswrapper[4706]: I1125 11:58:47.387085 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2fa42f2c-560b-4494-9cce-6389eae6be11-dns-svc\") pod \"dnsmasq-dns-89c5cd4d5-d789x\" (UID: \"2fa42f2c-560b-4494-9cce-6389eae6be11\") " pod="openstack/dnsmasq-dns-89c5cd4d5-d789x" Nov 25 11:58:47 crc kubenswrapper[4706]: I1125 11:58:47.387384 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zt9v7\" (UniqueName: \"kubernetes.io/projected/2fa42f2c-560b-4494-9cce-6389eae6be11-kube-api-access-zt9v7\") pod \"dnsmasq-dns-89c5cd4d5-d789x\" (UID: \"2fa42f2c-560b-4494-9cce-6389eae6be11\") " pod="openstack/dnsmasq-dns-89c5cd4d5-d789x" Nov 25 11:58:47 crc kubenswrapper[4706]: I1125 11:58:47.490011 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2fa42f2c-560b-4494-9cce-6389eae6be11-ovsdbserver-nb\") pod \"dnsmasq-dns-89c5cd4d5-d789x\" (UID: \"2fa42f2c-560b-4494-9cce-6389eae6be11\") " pod="openstack/dnsmasq-dns-89c5cd4d5-d789x" Nov 25 11:58:47 crc kubenswrapper[4706]: I1125 11:58:47.490076 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2fa42f2c-560b-4494-9cce-6389eae6be11-config\") pod \"dnsmasq-dns-89c5cd4d5-d789x\" (UID: \"2fa42f2c-560b-4494-9cce-6389eae6be11\") " pod="openstack/dnsmasq-dns-89c5cd4d5-d789x" Nov 25 11:58:47 crc kubenswrapper[4706]: I1125 11:58:47.490104 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: 
\"kubernetes.io/configmap/2fa42f2c-560b-4494-9cce-6389eae6be11-dns-swift-storage-0\") pod \"dnsmasq-dns-89c5cd4d5-d789x\" (UID: \"2fa42f2c-560b-4494-9cce-6389eae6be11\") " pod="openstack/dnsmasq-dns-89c5cd4d5-d789x" Nov 25 11:58:47 crc kubenswrapper[4706]: I1125 11:58:47.490133 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2fa42f2c-560b-4494-9cce-6389eae6be11-ovsdbserver-sb\") pod \"dnsmasq-dns-89c5cd4d5-d789x\" (UID: \"2fa42f2c-560b-4494-9cce-6389eae6be11\") " pod="openstack/dnsmasq-dns-89c5cd4d5-d789x" Nov 25 11:58:47 crc kubenswrapper[4706]: I1125 11:58:47.490161 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2fa42f2c-560b-4494-9cce-6389eae6be11-dns-svc\") pod \"dnsmasq-dns-89c5cd4d5-d789x\" (UID: \"2fa42f2c-560b-4494-9cce-6389eae6be11\") " pod="openstack/dnsmasq-dns-89c5cd4d5-d789x" Nov 25 11:58:47 crc kubenswrapper[4706]: I1125 11:58:47.490221 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zt9v7\" (UniqueName: \"kubernetes.io/projected/2fa42f2c-560b-4494-9cce-6389eae6be11-kube-api-access-zt9v7\") pod \"dnsmasq-dns-89c5cd4d5-d789x\" (UID: \"2fa42f2c-560b-4494-9cce-6389eae6be11\") " pod="openstack/dnsmasq-dns-89c5cd4d5-d789x" Nov 25 11:58:47 crc kubenswrapper[4706]: I1125 11:58:47.491709 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2fa42f2c-560b-4494-9cce-6389eae6be11-ovsdbserver-nb\") pod \"dnsmasq-dns-89c5cd4d5-d789x\" (UID: \"2fa42f2c-560b-4494-9cce-6389eae6be11\") " pod="openstack/dnsmasq-dns-89c5cd4d5-d789x" Nov 25 11:58:47 crc kubenswrapper[4706]: I1125 11:58:47.492529 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2fa42f2c-560b-4494-9cce-6389eae6be11-config\") pod 
\"dnsmasq-dns-89c5cd4d5-d789x\" (UID: \"2fa42f2c-560b-4494-9cce-6389eae6be11\") " pod="openstack/dnsmasq-dns-89c5cd4d5-d789x" Nov 25 11:58:47 crc kubenswrapper[4706]: I1125 11:58:47.493415 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/2fa42f2c-560b-4494-9cce-6389eae6be11-dns-swift-storage-0\") pod \"dnsmasq-dns-89c5cd4d5-d789x\" (UID: \"2fa42f2c-560b-4494-9cce-6389eae6be11\") " pod="openstack/dnsmasq-dns-89c5cd4d5-d789x" Nov 25 11:58:47 crc kubenswrapper[4706]: I1125 11:58:47.493747 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2fa42f2c-560b-4494-9cce-6389eae6be11-ovsdbserver-sb\") pod \"dnsmasq-dns-89c5cd4d5-d789x\" (UID: \"2fa42f2c-560b-4494-9cce-6389eae6be11\") " pod="openstack/dnsmasq-dns-89c5cd4d5-d789x" Nov 25 11:58:47 crc kubenswrapper[4706]: I1125 11:58:47.495077 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2fa42f2c-560b-4494-9cce-6389eae6be11-dns-svc\") pod \"dnsmasq-dns-89c5cd4d5-d789x\" (UID: \"2fa42f2c-560b-4494-9cce-6389eae6be11\") " pod="openstack/dnsmasq-dns-89c5cd4d5-d789x" Nov 25 11:58:47 crc kubenswrapper[4706]: I1125 11:58:47.520106 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zt9v7\" (UniqueName: \"kubernetes.io/projected/2fa42f2c-560b-4494-9cce-6389eae6be11-kube-api-access-zt9v7\") pod \"dnsmasq-dns-89c5cd4d5-d789x\" (UID: \"2fa42f2c-560b-4494-9cce-6389eae6be11\") " pod="openstack/dnsmasq-dns-89c5cd4d5-d789x" Nov 25 11:58:47 crc kubenswrapper[4706]: I1125 11:58:47.623174 4706 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-89c5cd4d5-d789x" Nov 25 11:58:47 crc kubenswrapper[4706]: I1125 11:58:47.843845 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"562e456e-a719-47cb-b220-06ccb6fc06cc","Type":"ContainerStarted","Data":"b19d557b85343ba48b67dd2b020df89a365a514222d89a4a3596bcb3427dde78"} Nov 25 11:58:47 crc kubenswrapper[4706]: I1125 11:58:47.868782 4706 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=2.868764 podStartE2EDuration="2.868764s" podCreationTimestamp="2025-11-25 11:58:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 11:58:47.860111942 +0000 UTC m=+1336.774669333" watchObservedRunningTime="2025-11-25 11:58:47.868764 +0000 UTC m=+1336.783321381" Nov 25 11:58:48 crc kubenswrapper[4706]: I1125 11:58:48.142613 4706 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-89c5cd4d5-d789x"] Nov 25 11:58:48 crc kubenswrapper[4706]: I1125 11:58:48.853730 4706 generic.go:334] "Generic (PLEG): container finished" podID="2fa42f2c-560b-4494-9cce-6389eae6be11" containerID="0fbe29625555e82fec4c94d886c69dd38821e23b2e5893f52416c35186c28850" exitCode=0 Nov 25 11:58:48 crc kubenswrapper[4706]: I1125 11:58:48.853832 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-89c5cd4d5-d789x" event={"ID":"2fa42f2c-560b-4494-9cce-6389eae6be11","Type":"ContainerDied","Data":"0fbe29625555e82fec4c94d886c69dd38821e23b2e5893f52416c35186c28850"} Nov 25 11:58:48 crc kubenswrapper[4706]: I1125 11:58:48.853884 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-89c5cd4d5-d789x" event={"ID":"2fa42f2c-560b-4494-9cce-6389eae6be11","Type":"ContainerStarted","Data":"a16cdf0325352b68b60183b9b0f477adb2de38423cd622432d5e03a789b197c9"} Nov 25 11:58:49 crc 
kubenswrapper[4706]: I1125 11:58:49.239234 4706 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 25 11:58:49 crc kubenswrapper[4706]: I1125 11:58:49.239887 4706 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="c29287a1-7481-405e-8641-8300768eb2cb" containerName="ceilometer-central-agent" containerID="cri-o://88b1fa76bc4a05b1d800094737e0d8450adb0cdde2f2103ccfe40dd18350602f" gracePeriod=30 Nov 25 11:58:49 crc kubenswrapper[4706]: I1125 11:58:49.240589 4706 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="c29287a1-7481-405e-8641-8300768eb2cb" containerName="proxy-httpd" containerID="cri-o://a0ed0180e0bbb373b25e70abfbd1001a1ef3b5e5ef924cb4b8e0cd29801a4c53" gracePeriod=30 Nov 25 11:58:49 crc kubenswrapper[4706]: I1125 11:58:49.240639 4706 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="c29287a1-7481-405e-8641-8300768eb2cb" containerName="sg-core" containerID="cri-o://b6e967141e0e69251d543ef222085c21a8d97a814100faa261dd97704a4004e4" gracePeriod=30 Nov 25 11:58:49 crc kubenswrapper[4706]: I1125 11:58:49.240804 4706 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="c29287a1-7481-405e-8641-8300768eb2cb" containerName="ceilometer-notification-agent" containerID="cri-o://d02ccd3ede20522a7a1b48fd7ce7fa9ce2ad19d4f049fac66027a2ac47f8d096" gracePeriod=30 Nov 25 11:58:49 crc kubenswrapper[4706]: I1125 11:58:49.251513 4706 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="c29287a1-7481-405e-8641-8300768eb2cb" containerName="proxy-httpd" probeResult="failure" output="Get \"https://10.217.0.194:3000/\": read tcp 10.217.0.2:54492->10.217.0.194:3000: read: connection reset by peer" Nov 25 11:58:49 crc kubenswrapper[4706]: I1125 11:58:49.864329 4706 generic.go:334] "Generic (PLEG): 
container finished" podID="c29287a1-7481-405e-8641-8300768eb2cb" containerID="a0ed0180e0bbb373b25e70abfbd1001a1ef3b5e5ef924cb4b8e0cd29801a4c53" exitCode=0 Nov 25 11:58:49 crc kubenswrapper[4706]: I1125 11:58:49.864365 4706 generic.go:334] "Generic (PLEG): container finished" podID="c29287a1-7481-405e-8641-8300768eb2cb" containerID="b6e967141e0e69251d543ef222085c21a8d97a814100faa261dd97704a4004e4" exitCode=2 Nov 25 11:58:49 crc kubenswrapper[4706]: I1125 11:58:49.864373 4706 generic.go:334] "Generic (PLEG): container finished" podID="c29287a1-7481-405e-8641-8300768eb2cb" containerID="88b1fa76bc4a05b1d800094737e0d8450adb0cdde2f2103ccfe40dd18350602f" exitCode=0 Nov 25 11:58:49 crc kubenswrapper[4706]: I1125 11:58:49.864404 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c29287a1-7481-405e-8641-8300768eb2cb","Type":"ContainerDied","Data":"a0ed0180e0bbb373b25e70abfbd1001a1ef3b5e5ef924cb4b8e0cd29801a4c53"} Nov 25 11:58:49 crc kubenswrapper[4706]: I1125 11:58:49.864451 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c29287a1-7481-405e-8641-8300768eb2cb","Type":"ContainerDied","Data":"b6e967141e0e69251d543ef222085c21a8d97a814100faa261dd97704a4004e4"} Nov 25 11:58:49 crc kubenswrapper[4706]: I1125 11:58:49.864463 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c29287a1-7481-405e-8641-8300768eb2cb","Type":"ContainerDied","Data":"88b1fa76bc4a05b1d800094737e0d8450adb0cdde2f2103ccfe40dd18350602f"} Nov 25 11:58:49 crc kubenswrapper[4706]: I1125 11:58:49.866829 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-89c5cd4d5-d789x" event={"ID":"2fa42f2c-560b-4494-9cce-6389eae6be11","Type":"ContainerStarted","Data":"7518f11f8c9365e67b6a8e516cb7efa9ec0eabeb14fc12451786d58497e93db6"} Nov 25 11:58:49 crc kubenswrapper[4706]: I1125 11:58:49.867214 4706 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="" pod="openstack/dnsmasq-dns-89c5cd4d5-d789x" Nov 25 11:58:49 crc kubenswrapper[4706]: I1125 11:58:49.890597 4706 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-89c5cd4d5-d789x" podStartSLOduration=2.89057925 podStartE2EDuration="2.89057925s" podCreationTimestamp="2025-11-25 11:58:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 11:58:49.884579609 +0000 UTC m=+1338.799136990" watchObservedRunningTime="2025-11-25 11:58:49.89057925 +0000 UTC m=+1338.805136631" Nov 25 11:58:50 crc kubenswrapper[4706]: I1125 11:58:50.329807 4706 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Nov 25 11:58:50 crc kubenswrapper[4706]: I1125 11:58:50.330034 4706 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="7684ae52-10e0-4b84-a8aa-9f5e744b681c" containerName="nova-api-log" containerID="cri-o://99efbe8098bf623b67e50576be8330f62829843e5249a1ba174bb70397214b69" gracePeriod=30 Nov 25 11:58:50 crc kubenswrapper[4706]: I1125 11:58:50.330133 4706 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="7684ae52-10e0-4b84-a8aa-9f5e744b681c" containerName="nova-api-api" containerID="cri-o://d768e616411dcdb6bd2fc471582c1976a7fac18d1247eba3676c8623b8d1ec65" gracePeriod=30 Nov 25 11:58:50 crc kubenswrapper[4706]: I1125 11:58:50.878267 4706 generic.go:334] "Generic (PLEG): container finished" podID="c29287a1-7481-405e-8641-8300768eb2cb" containerID="d02ccd3ede20522a7a1b48fd7ce7fa9ce2ad19d4f049fac66027a2ac47f8d096" exitCode=0 Nov 25 11:58:50 crc kubenswrapper[4706]: I1125 11:58:50.878704 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"c29287a1-7481-405e-8641-8300768eb2cb","Type":"ContainerDied","Data":"d02ccd3ede20522a7a1b48fd7ce7fa9ce2ad19d4f049fac66027a2ac47f8d096"} Nov 25 11:58:50 crc kubenswrapper[4706]: I1125 11:58:50.880404 4706 generic.go:334] "Generic (PLEG): container finished" podID="7684ae52-10e0-4b84-a8aa-9f5e744b681c" containerID="99efbe8098bf623b67e50576be8330f62829843e5249a1ba174bb70397214b69" exitCode=143 Nov 25 11:58:50 crc kubenswrapper[4706]: I1125 11:58:50.881679 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"7684ae52-10e0-4b84-a8aa-9f5e744b681c","Type":"ContainerDied","Data":"99efbe8098bf623b67e50576be8330f62829843e5249a1ba174bb70397214b69"} Nov 25 11:58:50 crc kubenswrapper[4706]: I1125 11:58:50.992518 4706 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 25 11:58:51 crc kubenswrapper[4706]: I1125 11:58:51.168514 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c29287a1-7481-405e-8641-8300768eb2cb-scripts\") pod \"c29287a1-7481-405e-8641-8300768eb2cb\" (UID: \"c29287a1-7481-405e-8641-8300768eb2cb\") " Nov 25 11:58:51 crc kubenswrapper[4706]: I1125 11:58:51.170018 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m4rzk\" (UniqueName: \"kubernetes.io/projected/c29287a1-7481-405e-8641-8300768eb2cb-kube-api-access-m4rzk\") pod \"c29287a1-7481-405e-8641-8300768eb2cb\" (UID: \"c29287a1-7481-405e-8641-8300768eb2cb\") " Nov 25 11:58:51 crc kubenswrapper[4706]: I1125 11:58:51.170059 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c29287a1-7481-405e-8641-8300768eb2cb-config-data\") pod \"c29287a1-7481-405e-8641-8300768eb2cb\" (UID: \"c29287a1-7481-405e-8641-8300768eb2cb\") " Nov 25 11:58:51 crc kubenswrapper[4706]: I1125 11:58:51.170132 4706 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c29287a1-7481-405e-8641-8300768eb2cb-run-httpd\") pod \"c29287a1-7481-405e-8641-8300768eb2cb\" (UID: \"c29287a1-7481-405e-8641-8300768eb2cb\") " Nov 25 11:58:51 crc kubenswrapper[4706]: I1125 11:58:51.170155 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c29287a1-7481-405e-8641-8300768eb2cb-log-httpd\") pod \"c29287a1-7481-405e-8641-8300768eb2cb\" (UID: \"c29287a1-7481-405e-8641-8300768eb2cb\") " Nov 25 11:58:51 crc kubenswrapper[4706]: I1125 11:58:51.170249 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/c29287a1-7481-405e-8641-8300768eb2cb-ceilometer-tls-certs\") pod \"c29287a1-7481-405e-8641-8300768eb2cb\" (UID: \"c29287a1-7481-405e-8641-8300768eb2cb\") " Nov 25 11:58:51 crc kubenswrapper[4706]: I1125 11:58:51.170420 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c29287a1-7481-405e-8641-8300768eb2cb-sg-core-conf-yaml\") pod \"c29287a1-7481-405e-8641-8300768eb2cb\" (UID: \"c29287a1-7481-405e-8641-8300768eb2cb\") " Nov 25 11:58:51 crc kubenswrapper[4706]: I1125 11:58:51.170448 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c29287a1-7481-405e-8641-8300768eb2cb-combined-ca-bundle\") pod \"c29287a1-7481-405e-8641-8300768eb2cb\" (UID: \"c29287a1-7481-405e-8641-8300768eb2cb\") " Nov 25 11:58:51 crc kubenswrapper[4706]: I1125 11:58:51.170585 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c29287a1-7481-405e-8641-8300768eb2cb-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "c29287a1-7481-405e-8641-8300768eb2cb" (UID: 
"c29287a1-7481-405e-8641-8300768eb2cb"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 11:58:51 crc kubenswrapper[4706]: I1125 11:58:51.170717 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c29287a1-7481-405e-8641-8300768eb2cb-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "c29287a1-7481-405e-8641-8300768eb2cb" (UID: "c29287a1-7481-405e-8641-8300768eb2cb"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 11:58:51 crc kubenswrapper[4706]: I1125 11:58:51.171224 4706 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c29287a1-7481-405e-8641-8300768eb2cb-run-httpd\") on node \"crc\" DevicePath \"\"" Nov 25 11:58:51 crc kubenswrapper[4706]: I1125 11:58:51.171252 4706 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c29287a1-7481-405e-8641-8300768eb2cb-log-httpd\") on node \"crc\" DevicePath \"\"" Nov 25 11:58:51 crc kubenswrapper[4706]: I1125 11:58:51.175878 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c29287a1-7481-405e-8641-8300768eb2cb-kube-api-access-m4rzk" (OuterVolumeSpecName: "kube-api-access-m4rzk") pod "c29287a1-7481-405e-8641-8300768eb2cb" (UID: "c29287a1-7481-405e-8641-8300768eb2cb"). InnerVolumeSpecName "kube-api-access-m4rzk". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 11:58:51 crc kubenswrapper[4706]: I1125 11:58:51.176007 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c29287a1-7481-405e-8641-8300768eb2cb-scripts" (OuterVolumeSpecName: "scripts") pod "c29287a1-7481-405e-8641-8300768eb2cb" (UID: "c29287a1-7481-405e-8641-8300768eb2cb"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 11:58:51 crc kubenswrapper[4706]: I1125 11:58:51.198575 4706 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Nov 25 11:58:51 crc kubenswrapper[4706]: I1125 11:58:51.221547 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c29287a1-7481-405e-8641-8300768eb2cb-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "c29287a1-7481-405e-8641-8300768eb2cb" (UID: "c29287a1-7481-405e-8641-8300768eb2cb"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 11:58:51 crc kubenswrapper[4706]: I1125 11:58:51.240460 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c29287a1-7481-405e-8641-8300768eb2cb-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "c29287a1-7481-405e-8641-8300768eb2cb" (UID: "c29287a1-7481-405e-8641-8300768eb2cb"). InnerVolumeSpecName "ceilometer-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 11:58:51 crc kubenswrapper[4706]: I1125 11:58:51.260432 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c29287a1-7481-405e-8641-8300768eb2cb-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c29287a1-7481-405e-8641-8300768eb2cb" (UID: "c29287a1-7481-405e-8641-8300768eb2cb"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 11:58:51 crc kubenswrapper[4706]: I1125 11:58:51.273174 4706 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c29287a1-7481-405e-8641-8300768eb2cb-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Nov 25 11:58:51 crc kubenswrapper[4706]: I1125 11:58:51.273229 4706 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c29287a1-7481-405e-8641-8300768eb2cb-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 11:58:51 crc kubenswrapper[4706]: I1125 11:58:51.273245 4706 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c29287a1-7481-405e-8641-8300768eb2cb-scripts\") on node \"crc\" DevicePath \"\"" Nov 25 11:58:51 crc kubenswrapper[4706]: I1125 11:58:51.273258 4706 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m4rzk\" (UniqueName: \"kubernetes.io/projected/c29287a1-7481-405e-8641-8300768eb2cb-kube-api-access-m4rzk\") on node \"crc\" DevicePath \"\"" Nov 25 11:58:51 crc kubenswrapper[4706]: I1125 11:58:51.273277 4706 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/c29287a1-7481-405e-8641-8300768eb2cb-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 25 11:58:51 crc kubenswrapper[4706]: I1125 11:58:51.283881 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c29287a1-7481-405e-8641-8300768eb2cb-config-data" (OuterVolumeSpecName: "config-data") pod "c29287a1-7481-405e-8641-8300768eb2cb" (UID: "c29287a1-7481-405e-8641-8300768eb2cb"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 11:58:51 crc kubenswrapper[4706]: I1125 11:58:51.375885 4706 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c29287a1-7481-405e-8641-8300768eb2cb-config-data\") on node \"crc\" DevicePath \"\"" Nov 25 11:58:51 crc kubenswrapper[4706]: I1125 11:58:51.890643 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c29287a1-7481-405e-8641-8300768eb2cb","Type":"ContainerDied","Data":"10b00a82d464de25b8381007a1d1f44a6ebf889cf25d9ff5fef250e5065be2c5"} Nov 25 11:58:51 crc kubenswrapper[4706]: I1125 11:58:51.891747 4706 scope.go:117] "RemoveContainer" containerID="a0ed0180e0bbb373b25e70abfbd1001a1ef3b5e5ef924cb4b8e0cd29801a4c53" Nov 25 11:58:51 crc kubenswrapper[4706]: I1125 11:58:51.890720 4706 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 25 11:58:51 crc kubenswrapper[4706]: I1125 11:58:51.913508 4706 scope.go:117] "RemoveContainer" containerID="b6e967141e0e69251d543ef222085c21a8d97a814100faa261dd97704a4004e4" Nov 25 11:58:51 crc kubenswrapper[4706]: I1125 11:58:51.953536 4706 scope.go:117] "RemoveContainer" containerID="d02ccd3ede20522a7a1b48fd7ce7fa9ce2ad19d4f049fac66027a2ac47f8d096" Nov 25 11:58:51 crc kubenswrapper[4706]: I1125 11:58:51.967165 4706 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 25 11:58:51 crc kubenswrapper[4706]: I1125 11:58:51.967214 4706 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Nov 25 11:58:51 crc kubenswrapper[4706]: I1125 11:58:51.975486 4706 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Nov 25 11:58:51 crc kubenswrapper[4706]: E1125 11:58:51.975878 4706 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c29287a1-7481-405e-8641-8300768eb2cb" containerName="proxy-httpd" Nov 25 11:58:51 crc 
kubenswrapper[4706]: I1125 11:58:51.975899 4706 state_mem.go:107] "Deleted CPUSet assignment" podUID="c29287a1-7481-405e-8641-8300768eb2cb" containerName="proxy-httpd" Nov 25 11:58:51 crc kubenswrapper[4706]: E1125 11:58:51.975915 4706 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c29287a1-7481-405e-8641-8300768eb2cb" containerName="sg-core" Nov 25 11:58:51 crc kubenswrapper[4706]: I1125 11:58:51.975920 4706 state_mem.go:107] "Deleted CPUSet assignment" podUID="c29287a1-7481-405e-8641-8300768eb2cb" containerName="sg-core" Nov 25 11:58:51 crc kubenswrapper[4706]: E1125 11:58:51.975934 4706 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c29287a1-7481-405e-8641-8300768eb2cb" containerName="ceilometer-central-agent" Nov 25 11:58:51 crc kubenswrapper[4706]: I1125 11:58:51.975941 4706 state_mem.go:107] "Deleted CPUSet assignment" podUID="c29287a1-7481-405e-8641-8300768eb2cb" containerName="ceilometer-central-agent" Nov 25 11:58:51 crc kubenswrapper[4706]: E1125 11:58:51.975961 4706 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c29287a1-7481-405e-8641-8300768eb2cb" containerName="ceilometer-notification-agent" Nov 25 11:58:51 crc kubenswrapper[4706]: I1125 11:58:51.975967 4706 state_mem.go:107] "Deleted CPUSet assignment" podUID="c29287a1-7481-405e-8641-8300768eb2cb" containerName="ceilometer-notification-agent" Nov 25 11:58:51 crc kubenswrapper[4706]: I1125 11:58:51.976140 4706 memory_manager.go:354] "RemoveStaleState removing state" podUID="c29287a1-7481-405e-8641-8300768eb2cb" containerName="sg-core" Nov 25 11:58:51 crc kubenswrapper[4706]: I1125 11:58:51.976149 4706 memory_manager.go:354] "RemoveStaleState removing state" podUID="c29287a1-7481-405e-8641-8300768eb2cb" containerName="ceilometer-notification-agent" Nov 25 11:58:51 crc kubenswrapper[4706]: I1125 11:58:51.976162 4706 memory_manager.go:354] "RemoveStaleState removing state" podUID="c29287a1-7481-405e-8641-8300768eb2cb" 
containerName="ceilometer-central-agent" Nov 25 11:58:51 crc kubenswrapper[4706]: I1125 11:58:51.976170 4706 memory_manager.go:354] "RemoveStaleState removing state" podUID="c29287a1-7481-405e-8641-8300768eb2cb" containerName="proxy-httpd" Nov 25 11:58:51 crc kubenswrapper[4706]: I1125 11:58:51.977888 4706 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 25 11:58:51 crc kubenswrapper[4706]: I1125 11:58:51.980622 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Nov 25 11:58:51 crc kubenswrapper[4706]: I1125 11:58:51.981798 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Nov 25 11:58:51 crc kubenswrapper[4706]: I1125 11:58:51.981832 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Nov 25 11:58:51 crc kubenswrapper[4706]: I1125 11:58:51.986506 4706 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 25 11:58:51 crc kubenswrapper[4706]: I1125 11:58:51.989727 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/21b25aa3-3ad7-4def-a817-6b7191924b4f-scripts\") pod \"ceilometer-0\" (UID: \"21b25aa3-3ad7-4def-a817-6b7191924b4f\") " pod="openstack/ceilometer-0" Nov 25 11:58:51 crc kubenswrapper[4706]: I1125 11:58:51.989816 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lhq7m\" (UniqueName: \"kubernetes.io/projected/21b25aa3-3ad7-4def-a817-6b7191924b4f-kube-api-access-lhq7m\") pod \"ceilometer-0\" (UID: \"21b25aa3-3ad7-4def-a817-6b7191924b4f\") " pod="openstack/ceilometer-0" Nov 25 11:58:51 crc kubenswrapper[4706]: I1125 11:58:51.989901 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/21b25aa3-3ad7-4def-a817-6b7191924b4f-config-data\") pod \"ceilometer-0\" (UID: \"21b25aa3-3ad7-4def-a817-6b7191924b4f\") " pod="openstack/ceilometer-0" Nov 25 11:58:51 crc kubenswrapper[4706]: I1125 11:58:51.989945 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/21b25aa3-3ad7-4def-a817-6b7191924b4f-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"21b25aa3-3ad7-4def-a817-6b7191924b4f\") " pod="openstack/ceilometer-0" Nov 25 11:58:51 crc kubenswrapper[4706]: I1125 11:58:51.990022 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/21b25aa3-3ad7-4def-a817-6b7191924b4f-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"21b25aa3-3ad7-4def-a817-6b7191924b4f\") " pod="openstack/ceilometer-0" Nov 25 11:58:51 crc kubenswrapper[4706]: I1125 11:58:51.990108 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/21b25aa3-3ad7-4def-a817-6b7191924b4f-run-httpd\") pod \"ceilometer-0\" (UID: \"21b25aa3-3ad7-4def-a817-6b7191924b4f\") " pod="openstack/ceilometer-0" Nov 25 11:58:51 crc kubenswrapper[4706]: I1125 11:58:51.990150 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/21b25aa3-3ad7-4def-a817-6b7191924b4f-log-httpd\") pod \"ceilometer-0\" (UID: \"21b25aa3-3ad7-4def-a817-6b7191924b4f\") " pod="openstack/ceilometer-0" Nov 25 11:58:51 crc kubenswrapper[4706]: I1125 11:58:51.990202 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/21b25aa3-3ad7-4def-a817-6b7191924b4f-combined-ca-bundle\") pod \"ceilometer-0\" (UID: 
\"21b25aa3-3ad7-4def-a817-6b7191924b4f\") " pod="openstack/ceilometer-0" Nov 25 11:58:52 crc kubenswrapper[4706]: I1125 11:58:52.005948 4706 scope.go:117] "RemoveContainer" containerID="88b1fa76bc4a05b1d800094737e0d8450adb0cdde2f2103ccfe40dd18350602f" Nov 25 11:58:52 crc kubenswrapper[4706]: I1125 11:58:52.092195 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/21b25aa3-3ad7-4def-a817-6b7191924b4f-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"21b25aa3-3ad7-4def-a817-6b7191924b4f\") " pod="openstack/ceilometer-0" Nov 25 11:58:52 crc kubenswrapper[4706]: I1125 11:58:52.092323 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/21b25aa3-3ad7-4def-a817-6b7191924b4f-run-httpd\") pod \"ceilometer-0\" (UID: \"21b25aa3-3ad7-4def-a817-6b7191924b4f\") " pod="openstack/ceilometer-0" Nov 25 11:58:52 crc kubenswrapper[4706]: I1125 11:58:52.092364 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/21b25aa3-3ad7-4def-a817-6b7191924b4f-log-httpd\") pod \"ceilometer-0\" (UID: \"21b25aa3-3ad7-4def-a817-6b7191924b4f\") " pod="openstack/ceilometer-0" Nov 25 11:58:52 crc kubenswrapper[4706]: I1125 11:58:52.092405 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/21b25aa3-3ad7-4def-a817-6b7191924b4f-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"21b25aa3-3ad7-4def-a817-6b7191924b4f\") " pod="openstack/ceilometer-0" Nov 25 11:58:52 crc kubenswrapper[4706]: I1125 11:58:52.092452 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/21b25aa3-3ad7-4def-a817-6b7191924b4f-scripts\") pod \"ceilometer-0\" (UID: \"21b25aa3-3ad7-4def-a817-6b7191924b4f\") " 
pod="openstack/ceilometer-0" Nov 25 11:58:52 crc kubenswrapper[4706]: I1125 11:58:52.092501 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lhq7m\" (UniqueName: \"kubernetes.io/projected/21b25aa3-3ad7-4def-a817-6b7191924b4f-kube-api-access-lhq7m\") pod \"ceilometer-0\" (UID: \"21b25aa3-3ad7-4def-a817-6b7191924b4f\") " pod="openstack/ceilometer-0" Nov 25 11:58:52 crc kubenswrapper[4706]: I1125 11:58:52.092559 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/21b25aa3-3ad7-4def-a817-6b7191924b4f-config-data\") pod \"ceilometer-0\" (UID: \"21b25aa3-3ad7-4def-a817-6b7191924b4f\") " pod="openstack/ceilometer-0" Nov 25 11:58:52 crc kubenswrapper[4706]: I1125 11:58:52.092593 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/21b25aa3-3ad7-4def-a817-6b7191924b4f-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"21b25aa3-3ad7-4def-a817-6b7191924b4f\") " pod="openstack/ceilometer-0" Nov 25 11:58:52 crc kubenswrapper[4706]: I1125 11:58:52.092936 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/21b25aa3-3ad7-4def-a817-6b7191924b4f-run-httpd\") pod \"ceilometer-0\" (UID: \"21b25aa3-3ad7-4def-a817-6b7191924b4f\") " pod="openstack/ceilometer-0" Nov 25 11:58:52 crc kubenswrapper[4706]: I1125 11:58:52.093439 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/21b25aa3-3ad7-4def-a817-6b7191924b4f-log-httpd\") pod \"ceilometer-0\" (UID: \"21b25aa3-3ad7-4def-a817-6b7191924b4f\") " pod="openstack/ceilometer-0" Nov 25 11:58:52 crc kubenswrapper[4706]: I1125 11:58:52.097995 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/21b25aa3-3ad7-4def-a817-6b7191924b4f-scripts\") pod \"ceilometer-0\" (UID: \"21b25aa3-3ad7-4def-a817-6b7191924b4f\") " pod="openstack/ceilometer-0" Nov 25 11:58:52 crc kubenswrapper[4706]: I1125 11:58:52.098607 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/21b25aa3-3ad7-4def-a817-6b7191924b4f-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"21b25aa3-3ad7-4def-a817-6b7191924b4f\") " pod="openstack/ceilometer-0" Nov 25 11:58:52 crc kubenswrapper[4706]: I1125 11:58:52.099021 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/21b25aa3-3ad7-4def-a817-6b7191924b4f-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"21b25aa3-3ad7-4def-a817-6b7191924b4f\") " pod="openstack/ceilometer-0" Nov 25 11:58:52 crc kubenswrapper[4706]: I1125 11:58:52.100793 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/21b25aa3-3ad7-4def-a817-6b7191924b4f-config-data\") pod \"ceilometer-0\" (UID: \"21b25aa3-3ad7-4def-a817-6b7191924b4f\") " pod="openstack/ceilometer-0" Nov 25 11:58:52 crc kubenswrapper[4706]: I1125 11:58:52.102914 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/21b25aa3-3ad7-4def-a817-6b7191924b4f-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"21b25aa3-3ad7-4def-a817-6b7191924b4f\") " pod="openstack/ceilometer-0" Nov 25 11:58:52 crc kubenswrapper[4706]: I1125 11:58:52.110707 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lhq7m\" (UniqueName: \"kubernetes.io/projected/21b25aa3-3ad7-4def-a817-6b7191924b4f-kube-api-access-lhq7m\") pod \"ceilometer-0\" (UID: \"21b25aa3-3ad7-4def-a817-6b7191924b4f\") " pod="openstack/ceilometer-0" Nov 25 11:58:52 crc kubenswrapper[4706]: I1125 
11:58:52.311023 4706 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 25 11:58:52 crc kubenswrapper[4706]: I1125 11:58:52.782514 4706 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 25 11:58:52 crc kubenswrapper[4706]: W1125 11:58:52.784957 4706 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod21b25aa3_3ad7_4def_a817_6b7191924b4f.slice/crio-8757d507f5ca571446ec9c8025e62cf0243552ae843c90931b3082f869480360 WatchSource:0}: Error finding container 8757d507f5ca571446ec9c8025e62cf0243552ae843c90931b3082f869480360: Status 404 returned error can't find the container with id 8757d507f5ca571446ec9c8025e62cf0243552ae843c90931b3082f869480360 Nov 25 11:58:52 crc kubenswrapper[4706]: I1125 11:58:52.787659 4706 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 25 11:58:52 crc kubenswrapper[4706]: I1125 11:58:52.903701 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"21b25aa3-3ad7-4def-a817-6b7191924b4f","Type":"ContainerStarted","Data":"8757d507f5ca571446ec9c8025e62cf0243552ae843c90931b3082f869480360"} Nov 25 11:58:52 crc kubenswrapper[4706]: I1125 11:58:52.948455 4706 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 25 11:58:53 crc kubenswrapper[4706]: I1125 11:58:53.923566 4706 generic.go:334] "Generic (PLEG): container finished" podID="7684ae52-10e0-4b84-a8aa-9f5e744b681c" containerID="d768e616411dcdb6bd2fc471582c1976a7fac18d1247eba3676c8623b8d1ec65" exitCode=0 Nov 25 11:58:53 crc kubenswrapper[4706]: I1125 11:58:53.933833 4706 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c29287a1-7481-405e-8641-8300768eb2cb" path="/var/lib/kubelet/pods/c29287a1-7481-405e-8641-8300768eb2cb/volumes" Nov 25 11:58:53 crc kubenswrapper[4706]: I1125 11:58:53.934743 4706 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"7684ae52-10e0-4b84-a8aa-9f5e744b681c","Type":"ContainerDied","Data":"d768e616411dcdb6bd2fc471582c1976a7fac18d1247eba3676c8623b8d1ec65"} Nov 25 11:58:53 crc kubenswrapper[4706]: I1125 11:58:53.934776 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"7684ae52-10e0-4b84-a8aa-9f5e744b681c","Type":"ContainerDied","Data":"d82bf15982a99a1967dd99c9bee1e0087ba032281bdc5257f76b0c09a9769364"} Nov 25 11:58:53 crc kubenswrapper[4706]: I1125 11:58:53.934786 4706 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d82bf15982a99a1967dd99c9bee1e0087ba032281bdc5257f76b0c09a9769364" Nov 25 11:58:53 crc kubenswrapper[4706]: I1125 11:58:53.934797 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"21b25aa3-3ad7-4def-a817-6b7191924b4f","Type":"ContainerStarted","Data":"04b034f142918d3e1808fd476452126891202190a6b2b20c9b61ca96cfb6b9bb"} Nov 25 11:58:53 crc kubenswrapper[4706]: I1125 11:58:53.947911 4706 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Nov 25 11:58:54 crc kubenswrapper[4706]: I1125 11:58:54.054264 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7684ae52-10e0-4b84-a8aa-9f5e744b681c-combined-ca-bundle\") pod \"7684ae52-10e0-4b84-a8aa-9f5e744b681c\" (UID: \"7684ae52-10e0-4b84-a8aa-9f5e744b681c\") " Nov 25 11:58:54 crc kubenswrapper[4706]: I1125 11:58:54.054389 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7684ae52-10e0-4b84-a8aa-9f5e744b681c-logs\") pod \"7684ae52-10e0-4b84-a8aa-9f5e744b681c\" (UID: \"7684ae52-10e0-4b84-a8aa-9f5e744b681c\") " Nov 25 11:58:54 crc kubenswrapper[4706]: I1125 11:58:54.054431 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7684ae52-10e0-4b84-a8aa-9f5e744b681c-config-data\") pod \"7684ae52-10e0-4b84-a8aa-9f5e744b681c\" (UID: \"7684ae52-10e0-4b84-a8aa-9f5e744b681c\") " Nov 25 11:58:54 crc kubenswrapper[4706]: I1125 11:58:54.054502 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sb7p9\" (UniqueName: \"kubernetes.io/projected/7684ae52-10e0-4b84-a8aa-9f5e744b681c-kube-api-access-sb7p9\") pod \"7684ae52-10e0-4b84-a8aa-9f5e744b681c\" (UID: \"7684ae52-10e0-4b84-a8aa-9f5e744b681c\") " Nov 25 11:58:54 crc kubenswrapper[4706]: I1125 11:58:54.055239 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7684ae52-10e0-4b84-a8aa-9f5e744b681c-logs" (OuterVolumeSpecName: "logs") pod "7684ae52-10e0-4b84-a8aa-9f5e744b681c" (UID: "7684ae52-10e0-4b84-a8aa-9f5e744b681c"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 11:58:54 crc kubenswrapper[4706]: I1125 11:58:54.058844 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7684ae52-10e0-4b84-a8aa-9f5e744b681c-kube-api-access-sb7p9" (OuterVolumeSpecName: "kube-api-access-sb7p9") pod "7684ae52-10e0-4b84-a8aa-9f5e744b681c" (UID: "7684ae52-10e0-4b84-a8aa-9f5e744b681c"). InnerVolumeSpecName "kube-api-access-sb7p9". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 11:58:54 crc kubenswrapper[4706]: I1125 11:58:54.089023 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7684ae52-10e0-4b84-a8aa-9f5e744b681c-config-data" (OuterVolumeSpecName: "config-data") pod "7684ae52-10e0-4b84-a8aa-9f5e744b681c" (UID: "7684ae52-10e0-4b84-a8aa-9f5e744b681c"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 11:58:54 crc kubenswrapper[4706]: I1125 11:58:54.099973 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7684ae52-10e0-4b84-a8aa-9f5e744b681c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7684ae52-10e0-4b84-a8aa-9f5e744b681c" (UID: "7684ae52-10e0-4b84-a8aa-9f5e744b681c"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 11:58:54 crc kubenswrapper[4706]: I1125 11:58:54.157066 4706 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7684ae52-10e0-4b84-a8aa-9f5e744b681c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 11:58:54 crc kubenswrapper[4706]: I1125 11:58:54.157101 4706 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7684ae52-10e0-4b84-a8aa-9f5e744b681c-logs\") on node \"crc\" DevicePath \"\"" Nov 25 11:58:54 crc kubenswrapper[4706]: I1125 11:58:54.157112 4706 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7684ae52-10e0-4b84-a8aa-9f5e744b681c-config-data\") on node \"crc\" DevicePath \"\"" Nov 25 11:58:54 crc kubenswrapper[4706]: I1125 11:58:54.157121 4706 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sb7p9\" (UniqueName: \"kubernetes.io/projected/7684ae52-10e0-4b84-a8aa-9f5e744b681c-kube-api-access-sb7p9\") on node \"crc\" DevicePath \"\"" Nov 25 11:58:54 crc kubenswrapper[4706]: I1125 11:58:54.946551 4706 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Nov 25 11:58:54 crc kubenswrapper[4706]: I1125 11:58:54.946770 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"21b25aa3-3ad7-4def-a817-6b7191924b4f","Type":"ContainerStarted","Data":"8b44ec2b9af0bdc1b5316f4f241191e690b60ff3a3ed6c648d8266e4393c53e5"} Nov 25 11:58:54 crc kubenswrapper[4706]: I1125 11:58:54.995600 4706 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Nov 25 11:58:55 crc kubenswrapper[4706]: I1125 11:58:55.001548 4706 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Nov 25 11:58:55 crc kubenswrapper[4706]: I1125 11:58:55.019712 4706 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Nov 25 11:58:55 crc kubenswrapper[4706]: E1125 11:58:55.020184 4706 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7684ae52-10e0-4b84-a8aa-9f5e744b681c" containerName="nova-api-log" Nov 25 11:58:55 crc kubenswrapper[4706]: I1125 11:58:55.020208 4706 state_mem.go:107] "Deleted CPUSet assignment" podUID="7684ae52-10e0-4b84-a8aa-9f5e744b681c" containerName="nova-api-log" Nov 25 11:58:55 crc kubenswrapper[4706]: E1125 11:58:55.020223 4706 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7684ae52-10e0-4b84-a8aa-9f5e744b681c" containerName="nova-api-api" Nov 25 11:58:55 crc kubenswrapper[4706]: I1125 11:58:55.020230 4706 state_mem.go:107] "Deleted CPUSet assignment" podUID="7684ae52-10e0-4b84-a8aa-9f5e744b681c" containerName="nova-api-api" Nov 25 11:58:55 crc kubenswrapper[4706]: I1125 11:58:55.020521 4706 memory_manager.go:354] "RemoveStaleState removing state" podUID="7684ae52-10e0-4b84-a8aa-9f5e744b681c" containerName="nova-api-log" Nov 25 11:58:55 crc kubenswrapper[4706]: I1125 11:58:55.020557 4706 memory_manager.go:354] "RemoveStaleState removing state" podUID="7684ae52-10e0-4b84-a8aa-9f5e744b681c" containerName="nova-api-api" Nov 25 11:58:55 crc 
kubenswrapper[4706]: I1125 11:58:55.021717 4706 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Nov 25 11:58:55 crc kubenswrapper[4706]: I1125 11:58:55.023836 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Nov 25 11:58:55 crc kubenswrapper[4706]: I1125 11:58:55.023886 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Nov 25 11:58:55 crc kubenswrapper[4706]: I1125 11:58:55.024171 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Nov 25 11:58:55 crc kubenswrapper[4706]: I1125 11:58:55.034709 4706 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Nov 25 11:58:55 crc kubenswrapper[4706]: I1125 11:58:55.072236 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/25fe01f6-353a-43f9-a857-cd776a10c417-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"25fe01f6-353a-43f9-a857-cd776a10c417\") " pod="openstack/nova-api-0" Nov 25 11:58:55 crc kubenswrapper[4706]: I1125 11:58:55.072374 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tqcjq\" (UniqueName: \"kubernetes.io/projected/25fe01f6-353a-43f9-a857-cd776a10c417-kube-api-access-tqcjq\") pod \"nova-api-0\" (UID: \"25fe01f6-353a-43f9-a857-cd776a10c417\") " pod="openstack/nova-api-0" Nov 25 11:58:55 crc kubenswrapper[4706]: I1125 11:58:55.072406 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/25fe01f6-353a-43f9-a857-cd776a10c417-public-tls-certs\") pod \"nova-api-0\" (UID: \"25fe01f6-353a-43f9-a857-cd776a10c417\") " pod="openstack/nova-api-0" Nov 25 11:58:55 crc kubenswrapper[4706]: I1125 11:58:55.072593 4706 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/25fe01f6-353a-43f9-a857-cd776a10c417-logs\") pod \"nova-api-0\" (UID: \"25fe01f6-353a-43f9-a857-cd776a10c417\") " pod="openstack/nova-api-0" Nov 25 11:58:55 crc kubenswrapper[4706]: I1125 11:58:55.072745 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/25fe01f6-353a-43f9-a857-cd776a10c417-config-data\") pod \"nova-api-0\" (UID: \"25fe01f6-353a-43f9-a857-cd776a10c417\") " pod="openstack/nova-api-0" Nov 25 11:58:55 crc kubenswrapper[4706]: I1125 11:58:55.072809 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/25fe01f6-353a-43f9-a857-cd776a10c417-internal-tls-certs\") pod \"nova-api-0\" (UID: \"25fe01f6-353a-43f9-a857-cd776a10c417\") " pod="openstack/nova-api-0" Nov 25 11:58:55 crc kubenswrapper[4706]: I1125 11:58:55.174620 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/25fe01f6-353a-43f9-a857-cd776a10c417-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"25fe01f6-353a-43f9-a857-cd776a10c417\") " pod="openstack/nova-api-0" Nov 25 11:58:55 crc kubenswrapper[4706]: I1125 11:58:55.174700 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tqcjq\" (UniqueName: \"kubernetes.io/projected/25fe01f6-353a-43f9-a857-cd776a10c417-kube-api-access-tqcjq\") pod \"nova-api-0\" (UID: \"25fe01f6-353a-43f9-a857-cd776a10c417\") " pod="openstack/nova-api-0" Nov 25 11:58:55 crc kubenswrapper[4706]: I1125 11:58:55.174728 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/25fe01f6-353a-43f9-a857-cd776a10c417-public-tls-certs\") pod \"nova-api-0\" (UID: \"25fe01f6-353a-43f9-a857-cd776a10c417\") " pod="openstack/nova-api-0" Nov 25 11:58:55 crc kubenswrapper[4706]: I1125 11:58:55.174763 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/25fe01f6-353a-43f9-a857-cd776a10c417-logs\") pod \"nova-api-0\" (UID: \"25fe01f6-353a-43f9-a857-cd776a10c417\") " pod="openstack/nova-api-0" Nov 25 11:58:55 crc kubenswrapper[4706]: I1125 11:58:55.174803 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/25fe01f6-353a-43f9-a857-cd776a10c417-config-data\") pod \"nova-api-0\" (UID: \"25fe01f6-353a-43f9-a857-cd776a10c417\") " pod="openstack/nova-api-0" Nov 25 11:58:55 crc kubenswrapper[4706]: I1125 11:58:55.174831 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/25fe01f6-353a-43f9-a857-cd776a10c417-internal-tls-certs\") pod \"nova-api-0\" (UID: \"25fe01f6-353a-43f9-a857-cd776a10c417\") " pod="openstack/nova-api-0" Nov 25 11:58:55 crc kubenswrapper[4706]: I1125 11:58:55.176076 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/25fe01f6-353a-43f9-a857-cd776a10c417-logs\") pod \"nova-api-0\" (UID: \"25fe01f6-353a-43f9-a857-cd776a10c417\") " pod="openstack/nova-api-0" Nov 25 11:58:55 crc kubenswrapper[4706]: I1125 11:58:55.180528 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/25fe01f6-353a-43f9-a857-cd776a10c417-config-data\") pod \"nova-api-0\" (UID: \"25fe01f6-353a-43f9-a857-cd776a10c417\") " pod="openstack/nova-api-0" Nov 25 11:58:55 crc kubenswrapper[4706]: I1125 11:58:55.180560 4706 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/25fe01f6-353a-43f9-a857-cd776a10c417-public-tls-certs\") pod \"nova-api-0\" (UID: \"25fe01f6-353a-43f9-a857-cd776a10c417\") " pod="openstack/nova-api-0" Nov 25 11:58:55 crc kubenswrapper[4706]: I1125 11:58:55.180941 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/25fe01f6-353a-43f9-a857-cd776a10c417-internal-tls-certs\") pod \"nova-api-0\" (UID: \"25fe01f6-353a-43f9-a857-cd776a10c417\") " pod="openstack/nova-api-0" Nov 25 11:58:55 crc kubenswrapper[4706]: I1125 11:58:55.181224 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/25fe01f6-353a-43f9-a857-cd776a10c417-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"25fe01f6-353a-43f9-a857-cd776a10c417\") " pod="openstack/nova-api-0" Nov 25 11:58:55 crc kubenswrapper[4706]: I1125 11:58:55.194434 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tqcjq\" (UniqueName: \"kubernetes.io/projected/25fe01f6-353a-43f9-a857-cd776a10c417-kube-api-access-tqcjq\") pod \"nova-api-0\" (UID: \"25fe01f6-353a-43f9-a857-cd776a10c417\") " pod="openstack/nova-api-0" Nov 25 11:58:55 crc kubenswrapper[4706]: I1125 11:58:55.338957 4706 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Nov 25 11:58:55 crc kubenswrapper[4706]: I1125 11:58:55.773921 4706 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Nov 25 11:58:55 crc kubenswrapper[4706]: W1125 11:58:55.783228 4706 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod25fe01f6_353a_43f9_a857_cd776a10c417.slice/crio-24abbca777f8c927c39901e065d28ffe36a97cfa98e32e39ce19b82cd9b09452 WatchSource:0}: Error finding container 24abbca777f8c927c39901e065d28ffe36a97cfa98e32e39ce19b82cd9b09452: Status 404 returned error can't find the container with id 24abbca777f8c927c39901e065d28ffe36a97cfa98e32e39ce19b82cd9b09452 Nov 25 11:58:55 crc kubenswrapper[4706]: I1125 11:58:55.936513 4706 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7684ae52-10e0-4b84-a8aa-9f5e744b681c" path="/var/lib/kubelet/pods/7684ae52-10e0-4b84-a8aa-9f5e744b681c/volumes" Nov 25 11:58:55 crc kubenswrapper[4706]: I1125 11:58:55.956343 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"25fe01f6-353a-43f9-a857-cd776a10c417","Type":"ContainerStarted","Data":"24abbca777f8c927c39901e065d28ffe36a97cfa98e32e39ce19b82cd9b09452"} Nov 25 11:58:55 crc kubenswrapper[4706]: I1125 11:58:55.959746 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"21b25aa3-3ad7-4def-a817-6b7191924b4f","Type":"ContainerStarted","Data":"71417bac8be6dd1bd83c75cdd5d64019cb1f2874f48b0783cd4ad2c14dca5dac"} Nov 25 11:58:56 crc kubenswrapper[4706]: I1125 11:58:56.198586 4706 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-cell1-novncproxy-0" Nov 25 11:58:56 crc kubenswrapper[4706]: I1125 11:58:56.222370 4706 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-cell1-novncproxy-0" Nov 25 11:58:57 crc kubenswrapper[4706]: I1125 
11:58:57.629447 4706 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-89c5cd4d5-d789x" Nov 25 11:58:57 crc kubenswrapper[4706]: I1125 11:58:57.652568 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"21b25aa3-3ad7-4def-a817-6b7191924b4f","Type":"ContainerStarted","Data":"383fd46b50aea3418784fba9b3fe85f49776b73aa519ae1e70b6b778dd71392b"} Nov 25 11:58:57 crc kubenswrapper[4706]: I1125 11:58:57.652748 4706 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Nov 25 11:58:57 crc kubenswrapper[4706]: I1125 11:58:57.652673 4706 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="21b25aa3-3ad7-4def-a817-6b7191924b4f" containerName="sg-core" containerID="cri-o://71417bac8be6dd1bd83c75cdd5d64019cb1f2874f48b0783cd4ad2c14dca5dac" gracePeriod=30 Nov 25 11:58:57 crc kubenswrapper[4706]: I1125 11:58:57.652617 4706 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="21b25aa3-3ad7-4def-a817-6b7191924b4f" containerName="ceilometer-central-agent" containerID="cri-o://04b034f142918d3e1808fd476452126891202190a6b2b20c9b61ca96cfb6b9bb" gracePeriod=30 Nov 25 11:58:57 crc kubenswrapper[4706]: I1125 11:58:57.652706 4706 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="21b25aa3-3ad7-4def-a817-6b7191924b4f" containerName="ceilometer-notification-agent" containerID="cri-o://8b44ec2b9af0bdc1b5316f4f241191e690b60ff3a3ed6c648d8266e4393c53e5" gracePeriod=30 Nov 25 11:58:57 crc kubenswrapper[4706]: I1125 11:58:57.652689 4706 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="21b25aa3-3ad7-4def-a817-6b7191924b4f" containerName="proxy-httpd" containerID="cri-o://383fd46b50aea3418784fba9b3fe85f49776b73aa519ae1e70b6b778dd71392b" gracePeriod=30 Nov 25 
11:58:57 crc kubenswrapper[4706]: I1125 11:58:57.671728 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"25fe01f6-353a-43f9-a857-cd776a10c417","Type":"ContainerStarted","Data":"fdc4b6fd5d469f0949eb27b2e0ade41d05df6ef1ff13a5a4b1f5d19e96217f51"} Nov 25 11:58:57 crc kubenswrapper[4706]: I1125 11:58:57.671777 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"25fe01f6-353a-43f9-a857-cd776a10c417","Type":"ContainerStarted","Data":"2f28afb8863e4a5e84b6db9225949afa29f5ac5df130ac31c7d8dbbd16ed47c9"} Nov 25 11:58:57 crc kubenswrapper[4706]: I1125 11:58:57.754774 4706 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=3.49439713 podStartE2EDuration="6.754753767s" podCreationTimestamp="2025-11-25 11:58:51 +0000 UTC" firstStartedPulling="2025-11-25 11:58:52.787425789 +0000 UTC m=+1341.701983170" lastFinishedPulling="2025-11-25 11:58:56.047782426 +0000 UTC m=+1344.962339807" observedRunningTime="2025-11-25 11:58:57.751768661 +0000 UTC m=+1346.666326042" watchObservedRunningTime="2025-11-25 11:58:57.754753767 +0000 UTC m=+1346.669311148" Nov 25 11:58:57 crc kubenswrapper[4706]: I1125 11:58:57.779778 4706 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-757b4f8459-sdx7j"] Nov 25 11:58:57 crc kubenswrapper[4706]: I1125 11:58:57.780024 4706 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-757b4f8459-sdx7j" podUID="00683e5c-17fc-450f-b2b4-7366b2c45aa5" containerName="dnsmasq-dns" containerID="cri-o://90da4e447f7329491aef2e8de9b7d3b2e05711e48c916ae2fc14256e76a9eee3" gracePeriod=10 Nov 25 11:58:57 crc kubenswrapper[4706]: I1125 11:58:57.813780 4706 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=3.813760315 podStartE2EDuration="3.813760315s" podCreationTimestamp="2025-11-25 11:58:54 +0000 
UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 11:58:57.800769787 +0000 UTC m=+1346.715327168" watchObservedRunningTime="2025-11-25 11:58:57.813760315 +0000 UTC m=+1346.728317696" Nov 25 11:58:57 crc kubenswrapper[4706]: I1125 11:58:57.885734 4706 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-novncproxy-0" Nov 25 11:58:58 crc kubenswrapper[4706]: I1125 11:58:58.239620 4706 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-cell-mapping-8vfzt"] Nov 25 11:58:58 crc kubenswrapper[4706]: I1125 11:58:58.240868 4706 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-8vfzt" Nov 25 11:58:58 crc kubenswrapper[4706]: I1125 11:58:58.242539 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-config-data" Nov 25 11:58:58 crc kubenswrapper[4706]: I1125 11:58:58.242860 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-scripts" Nov 25 11:58:58 crc kubenswrapper[4706]: I1125 11:58:58.250868 4706 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-8vfzt"] Nov 25 11:58:58 crc kubenswrapper[4706]: I1125 11:58:58.387032 4706 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-757b4f8459-sdx7j" Nov 25 11:58:58 crc kubenswrapper[4706]: I1125 11:58:58.422334 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0c8ef478-be1a-4f0b-a052-aa2a2ad96cf0-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-8vfzt\" (UID: \"0c8ef478-be1a-4f0b-a052-aa2a2ad96cf0\") " pod="openstack/nova-cell1-cell-mapping-8vfzt" Nov 25 11:58:58 crc kubenswrapper[4706]: I1125 11:58:58.422408 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0c8ef478-be1a-4f0b-a052-aa2a2ad96cf0-scripts\") pod \"nova-cell1-cell-mapping-8vfzt\" (UID: \"0c8ef478-be1a-4f0b-a052-aa2a2ad96cf0\") " pod="openstack/nova-cell1-cell-mapping-8vfzt" Nov 25 11:58:58 crc kubenswrapper[4706]: I1125 11:58:58.422638 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fkl9s\" (UniqueName: \"kubernetes.io/projected/0c8ef478-be1a-4f0b-a052-aa2a2ad96cf0-kube-api-access-fkl9s\") pod \"nova-cell1-cell-mapping-8vfzt\" (UID: \"0c8ef478-be1a-4f0b-a052-aa2a2ad96cf0\") " pod="openstack/nova-cell1-cell-mapping-8vfzt" Nov 25 11:58:58 crc kubenswrapper[4706]: I1125 11:58:58.422858 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0c8ef478-be1a-4f0b-a052-aa2a2ad96cf0-config-data\") pod \"nova-cell1-cell-mapping-8vfzt\" (UID: \"0c8ef478-be1a-4f0b-a052-aa2a2ad96cf0\") " pod="openstack/nova-cell1-cell-mapping-8vfzt" Nov 25 11:58:58 crc kubenswrapper[4706]: I1125 11:58:58.523898 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/00683e5c-17fc-450f-b2b4-7366b2c45aa5-dns-swift-storage-0\") pod 
\"00683e5c-17fc-450f-b2b4-7366b2c45aa5\" (UID: \"00683e5c-17fc-450f-b2b4-7366b2c45aa5\") " Nov 25 11:58:58 crc kubenswrapper[4706]: I1125 11:58:58.524032 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/00683e5c-17fc-450f-b2b4-7366b2c45aa5-config\") pod \"00683e5c-17fc-450f-b2b4-7366b2c45aa5\" (UID: \"00683e5c-17fc-450f-b2b4-7366b2c45aa5\") " Nov 25 11:58:58 crc kubenswrapper[4706]: I1125 11:58:58.524101 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/00683e5c-17fc-450f-b2b4-7366b2c45aa5-ovsdbserver-nb\") pod \"00683e5c-17fc-450f-b2b4-7366b2c45aa5\" (UID: \"00683e5c-17fc-450f-b2b4-7366b2c45aa5\") " Nov 25 11:58:58 crc kubenswrapper[4706]: I1125 11:58:58.524141 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/00683e5c-17fc-450f-b2b4-7366b2c45aa5-dns-svc\") pod \"00683e5c-17fc-450f-b2b4-7366b2c45aa5\" (UID: \"00683e5c-17fc-450f-b2b4-7366b2c45aa5\") " Nov 25 11:58:58 crc kubenswrapper[4706]: I1125 11:58:58.524351 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t5bld\" (UniqueName: \"kubernetes.io/projected/00683e5c-17fc-450f-b2b4-7366b2c45aa5-kube-api-access-t5bld\") pod \"00683e5c-17fc-450f-b2b4-7366b2c45aa5\" (UID: \"00683e5c-17fc-450f-b2b4-7366b2c45aa5\") " Nov 25 11:58:58 crc kubenswrapper[4706]: I1125 11:58:58.524389 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/00683e5c-17fc-450f-b2b4-7366b2c45aa5-ovsdbserver-sb\") pod \"00683e5c-17fc-450f-b2b4-7366b2c45aa5\" (UID: \"00683e5c-17fc-450f-b2b4-7366b2c45aa5\") " Nov 25 11:58:58 crc kubenswrapper[4706]: I1125 11:58:58.525573 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0c8ef478-be1a-4f0b-a052-aa2a2ad96cf0-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-8vfzt\" (UID: \"0c8ef478-be1a-4f0b-a052-aa2a2ad96cf0\") " pod="openstack/nova-cell1-cell-mapping-8vfzt" Nov 25 11:58:58 crc kubenswrapper[4706]: I1125 11:58:58.525631 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0c8ef478-be1a-4f0b-a052-aa2a2ad96cf0-scripts\") pod \"nova-cell1-cell-mapping-8vfzt\" (UID: \"0c8ef478-be1a-4f0b-a052-aa2a2ad96cf0\") " pod="openstack/nova-cell1-cell-mapping-8vfzt" Nov 25 11:58:58 crc kubenswrapper[4706]: I1125 11:58:58.525720 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fkl9s\" (UniqueName: \"kubernetes.io/projected/0c8ef478-be1a-4f0b-a052-aa2a2ad96cf0-kube-api-access-fkl9s\") pod \"nova-cell1-cell-mapping-8vfzt\" (UID: \"0c8ef478-be1a-4f0b-a052-aa2a2ad96cf0\") " pod="openstack/nova-cell1-cell-mapping-8vfzt" Nov 25 11:58:58 crc kubenswrapper[4706]: I1125 11:58:58.525870 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0c8ef478-be1a-4f0b-a052-aa2a2ad96cf0-config-data\") pod \"nova-cell1-cell-mapping-8vfzt\" (UID: \"0c8ef478-be1a-4f0b-a052-aa2a2ad96cf0\") " pod="openstack/nova-cell1-cell-mapping-8vfzt" Nov 25 11:58:58 crc kubenswrapper[4706]: I1125 11:58:58.534532 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/00683e5c-17fc-450f-b2b4-7366b2c45aa5-kube-api-access-t5bld" (OuterVolumeSpecName: "kube-api-access-t5bld") pod "00683e5c-17fc-450f-b2b4-7366b2c45aa5" (UID: "00683e5c-17fc-450f-b2b4-7366b2c45aa5"). InnerVolumeSpecName "kube-api-access-t5bld". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 11:58:58 crc kubenswrapper[4706]: I1125 11:58:58.535090 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0c8ef478-be1a-4f0b-a052-aa2a2ad96cf0-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-8vfzt\" (UID: \"0c8ef478-be1a-4f0b-a052-aa2a2ad96cf0\") " pod="openstack/nova-cell1-cell-mapping-8vfzt" Nov 25 11:58:58 crc kubenswrapper[4706]: I1125 11:58:58.535116 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0c8ef478-be1a-4f0b-a052-aa2a2ad96cf0-scripts\") pod \"nova-cell1-cell-mapping-8vfzt\" (UID: \"0c8ef478-be1a-4f0b-a052-aa2a2ad96cf0\") " pod="openstack/nova-cell1-cell-mapping-8vfzt" Nov 25 11:58:58 crc kubenswrapper[4706]: I1125 11:58:58.535160 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0c8ef478-be1a-4f0b-a052-aa2a2ad96cf0-config-data\") pod \"nova-cell1-cell-mapping-8vfzt\" (UID: \"0c8ef478-be1a-4f0b-a052-aa2a2ad96cf0\") " pod="openstack/nova-cell1-cell-mapping-8vfzt" Nov 25 11:58:58 crc kubenswrapper[4706]: I1125 11:58:58.544438 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fkl9s\" (UniqueName: \"kubernetes.io/projected/0c8ef478-be1a-4f0b-a052-aa2a2ad96cf0-kube-api-access-fkl9s\") pod \"nova-cell1-cell-mapping-8vfzt\" (UID: \"0c8ef478-be1a-4f0b-a052-aa2a2ad96cf0\") " pod="openstack/nova-cell1-cell-mapping-8vfzt" Nov 25 11:58:58 crc kubenswrapper[4706]: I1125 11:58:58.559871 4706 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-8vfzt" Nov 25 11:58:58 crc kubenswrapper[4706]: I1125 11:58:58.594249 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/00683e5c-17fc-450f-b2b4-7366b2c45aa5-config" (OuterVolumeSpecName: "config") pod "00683e5c-17fc-450f-b2b4-7366b2c45aa5" (UID: "00683e5c-17fc-450f-b2b4-7366b2c45aa5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 11:58:58 crc kubenswrapper[4706]: I1125 11:58:58.595976 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/00683e5c-17fc-450f-b2b4-7366b2c45aa5-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "00683e5c-17fc-450f-b2b4-7366b2c45aa5" (UID: "00683e5c-17fc-450f-b2b4-7366b2c45aa5"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 11:58:58 crc kubenswrapper[4706]: I1125 11:58:58.597910 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/00683e5c-17fc-450f-b2b4-7366b2c45aa5-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "00683e5c-17fc-450f-b2b4-7366b2c45aa5" (UID: "00683e5c-17fc-450f-b2b4-7366b2c45aa5"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 11:58:58 crc kubenswrapper[4706]: I1125 11:58:58.605612 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/00683e5c-17fc-450f-b2b4-7366b2c45aa5-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "00683e5c-17fc-450f-b2b4-7366b2c45aa5" (UID: "00683e5c-17fc-450f-b2b4-7366b2c45aa5"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 11:58:58 crc kubenswrapper[4706]: I1125 11:58:58.607005 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/00683e5c-17fc-450f-b2b4-7366b2c45aa5-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "00683e5c-17fc-450f-b2b4-7366b2c45aa5" (UID: "00683e5c-17fc-450f-b2b4-7366b2c45aa5"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 11:58:58 crc kubenswrapper[4706]: I1125 11:58:58.630971 4706 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/00683e5c-17fc-450f-b2b4-7366b2c45aa5-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 25 11:58:58 crc kubenswrapper[4706]: I1125 11:58:58.631012 4706 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/00683e5c-17fc-450f-b2b4-7366b2c45aa5-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 25 11:58:58 crc kubenswrapper[4706]: I1125 11:58:58.631029 4706 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t5bld\" (UniqueName: \"kubernetes.io/projected/00683e5c-17fc-450f-b2b4-7366b2c45aa5-kube-api-access-t5bld\") on node \"crc\" DevicePath \"\"" Nov 25 11:58:58 crc kubenswrapper[4706]: I1125 11:58:58.631084 4706 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/00683e5c-17fc-450f-b2b4-7366b2c45aa5-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 25 11:58:58 crc kubenswrapper[4706]: I1125 11:58:58.631102 4706 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/00683e5c-17fc-450f-b2b4-7366b2c45aa5-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Nov 25 11:58:58 crc kubenswrapper[4706]: I1125 11:58:58.631117 4706 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/00683e5c-17fc-450f-b2b4-7366b2c45aa5-config\") on node \"crc\" DevicePath \"\"" Nov 25 11:58:58 crc kubenswrapper[4706]: I1125 11:58:58.693442 4706 generic.go:334] "Generic (PLEG): container finished" podID="21b25aa3-3ad7-4def-a817-6b7191924b4f" containerID="383fd46b50aea3418784fba9b3fe85f49776b73aa519ae1e70b6b778dd71392b" exitCode=0 Nov 25 11:58:58 crc kubenswrapper[4706]: I1125 11:58:58.693488 4706 generic.go:334] "Generic (PLEG): container finished" podID="21b25aa3-3ad7-4def-a817-6b7191924b4f" containerID="71417bac8be6dd1bd83c75cdd5d64019cb1f2874f48b0783cd4ad2c14dca5dac" exitCode=2 Nov 25 11:58:58 crc kubenswrapper[4706]: I1125 11:58:58.693504 4706 generic.go:334] "Generic (PLEG): container finished" podID="21b25aa3-3ad7-4def-a817-6b7191924b4f" containerID="8b44ec2b9af0bdc1b5316f4f241191e690b60ff3a3ed6c648d8266e4393c53e5" exitCode=0 Nov 25 11:58:58 crc kubenswrapper[4706]: I1125 11:58:58.693559 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"21b25aa3-3ad7-4def-a817-6b7191924b4f","Type":"ContainerDied","Data":"383fd46b50aea3418784fba9b3fe85f49776b73aa519ae1e70b6b778dd71392b"} Nov 25 11:58:58 crc kubenswrapper[4706]: I1125 11:58:58.693594 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"21b25aa3-3ad7-4def-a817-6b7191924b4f","Type":"ContainerDied","Data":"71417bac8be6dd1bd83c75cdd5d64019cb1f2874f48b0783cd4ad2c14dca5dac"} Nov 25 11:58:58 crc kubenswrapper[4706]: I1125 11:58:58.693605 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"21b25aa3-3ad7-4def-a817-6b7191924b4f","Type":"ContainerDied","Data":"8b44ec2b9af0bdc1b5316f4f241191e690b60ff3a3ed6c648d8266e4393c53e5"} Nov 25 11:58:58 crc kubenswrapper[4706]: I1125 11:58:58.697515 4706 generic.go:334] "Generic (PLEG): container finished" podID="00683e5c-17fc-450f-b2b4-7366b2c45aa5" 
containerID="90da4e447f7329491aef2e8de9b7d3b2e05711e48c916ae2fc14256e76a9eee3" exitCode=0 Nov 25 11:58:58 crc kubenswrapper[4706]: I1125 11:58:58.697618 4706 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-757b4f8459-sdx7j" Nov 25 11:58:58 crc kubenswrapper[4706]: I1125 11:58:58.697658 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-757b4f8459-sdx7j" event={"ID":"00683e5c-17fc-450f-b2b4-7366b2c45aa5","Type":"ContainerDied","Data":"90da4e447f7329491aef2e8de9b7d3b2e05711e48c916ae2fc14256e76a9eee3"} Nov 25 11:58:58 crc kubenswrapper[4706]: I1125 11:58:58.697693 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-757b4f8459-sdx7j" event={"ID":"00683e5c-17fc-450f-b2b4-7366b2c45aa5","Type":"ContainerDied","Data":"b3061532f126ece1f7b1e665799c80ff911244947dc65c8d32100828556c70d4"} Nov 25 11:58:58 crc kubenswrapper[4706]: I1125 11:58:58.697721 4706 scope.go:117] "RemoveContainer" containerID="90da4e447f7329491aef2e8de9b7d3b2e05711e48c916ae2fc14256e76a9eee3" Nov 25 11:58:58 crc kubenswrapper[4706]: I1125 11:58:58.739181 4706 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-757b4f8459-sdx7j"] Nov 25 11:58:58 crc kubenswrapper[4706]: I1125 11:58:58.747131 4706 scope.go:117] "RemoveContainer" containerID="f31ca09f5f303f093e6f2ed36404c2e852ba4fa5400ac58ba28965b70763ec99" Nov 25 11:58:58 crc kubenswrapper[4706]: I1125 11:58:58.783716 4706 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-757b4f8459-sdx7j"] Nov 25 11:58:58 crc kubenswrapper[4706]: I1125 11:58:58.798693 4706 scope.go:117] "RemoveContainer" containerID="90da4e447f7329491aef2e8de9b7d3b2e05711e48c916ae2fc14256e76a9eee3" Nov 25 11:58:58 crc kubenswrapper[4706]: E1125 11:58:58.804861 4706 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"90da4e447f7329491aef2e8de9b7d3b2e05711e48c916ae2fc14256e76a9eee3\": container with ID starting with 90da4e447f7329491aef2e8de9b7d3b2e05711e48c916ae2fc14256e76a9eee3 not found: ID does not exist" containerID="90da4e447f7329491aef2e8de9b7d3b2e05711e48c916ae2fc14256e76a9eee3" Nov 25 11:58:58 crc kubenswrapper[4706]: I1125 11:58:58.804927 4706 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"90da4e447f7329491aef2e8de9b7d3b2e05711e48c916ae2fc14256e76a9eee3"} err="failed to get container status \"90da4e447f7329491aef2e8de9b7d3b2e05711e48c916ae2fc14256e76a9eee3\": rpc error: code = NotFound desc = could not find container \"90da4e447f7329491aef2e8de9b7d3b2e05711e48c916ae2fc14256e76a9eee3\": container with ID starting with 90da4e447f7329491aef2e8de9b7d3b2e05711e48c916ae2fc14256e76a9eee3 not found: ID does not exist" Nov 25 11:58:58 crc kubenswrapper[4706]: I1125 11:58:58.804962 4706 scope.go:117] "RemoveContainer" containerID="f31ca09f5f303f093e6f2ed36404c2e852ba4fa5400ac58ba28965b70763ec99" Nov 25 11:58:58 crc kubenswrapper[4706]: E1125 11:58:58.805358 4706 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f31ca09f5f303f093e6f2ed36404c2e852ba4fa5400ac58ba28965b70763ec99\": container with ID starting with f31ca09f5f303f093e6f2ed36404c2e852ba4fa5400ac58ba28965b70763ec99 not found: ID does not exist" containerID="f31ca09f5f303f093e6f2ed36404c2e852ba4fa5400ac58ba28965b70763ec99" Nov 25 11:58:58 crc kubenswrapper[4706]: I1125 11:58:58.805399 4706 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f31ca09f5f303f093e6f2ed36404c2e852ba4fa5400ac58ba28965b70763ec99"} err="failed to get container status \"f31ca09f5f303f093e6f2ed36404c2e852ba4fa5400ac58ba28965b70763ec99\": rpc error: code = NotFound desc = could not find container \"f31ca09f5f303f093e6f2ed36404c2e852ba4fa5400ac58ba28965b70763ec99\": container with ID 
starting with f31ca09f5f303f093e6f2ed36404c2e852ba4fa5400ac58ba28965b70763ec99 not found: ID does not exist" Nov 25 11:58:59 crc kubenswrapper[4706]: I1125 11:58:59.096633 4706 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-8vfzt"] Nov 25 11:58:59 crc kubenswrapper[4706]: I1125 11:58:59.706926 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-8vfzt" event={"ID":"0c8ef478-be1a-4f0b-a052-aa2a2ad96cf0","Type":"ContainerStarted","Data":"a341f1a73ca72b1d393cb86f7600862f027f84cc6c5a74fcd9888210c58daa4e"} Nov 25 11:58:59 crc kubenswrapper[4706]: I1125 11:58:59.707195 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-8vfzt" event={"ID":"0c8ef478-be1a-4f0b-a052-aa2a2ad96cf0","Type":"ContainerStarted","Data":"5c91a476a9ddcb0cf4b5857dc88ea202d3c072140ed3d3b30bdfb7a7b12b1606"} Nov 25 11:58:59 crc kubenswrapper[4706]: I1125 11:58:59.727579 4706 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-cell-mapping-8vfzt" podStartSLOduration=1.727561831 podStartE2EDuration="1.727561831s" podCreationTimestamp="2025-11-25 11:58:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 11:58:59.722451672 +0000 UTC m=+1348.637009053" watchObservedRunningTime="2025-11-25 11:58:59.727561831 +0000 UTC m=+1348.642119212" Nov 25 11:58:59 crc kubenswrapper[4706]: I1125 11:58:59.933427 4706 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="00683e5c-17fc-450f-b2b4-7366b2c45aa5" path="/var/lib/kubelet/pods/00683e5c-17fc-450f-b2b4-7366b2c45aa5/volumes" Nov 25 11:59:01 crc kubenswrapper[4706]: I1125 11:59:01.124823 4706 patch_prober.go:28] interesting pod/machine-config-daemon-dhfpm container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get 
\"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 25 11:59:01 crc kubenswrapper[4706]: I1125 11:59:01.125177 4706 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dhfpm" podUID="0930887a-320c-4506-8c9c-f94d6d64516a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 25 11:59:01 crc kubenswrapper[4706]: I1125 11:59:01.300634 4706 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 25 11:59:01 crc kubenswrapper[4706]: I1125 11:59:01.487410 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lhq7m\" (UniqueName: \"kubernetes.io/projected/21b25aa3-3ad7-4def-a817-6b7191924b4f-kube-api-access-lhq7m\") pod \"21b25aa3-3ad7-4def-a817-6b7191924b4f\" (UID: \"21b25aa3-3ad7-4def-a817-6b7191924b4f\") " Nov 25 11:59:01 crc kubenswrapper[4706]: I1125 11:59:01.487481 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/21b25aa3-3ad7-4def-a817-6b7191924b4f-run-httpd\") pod \"21b25aa3-3ad7-4def-a817-6b7191924b4f\" (UID: \"21b25aa3-3ad7-4def-a817-6b7191924b4f\") " Nov 25 11:59:01 crc kubenswrapper[4706]: I1125 11:59:01.487565 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/21b25aa3-3ad7-4def-a817-6b7191924b4f-sg-core-conf-yaml\") pod \"21b25aa3-3ad7-4def-a817-6b7191924b4f\" (UID: \"21b25aa3-3ad7-4def-a817-6b7191924b4f\") " Nov 25 11:59:01 crc kubenswrapper[4706]: I1125 11:59:01.487637 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/21b25aa3-3ad7-4def-a817-6b7191924b4f-ceilometer-tls-certs\") pod \"21b25aa3-3ad7-4def-a817-6b7191924b4f\" (UID: \"21b25aa3-3ad7-4def-a817-6b7191924b4f\") " Nov 25 11:59:01 crc kubenswrapper[4706]: I1125 11:59:01.487669 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/21b25aa3-3ad7-4def-a817-6b7191924b4f-combined-ca-bundle\") pod \"21b25aa3-3ad7-4def-a817-6b7191924b4f\" (UID: \"21b25aa3-3ad7-4def-a817-6b7191924b4f\") " Nov 25 11:59:01 crc kubenswrapper[4706]: I1125 11:59:01.487701 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/21b25aa3-3ad7-4def-a817-6b7191924b4f-log-httpd\") pod \"21b25aa3-3ad7-4def-a817-6b7191924b4f\" (UID: \"21b25aa3-3ad7-4def-a817-6b7191924b4f\") " Nov 25 11:59:01 crc kubenswrapper[4706]: I1125 11:59:01.487746 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/21b25aa3-3ad7-4def-a817-6b7191924b4f-scripts\") pod \"21b25aa3-3ad7-4def-a817-6b7191924b4f\" (UID: \"21b25aa3-3ad7-4def-a817-6b7191924b4f\") " Nov 25 11:59:01 crc kubenswrapper[4706]: I1125 11:59:01.488185 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/21b25aa3-3ad7-4def-a817-6b7191924b4f-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "21b25aa3-3ad7-4def-a817-6b7191924b4f" (UID: "21b25aa3-3ad7-4def-a817-6b7191924b4f"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 11:59:01 crc kubenswrapper[4706]: I1125 11:59:01.488411 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/21b25aa3-3ad7-4def-a817-6b7191924b4f-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "21b25aa3-3ad7-4def-a817-6b7191924b4f" (UID: "21b25aa3-3ad7-4def-a817-6b7191924b4f"). 
InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 11:59:01 crc kubenswrapper[4706]: I1125 11:59:01.487847 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/21b25aa3-3ad7-4def-a817-6b7191924b4f-config-data\") pod \"21b25aa3-3ad7-4def-a817-6b7191924b4f\" (UID: \"21b25aa3-3ad7-4def-a817-6b7191924b4f\") " Nov 25 11:59:01 crc kubenswrapper[4706]: I1125 11:59:01.489366 4706 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/21b25aa3-3ad7-4def-a817-6b7191924b4f-run-httpd\") on node \"crc\" DevicePath \"\"" Nov 25 11:59:01 crc kubenswrapper[4706]: I1125 11:59:01.489382 4706 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/21b25aa3-3ad7-4def-a817-6b7191924b4f-log-httpd\") on node \"crc\" DevicePath \"\"" Nov 25 11:59:01 crc kubenswrapper[4706]: I1125 11:59:01.503628 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/21b25aa3-3ad7-4def-a817-6b7191924b4f-scripts" (OuterVolumeSpecName: "scripts") pod "21b25aa3-3ad7-4def-a817-6b7191924b4f" (UID: "21b25aa3-3ad7-4def-a817-6b7191924b4f"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 11:59:01 crc kubenswrapper[4706]: I1125 11:59:01.503668 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/21b25aa3-3ad7-4def-a817-6b7191924b4f-kube-api-access-lhq7m" (OuterVolumeSpecName: "kube-api-access-lhq7m") pod "21b25aa3-3ad7-4def-a817-6b7191924b4f" (UID: "21b25aa3-3ad7-4def-a817-6b7191924b4f"). InnerVolumeSpecName "kube-api-access-lhq7m". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 11:59:01 crc kubenswrapper[4706]: I1125 11:59:01.519198 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/21b25aa3-3ad7-4def-a817-6b7191924b4f-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "21b25aa3-3ad7-4def-a817-6b7191924b4f" (UID: "21b25aa3-3ad7-4def-a817-6b7191924b4f"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 11:59:01 crc kubenswrapper[4706]: I1125 11:59:01.539646 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/21b25aa3-3ad7-4def-a817-6b7191924b4f-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "21b25aa3-3ad7-4def-a817-6b7191924b4f" (UID: "21b25aa3-3ad7-4def-a817-6b7191924b4f"). InnerVolumeSpecName "ceilometer-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 11:59:01 crc kubenswrapper[4706]: I1125 11:59:01.576377 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/21b25aa3-3ad7-4def-a817-6b7191924b4f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "21b25aa3-3ad7-4def-a817-6b7191924b4f" (UID: "21b25aa3-3ad7-4def-a817-6b7191924b4f"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 11:59:01 crc kubenswrapper[4706]: I1125 11:59:01.591713 4706 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/21b25aa3-3ad7-4def-a817-6b7191924b4f-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Nov 25 11:59:01 crc kubenswrapper[4706]: I1125 11:59:01.591771 4706 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/21b25aa3-3ad7-4def-a817-6b7191924b4f-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 25 11:59:01 crc kubenswrapper[4706]: I1125 11:59:01.591783 4706 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/21b25aa3-3ad7-4def-a817-6b7191924b4f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 11:59:01 crc kubenswrapper[4706]: I1125 11:59:01.591793 4706 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/21b25aa3-3ad7-4def-a817-6b7191924b4f-scripts\") on node \"crc\" DevicePath \"\"" Nov 25 11:59:01 crc kubenswrapper[4706]: I1125 11:59:01.591804 4706 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lhq7m\" (UniqueName: \"kubernetes.io/projected/21b25aa3-3ad7-4def-a817-6b7191924b4f-kube-api-access-lhq7m\") on node \"crc\" DevicePath \"\"" Nov 25 11:59:01 crc kubenswrapper[4706]: I1125 11:59:01.594561 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/21b25aa3-3ad7-4def-a817-6b7191924b4f-config-data" (OuterVolumeSpecName: "config-data") pod "21b25aa3-3ad7-4def-a817-6b7191924b4f" (UID: "21b25aa3-3ad7-4def-a817-6b7191924b4f"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 11:59:01 crc kubenswrapper[4706]: I1125 11:59:01.693830 4706 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/21b25aa3-3ad7-4def-a817-6b7191924b4f-config-data\") on node \"crc\" DevicePath \"\"" Nov 25 11:59:01 crc kubenswrapper[4706]: I1125 11:59:01.729240 4706 generic.go:334] "Generic (PLEG): container finished" podID="21b25aa3-3ad7-4def-a817-6b7191924b4f" containerID="04b034f142918d3e1808fd476452126891202190a6b2b20c9b61ca96cfb6b9bb" exitCode=0 Nov 25 11:59:01 crc kubenswrapper[4706]: I1125 11:59:01.729292 4706 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 25 11:59:01 crc kubenswrapper[4706]: I1125 11:59:01.729308 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"21b25aa3-3ad7-4def-a817-6b7191924b4f","Type":"ContainerDied","Data":"04b034f142918d3e1808fd476452126891202190a6b2b20c9b61ca96cfb6b9bb"} Nov 25 11:59:01 crc kubenswrapper[4706]: I1125 11:59:01.729422 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"21b25aa3-3ad7-4def-a817-6b7191924b4f","Type":"ContainerDied","Data":"8757d507f5ca571446ec9c8025e62cf0243552ae843c90931b3082f869480360"} Nov 25 11:59:01 crc kubenswrapper[4706]: I1125 11:59:01.729442 4706 scope.go:117] "RemoveContainer" containerID="383fd46b50aea3418784fba9b3fe85f49776b73aa519ae1e70b6b778dd71392b" Nov 25 11:59:01 crc kubenswrapper[4706]: I1125 11:59:01.760010 4706 scope.go:117] "RemoveContainer" containerID="71417bac8be6dd1bd83c75cdd5d64019cb1f2874f48b0783cd4ad2c14dca5dac" Nov 25 11:59:01 crc kubenswrapper[4706]: I1125 11:59:01.779334 4706 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 25 11:59:01 crc kubenswrapper[4706]: I1125 11:59:01.788745 4706 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Nov 25 
11:59:01 crc kubenswrapper[4706]: I1125 11:59:01.796458 4706 scope.go:117] "RemoveContainer" containerID="8b44ec2b9af0bdc1b5316f4f241191e690b60ff3a3ed6c648d8266e4393c53e5" Nov 25 11:59:01 crc kubenswrapper[4706]: I1125 11:59:01.800389 4706 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Nov 25 11:59:01 crc kubenswrapper[4706]: E1125 11:59:01.800764 4706 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="21b25aa3-3ad7-4def-a817-6b7191924b4f" containerName="sg-core" Nov 25 11:59:01 crc kubenswrapper[4706]: I1125 11:59:01.800778 4706 state_mem.go:107] "Deleted CPUSet assignment" podUID="21b25aa3-3ad7-4def-a817-6b7191924b4f" containerName="sg-core" Nov 25 11:59:01 crc kubenswrapper[4706]: E1125 11:59:01.800789 4706 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="00683e5c-17fc-450f-b2b4-7366b2c45aa5" containerName="init" Nov 25 11:59:01 crc kubenswrapper[4706]: I1125 11:59:01.800795 4706 state_mem.go:107] "Deleted CPUSet assignment" podUID="00683e5c-17fc-450f-b2b4-7366b2c45aa5" containerName="init" Nov 25 11:59:01 crc kubenswrapper[4706]: E1125 11:59:01.800801 4706 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="21b25aa3-3ad7-4def-a817-6b7191924b4f" containerName="ceilometer-notification-agent" Nov 25 11:59:01 crc kubenswrapper[4706]: I1125 11:59:01.800807 4706 state_mem.go:107] "Deleted CPUSet assignment" podUID="21b25aa3-3ad7-4def-a817-6b7191924b4f" containerName="ceilometer-notification-agent" Nov 25 11:59:01 crc kubenswrapper[4706]: E1125 11:59:01.800829 4706 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="21b25aa3-3ad7-4def-a817-6b7191924b4f" containerName="proxy-httpd" Nov 25 11:59:01 crc kubenswrapper[4706]: I1125 11:59:01.800835 4706 state_mem.go:107] "Deleted CPUSet assignment" podUID="21b25aa3-3ad7-4def-a817-6b7191924b4f" containerName="proxy-httpd" Nov 25 11:59:01 crc kubenswrapper[4706]: E1125 11:59:01.800849 4706 cpu_manager.go:410] "RemoveStaleState: 
removing container" podUID="00683e5c-17fc-450f-b2b4-7366b2c45aa5" containerName="dnsmasq-dns" Nov 25 11:59:01 crc kubenswrapper[4706]: I1125 11:59:01.800858 4706 state_mem.go:107] "Deleted CPUSet assignment" podUID="00683e5c-17fc-450f-b2b4-7366b2c45aa5" containerName="dnsmasq-dns" Nov 25 11:59:01 crc kubenswrapper[4706]: E1125 11:59:01.800879 4706 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="21b25aa3-3ad7-4def-a817-6b7191924b4f" containerName="ceilometer-central-agent" Nov 25 11:59:01 crc kubenswrapper[4706]: I1125 11:59:01.800887 4706 state_mem.go:107] "Deleted CPUSet assignment" podUID="21b25aa3-3ad7-4def-a817-6b7191924b4f" containerName="ceilometer-central-agent" Nov 25 11:59:01 crc kubenswrapper[4706]: I1125 11:59:01.801069 4706 memory_manager.go:354] "RemoveStaleState removing state" podUID="21b25aa3-3ad7-4def-a817-6b7191924b4f" containerName="ceilometer-notification-agent" Nov 25 11:59:01 crc kubenswrapper[4706]: I1125 11:59:01.801279 4706 memory_manager.go:354] "RemoveStaleState removing state" podUID="00683e5c-17fc-450f-b2b4-7366b2c45aa5" containerName="dnsmasq-dns" Nov 25 11:59:01 crc kubenswrapper[4706]: I1125 11:59:01.801290 4706 memory_manager.go:354] "RemoveStaleState removing state" podUID="21b25aa3-3ad7-4def-a817-6b7191924b4f" containerName="proxy-httpd" Nov 25 11:59:01 crc kubenswrapper[4706]: I1125 11:59:01.801313 4706 memory_manager.go:354] "RemoveStaleState removing state" podUID="21b25aa3-3ad7-4def-a817-6b7191924b4f" containerName="ceilometer-central-agent" Nov 25 11:59:01 crc kubenswrapper[4706]: I1125 11:59:01.801334 4706 memory_manager.go:354] "RemoveStaleState removing state" podUID="21b25aa3-3ad7-4def-a817-6b7191924b4f" containerName="sg-core" Nov 25 11:59:01 crc kubenswrapper[4706]: I1125 11:59:01.803340 4706 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 25 11:59:01 crc kubenswrapper[4706]: I1125 11:59:01.806538 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Nov 25 11:59:01 crc kubenswrapper[4706]: I1125 11:59:01.806851 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Nov 25 11:59:01 crc kubenswrapper[4706]: I1125 11:59:01.807021 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Nov 25 11:59:01 crc kubenswrapper[4706]: I1125 11:59:01.814249 4706 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 25 11:59:01 crc kubenswrapper[4706]: I1125 11:59:01.832389 4706 scope.go:117] "RemoveContainer" containerID="04b034f142918d3e1808fd476452126891202190a6b2b20c9b61ca96cfb6b9bb" Nov 25 11:59:01 crc kubenswrapper[4706]: I1125 11:59:01.862358 4706 scope.go:117] "RemoveContainer" containerID="383fd46b50aea3418784fba9b3fe85f49776b73aa519ae1e70b6b778dd71392b" Nov 25 11:59:01 crc kubenswrapper[4706]: E1125 11:59:01.862744 4706 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"383fd46b50aea3418784fba9b3fe85f49776b73aa519ae1e70b6b778dd71392b\": container with ID starting with 383fd46b50aea3418784fba9b3fe85f49776b73aa519ae1e70b6b778dd71392b not found: ID does not exist" containerID="383fd46b50aea3418784fba9b3fe85f49776b73aa519ae1e70b6b778dd71392b" Nov 25 11:59:01 crc kubenswrapper[4706]: I1125 11:59:01.862824 4706 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"383fd46b50aea3418784fba9b3fe85f49776b73aa519ae1e70b6b778dd71392b"} err="failed to get container status \"383fd46b50aea3418784fba9b3fe85f49776b73aa519ae1e70b6b778dd71392b\": rpc error: code = NotFound desc = could not find container \"383fd46b50aea3418784fba9b3fe85f49776b73aa519ae1e70b6b778dd71392b\": 
container with ID starting with 383fd46b50aea3418784fba9b3fe85f49776b73aa519ae1e70b6b778dd71392b not found: ID does not exist" Nov 25 11:59:01 crc kubenswrapper[4706]: I1125 11:59:01.862858 4706 scope.go:117] "RemoveContainer" containerID="71417bac8be6dd1bd83c75cdd5d64019cb1f2874f48b0783cd4ad2c14dca5dac" Nov 25 11:59:01 crc kubenswrapper[4706]: E1125 11:59:01.863286 4706 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"71417bac8be6dd1bd83c75cdd5d64019cb1f2874f48b0783cd4ad2c14dca5dac\": container with ID starting with 71417bac8be6dd1bd83c75cdd5d64019cb1f2874f48b0783cd4ad2c14dca5dac not found: ID does not exist" containerID="71417bac8be6dd1bd83c75cdd5d64019cb1f2874f48b0783cd4ad2c14dca5dac" Nov 25 11:59:01 crc kubenswrapper[4706]: I1125 11:59:01.863358 4706 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"71417bac8be6dd1bd83c75cdd5d64019cb1f2874f48b0783cd4ad2c14dca5dac"} err="failed to get container status \"71417bac8be6dd1bd83c75cdd5d64019cb1f2874f48b0783cd4ad2c14dca5dac\": rpc error: code = NotFound desc = could not find container \"71417bac8be6dd1bd83c75cdd5d64019cb1f2874f48b0783cd4ad2c14dca5dac\": container with ID starting with 71417bac8be6dd1bd83c75cdd5d64019cb1f2874f48b0783cd4ad2c14dca5dac not found: ID does not exist" Nov 25 11:59:01 crc kubenswrapper[4706]: I1125 11:59:01.863403 4706 scope.go:117] "RemoveContainer" containerID="8b44ec2b9af0bdc1b5316f4f241191e690b60ff3a3ed6c648d8266e4393c53e5" Nov 25 11:59:01 crc kubenswrapper[4706]: E1125 11:59:01.863771 4706 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8b44ec2b9af0bdc1b5316f4f241191e690b60ff3a3ed6c648d8266e4393c53e5\": container with ID starting with 8b44ec2b9af0bdc1b5316f4f241191e690b60ff3a3ed6c648d8266e4393c53e5 not found: ID does not exist" 
containerID="8b44ec2b9af0bdc1b5316f4f241191e690b60ff3a3ed6c648d8266e4393c53e5" Nov 25 11:59:01 crc kubenswrapper[4706]: I1125 11:59:01.863806 4706 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8b44ec2b9af0bdc1b5316f4f241191e690b60ff3a3ed6c648d8266e4393c53e5"} err="failed to get container status \"8b44ec2b9af0bdc1b5316f4f241191e690b60ff3a3ed6c648d8266e4393c53e5\": rpc error: code = NotFound desc = could not find container \"8b44ec2b9af0bdc1b5316f4f241191e690b60ff3a3ed6c648d8266e4393c53e5\": container with ID starting with 8b44ec2b9af0bdc1b5316f4f241191e690b60ff3a3ed6c648d8266e4393c53e5 not found: ID does not exist" Nov 25 11:59:01 crc kubenswrapper[4706]: I1125 11:59:01.863830 4706 scope.go:117] "RemoveContainer" containerID="04b034f142918d3e1808fd476452126891202190a6b2b20c9b61ca96cfb6b9bb" Nov 25 11:59:01 crc kubenswrapper[4706]: E1125 11:59:01.864129 4706 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"04b034f142918d3e1808fd476452126891202190a6b2b20c9b61ca96cfb6b9bb\": container with ID starting with 04b034f142918d3e1808fd476452126891202190a6b2b20c9b61ca96cfb6b9bb not found: ID does not exist" containerID="04b034f142918d3e1808fd476452126891202190a6b2b20c9b61ca96cfb6b9bb" Nov 25 11:59:01 crc kubenswrapper[4706]: I1125 11:59:01.864160 4706 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"04b034f142918d3e1808fd476452126891202190a6b2b20c9b61ca96cfb6b9bb"} err="failed to get container status \"04b034f142918d3e1808fd476452126891202190a6b2b20c9b61ca96cfb6b9bb\": rpc error: code = NotFound desc = could not find container \"04b034f142918d3e1808fd476452126891202190a6b2b20c9b61ca96cfb6b9bb\": container with ID starting with 04b034f142918d3e1808fd476452126891202190a6b2b20c9b61ca96cfb6b9bb not found: ID does not exist" Nov 25 11:59:01 crc kubenswrapper[4706]: I1125 11:59:01.934813 4706 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="21b25aa3-3ad7-4def-a817-6b7191924b4f" path="/var/lib/kubelet/pods/21b25aa3-3ad7-4def-a817-6b7191924b4f/volumes" Nov 25 11:59:01 crc kubenswrapper[4706]: I1125 11:59:01.998978 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/340a9043-f74e-40cb-aeea-bbcabe4d865f-scripts\") pod \"ceilometer-0\" (UID: \"340a9043-f74e-40cb-aeea-bbcabe4d865f\") " pod="openstack/ceilometer-0" Nov 25 11:59:01 crc kubenswrapper[4706]: I1125 11:59:01.999054 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/340a9043-f74e-40cb-aeea-bbcabe4d865f-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"340a9043-f74e-40cb-aeea-bbcabe4d865f\") " pod="openstack/ceilometer-0" Nov 25 11:59:01 crc kubenswrapper[4706]: I1125 11:59:01.999095 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/340a9043-f74e-40cb-aeea-bbcabe4d865f-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"340a9043-f74e-40cb-aeea-bbcabe4d865f\") " pod="openstack/ceilometer-0" Nov 25 11:59:01 crc kubenswrapper[4706]: I1125 11:59:01.999123 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vfgfl\" (UniqueName: \"kubernetes.io/projected/340a9043-f74e-40cb-aeea-bbcabe4d865f-kube-api-access-vfgfl\") pod \"ceilometer-0\" (UID: \"340a9043-f74e-40cb-aeea-bbcabe4d865f\") " pod="openstack/ceilometer-0" Nov 25 11:59:01 crc kubenswrapper[4706]: I1125 11:59:01.999141 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/340a9043-f74e-40cb-aeea-bbcabe4d865f-config-data\") pod \"ceilometer-0\" (UID: 
\"340a9043-f74e-40cb-aeea-bbcabe4d865f\") " pod="openstack/ceilometer-0" Nov 25 11:59:01 crc kubenswrapper[4706]: I1125 11:59:01.999163 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/340a9043-f74e-40cb-aeea-bbcabe4d865f-run-httpd\") pod \"ceilometer-0\" (UID: \"340a9043-f74e-40cb-aeea-bbcabe4d865f\") " pod="openstack/ceilometer-0" Nov 25 11:59:01 crc kubenswrapper[4706]: I1125 11:59:01.999269 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/340a9043-f74e-40cb-aeea-bbcabe4d865f-log-httpd\") pod \"ceilometer-0\" (UID: \"340a9043-f74e-40cb-aeea-bbcabe4d865f\") " pod="openstack/ceilometer-0" Nov 25 11:59:01 crc kubenswrapper[4706]: I1125 11:59:01.999328 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/340a9043-f74e-40cb-aeea-bbcabe4d865f-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"340a9043-f74e-40cb-aeea-bbcabe4d865f\") " pod="openstack/ceilometer-0" Nov 25 11:59:02 crc kubenswrapper[4706]: I1125 11:59:02.101274 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/340a9043-f74e-40cb-aeea-bbcabe4d865f-log-httpd\") pod \"ceilometer-0\" (UID: \"340a9043-f74e-40cb-aeea-bbcabe4d865f\") " pod="openstack/ceilometer-0" Nov 25 11:59:02 crc kubenswrapper[4706]: I1125 11:59:02.101388 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/340a9043-f74e-40cb-aeea-bbcabe4d865f-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"340a9043-f74e-40cb-aeea-bbcabe4d865f\") " pod="openstack/ceilometer-0" Nov 25 11:59:02 crc kubenswrapper[4706]: I1125 11:59:02.101442 4706 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/340a9043-f74e-40cb-aeea-bbcabe4d865f-scripts\") pod \"ceilometer-0\" (UID: \"340a9043-f74e-40cb-aeea-bbcabe4d865f\") " pod="openstack/ceilometer-0" Nov 25 11:59:02 crc kubenswrapper[4706]: I1125 11:59:02.101490 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/340a9043-f74e-40cb-aeea-bbcabe4d865f-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"340a9043-f74e-40cb-aeea-bbcabe4d865f\") " pod="openstack/ceilometer-0" Nov 25 11:59:02 crc kubenswrapper[4706]: I1125 11:59:02.101548 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/340a9043-f74e-40cb-aeea-bbcabe4d865f-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"340a9043-f74e-40cb-aeea-bbcabe4d865f\") " pod="openstack/ceilometer-0" Nov 25 11:59:02 crc kubenswrapper[4706]: I1125 11:59:02.101581 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vfgfl\" (UniqueName: \"kubernetes.io/projected/340a9043-f74e-40cb-aeea-bbcabe4d865f-kube-api-access-vfgfl\") pod \"ceilometer-0\" (UID: \"340a9043-f74e-40cb-aeea-bbcabe4d865f\") " pod="openstack/ceilometer-0" Nov 25 11:59:02 crc kubenswrapper[4706]: I1125 11:59:02.101611 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/340a9043-f74e-40cb-aeea-bbcabe4d865f-config-data\") pod \"ceilometer-0\" (UID: \"340a9043-f74e-40cb-aeea-bbcabe4d865f\") " pod="openstack/ceilometer-0" Nov 25 11:59:02 crc kubenswrapper[4706]: I1125 11:59:02.101633 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/340a9043-f74e-40cb-aeea-bbcabe4d865f-run-httpd\") pod \"ceilometer-0\" (UID: 
\"340a9043-f74e-40cb-aeea-bbcabe4d865f\") " pod="openstack/ceilometer-0" Nov 25 11:59:02 crc kubenswrapper[4706]: I1125 11:59:02.101720 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/340a9043-f74e-40cb-aeea-bbcabe4d865f-log-httpd\") pod \"ceilometer-0\" (UID: \"340a9043-f74e-40cb-aeea-bbcabe4d865f\") " pod="openstack/ceilometer-0" Nov 25 11:59:02 crc kubenswrapper[4706]: I1125 11:59:02.102756 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/340a9043-f74e-40cb-aeea-bbcabe4d865f-run-httpd\") pod \"ceilometer-0\" (UID: \"340a9043-f74e-40cb-aeea-bbcabe4d865f\") " pod="openstack/ceilometer-0" Nov 25 11:59:02 crc kubenswrapper[4706]: I1125 11:59:02.106015 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/340a9043-f74e-40cb-aeea-bbcabe4d865f-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"340a9043-f74e-40cb-aeea-bbcabe4d865f\") " pod="openstack/ceilometer-0" Nov 25 11:59:02 crc kubenswrapper[4706]: I1125 11:59:02.106275 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/340a9043-f74e-40cb-aeea-bbcabe4d865f-scripts\") pod \"ceilometer-0\" (UID: \"340a9043-f74e-40cb-aeea-bbcabe4d865f\") " pod="openstack/ceilometer-0" Nov 25 11:59:02 crc kubenswrapper[4706]: I1125 11:59:02.106610 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/340a9043-f74e-40cb-aeea-bbcabe4d865f-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"340a9043-f74e-40cb-aeea-bbcabe4d865f\") " pod="openstack/ceilometer-0" Nov 25 11:59:02 crc kubenswrapper[4706]: I1125 11:59:02.107418 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: 
\"kubernetes.io/secret/340a9043-f74e-40cb-aeea-bbcabe4d865f-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"340a9043-f74e-40cb-aeea-bbcabe4d865f\") " pod="openstack/ceilometer-0" Nov 25 11:59:02 crc kubenswrapper[4706]: I1125 11:59:02.108328 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/340a9043-f74e-40cb-aeea-bbcabe4d865f-config-data\") pod \"ceilometer-0\" (UID: \"340a9043-f74e-40cb-aeea-bbcabe4d865f\") " pod="openstack/ceilometer-0" Nov 25 11:59:02 crc kubenswrapper[4706]: I1125 11:59:02.116881 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vfgfl\" (UniqueName: \"kubernetes.io/projected/340a9043-f74e-40cb-aeea-bbcabe4d865f-kube-api-access-vfgfl\") pod \"ceilometer-0\" (UID: \"340a9043-f74e-40cb-aeea-bbcabe4d865f\") " pod="openstack/ceilometer-0" Nov 25 11:59:02 crc kubenswrapper[4706]: I1125 11:59:02.123996 4706 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 25 11:59:02 crc kubenswrapper[4706]: I1125 11:59:02.581585 4706 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 25 11:59:02 crc kubenswrapper[4706]: I1125 11:59:02.742867 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"340a9043-f74e-40cb-aeea-bbcabe4d865f","Type":"ContainerStarted","Data":"c972a2b1d8e4cca3c3806abe0adcbadc82268e0f8d411764ae56c9877b07da47"} Nov 25 11:59:03 crc kubenswrapper[4706]: I1125 11:59:03.753482 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"340a9043-f74e-40cb-aeea-bbcabe4d865f","Type":"ContainerStarted","Data":"6e08e81898da6a7930df1ce338a0e9c3701cd37740dc3907bd8efbbbc380c50c"} Nov 25 11:59:04 crc kubenswrapper[4706]: I1125 11:59:04.768515 4706 generic.go:334] "Generic (PLEG): container finished" podID="0c8ef478-be1a-4f0b-a052-aa2a2ad96cf0" 
containerID="a341f1a73ca72b1d393cb86f7600862f027f84cc6c5a74fcd9888210c58daa4e" exitCode=0 Nov 25 11:59:04 crc kubenswrapper[4706]: I1125 11:59:04.768899 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-8vfzt" event={"ID":"0c8ef478-be1a-4f0b-a052-aa2a2ad96cf0","Type":"ContainerDied","Data":"a341f1a73ca72b1d393cb86f7600862f027f84cc6c5a74fcd9888210c58daa4e"} Nov 25 11:59:04 crc kubenswrapper[4706]: I1125 11:59:04.774113 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"340a9043-f74e-40cb-aeea-bbcabe4d865f","Type":"ContainerStarted","Data":"7babfcf4bb025d1018392577d8a61a4b0d76b428aee9c12727994cf7e8bd7a03"} Nov 25 11:59:05 crc kubenswrapper[4706]: I1125 11:59:05.346512 4706 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Nov 25 11:59:05 crc kubenswrapper[4706]: I1125 11:59:05.346828 4706 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Nov 25 11:59:05 crc kubenswrapper[4706]: I1125 11:59:05.786973 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"340a9043-f74e-40cb-aeea-bbcabe4d865f","Type":"ContainerStarted","Data":"a00974f6bfdae740ae7d9d27a8475cf185454c016f398a1d7d807d5abcfe20a2"} Nov 25 11:59:06 crc kubenswrapper[4706]: I1125 11:59:06.263384 4706 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-8vfzt" Nov 25 11:59:06 crc kubenswrapper[4706]: I1125 11:59:06.313017 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0c8ef478-be1a-4f0b-a052-aa2a2ad96cf0-combined-ca-bundle\") pod \"0c8ef478-be1a-4f0b-a052-aa2a2ad96cf0\" (UID: \"0c8ef478-be1a-4f0b-a052-aa2a2ad96cf0\") " Nov 25 11:59:06 crc kubenswrapper[4706]: I1125 11:59:06.313320 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0c8ef478-be1a-4f0b-a052-aa2a2ad96cf0-scripts\") pod \"0c8ef478-be1a-4f0b-a052-aa2a2ad96cf0\" (UID: \"0c8ef478-be1a-4f0b-a052-aa2a2ad96cf0\") " Nov 25 11:59:06 crc kubenswrapper[4706]: I1125 11:59:06.313373 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0c8ef478-be1a-4f0b-a052-aa2a2ad96cf0-config-data\") pod \"0c8ef478-be1a-4f0b-a052-aa2a2ad96cf0\" (UID: \"0c8ef478-be1a-4f0b-a052-aa2a2ad96cf0\") " Nov 25 11:59:06 crc kubenswrapper[4706]: I1125 11:59:06.313403 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fkl9s\" (UniqueName: \"kubernetes.io/projected/0c8ef478-be1a-4f0b-a052-aa2a2ad96cf0-kube-api-access-fkl9s\") pod \"0c8ef478-be1a-4f0b-a052-aa2a2ad96cf0\" (UID: \"0c8ef478-be1a-4f0b-a052-aa2a2ad96cf0\") " Nov 25 11:59:06 crc kubenswrapper[4706]: I1125 11:59:06.327620 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0c8ef478-be1a-4f0b-a052-aa2a2ad96cf0-kube-api-access-fkl9s" (OuterVolumeSpecName: "kube-api-access-fkl9s") pod "0c8ef478-be1a-4f0b-a052-aa2a2ad96cf0" (UID: "0c8ef478-be1a-4f0b-a052-aa2a2ad96cf0"). InnerVolumeSpecName "kube-api-access-fkl9s". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 11:59:06 crc kubenswrapper[4706]: I1125 11:59:06.338525 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0c8ef478-be1a-4f0b-a052-aa2a2ad96cf0-scripts" (OuterVolumeSpecName: "scripts") pod "0c8ef478-be1a-4f0b-a052-aa2a2ad96cf0" (UID: "0c8ef478-be1a-4f0b-a052-aa2a2ad96cf0"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 11:59:06 crc kubenswrapper[4706]: I1125 11:59:06.350858 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0c8ef478-be1a-4f0b-a052-aa2a2ad96cf0-config-data" (OuterVolumeSpecName: "config-data") pod "0c8ef478-be1a-4f0b-a052-aa2a2ad96cf0" (UID: "0c8ef478-be1a-4f0b-a052-aa2a2ad96cf0"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 11:59:06 crc kubenswrapper[4706]: I1125 11:59:06.371086 4706 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="25fe01f6-353a-43f9-a857-cd776a10c417" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.0.201:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Nov 25 11:59:06 crc kubenswrapper[4706]: I1125 11:59:06.371462 4706 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="25fe01f6-353a-43f9-a857-cd776a10c417" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.0.201:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Nov 25 11:59:06 crc kubenswrapper[4706]: I1125 11:59:06.375654 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0c8ef478-be1a-4f0b-a052-aa2a2ad96cf0-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0c8ef478-be1a-4f0b-a052-aa2a2ad96cf0" (UID: "0c8ef478-be1a-4f0b-a052-aa2a2ad96cf0"). 
InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 11:59:06 crc kubenswrapper[4706]: I1125 11:59:06.425544 4706 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0c8ef478-be1a-4f0b-a052-aa2a2ad96cf0-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 11:59:06 crc kubenswrapper[4706]: I1125 11:59:06.425587 4706 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0c8ef478-be1a-4f0b-a052-aa2a2ad96cf0-scripts\") on node \"crc\" DevicePath \"\"" Nov 25 11:59:06 crc kubenswrapper[4706]: I1125 11:59:06.425601 4706 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0c8ef478-be1a-4f0b-a052-aa2a2ad96cf0-config-data\") on node \"crc\" DevicePath \"\"" Nov 25 11:59:06 crc kubenswrapper[4706]: I1125 11:59:06.425614 4706 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fkl9s\" (UniqueName: \"kubernetes.io/projected/0c8ef478-be1a-4f0b-a052-aa2a2ad96cf0-kube-api-access-fkl9s\") on node \"crc\" DevicePath \"\"" Nov 25 11:59:06 crc kubenswrapper[4706]: I1125 11:59:06.804919 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-8vfzt" event={"ID":"0c8ef478-be1a-4f0b-a052-aa2a2ad96cf0","Type":"ContainerDied","Data":"5c91a476a9ddcb0cf4b5857dc88ea202d3c072140ed3d3b30bdfb7a7b12b1606"} Nov 25 11:59:06 crc kubenswrapper[4706]: I1125 11:59:06.805232 4706 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5c91a476a9ddcb0cf4b5857dc88ea202d3c072140ed3d3b30bdfb7a7b12b1606" Nov 25 11:59:06 crc kubenswrapper[4706]: I1125 11:59:06.805090 4706 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-8vfzt" Nov 25 11:59:06 crc kubenswrapper[4706]: I1125 11:59:06.973618 4706 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Nov 25 11:59:06 crc kubenswrapper[4706]: I1125 11:59:06.973867 4706 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="36357458-7aac-49fa-a118-5208a484df3d" containerName="nova-scheduler-scheduler" containerID="cri-o://31478ca1a61cba5f2518fb62a72364d9502dd4ae830a575e2b25aee1cd2d8a43" gracePeriod=30 Nov 25 11:59:06 crc kubenswrapper[4706]: I1125 11:59:06.985340 4706 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Nov 25 11:59:06 crc kubenswrapper[4706]: I1125 11:59:06.985630 4706 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="25fe01f6-353a-43f9-a857-cd776a10c417" containerName="nova-api-log" containerID="cri-o://2f28afb8863e4a5e84b6db9225949afa29f5ac5df130ac31c7d8dbbd16ed47c9" gracePeriod=30 Nov 25 11:59:06 crc kubenswrapper[4706]: I1125 11:59:06.985782 4706 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="25fe01f6-353a-43f9-a857-cd776a10c417" containerName="nova-api-api" containerID="cri-o://fdc4b6fd5d469f0949eb27b2e0ade41d05df6ef1ff13a5a4b1f5d19e96217f51" gracePeriod=30 Nov 25 11:59:07 crc kubenswrapper[4706]: I1125 11:59:07.016631 4706 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Nov 25 11:59:07 crc kubenswrapper[4706]: I1125 11:59:07.017127 4706 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="ab5ba648-4cd1-4304-9470-e10ea703d56d" containerName="nova-metadata-log" containerID="cri-o://981d8cccc856fff1da7933bb683dbbe98131d72f363f703346716b8cc851fab0" gracePeriod=30 Nov 25 11:59:07 crc kubenswrapper[4706]: I1125 11:59:07.017764 4706 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="ab5ba648-4cd1-4304-9470-e10ea703d56d" containerName="nova-metadata-metadata" containerID="cri-o://2d9dbeb66fdecd423ec896129d3be8705b4645c81b637763068ba0d500828586" gracePeriod=30 Nov 25 11:59:07 crc kubenswrapper[4706]: E1125 11:59:07.025485 4706 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="31478ca1a61cba5f2518fb62a72364d9502dd4ae830a575e2b25aee1cd2d8a43" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Nov 25 11:59:07 crc kubenswrapper[4706]: E1125 11:59:07.026449 4706 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="31478ca1a61cba5f2518fb62a72364d9502dd4ae830a575e2b25aee1cd2d8a43" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Nov 25 11:59:07 crc kubenswrapper[4706]: E1125 11:59:07.039168 4706 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="31478ca1a61cba5f2518fb62a72364d9502dd4ae830a575e2b25aee1cd2d8a43" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Nov 25 11:59:07 crc kubenswrapper[4706]: E1125 11:59:07.039465 4706 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="36357458-7aac-49fa-a118-5208a484df3d" containerName="nova-scheduler-scheduler" Nov 25 11:59:07 crc kubenswrapper[4706]: I1125 11:59:07.816978 4706 generic.go:334] "Generic (PLEG): container finished" 
podID="ab5ba648-4cd1-4304-9470-e10ea703d56d" containerID="981d8cccc856fff1da7933bb683dbbe98131d72f363f703346716b8cc851fab0" exitCode=143 Nov 25 11:59:07 crc kubenswrapper[4706]: I1125 11:59:07.817061 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"ab5ba648-4cd1-4304-9470-e10ea703d56d","Type":"ContainerDied","Data":"981d8cccc856fff1da7933bb683dbbe98131d72f363f703346716b8cc851fab0"} Nov 25 11:59:07 crc kubenswrapper[4706]: I1125 11:59:07.820500 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"340a9043-f74e-40cb-aeea-bbcabe4d865f","Type":"ContainerStarted","Data":"33dbfd8d1543c7fcbd20e98cea2dd30096c92f51173eb9670aa61cdf94ccffac"} Nov 25 11:59:07 crc kubenswrapper[4706]: I1125 11:59:07.820897 4706 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Nov 25 11:59:07 crc kubenswrapper[4706]: I1125 11:59:07.839522 4706 generic.go:334] "Generic (PLEG): container finished" podID="25fe01f6-353a-43f9-a857-cd776a10c417" containerID="2f28afb8863e4a5e84b6db9225949afa29f5ac5df130ac31c7d8dbbd16ed47c9" exitCode=143 Nov 25 11:59:07 crc kubenswrapper[4706]: I1125 11:59:07.839690 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"25fe01f6-353a-43f9-a857-cd776a10c417","Type":"ContainerDied","Data":"2f28afb8863e4a5e84b6db9225949afa29f5ac5df130ac31c7d8dbbd16ed47c9"} Nov 25 11:59:07 crc kubenswrapper[4706]: I1125 11:59:07.846374 4706 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.853692664 podStartE2EDuration="6.846358269s" podCreationTimestamp="2025-11-25 11:59:01 +0000 UTC" firstStartedPulling="2025-11-25 11:59:02.586684209 +0000 UTC m=+1351.501241590" lastFinishedPulling="2025-11-25 11:59:06.579349814 +0000 UTC m=+1355.493907195" observedRunningTime="2025-11-25 11:59:07.844102582 +0000 UTC m=+1356.758659963" 
watchObservedRunningTime="2025-11-25 11:59:07.846358269 +0000 UTC m=+1356.760915650" Nov 25 11:59:10 crc kubenswrapper[4706]: I1125 11:59:10.737101 4706 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Nov 25 11:59:10 crc kubenswrapper[4706]: I1125 11:59:10.813327 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ab5ba648-4cd1-4304-9470-e10ea703d56d-combined-ca-bundle\") pod \"ab5ba648-4cd1-4304-9470-e10ea703d56d\" (UID: \"ab5ba648-4cd1-4304-9470-e10ea703d56d\") " Nov 25 11:59:10 crc kubenswrapper[4706]: I1125 11:59:10.813401 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-82jws\" (UniqueName: \"kubernetes.io/projected/ab5ba648-4cd1-4304-9470-e10ea703d56d-kube-api-access-82jws\") pod \"ab5ba648-4cd1-4304-9470-e10ea703d56d\" (UID: \"ab5ba648-4cd1-4304-9470-e10ea703d56d\") " Nov 25 11:59:10 crc kubenswrapper[4706]: I1125 11:59:10.813436 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ab5ba648-4cd1-4304-9470-e10ea703d56d-logs\") pod \"ab5ba648-4cd1-4304-9470-e10ea703d56d\" (UID: \"ab5ba648-4cd1-4304-9470-e10ea703d56d\") " Nov 25 11:59:10 crc kubenswrapper[4706]: I1125 11:59:10.813589 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ab5ba648-4cd1-4304-9470-e10ea703d56d-config-data\") pod \"ab5ba648-4cd1-4304-9470-e10ea703d56d\" (UID: \"ab5ba648-4cd1-4304-9470-e10ea703d56d\") " Nov 25 11:59:10 crc kubenswrapper[4706]: I1125 11:59:10.813730 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/ab5ba648-4cd1-4304-9470-e10ea703d56d-nova-metadata-tls-certs\") pod \"ab5ba648-4cd1-4304-9470-e10ea703d56d\" (UID: 
\"ab5ba648-4cd1-4304-9470-e10ea703d56d\") " Nov 25 11:59:10 crc kubenswrapper[4706]: I1125 11:59:10.814632 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ab5ba648-4cd1-4304-9470-e10ea703d56d-logs" (OuterVolumeSpecName: "logs") pod "ab5ba648-4cd1-4304-9470-e10ea703d56d" (UID: "ab5ba648-4cd1-4304-9470-e10ea703d56d"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 11:59:10 crc kubenswrapper[4706]: I1125 11:59:10.818980 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ab5ba648-4cd1-4304-9470-e10ea703d56d-kube-api-access-82jws" (OuterVolumeSpecName: "kube-api-access-82jws") pod "ab5ba648-4cd1-4304-9470-e10ea703d56d" (UID: "ab5ba648-4cd1-4304-9470-e10ea703d56d"). InnerVolumeSpecName "kube-api-access-82jws". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 11:59:10 crc kubenswrapper[4706]: I1125 11:59:10.877828 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ab5ba648-4cd1-4304-9470-e10ea703d56d-config-data" (OuterVolumeSpecName: "config-data") pod "ab5ba648-4cd1-4304-9470-e10ea703d56d" (UID: "ab5ba648-4cd1-4304-9470-e10ea703d56d"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 11:59:10 crc kubenswrapper[4706]: I1125 11:59:10.880140 4706 generic.go:334] "Generic (PLEG): container finished" podID="ab5ba648-4cd1-4304-9470-e10ea703d56d" containerID="2d9dbeb66fdecd423ec896129d3be8705b4645c81b637763068ba0d500828586" exitCode=0 Nov 25 11:59:10 crc kubenswrapper[4706]: I1125 11:59:10.880187 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"ab5ba648-4cd1-4304-9470-e10ea703d56d","Type":"ContainerDied","Data":"2d9dbeb66fdecd423ec896129d3be8705b4645c81b637763068ba0d500828586"} Nov 25 11:59:10 crc kubenswrapper[4706]: I1125 11:59:10.880213 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"ab5ba648-4cd1-4304-9470-e10ea703d56d","Type":"ContainerDied","Data":"1df6673ccc1706ccff1218e8430a218c9e04c69d5c815562f157b9e2d8d10f33"} Nov 25 11:59:10 crc kubenswrapper[4706]: I1125 11:59:10.880232 4706 scope.go:117] "RemoveContainer" containerID="2d9dbeb66fdecd423ec896129d3be8705b4645c81b637763068ba0d500828586" Nov 25 11:59:10 crc kubenswrapper[4706]: I1125 11:59:10.880389 4706 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Nov 25 11:59:10 crc kubenswrapper[4706]: I1125 11:59:10.899691 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ab5ba648-4cd1-4304-9470-e10ea703d56d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ab5ba648-4cd1-4304-9470-e10ea703d56d" (UID: "ab5ba648-4cd1-4304-9470-e10ea703d56d"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 11:59:10 crc kubenswrapper[4706]: I1125 11:59:10.900472 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ab5ba648-4cd1-4304-9470-e10ea703d56d-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "ab5ba648-4cd1-4304-9470-e10ea703d56d" (UID: "ab5ba648-4cd1-4304-9470-e10ea703d56d"). InnerVolumeSpecName "nova-metadata-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 11:59:10 crc kubenswrapper[4706]: I1125 11:59:10.915645 4706 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/ab5ba648-4cd1-4304-9470-e10ea703d56d-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 25 11:59:10 crc kubenswrapper[4706]: I1125 11:59:10.915670 4706 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ab5ba648-4cd1-4304-9470-e10ea703d56d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 11:59:10 crc kubenswrapper[4706]: I1125 11:59:10.915679 4706 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-82jws\" (UniqueName: \"kubernetes.io/projected/ab5ba648-4cd1-4304-9470-e10ea703d56d-kube-api-access-82jws\") on node \"crc\" DevicePath \"\"" Nov 25 11:59:10 crc kubenswrapper[4706]: I1125 11:59:10.915689 4706 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ab5ba648-4cd1-4304-9470-e10ea703d56d-logs\") on node \"crc\" DevicePath \"\"" Nov 25 11:59:10 crc kubenswrapper[4706]: I1125 11:59:10.915700 4706 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ab5ba648-4cd1-4304-9470-e10ea703d56d-config-data\") on node \"crc\" DevicePath \"\"" Nov 25 11:59:10 crc kubenswrapper[4706]: I1125 11:59:10.921548 4706 scope.go:117] "RemoveContainer" 
containerID="981d8cccc856fff1da7933bb683dbbe98131d72f363f703346716b8cc851fab0" Nov 25 11:59:10 crc kubenswrapper[4706]: I1125 11:59:10.941289 4706 scope.go:117] "RemoveContainer" containerID="2d9dbeb66fdecd423ec896129d3be8705b4645c81b637763068ba0d500828586" Nov 25 11:59:10 crc kubenswrapper[4706]: E1125 11:59:10.942100 4706 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2d9dbeb66fdecd423ec896129d3be8705b4645c81b637763068ba0d500828586\": container with ID starting with 2d9dbeb66fdecd423ec896129d3be8705b4645c81b637763068ba0d500828586 not found: ID does not exist" containerID="2d9dbeb66fdecd423ec896129d3be8705b4645c81b637763068ba0d500828586" Nov 25 11:59:10 crc kubenswrapper[4706]: I1125 11:59:10.942778 4706 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2d9dbeb66fdecd423ec896129d3be8705b4645c81b637763068ba0d500828586"} err="failed to get container status \"2d9dbeb66fdecd423ec896129d3be8705b4645c81b637763068ba0d500828586\": rpc error: code = NotFound desc = could not find container \"2d9dbeb66fdecd423ec896129d3be8705b4645c81b637763068ba0d500828586\": container with ID starting with 2d9dbeb66fdecd423ec896129d3be8705b4645c81b637763068ba0d500828586 not found: ID does not exist" Nov 25 11:59:10 crc kubenswrapper[4706]: I1125 11:59:10.942867 4706 scope.go:117] "RemoveContainer" containerID="981d8cccc856fff1da7933bb683dbbe98131d72f363f703346716b8cc851fab0" Nov 25 11:59:10 crc kubenswrapper[4706]: E1125 11:59:10.943290 4706 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"981d8cccc856fff1da7933bb683dbbe98131d72f363f703346716b8cc851fab0\": container with ID starting with 981d8cccc856fff1da7933bb683dbbe98131d72f363f703346716b8cc851fab0 not found: ID does not exist" containerID="981d8cccc856fff1da7933bb683dbbe98131d72f363f703346716b8cc851fab0" Nov 25 11:59:10 crc 
kubenswrapper[4706]: I1125 11:59:10.943382 4706 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"981d8cccc856fff1da7933bb683dbbe98131d72f363f703346716b8cc851fab0"} err="failed to get container status \"981d8cccc856fff1da7933bb683dbbe98131d72f363f703346716b8cc851fab0\": rpc error: code = NotFound desc = could not find container \"981d8cccc856fff1da7933bb683dbbe98131d72f363f703346716b8cc851fab0\": container with ID starting with 981d8cccc856fff1da7933bb683dbbe98131d72f363f703346716b8cc851fab0 not found: ID does not exist" Nov 25 11:59:11 crc kubenswrapper[4706]: I1125 11:59:11.243348 4706 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Nov 25 11:59:11 crc kubenswrapper[4706]: I1125 11:59:11.272992 4706 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Nov 25 11:59:11 crc kubenswrapper[4706]: I1125 11:59:11.286421 4706 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Nov 25 11:59:11 crc kubenswrapper[4706]: E1125 11:59:11.286973 4706 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0c8ef478-be1a-4f0b-a052-aa2a2ad96cf0" containerName="nova-manage" Nov 25 11:59:11 crc kubenswrapper[4706]: I1125 11:59:11.287003 4706 state_mem.go:107] "Deleted CPUSet assignment" podUID="0c8ef478-be1a-4f0b-a052-aa2a2ad96cf0" containerName="nova-manage" Nov 25 11:59:11 crc kubenswrapper[4706]: E1125 11:59:11.287032 4706 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ab5ba648-4cd1-4304-9470-e10ea703d56d" containerName="nova-metadata-metadata" Nov 25 11:59:11 crc kubenswrapper[4706]: I1125 11:59:11.287040 4706 state_mem.go:107] "Deleted CPUSet assignment" podUID="ab5ba648-4cd1-4304-9470-e10ea703d56d" containerName="nova-metadata-metadata" Nov 25 11:59:11 crc kubenswrapper[4706]: E1125 11:59:11.287128 4706 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ab5ba648-4cd1-4304-9470-e10ea703d56d" 
containerName="nova-metadata-log" Nov 25 11:59:11 crc kubenswrapper[4706]: I1125 11:59:11.287141 4706 state_mem.go:107] "Deleted CPUSet assignment" podUID="ab5ba648-4cd1-4304-9470-e10ea703d56d" containerName="nova-metadata-log" Nov 25 11:59:11 crc kubenswrapper[4706]: I1125 11:59:11.287696 4706 memory_manager.go:354] "RemoveStaleState removing state" podUID="ab5ba648-4cd1-4304-9470-e10ea703d56d" containerName="nova-metadata-metadata" Nov 25 11:59:11 crc kubenswrapper[4706]: I1125 11:59:11.287725 4706 memory_manager.go:354] "RemoveStaleState removing state" podUID="0c8ef478-be1a-4f0b-a052-aa2a2ad96cf0" containerName="nova-manage" Nov 25 11:59:11 crc kubenswrapper[4706]: I1125 11:59:11.287745 4706 memory_manager.go:354] "RemoveStaleState removing state" podUID="ab5ba648-4cd1-4304-9470-e10ea703d56d" containerName="nova-metadata-log" Nov 25 11:59:11 crc kubenswrapper[4706]: I1125 11:59:11.289396 4706 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Nov 25 11:59:11 crc kubenswrapper[4706]: I1125 11:59:11.294357 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Nov 25 11:59:11 crc kubenswrapper[4706]: I1125 11:59:11.294576 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Nov 25 11:59:11 crc kubenswrapper[4706]: I1125 11:59:11.301804 4706 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Nov 25 11:59:11 crc kubenswrapper[4706]: I1125 11:59:11.344702 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4169a8fb-29dd-4d0a-851f-58055dcfff18-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"4169a8fb-29dd-4d0a-851f-58055dcfff18\") " pod="openstack/nova-metadata-0" Nov 25 11:59:11 crc kubenswrapper[4706]: I1125 11:59:11.344762 4706 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4169a8fb-29dd-4d0a-851f-58055dcfff18-logs\") pod \"nova-metadata-0\" (UID: \"4169a8fb-29dd-4d0a-851f-58055dcfff18\") " pod="openstack/nova-metadata-0" Nov 25 11:59:11 crc kubenswrapper[4706]: I1125 11:59:11.344951 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/4169a8fb-29dd-4d0a-851f-58055dcfff18-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"4169a8fb-29dd-4d0a-851f-58055dcfff18\") " pod="openstack/nova-metadata-0" Nov 25 11:59:11 crc kubenswrapper[4706]: I1125 11:59:11.345006 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4169a8fb-29dd-4d0a-851f-58055dcfff18-config-data\") pod \"nova-metadata-0\" (UID: \"4169a8fb-29dd-4d0a-851f-58055dcfff18\") " pod="openstack/nova-metadata-0" Nov 25 11:59:11 crc kubenswrapper[4706]: I1125 11:59:11.345106 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m9qbf\" (UniqueName: \"kubernetes.io/projected/4169a8fb-29dd-4d0a-851f-58055dcfff18-kube-api-access-m9qbf\") pod \"nova-metadata-0\" (UID: \"4169a8fb-29dd-4d0a-851f-58055dcfff18\") " pod="openstack/nova-metadata-0" Nov 25 11:59:11 crc kubenswrapper[4706]: I1125 11:59:11.426095 4706 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Nov 25 11:59:11 crc kubenswrapper[4706]: I1125 11:59:11.447384 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/4169a8fb-29dd-4d0a-851f-58055dcfff18-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"4169a8fb-29dd-4d0a-851f-58055dcfff18\") " pod="openstack/nova-metadata-0" Nov 25 11:59:11 crc kubenswrapper[4706]: I1125 11:59:11.447447 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4169a8fb-29dd-4d0a-851f-58055dcfff18-config-data\") pod \"nova-metadata-0\" (UID: \"4169a8fb-29dd-4d0a-851f-58055dcfff18\") " pod="openstack/nova-metadata-0" Nov 25 11:59:11 crc kubenswrapper[4706]: I1125 11:59:11.447502 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m9qbf\" (UniqueName: \"kubernetes.io/projected/4169a8fb-29dd-4d0a-851f-58055dcfff18-kube-api-access-m9qbf\") pod \"nova-metadata-0\" (UID: \"4169a8fb-29dd-4d0a-851f-58055dcfff18\") " pod="openstack/nova-metadata-0" Nov 25 11:59:11 crc kubenswrapper[4706]: I1125 11:59:11.447627 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4169a8fb-29dd-4d0a-851f-58055dcfff18-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"4169a8fb-29dd-4d0a-851f-58055dcfff18\") " pod="openstack/nova-metadata-0" Nov 25 11:59:11 crc kubenswrapper[4706]: I1125 11:59:11.447668 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4169a8fb-29dd-4d0a-851f-58055dcfff18-logs\") pod \"nova-metadata-0\" (UID: \"4169a8fb-29dd-4d0a-851f-58055dcfff18\") " pod="openstack/nova-metadata-0" Nov 25 11:59:11 crc kubenswrapper[4706]: I1125 11:59:11.448224 4706 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4169a8fb-29dd-4d0a-851f-58055dcfff18-logs\") pod \"nova-metadata-0\" (UID: \"4169a8fb-29dd-4d0a-851f-58055dcfff18\") " pod="openstack/nova-metadata-0" Nov 25 11:59:11 crc kubenswrapper[4706]: I1125 11:59:11.465394 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4169a8fb-29dd-4d0a-851f-58055dcfff18-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"4169a8fb-29dd-4d0a-851f-58055dcfff18\") " pod="openstack/nova-metadata-0" Nov 25 11:59:11 crc kubenswrapper[4706]: I1125 11:59:11.467515 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/4169a8fb-29dd-4d0a-851f-58055dcfff18-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"4169a8fb-29dd-4d0a-851f-58055dcfff18\") " pod="openstack/nova-metadata-0" Nov 25 11:59:11 crc kubenswrapper[4706]: I1125 11:59:11.469968 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m9qbf\" (UniqueName: \"kubernetes.io/projected/4169a8fb-29dd-4d0a-851f-58055dcfff18-kube-api-access-m9qbf\") pod \"nova-metadata-0\" (UID: \"4169a8fb-29dd-4d0a-851f-58055dcfff18\") " pod="openstack/nova-metadata-0" Nov 25 11:59:11 crc kubenswrapper[4706]: I1125 11:59:11.480451 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4169a8fb-29dd-4d0a-851f-58055dcfff18-config-data\") pod \"nova-metadata-0\" (UID: \"4169a8fb-29dd-4d0a-851f-58055dcfff18\") " pod="openstack/nova-metadata-0" Nov 25 11:59:11 crc kubenswrapper[4706]: I1125 11:59:11.548939 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-btxs9\" (UniqueName: \"kubernetes.io/projected/36357458-7aac-49fa-a118-5208a484df3d-kube-api-access-btxs9\") pod \"36357458-7aac-49fa-a118-5208a484df3d\" (UID: 
\"36357458-7aac-49fa-a118-5208a484df3d\") " Nov 25 11:59:11 crc kubenswrapper[4706]: I1125 11:59:11.549045 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/36357458-7aac-49fa-a118-5208a484df3d-combined-ca-bundle\") pod \"36357458-7aac-49fa-a118-5208a484df3d\" (UID: \"36357458-7aac-49fa-a118-5208a484df3d\") " Nov 25 11:59:11 crc kubenswrapper[4706]: I1125 11:59:11.549069 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/36357458-7aac-49fa-a118-5208a484df3d-config-data\") pod \"36357458-7aac-49fa-a118-5208a484df3d\" (UID: \"36357458-7aac-49fa-a118-5208a484df3d\") " Nov 25 11:59:11 crc kubenswrapper[4706]: I1125 11:59:11.555553 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/36357458-7aac-49fa-a118-5208a484df3d-kube-api-access-btxs9" (OuterVolumeSpecName: "kube-api-access-btxs9") pod "36357458-7aac-49fa-a118-5208a484df3d" (UID: "36357458-7aac-49fa-a118-5208a484df3d"). InnerVolumeSpecName "kube-api-access-btxs9". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 11:59:11 crc kubenswrapper[4706]: I1125 11:59:11.575075 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/36357458-7aac-49fa-a118-5208a484df3d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "36357458-7aac-49fa-a118-5208a484df3d" (UID: "36357458-7aac-49fa-a118-5208a484df3d"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 11:59:11 crc kubenswrapper[4706]: I1125 11:59:11.587393 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/36357458-7aac-49fa-a118-5208a484df3d-config-data" (OuterVolumeSpecName: "config-data") pod "36357458-7aac-49fa-a118-5208a484df3d" (UID: "36357458-7aac-49fa-a118-5208a484df3d"). 
InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 11:59:11 crc kubenswrapper[4706]: I1125 11:59:11.651331 4706 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-btxs9\" (UniqueName: \"kubernetes.io/projected/36357458-7aac-49fa-a118-5208a484df3d-kube-api-access-btxs9\") on node \"crc\" DevicePath \"\"" Nov 25 11:59:11 crc kubenswrapper[4706]: I1125 11:59:11.651366 4706 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/36357458-7aac-49fa-a118-5208a484df3d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 11:59:11 crc kubenswrapper[4706]: I1125 11:59:11.651376 4706 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/36357458-7aac-49fa-a118-5208a484df3d-config-data\") on node \"crc\" DevicePath \"\"" Nov 25 11:59:11 crc kubenswrapper[4706]: I1125 11:59:11.715813 4706 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Nov 25 11:59:11 crc kubenswrapper[4706]: I1125 11:59:11.889876 4706 generic.go:334] "Generic (PLEG): container finished" podID="36357458-7aac-49fa-a118-5208a484df3d" containerID="31478ca1a61cba5f2518fb62a72364d9502dd4ae830a575e2b25aee1cd2d8a43" exitCode=0 Nov 25 11:59:11 crc kubenswrapper[4706]: I1125 11:59:11.890179 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"36357458-7aac-49fa-a118-5208a484df3d","Type":"ContainerDied","Data":"31478ca1a61cba5f2518fb62a72364d9502dd4ae830a575e2b25aee1cd2d8a43"} Nov 25 11:59:11 crc kubenswrapper[4706]: I1125 11:59:11.890213 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"36357458-7aac-49fa-a118-5208a484df3d","Type":"ContainerDied","Data":"624f26fbfed55adfade49fca430ca002458ee5268e96c9b60398f9da6196a70f"} Nov 25 11:59:11 crc kubenswrapper[4706]: I1125 11:59:11.890235 4706 scope.go:117] "RemoveContainer" containerID="31478ca1a61cba5f2518fb62a72364d9502dd4ae830a575e2b25aee1cd2d8a43" Nov 25 11:59:11 crc kubenswrapper[4706]: I1125 11:59:11.890272 4706 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Nov 25 11:59:11 crc kubenswrapper[4706]: I1125 11:59:11.914929 4706 scope.go:117] "RemoveContainer" containerID="31478ca1a61cba5f2518fb62a72364d9502dd4ae830a575e2b25aee1cd2d8a43" Nov 25 11:59:11 crc kubenswrapper[4706]: E1125 11:59:11.915748 4706 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"31478ca1a61cba5f2518fb62a72364d9502dd4ae830a575e2b25aee1cd2d8a43\": container with ID starting with 31478ca1a61cba5f2518fb62a72364d9502dd4ae830a575e2b25aee1cd2d8a43 not found: ID does not exist" containerID="31478ca1a61cba5f2518fb62a72364d9502dd4ae830a575e2b25aee1cd2d8a43" Nov 25 11:59:11 crc kubenswrapper[4706]: I1125 11:59:11.915795 4706 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"31478ca1a61cba5f2518fb62a72364d9502dd4ae830a575e2b25aee1cd2d8a43"} err="failed to get container status \"31478ca1a61cba5f2518fb62a72364d9502dd4ae830a575e2b25aee1cd2d8a43\": rpc error: code = NotFound desc = could not find container \"31478ca1a61cba5f2518fb62a72364d9502dd4ae830a575e2b25aee1cd2d8a43\": container with ID starting with 31478ca1a61cba5f2518fb62a72364d9502dd4ae830a575e2b25aee1cd2d8a43 not found: ID does not exist" Nov 25 11:59:11 crc kubenswrapper[4706]: I1125 11:59:11.939293 4706 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ab5ba648-4cd1-4304-9470-e10ea703d56d" path="/var/lib/kubelet/pods/ab5ba648-4cd1-4304-9470-e10ea703d56d/volumes" Nov 25 11:59:11 crc kubenswrapper[4706]: I1125 11:59:11.940231 4706 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Nov 25 11:59:11 crc kubenswrapper[4706]: I1125 11:59:11.944973 4706 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Nov 25 11:59:11 crc kubenswrapper[4706]: I1125 11:59:11.958845 4706 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] 
Nov 25 11:59:11 crc kubenswrapper[4706]: E1125 11:59:11.959393 4706 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="36357458-7aac-49fa-a118-5208a484df3d" containerName="nova-scheduler-scheduler" Nov 25 11:59:11 crc kubenswrapper[4706]: I1125 11:59:11.959414 4706 state_mem.go:107] "Deleted CPUSet assignment" podUID="36357458-7aac-49fa-a118-5208a484df3d" containerName="nova-scheduler-scheduler" Nov 25 11:59:11 crc kubenswrapper[4706]: I1125 11:59:11.959681 4706 memory_manager.go:354] "RemoveStaleState removing state" podUID="36357458-7aac-49fa-a118-5208a484df3d" containerName="nova-scheduler-scheduler" Nov 25 11:59:11 crc kubenswrapper[4706]: I1125 11:59:11.960550 4706 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Nov 25 11:59:11 crc kubenswrapper[4706]: I1125 11:59:11.963374 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Nov 25 11:59:11 crc kubenswrapper[4706]: I1125 11:59:11.973181 4706 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Nov 25 11:59:12 crc kubenswrapper[4706]: I1125 11:59:12.060749 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r2ggf\" (UniqueName: \"kubernetes.io/projected/dea70033-299d-4ca8-9249-c909449f24c9-kube-api-access-r2ggf\") pod \"nova-scheduler-0\" (UID: \"dea70033-299d-4ca8-9249-c909449f24c9\") " pod="openstack/nova-scheduler-0" Nov 25 11:59:12 crc kubenswrapper[4706]: I1125 11:59:12.060806 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dea70033-299d-4ca8-9249-c909449f24c9-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"dea70033-299d-4ca8-9249-c909449f24c9\") " pod="openstack/nova-scheduler-0" Nov 25 11:59:12 crc kubenswrapper[4706]: I1125 11:59:12.060932 4706 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dea70033-299d-4ca8-9249-c909449f24c9-config-data\") pod \"nova-scheduler-0\" (UID: \"dea70033-299d-4ca8-9249-c909449f24c9\") " pod="openstack/nova-scheduler-0" Nov 25 11:59:12 crc kubenswrapper[4706]: I1125 11:59:12.162684 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r2ggf\" (UniqueName: \"kubernetes.io/projected/dea70033-299d-4ca8-9249-c909449f24c9-kube-api-access-r2ggf\") pod \"nova-scheduler-0\" (UID: \"dea70033-299d-4ca8-9249-c909449f24c9\") " pod="openstack/nova-scheduler-0" Nov 25 11:59:12 crc kubenswrapper[4706]: I1125 11:59:12.162770 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dea70033-299d-4ca8-9249-c909449f24c9-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"dea70033-299d-4ca8-9249-c909449f24c9\") " pod="openstack/nova-scheduler-0" Nov 25 11:59:12 crc kubenswrapper[4706]: I1125 11:59:12.162915 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dea70033-299d-4ca8-9249-c909449f24c9-config-data\") pod \"nova-scheduler-0\" (UID: \"dea70033-299d-4ca8-9249-c909449f24c9\") " pod="openstack/nova-scheduler-0" Nov 25 11:59:12 crc kubenswrapper[4706]: I1125 11:59:12.168593 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dea70033-299d-4ca8-9249-c909449f24c9-config-data\") pod \"nova-scheduler-0\" (UID: \"dea70033-299d-4ca8-9249-c909449f24c9\") " pod="openstack/nova-scheduler-0" Nov 25 11:59:12 crc kubenswrapper[4706]: I1125 11:59:12.168774 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/dea70033-299d-4ca8-9249-c909449f24c9-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"dea70033-299d-4ca8-9249-c909449f24c9\") " pod="openstack/nova-scheduler-0" Nov 25 11:59:12 crc kubenswrapper[4706]: I1125 11:59:12.179066 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r2ggf\" (UniqueName: \"kubernetes.io/projected/dea70033-299d-4ca8-9249-c909449f24c9-kube-api-access-r2ggf\") pod \"nova-scheduler-0\" (UID: \"dea70033-299d-4ca8-9249-c909449f24c9\") " pod="openstack/nova-scheduler-0" Nov 25 11:59:12 crc kubenswrapper[4706]: I1125 11:59:12.203234 4706 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Nov 25 11:59:12 crc kubenswrapper[4706]: I1125 11:59:12.290470 4706 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Nov 25 11:59:12 crc kubenswrapper[4706]: W1125 11:59:12.783866 4706 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddea70033_299d_4ca8_9249_c909449f24c9.slice/crio-80fd99e5fa9957696f49eea9133ed48f78aeb09671d0287cabb73751c6fac937 WatchSource:0}: Error finding container 80fd99e5fa9957696f49eea9133ed48f78aeb09671d0287cabb73751c6fac937: Status 404 returned error can't find the container with id 80fd99e5fa9957696f49eea9133ed48f78aeb09671d0287cabb73751c6fac937 Nov 25 11:59:12 crc kubenswrapper[4706]: I1125 11:59:12.784071 4706 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Nov 25 11:59:12 crc kubenswrapper[4706]: I1125 11:59:12.877094 4706 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Nov 25 11:59:12 crc kubenswrapper[4706]: I1125 11:59:12.907998 4706 generic.go:334] "Generic (PLEG): container finished" podID="25fe01f6-353a-43f9-a857-cd776a10c417" containerID="fdc4b6fd5d469f0949eb27b2e0ade41d05df6ef1ff13a5a4b1f5d19e96217f51" exitCode=0 Nov 25 11:59:12 crc kubenswrapper[4706]: I1125 11:59:12.908067 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"25fe01f6-353a-43f9-a857-cd776a10c417","Type":"ContainerDied","Data":"fdc4b6fd5d469f0949eb27b2e0ade41d05df6ef1ff13a5a4b1f5d19e96217f51"} Nov 25 11:59:12 crc kubenswrapper[4706]: I1125 11:59:12.908097 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"25fe01f6-353a-43f9-a857-cd776a10c417","Type":"ContainerDied","Data":"24abbca777f8c927c39901e065d28ffe36a97cfa98e32e39ce19b82cd9b09452"} Nov 25 11:59:12 crc kubenswrapper[4706]: I1125 11:59:12.908118 4706 scope.go:117] "RemoveContainer" containerID="fdc4b6fd5d469f0949eb27b2e0ade41d05df6ef1ff13a5a4b1f5d19e96217f51" Nov 25 11:59:12 crc kubenswrapper[4706]: I1125 11:59:12.908238 4706 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Nov 25 11:59:12 crc kubenswrapper[4706]: I1125 11:59:12.920595 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"4169a8fb-29dd-4d0a-851f-58055dcfff18","Type":"ContainerStarted","Data":"ab02cc3984e498bdbfcae3d72a2f15b8e9c484ca74e3f45552c56cfa828bc523"} Nov 25 11:59:12 crc kubenswrapper[4706]: I1125 11:59:12.920647 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"4169a8fb-29dd-4d0a-851f-58055dcfff18","Type":"ContainerStarted","Data":"0e706d2cbd80f53747034650ab984989a48ab14c54b99775a09ef849399335c6"} Nov 25 11:59:12 crc kubenswrapper[4706]: I1125 11:59:12.920664 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"4169a8fb-29dd-4d0a-851f-58055dcfff18","Type":"ContainerStarted","Data":"132bbae980784d4f401d994911f88ca028283d1a5478ed607535145f5e9856a2"} Nov 25 11:59:12 crc kubenswrapper[4706]: I1125 11:59:12.927607 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"dea70033-299d-4ca8-9249-c909449f24c9","Type":"ContainerStarted","Data":"80fd99e5fa9957696f49eea9133ed48f78aeb09671d0287cabb73751c6fac937"} Nov 25 11:59:12 crc kubenswrapper[4706]: I1125 11:59:12.939464 4706 scope.go:117] "RemoveContainer" containerID="2f28afb8863e4a5e84b6db9225949afa29f5ac5df130ac31c7d8dbbd16ed47c9" Nov 25 11:59:12 crc kubenswrapper[4706]: I1125 11:59:12.947597 4706 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=1.947574233 podStartE2EDuration="1.947574233s" podCreationTimestamp="2025-11-25 11:59:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 11:59:12.93909619 +0000 UTC m=+1361.853653571" watchObservedRunningTime="2025-11-25 11:59:12.947574233 +0000 UTC m=+1361.862131614" Nov 
25 11:59:12 crc kubenswrapper[4706]: I1125 11:59:12.968711 4706 scope.go:117] "RemoveContainer" containerID="fdc4b6fd5d469f0949eb27b2e0ade41d05df6ef1ff13a5a4b1f5d19e96217f51" Nov 25 11:59:12 crc kubenswrapper[4706]: E1125 11:59:12.969197 4706 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fdc4b6fd5d469f0949eb27b2e0ade41d05df6ef1ff13a5a4b1f5d19e96217f51\": container with ID starting with fdc4b6fd5d469f0949eb27b2e0ade41d05df6ef1ff13a5a4b1f5d19e96217f51 not found: ID does not exist" containerID="fdc4b6fd5d469f0949eb27b2e0ade41d05df6ef1ff13a5a4b1f5d19e96217f51" Nov 25 11:59:12 crc kubenswrapper[4706]: I1125 11:59:12.969266 4706 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fdc4b6fd5d469f0949eb27b2e0ade41d05df6ef1ff13a5a4b1f5d19e96217f51"} err="failed to get container status \"fdc4b6fd5d469f0949eb27b2e0ade41d05df6ef1ff13a5a4b1f5d19e96217f51\": rpc error: code = NotFound desc = could not find container \"fdc4b6fd5d469f0949eb27b2e0ade41d05df6ef1ff13a5a4b1f5d19e96217f51\": container with ID starting with fdc4b6fd5d469f0949eb27b2e0ade41d05df6ef1ff13a5a4b1f5d19e96217f51 not found: ID does not exist" Nov 25 11:59:12 crc kubenswrapper[4706]: I1125 11:59:12.969329 4706 scope.go:117] "RemoveContainer" containerID="2f28afb8863e4a5e84b6db9225949afa29f5ac5df130ac31c7d8dbbd16ed47c9" Nov 25 11:59:12 crc kubenswrapper[4706]: E1125 11:59:12.969626 4706 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2f28afb8863e4a5e84b6db9225949afa29f5ac5df130ac31c7d8dbbd16ed47c9\": container with ID starting with 2f28afb8863e4a5e84b6db9225949afa29f5ac5df130ac31c7d8dbbd16ed47c9 not found: ID does not exist" containerID="2f28afb8863e4a5e84b6db9225949afa29f5ac5df130ac31c7d8dbbd16ed47c9" Nov 25 11:59:12 crc kubenswrapper[4706]: I1125 11:59:12.969724 4706 pod_container_deletor.go:53] "DeleteContainer returned 
error" containerID={"Type":"cri-o","ID":"2f28afb8863e4a5e84b6db9225949afa29f5ac5df130ac31c7d8dbbd16ed47c9"} err="failed to get container status \"2f28afb8863e4a5e84b6db9225949afa29f5ac5df130ac31c7d8dbbd16ed47c9\": rpc error: code = NotFound desc = could not find container \"2f28afb8863e4a5e84b6db9225949afa29f5ac5df130ac31c7d8dbbd16ed47c9\": container with ID starting with 2f28afb8863e4a5e84b6db9225949afa29f5ac5df130ac31c7d8dbbd16ed47c9 not found: ID does not exist" Nov 25 11:59:12 crc kubenswrapper[4706]: I1125 11:59:12.980372 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/25fe01f6-353a-43f9-a857-cd776a10c417-combined-ca-bundle\") pod \"25fe01f6-353a-43f9-a857-cd776a10c417\" (UID: \"25fe01f6-353a-43f9-a857-cd776a10c417\") " Nov 25 11:59:12 crc kubenswrapper[4706]: I1125 11:59:12.980568 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/25fe01f6-353a-43f9-a857-cd776a10c417-public-tls-certs\") pod \"25fe01f6-353a-43f9-a857-cd776a10c417\" (UID: \"25fe01f6-353a-43f9-a857-cd776a10c417\") " Nov 25 11:59:12 crc kubenswrapper[4706]: I1125 11:59:12.980791 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/25fe01f6-353a-43f9-a857-cd776a10c417-config-data\") pod \"25fe01f6-353a-43f9-a857-cd776a10c417\" (UID: \"25fe01f6-353a-43f9-a857-cd776a10c417\") " Nov 25 11:59:12 crc kubenswrapper[4706]: I1125 11:59:12.980909 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/25fe01f6-353a-43f9-a857-cd776a10c417-logs\") pod \"25fe01f6-353a-43f9-a857-cd776a10c417\" (UID: \"25fe01f6-353a-43f9-a857-cd776a10c417\") " Nov 25 11:59:12 crc kubenswrapper[4706]: I1125 11:59:12.981069 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started 
for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/25fe01f6-353a-43f9-a857-cd776a10c417-internal-tls-certs\") pod \"25fe01f6-353a-43f9-a857-cd776a10c417\" (UID: \"25fe01f6-353a-43f9-a857-cd776a10c417\") " Nov 25 11:59:12 crc kubenswrapper[4706]: I1125 11:59:12.981174 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tqcjq\" (UniqueName: \"kubernetes.io/projected/25fe01f6-353a-43f9-a857-cd776a10c417-kube-api-access-tqcjq\") pod \"25fe01f6-353a-43f9-a857-cd776a10c417\" (UID: \"25fe01f6-353a-43f9-a857-cd776a10c417\") " Nov 25 11:59:12 crc kubenswrapper[4706]: I1125 11:59:12.981820 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/25fe01f6-353a-43f9-a857-cd776a10c417-logs" (OuterVolumeSpecName: "logs") pod "25fe01f6-353a-43f9-a857-cd776a10c417" (UID: "25fe01f6-353a-43f9-a857-cd776a10c417"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 11:59:12 crc kubenswrapper[4706]: I1125 11:59:12.982060 4706 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/25fe01f6-353a-43f9-a857-cd776a10c417-logs\") on node \"crc\" DevicePath \"\"" Nov 25 11:59:12 crc kubenswrapper[4706]: I1125 11:59:12.985182 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25fe01f6-353a-43f9-a857-cd776a10c417-kube-api-access-tqcjq" (OuterVolumeSpecName: "kube-api-access-tqcjq") pod "25fe01f6-353a-43f9-a857-cd776a10c417" (UID: "25fe01f6-353a-43f9-a857-cd776a10c417"). InnerVolumeSpecName "kube-api-access-tqcjq". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 11:59:13 crc kubenswrapper[4706]: I1125 11:59:13.012925 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25fe01f6-353a-43f9-a857-cd776a10c417-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "25fe01f6-353a-43f9-a857-cd776a10c417" (UID: "25fe01f6-353a-43f9-a857-cd776a10c417"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 11:59:13 crc kubenswrapper[4706]: I1125 11:59:13.013506 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25fe01f6-353a-43f9-a857-cd776a10c417-config-data" (OuterVolumeSpecName: "config-data") pod "25fe01f6-353a-43f9-a857-cd776a10c417" (UID: "25fe01f6-353a-43f9-a857-cd776a10c417"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 11:59:13 crc kubenswrapper[4706]: I1125 11:59:13.036334 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25fe01f6-353a-43f9-a857-cd776a10c417-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "25fe01f6-353a-43f9-a857-cd776a10c417" (UID: "25fe01f6-353a-43f9-a857-cd776a10c417"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 11:59:13 crc kubenswrapper[4706]: I1125 11:59:13.038482 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25fe01f6-353a-43f9-a857-cd776a10c417-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "25fe01f6-353a-43f9-a857-cd776a10c417" (UID: "25fe01f6-353a-43f9-a857-cd776a10c417"). InnerVolumeSpecName "public-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 11:59:13 crc kubenswrapper[4706]: I1125 11:59:13.084358 4706 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/25fe01f6-353a-43f9-a857-cd776a10c417-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 25 11:59:13 crc kubenswrapper[4706]: I1125 11:59:13.084392 4706 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tqcjq\" (UniqueName: \"kubernetes.io/projected/25fe01f6-353a-43f9-a857-cd776a10c417-kube-api-access-tqcjq\") on node \"crc\" DevicePath \"\"" Nov 25 11:59:13 crc kubenswrapper[4706]: I1125 11:59:13.084404 4706 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/25fe01f6-353a-43f9-a857-cd776a10c417-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 11:59:13 crc kubenswrapper[4706]: I1125 11:59:13.084411 4706 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/25fe01f6-353a-43f9-a857-cd776a10c417-public-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 25 11:59:13 crc kubenswrapper[4706]: I1125 11:59:13.084422 4706 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/25fe01f6-353a-43f9-a857-cd776a10c417-config-data\") on node \"crc\" DevicePath \"\"" Nov 25 11:59:13 crc kubenswrapper[4706]: I1125 11:59:13.354146 4706 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Nov 25 11:59:13 crc kubenswrapper[4706]: I1125 11:59:13.367741 4706 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Nov 25 11:59:13 crc kubenswrapper[4706]: I1125 11:59:13.384581 4706 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Nov 25 11:59:13 crc kubenswrapper[4706]: E1125 11:59:13.384964 4706 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="25fe01f6-353a-43f9-a857-cd776a10c417" containerName="nova-api-api" Nov 25 11:59:13 crc kubenswrapper[4706]: I1125 11:59:13.384979 4706 state_mem.go:107] "Deleted CPUSet assignment" podUID="25fe01f6-353a-43f9-a857-cd776a10c417" containerName="nova-api-api" Nov 25 11:59:13 crc kubenswrapper[4706]: E1125 11:59:13.384990 4706 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="25fe01f6-353a-43f9-a857-cd776a10c417" containerName="nova-api-log" Nov 25 11:59:13 crc kubenswrapper[4706]: I1125 11:59:13.384996 4706 state_mem.go:107] "Deleted CPUSet assignment" podUID="25fe01f6-353a-43f9-a857-cd776a10c417" containerName="nova-api-log" Nov 25 11:59:13 crc kubenswrapper[4706]: I1125 11:59:13.385151 4706 memory_manager.go:354] "RemoveStaleState removing state" podUID="25fe01f6-353a-43f9-a857-cd776a10c417" containerName="nova-api-log" Nov 25 11:59:13 crc kubenswrapper[4706]: I1125 11:59:13.385177 4706 memory_manager.go:354] "RemoveStaleState removing state" podUID="25fe01f6-353a-43f9-a857-cd776a10c417" containerName="nova-api-api" Nov 25 11:59:13 crc kubenswrapper[4706]: I1125 11:59:13.386095 4706 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Nov 25 11:59:13 crc kubenswrapper[4706]: I1125 11:59:13.389451 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Nov 25 11:59:13 crc kubenswrapper[4706]: I1125 11:59:13.389648 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Nov 25 11:59:13 crc kubenswrapper[4706]: I1125 11:59:13.392667 4706 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Nov 25 11:59:13 crc kubenswrapper[4706]: I1125 11:59:13.394782 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Nov 25 11:59:13 crc kubenswrapper[4706]: I1125 11:59:13.495991 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/0608285b-d97c-42b6-abc5-32cff6509d9e-public-tls-certs\") pod \"nova-api-0\" (UID: \"0608285b-d97c-42b6-abc5-32cff6509d9e\") " pod="openstack/nova-api-0" Nov 25 11:59:13 crc kubenswrapper[4706]: I1125 11:59:13.496064 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0608285b-d97c-42b6-abc5-32cff6509d9e-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"0608285b-d97c-42b6-abc5-32cff6509d9e\") " pod="openstack/nova-api-0" Nov 25 11:59:13 crc kubenswrapper[4706]: I1125 11:59:13.496102 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0608285b-d97c-42b6-abc5-32cff6509d9e-config-data\") pod \"nova-api-0\" (UID: \"0608285b-d97c-42b6-abc5-32cff6509d9e\") " pod="openstack/nova-api-0" Nov 25 11:59:13 crc kubenswrapper[4706]: I1125 11:59:13.496139 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s8fm4\" 
(UniqueName: \"kubernetes.io/projected/0608285b-d97c-42b6-abc5-32cff6509d9e-kube-api-access-s8fm4\") pod \"nova-api-0\" (UID: \"0608285b-d97c-42b6-abc5-32cff6509d9e\") " pod="openstack/nova-api-0" Nov 25 11:59:13 crc kubenswrapper[4706]: I1125 11:59:13.496182 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/0608285b-d97c-42b6-abc5-32cff6509d9e-internal-tls-certs\") pod \"nova-api-0\" (UID: \"0608285b-d97c-42b6-abc5-32cff6509d9e\") " pod="openstack/nova-api-0" Nov 25 11:59:13 crc kubenswrapper[4706]: I1125 11:59:13.496222 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0608285b-d97c-42b6-abc5-32cff6509d9e-logs\") pod \"nova-api-0\" (UID: \"0608285b-d97c-42b6-abc5-32cff6509d9e\") " pod="openstack/nova-api-0" Nov 25 11:59:13 crc kubenswrapper[4706]: I1125 11:59:13.598964 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/0608285b-d97c-42b6-abc5-32cff6509d9e-public-tls-certs\") pod \"nova-api-0\" (UID: \"0608285b-d97c-42b6-abc5-32cff6509d9e\") " pod="openstack/nova-api-0" Nov 25 11:59:13 crc kubenswrapper[4706]: I1125 11:59:13.599050 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0608285b-d97c-42b6-abc5-32cff6509d9e-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"0608285b-d97c-42b6-abc5-32cff6509d9e\") " pod="openstack/nova-api-0" Nov 25 11:59:13 crc kubenswrapper[4706]: I1125 11:59:13.599084 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0608285b-d97c-42b6-abc5-32cff6509d9e-config-data\") pod \"nova-api-0\" (UID: \"0608285b-d97c-42b6-abc5-32cff6509d9e\") " pod="openstack/nova-api-0" Nov 25 11:59:13 
crc kubenswrapper[4706]: I1125 11:59:13.599116 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s8fm4\" (UniqueName: \"kubernetes.io/projected/0608285b-d97c-42b6-abc5-32cff6509d9e-kube-api-access-s8fm4\") pod \"nova-api-0\" (UID: \"0608285b-d97c-42b6-abc5-32cff6509d9e\") " pod="openstack/nova-api-0" Nov 25 11:59:13 crc kubenswrapper[4706]: I1125 11:59:13.599136 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/0608285b-d97c-42b6-abc5-32cff6509d9e-internal-tls-certs\") pod \"nova-api-0\" (UID: \"0608285b-d97c-42b6-abc5-32cff6509d9e\") " pod="openstack/nova-api-0" Nov 25 11:59:13 crc kubenswrapper[4706]: I1125 11:59:13.599168 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0608285b-d97c-42b6-abc5-32cff6509d9e-logs\") pod \"nova-api-0\" (UID: \"0608285b-d97c-42b6-abc5-32cff6509d9e\") " pod="openstack/nova-api-0" Nov 25 11:59:13 crc kubenswrapper[4706]: I1125 11:59:13.599747 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0608285b-d97c-42b6-abc5-32cff6509d9e-logs\") pod \"nova-api-0\" (UID: \"0608285b-d97c-42b6-abc5-32cff6509d9e\") " pod="openstack/nova-api-0" Nov 25 11:59:13 crc kubenswrapper[4706]: I1125 11:59:13.604390 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/0608285b-d97c-42b6-abc5-32cff6509d9e-internal-tls-certs\") pod \"nova-api-0\" (UID: \"0608285b-d97c-42b6-abc5-32cff6509d9e\") " pod="openstack/nova-api-0" Nov 25 11:59:13 crc kubenswrapper[4706]: I1125 11:59:13.604467 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/0608285b-d97c-42b6-abc5-32cff6509d9e-public-tls-certs\") pod \"nova-api-0\" (UID: 
\"0608285b-d97c-42b6-abc5-32cff6509d9e\") " pod="openstack/nova-api-0" Nov 25 11:59:13 crc kubenswrapper[4706]: I1125 11:59:13.604540 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0608285b-d97c-42b6-abc5-32cff6509d9e-config-data\") pod \"nova-api-0\" (UID: \"0608285b-d97c-42b6-abc5-32cff6509d9e\") " pod="openstack/nova-api-0" Nov 25 11:59:13 crc kubenswrapper[4706]: I1125 11:59:13.615128 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0608285b-d97c-42b6-abc5-32cff6509d9e-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"0608285b-d97c-42b6-abc5-32cff6509d9e\") " pod="openstack/nova-api-0" Nov 25 11:59:13 crc kubenswrapper[4706]: I1125 11:59:13.615221 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s8fm4\" (UniqueName: \"kubernetes.io/projected/0608285b-d97c-42b6-abc5-32cff6509d9e-kube-api-access-s8fm4\") pod \"nova-api-0\" (UID: \"0608285b-d97c-42b6-abc5-32cff6509d9e\") " pod="openstack/nova-api-0" Nov 25 11:59:13 crc kubenswrapper[4706]: I1125 11:59:13.712054 4706 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Nov 25 11:59:13 crc kubenswrapper[4706]: I1125 11:59:13.934498 4706 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25fe01f6-353a-43f9-a857-cd776a10c417" path="/var/lib/kubelet/pods/25fe01f6-353a-43f9-a857-cd776a10c417/volumes" Nov 25 11:59:13 crc kubenswrapper[4706]: I1125 11:59:13.935438 4706 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="36357458-7aac-49fa-a118-5208a484df3d" path="/var/lib/kubelet/pods/36357458-7aac-49fa-a118-5208a484df3d/volumes" Nov 25 11:59:13 crc kubenswrapper[4706]: I1125 11:59:13.997903 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"dea70033-299d-4ca8-9249-c909449f24c9","Type":"ContainerStarted","Data":"0bebd681753fd4057e7ffbfd45cc6dbc3f4d75148505846dfbde4b7ec29f6d50"} Nov 25 11:59:14 crc kubenswrapper[4706]: I1125 11:59:14.037291 4706 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=3.037269709 podStartE2EDuration="3.037269709s" podCreationTimestamp="2025-11-25 11:59:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 11:59:14.032624012 +0000 UTC m=+1362.947181413" watchObservedRunningTime="2025-11-25 11:59:14.037269709 +0000 UTC m=+1362.951827110" Nov 25 11:59:14 crc kubenswrapper[4706]: I1125 11:59:14.180474 4706 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Nov 25 11:59:15 crc kubenswrapper[4706]: I1125 11:59:15.011929 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"0608285b-d97c-42b6-abc5-32cff6509d9e","Type":"ContainerStarted","Data":"aaf1fa801b2bd94ef7d0dc1c45f5c6c15cbc1bc59a675d079fd3255d0c02e782"} Nov 25 11:59:15 crc kubenswrapper[4706]: I1125 11:59:15.012508 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/nova-api-0" event={"ID":"0608285b-d97c-42b6-abc5-32cff6509d9e","Type":"ContainerStarted","Data":"bf45430c5fc2ad1f6a4edde9732dfc0334982213af4309721eaacb308719dd58"} Nov 25 11:59:15 crc kubenswrapper[4706]: I1125 11:59:15.012545 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"0608285b-d97c-42b6-abc5-32cff6509d9e","Type":"ContainerStarted","Data":"fba3288e1fe25853d6688c28a8122a47b8fbce0422a71a6b35e627bcd3e4cad4"} Nov 25 11:59:15 crc kubenswrapper[4706]: I1125 11:59:15.037085 4706 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.037061314 podStartE2EDuration="2.037061314s" podCreationTimestamp="2025-11-25 11:59:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 11:59:15.030474188 +0000 UTC m=+1363.945031569" watchObservedRunningTime="2025-11-25 11:59:15.037061314 +0000 UTC m=+1363.951618695" Nov 25 11:59:16 crc kubenswrapper[4706]: I1125 11:59:16.716007 4706 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Nov 25 11:59:16 crc kubenswrapper[4706]: I1125 11:59:16.716391 4706 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Nov 25 11:59:17 crc kubenswrapper[4706]: I1125 11:59:17.291157 4706 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Nov 25 11:59:21 crc kubenswrapper[4706]: I1125 11:59:21.716905 4706 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Nov 25 11:59:21 crc kubenswrapper[4706]: I1125 11:59:21.717553 4706 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Nov 25 11:59:22 crc kubenswrapper[4706]: I1125 11:59:22.291496 4706 kubelet.go:2542] "SyncLoop (probe)" probe="startup" 
status="unhealthy" pod="openstack/nova-scheduler-0" Nov 25 11:59:22 crc kubenswrapper[4706]: I1125 11:59:22.317690 4706 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Nov 25 11:59:22 crc kubenswrapper[4706]: I1125 11:59:22.725638 4706 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="4169a8fb-29dd-4d0a-851f-58055dcfff18" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.204:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Nov 25 11:59:22 crc kubenswrapper[4706]: I1125 11:59:22.725660 4706 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="4169a8fb-29dd-4d0a-851f-58055dcfff18" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.204:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Nov 25 11:59:23 crc kubenswrapper[4706]: I1125 11:59:23.111671 4706 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Nov 25 11:59:23 crc kubenswrapper[4706]: I1125 11:59:23.712319 4706 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Nov 25 11:59:23 crc kubenswrapper[4706]: I1125 11:59:23.712373 4706 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Nov 25 11:59:24 crc kubenswrapper[4706]: I1125 11:59:24.723456 4706 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="0608285b-d97c-42b6-abc5-32cff6509d9e" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.0.206:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Nov 25 11:59:24 crc kubenswrapper[4706]: I1125 11:59:24.723503 4706 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" 
podUID="0608285b-d97c-42b6-abc5-32cff6509d9e" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.0.206:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Nov 25 11:59:29 crc kubenswrapper[4706]: I1125 11:59:29.431923 4706 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Nov 25 11:59:29 crc kubenswrapper[4706]: I1125 11:59:29.433746 4706 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Nov 25 11:59:29 crc kubenswrapper[4706]: I1125 11:59:29.437896 4706 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Nov 25 11:59:29 crc kubenswrapper[4706]: I1125 11:59:29.437903 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Nov 25 11:59:29 crc kubenswrapper[4706]: I1125 11:59:29.443319 4706 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Nov 25 11:59:29 crc kubenswrapper[4706]: I1125 11:59:29.539452 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4bcd59d0-34b6-44a9-8bf7-7f8c8cfb9036-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"4bcd59d0-34b6-44a9-8bf7-7f8c8cfb9036\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Nov 25 11:59:29 crc kubenswrapper[4706]: I1125 11:59:29.539812 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/4bcd59d0-34b6-44a9-8bf7-7f8c8cfb9036-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"4bcd59d0-34b6-44a9-8bf7-7f8c8cfb9036\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Nov 25 11:59:29 crc kubenswrapper[4706]: I1125 11:59:29.641681 4706 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/4bcd59d0-34b6-44a9-8bf7-7f8c8cfb9036-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"4bcd59d0-34b6-44a9-8bf7-7f8c8cfb9036\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Nov 25 11:59:29 crc kubenswrapper[4706]: I1125 11:59:29.641805 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4bcd59d0-34b6-44a9-8bf7-7f8c8cfb9036-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"4bcd59d0-34b6-44a9-8bf7-7f8c8cfb9036\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Nov 25 11:59:29 crc kubenswrapper[4706]: I1125 11:59:29.642248 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/4bcd59d0-34b6-44a9-8bf7-7f8c8cfb9036-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"4bcd59d0-34b6-44a9-8bf7-7f8c8cfb9036\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Nov 25 11:59:29 crc kubenswrapper[4706]: I1125 11:59:29.659795 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4bcd59d0-34b6-44a9-8bf7-7f8c8cfb9036-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"4bcd59d0-34b6-44a9-8bf7-7f8c8cfb9036\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Nov 25 11:59:29 crc kubenswrapper[4706]: I1125 11:59:29.768279 4706 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Nov 25 11:59:30 crc kubenswrapper[4706]: I1125 11:59:30.217823 4706 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Nov 25 11:59:31 crc kubenswrapper[4706]: I1125 11:59:31.125971 4706 patch_prober.go:28] interesting pod/machine-config-daemon-dhfpm container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 25 11:59:31 crc kubenswrapper[4706]: I1125 11:59:31.126417 4706 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dhfpm" podUID="0930887a-320c-4506-8c9c-f94d6d64516a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 25 11:59:31 crc kubenswrapper[4706]: I1125 11:59:31.126488 4706 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-dhfpm" Nov 25 11:59:31 crc kubenswrapper[4706]: I1125 11:59:31.127718 4706 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"f685f0473c39af27d83f9b8acef23bb16392c6964cab02224e6cb60acc8e8ad1"} pod="openshift-machine-config-operator/machine-config-daemon-dhfpm" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 25 11:59:31 crc kubenswrapper[4706]: I1125 11:59:31.127843 4706 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-dhfpm" podUID="0930887a-320c-4506-8c9c-f94d6d64516a" containerName="machine-config-daemon" 
containerID="cri-o://f685f0473c39af27d83f9b8acef23bb16392c6964cab02224e6cb60acc8e8ad1" gracePeriod=600 Nov 25 11:59:31 crc kubenswrapper[4706]: I1125 11:59:31.160440 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"4bcd59d0-34b6-44a9-8bf7-7f8c8cfb9036","Type":"ContainerStarted","Data":"0b57cf1e0f6b5bfa662bc5fc47b17ecfd1e1264efd7fd32bdbd7e0bd5c5d6c4e"} Nov 25 11:59:31 crc kubenswrapper[4706]: I1125 11:59:31.160491 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"4bcd59d0-34b6-44a9-8bf7-7f8c8cfb9036","Type":"ContainerStarted","Data":"adb0a5a4923a4bb5324afadee1cfd851bd8055102f0c4de7c5572057b3d95931"} Nov 25 11:59:31 crc kubenswrapper[4706]: I1125 11:59:31.178893 4706 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-9-crc" podStartSLOduration=2.178874785 podStartE2EDuration="2.178874785s" podCreationTimestamp="2025-11-25 11:59:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 11:59:31.173391237 +0000 UTC m=+1380.087948638" watchObservedRunningTime="2025-11-25 11:59:31.178874785 +0000 UTC m=+1380.093432166" Nov 25 11:59:31 crc kubenswrapper[4706]: I1125 11:59:31.722510 4706 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Nov 25 11:59:31 crc kubenswrapper[4706]: I1125 11:59:31.724764 4706 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Nov 25 11:59:31 crc kubenswrapper[4706]: I1125 11:59:31.731456 4706 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Nov 25 11:59:32 crc kubenswrapper[4706]: I1125 11:59:32.172064 4706 generic.go:334] "Generic (PLEG): container finished" podID="0930887a-320c-4506-8c9c-f94d6d64516a" 
containerID="f685f0473c39af27d83f9b8acef23bb16392c6964cab02224e6cb60acc8e8ad1" exitCode=0 Nov 25 11:59:32 crc kubenswrapper[4706]: I1125 11:59:32.172189 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-dhfpm" event={"ID":"0930887a-320c-4506-8c9c-f94d6d64516a","Type":"ContainerDied","Data":"f685f0473c39af27d83f9b8acef23bb16392c6964cab02224e6cb60acc8e8ad1"} Nov 25 11:59:32 crc kubenswrapper[4706]: I1125 11:59:32.172691 4706 scope.go:117] "RemoveContainer" containerID="11a32543eabb96f028f5772afd04ba615397c2a8e9b4fc94ea299c44af45edfc" Nov 25 11:59:32 crc kubenswrapper[4706]: I1125 11:59:32.178372 4706 generic.go:334] "Generic (PLEG): container finished" podID="4bcd59d0-34b6-44a9-8bf7-7f8c8cfb9036" containerID="0b57cf1e0f6b5bfa662bc5fc47b17ecfd1e1264efd7fd32bdbd7e0bd5c5d6c4e" exitCode=0 Nov 25 11:59:32 crc kubenswrapper[4706]: I1125 11:59:32.179096 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"4bcd59d0-34b6-44a9-8bf7-7f8c8cfb9036","Type":"ContainerDied","Data":"0b57cf1e0f6b5bfa662bc5fc47b17ecfd1e1264efd7fd32bdbd7e0bd5c5d6c4e"} Nov 25 11:59:32 crc kubenswrapper[4706]: I1125 11:59:32.190317 4706 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Nov 25 11:59:32 crc kubenswrapper[4706]: I1125 11:59:32.340204 4706 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Nov 25 11:59:33 crc kubenswrapper[4706]: I1125 11:59:33.203849 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-dhfpm" event={"ID":"0930887a-320c-4506-8c9c-f94d6d64516a","Type":"ContainerStarted","Data":"0a0bdee99cfe03b615e21edca20e8cd5d2aec43e4e69d2e5c17d3666e93d6426"} Nov 25 11:59:33 crc kubenswrapper[4706]: I1125 11:59:33.548102 4706 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Nov 25 11:59:33 crc kubenswrapper[4706]: I1125 11:59:33.674238 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/4bcd59d0-34b6-44a9-8bf7-7f8c8cfb9036-kubelet-dir\") pod \"4bcd59d0-34b6-44a9-8bf7-7f8c8cfb9036\" (UID: \"4bcd59d0-34b6-44a9-8bf7-7f8c8cfb9036\") " Nov 25 11:59:33 crc kubenswrapper[4706]: I1125 11:59:33.674394 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4bcd59d0-34b6-44a9-8bf7-7f8c8cfb9036-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "4bcd59d0-34b6-44a9-8bf7-7f8c8cfb9036" (UID: "4bcd59d0-34b6-44a9-8bf7-7f8c8cfb9036"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 25 11:59:33 crc kubenswrapper[4706]: I1125 11:59:33.674447 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4bcd59d0-34b6-44a9-8bf7-7f8c8cfb9036-kube-api-access\") pod \"4bcd59d0-34b6-44a9-8bf7-7f8c8cfb9036\" (UID: \"4bcd59d0-34b6-44a9-8bf7-7f8c8cfb9036\") " Nov 25 11:59:33 crc kubenswrapper[4706]: I1125 11:59:33.674835 4706 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/4bcd59d0-34b6-44a9-8bf7-7f8c8cfb9036-kubelet-dir\") on node \"crc\" DevicePath \"\"" Nov 25 11:59:33 crc kubenswrapper[4706]: I1125 11:59:33.687528 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4bcd59d0-34b6-44a9-8bf7-7f8c8cfb9036-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "4bcd59d0-34b6-44a9-8bf7-7f8c8cfb9036" (UID: "4bcd59d0-34b6-44a9-8bf7-7f8c8cfb9036"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 11:59:33 crc kubenswrapper[4706]: I1125 11:59:33.720745 4706 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Nov 25 11:59:33 crc kubenswrapper[4706]: I1125 11:59:33.721477 4706 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Nov 25 11:59:33 crc kubenswrapper[4706]: I1125 11:59:33.721515 4706 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Nov 25 11:59:33 crc kubenswrapper[4706]: I1125 11:59:33.734066 4706 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Nov 25 11:59:33 crc kubenswrapper[4706]: I1125 11:59:33.776667 4706 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4bcd59d0-34b6-44a9-8bf7-7f8c8cfb9036-kube-api-access\") on node \"crc\" DevicePath \"\"" Nov 25 11:59:34 crc kubenswrapper[4706]: I1125 11:59:34.216406 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"4bcd59d0-34b6-44a9-8bf7-7f8c8cfb9036","Type":"ContainerDied","Data":"adb0a5a4923a4bb5324afadee1cfd851bd8055102f0c4de7c5572057b3d95931"} Nov 25 11:59:34 crc kubenswrapper[4706]: I1125 11:59:34.216672 4706 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="adb0a5a4923a4bb5324afadee1cfd851bd8055102f0c4de7c5572057b3d95931" Nov 25 11:59:34 crc kubenswrapper[4706]: I1125 11:59:34.216693 4706 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Nov 25 11:59:34 crc kubenswrapper[4706]: I1125 11:59:34.217118 4706 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Nov 25 11:59:34 crc kubenswrapper[4706]: I1125 11:59:34.227293 4706 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Nov 25 11:59:36 crc kubenswrapper[4706]: I1125 11:59:36.630065 4706 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Nov 25 11:59:36 crc kubenswrapper[4706]: E1125 11:59:36.630998 4706 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4bcd59d0-34b6-44a9-8bf7-7f8c8cfb9036" containerName="pruner" Nov 25 11:59:36 crc kubenswrapper[4706]: I1125 11:59:36.631013 4706 state_mem.go:107] "Deleted CPUSet assignment" podUID="4bcd59d0-34b6-44a9-8bf7-7f8c8cfb9036" containerName="pruner" Nov 25 11:59:36 crc kubenswrapper[4706]: I1125 11:59:36.631194 4706 memory_manager.go:354] "RemoveStaleState removing state" podUID="4bcd59d0-34b6-44a9-8bf7-7f8c8cfb9036" containerName="pruner" Nov 25 11:59:36 crc kubenswrapper[4706]: I1125 11:59:36.631878 4706 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Nov 25 11:59:36 crc kubenswrapper[4706]: I1125 11:59:36.634805 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Nov 25 11:59:36 crc kubenswrapper[4706]: I1125 11:59:36.636389 4706 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Nov 25 11:59:36 crc kubenswrapper[4706]: I1125 11:59:36.659664 4706 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Nov 25 11:59:36 crc kubenswrapper[4706]: I1125 11:59:36.728083 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/c2b01a11-ff6e-4718-9622-3cba2728d492-kubelet-dir\") pod \"installer-9-crc\" (UID: \"c2b01a11-ff6e-4718-9622-3cba2728d492\") " pod="openshift-kube-apiserver/installer-9-crc" Nov 25 11:59:36 crc kubenswrapper[4706]: I1125 11:59:36.728149 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/c2b01a11-ff6e-4718-9622-3cba2728d492-var-lock\") pod \"installer-9-crc\" (UID: \"c2b01a11-ff6e-4718-9622-3cba2728d492\") " pod="openshift-kube-apiserver/installer-9-crc" Nov 25 11:59:36 crc kubenswrapper[4706]: I1125 11:59:36.728463 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c2b01a11-ff6e-4718-9622-3cba2728d492-kube-api-access\") pod \"installer-9-crc\" (UID: \"c2b01a11-ff6e-4718-9622-3cba2728d492\") " pod="openshift-kube-apiserver/installer-9-crc" Nov 25 11:59:36 crc kubenswrapper[4706]: I1125 11:59:36.829964 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: 
\"kubernetes.io/projected/c2b01a11-ff6e-4718-9622-3cba2728d492-kube-api-access\") pod \"installer-9-crc\" (UID: \"c2b01a11-ff6e-4718-9622-3cba2728d492\") " pod="openshift-kube-apiserver/installer-9-crc" Nov 25 11:59:36 crc kubenswrapper[4706]: I1125 11:59:36.830395 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/c2b01a11-ff6e-4718-9622-3cba2728d492-kubelet-dir\") pod \"installer-9-crc\" (UID: \"c2b01a11-ff6e-4718-9622-3cba2728d492\") " pod="openshift-kube-apiserver/installer-9-crc" Nov 25 11:59:36 crc kubenswrapper[4706]: I1125 11:59:36.830446 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/c2b01a11-ff6e-4718-9622-3cba2728d492-var-lock\") pod \"installer-9-crc\" (UID: \"c2b01a11-ff6e-4718-9622-3cba2728d492\") " pod="openshift-kube-apiserver/installer-9-crc" Nov 25 11:59:36 crc kubenswrapper[4706]: I1125 11:59:36.830588 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/c2b01a11-ff6e-4718-9622-3cba2728d492-kubelet-dir\") pod \"installer-9-crc\" (UID: \"c2b01a11-ff6e-4718-9622-3cba2728d492\") " pod="openshift-kube-apiserver/installer-9-crc" Nov 25 11:59:36 crc kubenswrapper[4706]: I1125 11:59:36.830622 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/c2b01a11-ff6e-4718-9622-3cba2728d492-var-lock\") pod \"installer-9-crc\" (UID: \"c2b01a11-ff6e-4718-9622-3cba2728d492\") " pod="openshift-kube-apiserver/installer-9-crc" Nov 25 11:59:36 crc kubenswrapper[4706]: I1125 11:59:36.855277 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c2b01a11-ff6e-4718-9622-3cba2728d492-kube-api-access\") pod \"installer-9-crc\" (UID: \"c2b01a11-ff6e-4718-9622-3cba2728d492\") " 
pod="openshift-kube-apiserver/installer-9-crc" Nov 25 11:59:36 crc kubenswrapper[4706]: I1125 11:59:36.957877 4706 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Nov 25 11:59:37 crc kubenswrapper[4706]: I1125 11:59:37.447336 4706 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Nov 25 11:59:37 crc kubenswrapper[4706]: W1125 11:59:37.456260 4706 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-podc2b01a11_ff6e_4718_9622_3cba2728d492.slice/crio-7a346c594a4024432a9a567698dda8d50c74aee300428af8a4dd1f47296286b2 WatchSource:0}: Error finding container 7a346c594a4024432a9a567698dda8d50c74aee300428af8a4dd1f47296286b2: Status 404 returned error can't find the container with id 7a346c594a4024432a9a567698dda8d50c74aee300428af8a4dd1f47296286b2 Nov 25 11:59:38 crc kubenswrapper[4706]: I1125 11:59:38.273251 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"c2b01a11-ff6e-4718-9622-3cba2728d492","Type":"ContainerStarted","Data":"c81e514baec80ddbe304ab1081e5f9f6819d7d415ae13c82e6b417787d0d852e"} Nov 25 11:59:38 crc kubenswrapper[4706]: I1125 11:59:38.273679 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"c2b01a11-ff6e-4718-9622-3cba2728d492","Type":"ContainerStarted","Data":"7a346c594a4024432a9a567698dda8d50c74aee300428af8a4dd1f47296286b2"} Nov 25 11:59:38 crc kubenswrapper[4706]: I1125 11:59:38.303199 4706 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-9-crc" podStartSLOduration=2.303158041 podStartE2EDuration="2.303158041s" podCreationTimestamp="2025-11-25 11:59:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 11:59:38.289421585 
+0000 UTC m=+1387.203978986" watchObservedRunningTime="2025-11-25 11:59:38.303158041 +0000 UTC m=+1387.217715422" Nov 25 11:59:42 crc kubenswrapper[4706]: I1125 11:59:42.717651 4706 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Nov 25 11:59:43 crc kubenswrapper[4706]: I1125 11:59:43.557707 4706 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Nov 25 11:59:47 crc kubenswrapper[4706]: I1125 11:59:47.191564 4706 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-server-0" podUID="ed6df424-6b86-44a1-8157-ca1f33167065" containerName="rabbitmq" containerID="cri-o://83e7f28c12712a2bc4fe90ff43fdbec3e960bfd4432704e6835a237988fcf7c0" gracePeriod=604796 Nov 25 11:59:47 crc kubenswrapper[4706]: I1125 11:59:47.510536 4706 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-0" podUID="ed6df424-6b86-44a1-8157-ca1f33167065" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.97:5671: connect: connection refused" Nov 25 11:59:47 crc kubenswrapper[4706]: I1125 11:59:47.862441 4706 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-cell1-server-0" podUID="557c84e6-ab5c-40c1-a3e1-68b513874f9b" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.98:5671: connect: connection refused" Nov 25 11:59:47 crc kubenswrapper[4706]: I1125 11:59:47.871246 4706 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-cell1-server-0" podUID="557c84e6-ab5c-40c1-a3e1-68b513874f9b" containerName="rabbitmq" containerID="cri-o://a0ce08dbe233b30e509c7b81643703135a7c2e986bc72e2ff04292a28c7dbbaf" gracePeriod=604796 Nov 25 11:59:53 crc kubenswrapper[4706]: I1125 11:59:53.466076 4706 generic.go:334] "Generic (PLEG): container finished" podID="ed6df424-6b86-44a1-8157-ca1f33167065" 
containerID="83e7f28c12712a2bc4fe90ff43fdbec3e960bfd4432704e6835a237988fcf7c0" exitCode=0 Nov 25 11:59:53 crc kubenswrapper[4706]: I1125 11:59:53.466159 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"ed6df424-6b86-44a1-8157-ca1f33167065","Type":"ContainerDied","Data":"83e7f28c12712a2bc4fe90ff43fdbec3e960bfd4432704e6835a237988fcf7c0"} Nov 25 11:59:53 crc kubenswrapper[4706]: I1125 11:59:53.903277 4706 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Nov 25 11:59:53 crc kubenswrapper[4706]: I1125 11:59:53.962824 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/ed6df424-6b86-44a1-8157-ca1f33167065-rabbitmq-plugins\") pod \"ed6df424-6b86-44a1-8157-ca1f33167065\" (UID: \"ed6df424-6b86-44a1-8157-ca1f33167065\") " Nov 25 11:59:53 crc kubenswrapper[4706]: I1125 11:59:53.962930 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/ed6df424-6b86-44a1-8157-ca1f33167065-rabbitmq-confd\") pod \"ed6df424-6b86-44a1-8157-ca1f33167065\" (UID: \"ed6df424-6b86-44a1-8157-ca1f33167065\") " Nov 25 11:59:53 crc kubenswrapper[4706]: I1125 11:59:53.963000 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"ed6df424-6b86-44a1-8157-ca1f33167065\" (UID: \"ed6df424-6b86-44a1-8157-ca1f33167065\") " Nov 25 11:59:53 crc kubenswrapper[4706]: I1125 11:59:53.963082 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pcjq7\" (UniqueName: \"kubernetes.io/projected/ed6df424-6b86-44a1-8157-ca1f33167065-kube-api-access-pcjq7\") pod \"ed6df424-6b86-44a1-8157-ca1f33167065\" (UID: \"ed6df424-6b86-44a1-8157-ca1f33167065\") " Nov 25 11:59:53 crc 
kubenswrapper[4706]: I1125 11:59:53.963123 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/ed6df424-6b86-44a1-8157-ca1f33167065-server-conf\") pod \"ed6df424-6b86-44a1-8157-ca1f33167065\" (UID: \"ed6df424-6b86-44a1-8157-ca1f33167065\") " Nov 25 11:59:53 crc kubenswrapper[4706]: I1125 11:59:53.963160 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/ed6df424-6b86-44a1-8157-ca1f33167065-config-data\") pod \"ed6df424-6b86-44a1-8157-ca1f33167065\" (UID: \"ed6df424-6b86-44a1-8157-ca1f33167065\") " Nov 25 11:59:53 crc kubenswrapper[4706]: I1125 11:59:53.963195 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/ed6df424-6b86-44a1-8157-ca1f33167065-plugins-conf\") pod \"ed6df424-6b86-44a1-8157-ca1f33167065\" (UID: \"ed6df424-6b86-44a1-8157-ca1f33167065\") " Nov 25 11:59:53 crc kubenswrapper[4706]: I1125 11:59:53.963283 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/ed6df424-6b86-44a1-8157-ca1f33167065-erlang-cookie-secret\") pod \"ed6df424-6b86-44a1-8157-ca1f33167065\" (UID: \"ed6df424-6b86-44a1-8157-ca1f33167065\") " Nov 25 11:59:53 crc kubenswrapper[4706]: I1125 11:59:53.963389 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/ed6df424-6b86-44a1-8157-ca1f33167065-pod-info\") pod \"ed6df424-6b86-44a1-8157-ca1f33167065\" (UID: \"ed6df424-6b86-44a1-8157-ca1f33167065\") " Nov 25 11:59:53 crc kubenswrapper[4706]: I1125 11:59:53.963417 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: 
\"kubernetes.io/empty-dir/ed6df424-6b86-44a1-8157-ca1f33167065-rabbitmq-erlang-cookie\") pod \"ed6df424-6b86-44a1-8157-ca1f33167065\" (UID: \"ed6df424-6b86-44a1-8157-ca1f33167065\") " Nov 25 11:59:53 crc kubenswrapper[4706]: I1125 11:59:53.963452 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/ed6df424-6b86-44a1-8157-ca1f33167065-rabbitmq-tls\") pod \"ed6df424-6b86-44a1-8157-ca1f33167065\" (UID: \"ed6df424-6b86-44a1-8157-ca1f33167065\") " Nov 25 11:59:53 crc kubenswrapper[4706]: I1125 11:59:53.968708 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ed6df424-6b86-44a1-8157-ca1f33167065-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "ed6df424-6b86-44a1-8157-ca1f33167065" (UID: "ed6df424-6b86-44a1-8157-ca1f33167065"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 11:59:53 crc kubenswrapper[4706]: I1125 11:59:53.971420 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ed6df424-6b86-44a1-8157-ca1f33167065-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "ed6df424-6b86-44a1-8157-ca1f33167065" (UID: "ed6df424-6b86-44a1-8157-ca1f33167065"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 11:59:53 crc kubenswrapper[4706]: I1125 11:59:53.972023 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ed6df424-6b86-44a1-8157-ca1f33167065-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "ed6df424-6b86-44a1-8157-ca1f33167065" (UID: "ed6df424-6b86-44a1-8157-ca1f33167065"). InnerVolumeSpecName "rabbitmq-tls". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 11:59:53 crc kubenswrapper[4706]: I1125 11:59:53.976581 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/ed6df424-6b86-44a1-8157-ca1f33167065-pod-info" (OuterVolumeSpecName: "pod-info") pod "ed6df424-6b86-44a1-8157-ca1f33167065" (UID: "ed6df424-6b86-44a1-8157-ca1f33167065"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Nov 25 11:59:53 crc kubenswrapper[4706]: I1125 11:59:53.977607 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ed6df424-6b86-44a1-8157-ca1f33167065-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "ed6df424-6b86-44a1-8157-ca1f33167065" (UID: "ed6df424-6b86-44a1-8157-ca1f33167065"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 11:59:53 crc kubenswrapper[4706]: I1125 11:59:53.979038 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ed6df424-6b86-44a1-8157-ca1f33167065-kube-api-access-pcjq7" (OuterVolumeSpecName: "kube-api-access-pcjq7") pod "ed6df424-6b86-44a1-8157-ca1f33167065" (UID: "ed6df424-6b86-44a1-8157-ca1f33167065"). InnerVolumeSpecName "kube-api-access-pcjq7". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 11:59:53 crc kubenswrapper[4706]: I1125 11:59:53.980686 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ed6df424-6b86-44a1-8157-ca1f33167065-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "ed6df424-6b86-44a1-8157-ca1f33167065" (UID: "ed6df424-6b86-44a1-8157-ca1f33167065"). InnerVolumeSpecName "erlang-cookie-secret". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 11:59:53 crc kubenswrapper[4706]: I1125 11:59:53.981663 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage09-crc" (OuterVolumeSpecName: "persistence") pod "ed6df424-6b86-44a1-8157-ca1f33167065" (UID: "ed6df424-6b86-44a1-8157-ca1f33167065"). InnerVolumeSpecName "local-storage09-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Nov 25 11:59:54 crc kubenswrapper[4706]: I1125 11:59:54.028091 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ed6df424-6b86-44a1-8157-ca1f33167065-config-data" (OuterVolumeSpecName: "config-data") pod "ed6df424-6b86-44a1-8157-ca1f33167065" (UID: "ed6df424-6b86-44a1-8157-ca1f33167065"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 11:59:54 crc kubenswrapper[4706]: I1125 11:59:54.060404 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ed6df424-6b86-44a1-8157-ca1f33167065-server-conf" (OuterVolumeSpecName: "server-conf") pod "ed6df424-6b86-44a1-8157-ca1f33167065" (UID: "ed6df424-6b86-44a1-8157-ca1f33167065"). InnerVolumeSpecName "server-conf". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 11:59:54 crc kubenswrapper[4706]: I1125 11:59:54.065805 4706 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pcjq7\" (UniqueName: \"kubernetes.io/projected/ed6df424-6b86-44a1-8157-ca1f33167065-kube-api-access-pcjq7\") on node \"crc\" DevicePath \"\"" Nov 25 11:59:54 crc kubenswrapper[4706]: I1125 11:59:54.065844 4706 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/ed6df424-6b86-44a1-8157-ca1f33167065-server-conf\") on node \"crc\" DevicePath \"\"" Nov 25 11:59:54 crc kubenswrapper[4706]: I1125 11:59:54.065857 4706 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/ed6df424-6b86-44a1-8157-ca1f33167065-config-data\") on node \"crc\" DevicePath \"\"" Nov 25 11:59:54 crc kubenswrapper[4706]: I1125 11:59:54.065866 4706 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/ed6df424-6b86-44a1-8157-ca1f33167065-plugins-conf\") on node \"crc\" DevicePath \"\"" Nov 25 11:59:54 crc kubenswrapper[4706]: I1125 11:59:54.065876 4706 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/ed6df424-6b86-44a1-8157-ca1f33167065-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Nov 25 11:59:54 crc kubenswrapper[4706]: I1125 11:59:54.065885 4706 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/ed6df424-6b86-44a1-8157-ca1f33167065-pod-info\") on node \"crc\" DevicePath \"\"" Nov 25 11:59:54 crc kubenswrapper[4706]: I1125 11:59:54.065894 4706 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/ed6df424-6b86-44a1-8157-ca1f33167065-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Nov 25 11:59:54 crc kubenswrapper[4706]: I1125 
11:59:54.065902 4706 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/ed6df424-6b86-44a1-8157-ca1f33167065-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Nov 25 11:59:54 crc kubenswrapper[4706]: I1125 11:59:54.065910 4706 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/ed6df424-6b86-44a1-8157-ca1f33167065-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Nov 25 11:59:54 crc kubenswrapper[4706]: I1125 11:59:54.065937 4706 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") on node \"crc\" " Nov 25 11:59:54 crc kubenswrapper[4706]: I1125 11:59:54.076137 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ed6df424-6b86-44a1-8157-ca1f33167065-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "ed6df424-6b86-44a1-8157-ca1f33167065" (UID: "ed6df424-6b86-44a1-8157-ca1f33167065"). InnerVolumeSpecName "rabbitmq-confd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 11:59:54 crc kubenswrapper[4706]: I1125 11:59:54.129637 4706 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage09-crc" (UniqueName: "kubernetes.io/local-volume/local-storage09-crc") on node "crc" Nov 25 11:59:54 crc kubenswrapper[4706]: I1125 11:59:54.167750 4706 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/ed6df424-6b86-44a1-8157-ca1f33167065-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Nov 25 11:59:54 crc kubenswrapper[4706]: I1125 11:59:54.167780 4706 reconciler_common.go:293] "Volume detached for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") on node \"crc\" DevicePath \"\"" Nov 25 11:59:54 crc kubenswrapper[4706]: I1125 11:59:54.482292 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"ed6df424-6b86-44a1-8157-ca1f33167065","Type":"ContainerDied","Data":"2c62b2da6cecc1094593b01a658bff0960e2926bf47f53eedc829086b96fc4bf"} Nov 25 11:59:54 crc kubenswrapper[4706]: I1125 11:59:54.482379 4706 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Nov 25 11:59:54 crc kubenswrapper[4706]: I1125 11:59:54.485215 4706 generic.go:334] "Generic (PLEG): container finished" podID="557c84e6-ab5c-40c1-a3e1-68b513874f9b" containerID="a0ce08dbe233b30e509c7b81643703135a7c2e986bc72e2ff04292a28c7dbbaf" exitCode=0 Nov 25 11:59:54 crc kubenswrapper[4706]: I1125 11:59:54.485338 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"557c84e6-ab5c-40c1-a3e1-68b513874f9b","Type":"ContainerDied","Data":"a0ce08dbe233b30e509c7b81643703135a7c2e986bc72e2ff04292a28c7dbbaf"} Nov 25 11:59:54 crc kubenswrapper[4706]: I1125 11:59:54.485501 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"557c84e6-ab5c-40c1-a3e1-68b513874f9b","Type":"ContainerDied","Data":"7fe2413dd3808510c21fe3331bee85b8d76dabd55d8dc71416b890443ce1c08e"} Nov 25 11:59:54 crc kubenswrapper[4706]: I1125 11:59:54.485600 4706 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7fe2413dd3808510c21fe3331bee85b8d76dabd55d8dc71416b890443ce1c08e" Nov 25 11:59:54 crc kubenswrapper[4706]: I1125 11:59:54.482795 4706 scope.go:117] "RemoveContainer" containerID="83e7f28c12712a2bc4fe90ff43fdbec3e960bfd4432704e6835a237988fcf7c0" Nov 25 11:59:54 crc kubenswrapper[4706]: I1125 11:59:54.518166 4706 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Nov 25 11:59:54 crc kubenswrapper[4706]: I1125 11:59:54.528770 4706 scope.go:117] "RemoveContainer" containerID="472e1a1470dd4c66501e097ee3e8181de9d16ed619b7ecc940dc21ed60c2dd09" Nov 25 11:59:54 crc kubenswrapper[4706]: I1125 11:59:54.551700 4706 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Nov 25 11:59:54 crc kubenswrapper[4706]: E1125 11:59:54.571324 4706 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poded6df424_6b86_44a1_8157_ca1f33167065.slice/crio-2c62b2da6cecc1094593b01a658bff0960e2926bf47f53eedc829086b96fc4bf\": RecentStats: unable to find data in memory cache]" Nov 25 11:59:54 crc kubenswrapper[4706]: I1125 11:59:54.576645 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/557c84e6-ab5c-40c1-a3e1-68b513874f9b-rabbitmq-tls\") pod \"557c84e6-ab5c-40c1-a3e1-68b513874f9b\" (UID: \"557c84e6-ab5c-40c1-a3e1-68b513874f9b\") " Nov 25 11:59:54 crc kubenswrapper[4706]: I1125 11:59:54.576892 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/557c84e6-ab5c-40c1-a3e1-68b513874f9b-rabbitmq-plugins\") pod \"557c84e6-ab5c-40c1-a3e1-68b513874f9b\" (UID: \"557c84e6-ab5c-40c1-a3e1-68b513874f9b\") " Nov 25 11:59:54 crc kubenswrapper[4706]: I1125 11:59:54.577079 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/557c84e6-ab5c-40c1-a3e1-68b513874f9b-plugins-conf\") pod \"557c84e6-ab5c-40c1-a3e1-68b513874f9b\" (UID: \"557c84e6-ab5c-40c1-a3e1-68b513874f9b\") " Nov 25 11:59:54 crc kubenswrapper[4706]: I1125 11:59:54.577178 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started 
for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/557c84e6-ab5c-40c1-a3e1-68b513874f9b-server-conf\") pod \"557c84e6-ab5c-40c1-a3e1-68b513874f9b\" (UID: \"557c84e6-ab5c-40c1-a3e1-68b513874f9b\") " Nov 25 11:59:54 crc kubenswrapper[4706]: I1125 11:59:54.577285 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/557c84e6-ab5c-40c1-a3e1-68b513874f9b-rabbitmq-erlang-cookie\") pod \"557c84e6-ab5c-40c1-a3e1-68b513874f9b\" (UID: \"557c84e6-ab5c-40c1-a3e1-68b513874f9b\") " Nov 25 11:59:54 crc kubenswrapper[4706]: I1125 11:59:54.577506 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/557c84e6-ab5c-40c1-a3e1-68b513874f9b-pod-info\") pod \"557c84e6-ab5c-40c1-a3e1-68b513874f9b\" (UID: \"557c84e6-ab5c-40c1-a3e1-68b513874f9b\") " Nov 25 11:59:54 crc kubenswrapper[4706]: I1125 11:59:54.577651 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"557c84e6-ab5c-40c1-a3e1-68b513874f9b\" (UID: \"557c84e6-ab5c-40c1-a3e1-68b513874f9b\") " Nov 25 11:59:54 crc kubenswrapper[4706]: I1125 11:59:54.577765 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zhwj9\" (UniqueName: \"kubernetes.io/projected/557c84e6-ab5c-40c1-a3e1-68b513874f9b-kube-api-access-zhwj9\") pod \"557c84e6-ab5c-40c1-a3e1-68b513874f9b\" (UID: \"557c84e6-ab5c-40c1-a3e1-68b513874f9b\") " Nov 25 11:59:54 crc kubenswrapper[4706]: I1125 11:59:54.577861 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/557c84e6-ab5c-40c1-a3e1-68b513874f9b-rabbitmq-confd\") pod \"557c84e6-ab5c-40c1-a3e1-68b513874f9b\" (UID: \"557c84e6-ab5c-40c1-a3e1-68b513874f9b\") " Nov 25 11:59:54 crc 
kubenswrapper[4706]: I1125 11:59:54.577951 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/557c84e6-ab5c-40c1-a3e1-68b513874f9b-erlang-cookie-secret\") pod \"557c84e6-ab5c-40c1-a3e1-68b513874f9b\" (UID: \"557c84e6-ab5c-40c1-a3e1-68b513874f9b\") " Nov 25 11:59:54 crc kubenswrapper[4706]: I1125 11:59:54.578064 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/557c84e6-ab5c-40c1-a3e1-68b513874f9b-config-data\") pod \"557c84e6-ab5c-40c1-a3e1-68b513874f9b\" (UID: \"557c84e6-ab5c-40c1-a3e1-68b513874f9b\") " Nov 25 11:59:54 crc kubenswrapper[4706]: I1125 11:59:54.587029 4706 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-server-0"] Nov 25 11:59:54 crc kubenswrapper[4706]: I1125 11:59:54.587801 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/557c84e6-ab5c-40c1-a3e1-68b513874f9b-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "557c84e6-ab5c-40c1-a3e1-68b513874f9b" (UID: "557c84e6-ab5c-40c1-a3e1-68b513874f9b"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 11:59:54 crc kubenswrapper[4706]: I1125 11:59:54.588126 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/557c84e6-ab5c-40c1-a3e1-68b513874f9b-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "557c84e6-ab5c-40c1-a3e1-68b513874f9b" (UID: "557c84e6-ab5c-40c1-a3e1-68b513874f9b"). InnerVolumeSpecName "rabbitmq-erlang-cookie". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 11:59:54 crc kubenswrapper[4706]: I1125 11:59:54.588817 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/557c84e6-ab5c-40c1-a3e1-68b513874f9b-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "557c84e6-ab5c-40c1-a3e1-68b513874f9b" (UID: "557c84e6-ab5c-40c1-a3e1-68b513874f9b"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 11:59:54 crc kubenswrapper[4706]: I1125 11:59:54.604832 4706 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/557c84e6-ab5c-40c1-a3e1-68b513874f9b-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Nov 25 11:59:54 crc kubenswrapper[4706]: I1125 11:59:54.604860 4706 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/557c84e6-ab5c-40c1-a3e1-68b513874f9b-plugins-conf\") on node \"crc\" DevicePath \"\"" Nov 25 11:59:54 crc kubenswrapper[4706]: I1125 11:59:54.604871 4706 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/557c84e6-ab5c-40c1-a3e1-68b513874f9b-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Nov 25 11:59:54 crc kubenswrapper[4706]: I1125 11:59:54.604893 4706 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Nov 25 11:59:54 crc kubenswrapper[4706]: E1125 11:59:54.605512 4706 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ed6df424-6b86-44a1-8157-ca1f33167065" containerName="setup-container" Nov 25 11:59:54 crc kubenswrapper[4706]: I1125 11:59:54.605527 4706 state_mem.go:107] "Deleted CPUSet assignment" podUID="ed6df424-6b86-44a1-8157-ca1f33167065" containerName="setup-container" Nov 25 11:59:54 crc kubenswrapper[4706]: E1125 11:59:54.605564 4706 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="ed6df424-6b86-44a1-8157-ca1f33167065" containerName="rabbitmq" Nov 25 11:59:54 crc kubenswrapper[4706]: I1125 11:59:54.605572 4706 state_mem.go:107] "Deleted CPUSet assignment" podUID="ed6df424-6b86-44a1-8157-ca1f33167065" containerName="rabbitmq" Nov 25 11:59:54 crc kubenswrapper[4706]: E1125 11:59:54.605588 4706 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="557c84e6-ab5c-40c1-a3e1-68b513874f9b" containerName="rabbitmq" Nov 25 11:59:54 crc kubenswrapper[4706]: I1125 11:59:54.605594 4706 state_mem.go:107] "Deleted CPUSet assignment" podUID="557c84e6-ab5c-40c1-a3e1-68b513874f9b" containerName="rabbitmq" Nov 25 11:59:54 crc kubenswrapper[4706]: E1125 11:59:54.605626 4706 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="557c84e6-ab5c-40c1-a3e1-68b513874f9b" containerName="setup-container" Nov 25 11:59:54 crc kubenswrapper[4706]: I1125 11:59:54.605632 4706 state_mem.go:107] "Deleted CPUSet assignment" podUID="557c84e6-ab5c-40c1-a3e1-68b513874f9b" containerName="setup-container" Nov 25 11:59:54 crc kubenswrapper[4706]: I1125 11:59:54.605814 4706 memory_manager.go:354] "RemoveStaleState removing state" podUID="557c84e6-ab5c-40c1-a3e1-68b513874f9b" containerName="rabbitmq" Nov 25 11:59:54 crc kubenswrapper[4706]: I1125 11:59:54.605840 4706 memory_manager.go:354] "RemoveStaleState removing state" podUID="ed6df424-6b86-44a1-8157-ca1f33167065" containerName="rabbitmq" Nov 25 11:59:54 crc kubenswrapper[4706]: I1125 11:59:54.606662 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/557c84e6-ab5c-40c1-a3e1-68b513874f9b-kube-api-access-zhwj9" (OuterVolumeSpecName: "kube-api-access-zhwj9") pod "557c84e6-ab5c-40c1-a3e1-68b513874f9b" (UID: "557c84e6-ab5c-40c1-a3e1-68b513874f9b"). InnerVolumeSpecName "kube-api-access-zhwj9". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 11:59:54 crc kubenswrapper[4706]: I1125 11:59:54.606779 4706 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Nov 25 11:59:54 crc kubenswrapper[4706]: I1125 11:59:54.610146 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Nov 25 11:59:54 crc kubenswrapper[4706]: I1125 11:59:54.610382 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-q944t" Nov 25 11:59:54 crc kubenswrapper[4706]: I1125 11:59:54.610936 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/557c84e6-ab5c-40c1-a3e1-68b513874f9b-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "557c84e6-ab5c-40c1-a3e1-68b513874f9b" (UID: "557c84e6-ab5c-40c1-a3e1-68b513874f9b"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 11:59:54 crc kubenswrapper[4706]: I1125 11:59:54.611456 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc" Nov 25 11:59:54 crc kubenswrapper[4706]: I1125 11:59:54.614585 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage11-crc" (OuterVolumeSpecName: "persistence") pod "557c84e6-ab5c-40c1-a3e1-68b513874f9b" (UID: "557c84e6-ab5c-40c1-a3e1-68b513874f9b"). InnerVolumeSpecName "local-storage11-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Nov 25 11:59:54 crc kubenswrapper[4706]: I1125 11:59:54.620126 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/557c84e6-ab5c-40c1-a3e1-68b513874f9b-pod-info" (OuterVolumeSpecName: "pod-info") pod "557c84e6-ab5c-40c1-a3e1-68b513874f9b" (UID: "557c84e6-ab5c-40c1-a3e1-68b513874f9b"). InnerVolumeSpecName "pod-info". 
PluginName "kubernetes.io/downward-api", VolumeGidValue "" Nov 25 11:59:54 crc kubenswrapper[4706]: I1125 11:59:54.622170 4706 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Nov 25 11:59:54 crc kubenswrapper[4706]: I1125 11:59:54.622468 4706 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data" Nov 25 11:59:54 crc kubenswrapper[4706]: I1125 11:59:54.622494 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Nov 25 11:59:54 crc kubenswrapper[4706]: I1125 11:59:54.622530 4706 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Nov 25 11:59:54 crc kubenswrapper[4706]: I1125 11:59:54.628501 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/557c84e6-ab5c-40c1-a3e1-68b513874f9b-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "557c84e6-ab5c-40c1-a3e1-68b513874f9b" (UID: "557c84e6-ab5c-40c1-a3e1-68b513874f9b"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 11:59:54 crc kubenswrapper[4706]: I1125 11:59:54.648395 4706 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Nov 25 11:59:54 crc kubenswrapper[4706]: I1125 11:59:54.650599 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/557c84e6-ab5c-40c1-a3e1-68b513874f9b-config-data" (OuterVolumeSpecName: "config-data") pod "557c84e6-ab5c-40c1-a3e1-68b513874f9b" (UID: "557c84e6-ab5c-40c1-a3e1-68b513874f9b"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 11:59:54 crc kubenswrapper[4706]: I1125 11:59:54.686745 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/557c84e6-ab5c-40c1-a3e1-68b513874f9b-server-conf" (OuterVolumeSpecName: "server-conf") pod "557c84e6-ab5c-40c1-a3e1-68b513874f9b" (UID: "557c84e6-ab5c-40c1-a3e1-68b513874f9b"). InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 11:59:54 crc kubenswrapper[4706]: I1125 11:59:54.705913 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/a9a6207a-78de-492d-8c88-9a1d2a6f703d-server-conf\") pod \"rabbitmq-server-0\" (UID: \"a9a6207a-78de-492d-8c88-9a1d2a6f703d\") " pod="openstack/rabbitmq-server-0" Nov 25 11:59:54 crc kubenswrapper[4706]: I1125 11:59:54.706192 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"rabbitmq-server-0\" (UID: \"a9a6207a-78de-492d-8c88-9a1d2a6f703d\") " pod="openstack/rabbitmq-server-0" Nov 25 11:59:54 crc kubenswrapper[4706]: I1125 11:59:54.706322 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/a9a6207a-78de-492d-8c88-9a1d2a6f703d-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"a9a6207a-78de-492d-8c88-9a1d2a6f703d\") " pod="openstack/rabbitmq-server-0" Nov 25 11:59:54 crc kubenswrapper[4706]: I1125 11:59:54.706434 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/a9a6207a-78de-492d-8c88-9a1d2a6f703d-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"a9a6207a-78de-492d-8c88-9a1d2a6f703d\") " 
pod="openstack/rabbitmq-server-0" Nov 25 11:59:54 crc kubenswrapper[4706]: I1125 11:59:54.706568 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/a9a6207a-78de-492d-8c88-9a1d2a6f703d-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"a9a6207a-78de-492d-8c88-9a1d2a6f703d\") " pod="openstack/rabbitmq-server-0" Nov 25 11:59:54 crc kubenswrapper[4706]: I1125 11:59:54.706691 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/a9a6207a-78de-492d-8c88-9a1d2a6f703d-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"a9a6207a-78de-492d-8c88-9a1d2a6f703d\") " pod="openstack/rabbitmq-server-0" Nov 25 11:59:54 crc kubenswrapper[4706]: I1125 11:59:54.707996 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/a9a6207a-78de-492d-8c88-9a1d2a6f703d-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"a9a6207a-78de-492d-8c88-9a1d2a6f703d\") " pod="openstack/rabbitmq-server-0" Nov 25 11:59:54 crc kubenswrapper[4706]: I1125 11:59:54.708174 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/a9a6207a-78de-492d-8c88-9a1d2a6f703d-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"a9a6207a-78de-492d-8c88-9a1d2a6f703d\") " pod="openstack/rabbitmq-server-0" Nov 25 11:59:54 crc kubenswrapper[4706]: I1125 11:59:54.708956 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/a9a6207a-78de-492d-8c88-9a1d2a6f703d-config-data\") pod \"rabbitmq-server-0\" (UID: \"a9a6207a-78de-492d-8c88-9a1d2a6f703d\") " pod="openstack/rabbitmq-server-0" Nov 25 11:59:54 crc 
kubenswrapper[4706]: I1125 11:59:54.709149 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t6gcf\" (UniqueName: \"kubernetes.io/projected/a9a6207a-78de-492d-8c88-9a1d2a6f703d-kube-api-access-t6gcf\") pod \"rabbitmq-server-0\" (UID: \"a9a6207a-78de-492d-8c88-9a1d2a6f703d\") " pod="openstack/rabbitmq-server-0" Nov 25 11:59:54 crc kubenswrapper[4706]: I1125 11:59:54.709670 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/a9a6207a-78de-492d-8c88-9a1d2a6f703d-pod-info\") pod \"rabbitmq-server-0\" (UID: \"a9a6207a-78de-492d-8c88-9a1d2a6f703d\") " pod="openstack/rabbitmq-server-0" Nov 25 11:59:54 crc kubenswrapper[4706]: I1125 11:59:54.709930 4706 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/557c84e6-ab5c-40c1-a3e1-68b513874f9b-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Nov 25 11:59:54 crc kubenswrapper[4706]: I1125 11:59:54.710036 4706 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/557c84e6-ab5c-40c1-a3e1-68b513874f9b-server-conf\") on node \"crc\" DevicePath \"\"" Nov 25 11:59:54 crc kubenswrapper[4706]: I1125 11:59:54.710120 4706 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/557c84e6-ab5c-40c1-a3e1-68b513874f9b-pod-info\") on node \"crc\" DevicePath \"\"" Nov 25 11:59:54 crc kubenswrapper[4706]: I1125 11:59:54.710218 4706 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") on node \"crc\" " Nov 25 11:59:54 crc kubenswrapper[4706]: I1125 11:59:54.710357 4706 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zhwj9\" (UniqueName: 
\"kubernetes.io/projected/557c84e6-ab5c-40c1-a3e1-68b513874f9b-kube-api-access-zhwj9\") on node \"crc\" DevicePath \"\"" Nov 25 11:59:54 crc kubenswrapper[4706]: I1125 11:59:54.710450 4706 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/557c84e6-ab5c-40c1-a3e1-68b513874f9b-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Nov 25 11:59:54 crc kubenswrapper[4706]: I1125 11:59:54.711080 4706 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/557c84e6-ab5c-40c1-a3e1-68b513874f9b-config-data\") on node \"crc\" DevicePath \"\"" Nov 25 11:59:54 crc kubenswrapper[4706]: I1125 11:59:54.736528 4706 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage11-crc" (UniqueName: "kubernetes.io/local-volume/local-storage11-crc") on node "crc" Nov 25 11:59:54 crc kubenswrapper[4706]: I1125 11:59:54.767536 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/557c84e6-ab5c-40c1-a3e1-68b513874f9b-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "557c84e6-ab5c-40c1-a3e1-68b513874f9b" (UID: "557c84e6-ab5c-40c1-a3e1-68b513874f9b"). InnerVolumeSpecName "rabbitmq-confd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 11:59:54 crc kubenswrapper[4706]: I1125 11:59:54.813192 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/a9a6207a-78de-492d-8c88-9a1d2a6f703d-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"a9a6207a-78de-492d-8c88-9a1d2a6f703d\") " pod="openstack/rabbitmq-server-0" Nov 25 11:59:54 crc kubenswrapper[4706]: I1125 11:59:54.813751 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/a9a6207a-78de-492d-8c88-9a1d2a6f703d-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"a9a6207a-78de-492d-8c88-9a1d2a6f703d\") " pod="openstack/rabbitmq-server-0" Nov 25 11:59:54 crc kubenswrapper[4706]: I1125 11:59:54.813918 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/a9a6207a-78de-492d-8c88-9a1d2a6f703d-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"a9a6207a-78de-492d-8c88-9a1d2a6f703d\") " pod="openstack/rabbitmq-server-0" Nov 25 11:59:54 crc kubenswrapper[4706]: I1125 11:59:54.814075 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/a9a6207a-78de-492d-8c88-9a1d2a6f703d-config-data\") pod \"rabbitmq-server-0\" (UID: \"a9a6207a-78de-492d-8c88-9a1d2a6f703d\") " pod="openstack/rabbitmq-server-0" Nov 25 11:59:54 crc kubenswrapper[4706]: I1125 11:59:54.814218 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t6gcf\" (UniqueName: \"kubernetes.io/projected/a9a6207a-78de-492d-8c88-9a1d2a6f703d-kube-api-access-t6gcf\") pod \"rabbitmq-server-0\" (UID: \"a9a6207a-78de-492d-8c88-9a1d2a6f703d\") " pod="openstack/rabbitmq-server-0" Nov 25 11:59:54 crc kubenswrapper[4706]: I1125 11:59:54.814404 4706 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/a9a6207a-78de-492d-8c88-9a1d2a6f703d-pod-info\") pod \"rabbitmq-server-0\" (UID: \"a9a6207a-78de-492d-8c88-9a1d2a6f703d\") " pod="openstack/rabbitmq-server-0" Nov 25 11:59:54 crc kubenswrapper[4706]: I1125 11:59:54.814922 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/a9a6207a-78de-492d-8c88-9a1d2a6f703d-server-conf\") pod \"rabbitmq-server-0\" (UID: \"a9a6207a-78de-492d-8c88-9a1d2a6f703d\") " pod="openstack/rabbitmq-server-0" Nov 25 11:59:54 crc kubenswrapper[4706]: I1125 11:59:54.815036 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"rabbitmq-server-0\" (UID: \"a9a6207a-78de-492d-8c88-9a1d2a6f703d\") " pod="openstack/rabbitmq-server-0" Nov 25 11:59:54 crc kubenswrapper[4706]: I1125 11:59:54.815210 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/a9a6207a-78de-492d-8c88-9a1d2a6f703d-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"a9a6207a-78de-492d-8c88-9a1d2a6f703d\") " pod="openstack/rabbitmq-server-0" Nov 25 11:59:54 crc kubenswrapper[4706]: I1125 11:59:54.815334 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/a9a6207a-78de-492d-8c88-9a1d2a6f703d-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"a9a6207a-78de-492d-8c88-9a1d2a6f703d\") " pod="openstack/rabbitmq-server-0" Nov 25 11:59:54 crc kubenswrapper[4706]: I1125 11:59:54.814738 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/a9a6207a-78de-492d-8c88-9a1d2a6f703d-config-data\") pod \"rabbitmq-server-0\" (UID: 
\"a9a6207a-78de-492d-8c88-9a1d2a6f703d\") " pod="openstack/rabbitmq-server-0" Nov 25 11:59:54 crc kubenswrapper[4706]: I1125 11:59:54.815474 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/a9a6207a-78de-492d-8c88-9a1d2a6f703d-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"a9a6207a-78de-492d-8c88-9a1d2a6f703d\") " pod="openstack/rabbitmq-server-0" Nov 25 11:59:54 crc kubenswrapper[4706]: I1125 11:59:54.815594 4706 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"rabbitmq-server-0\" (UID: \"a9a6207a-78de-492d-8c88-9a1d2a6f703d\") device mount path \"/mnt/openstack/pv09\"" pod="openstack/rabbitmq-server-0" Nov 25 11:59:54 crc kubenswrapper[4706]: I1125 11:59:54.814772 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/a9a6207a-78de-492d-8c88-9a1d2a6f703d-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"a9a6207a-78de-492d-8c88-9a1d2a6f703d\") " pod="openstack/rabbitmq-server-0" Nov 25 11:59:54 crc kubenswrapper[4706]: I1125 11:59:54.816039 4706 reconciler_common.go:293] "Volume detached for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") on node \"crc\" DevicePath \"\"" Nov 25 11:59:54 crc kubenswrapper[4706]: I1125 11:59:54.816070 4706 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/557c84e6-ab5c-40c1-a3e1-68b513874f9b-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Nov 25 11:59:54 crc kubenswrapper[4706]: I1125 11:59:54.816279 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/a9a6207a-78de-492d-8c88-9a1d2a6f703d-server-conf\") pod \"rabbitmq-server-0\" (UID: 
\"a9a6207a-78de-492d-8c88-9a1d2a6f703d\") " pod="openstack/rabbitmq-server-0" Nov 25 11:59:54 crc kubenswrapper[4706]: I1125 11:59:54.816385 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/a9a6207a-78de-492d-8c88-9a1d2a6f703d-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"a9a6207a-78de-492d-8c88-9a1d2a6f703d\") " pod="openstack/rabbitmq-server-0" Nov 25 11:59:54 crc kubenswrapper[4706]: I1125 11:59:54.816431 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/a9a6207a-78de-492d-8c88-9a1d2a6f703d-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"a9a6207a-78de-492d-8c88-9a1d2a6f703d\") " pod="openstack/rabbitmq-server-0" Nov 25 11:59:54 crc kubenswrapper[4706]: I1125 11:59:54.817356 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/a9a6207a-78de-492d-8c88-9a1d2a6f703d-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"a9a6207a-78de-492d-8c88-9a1d2a6f703d\") " pod="openstack/rabbitmq-server-0" Nov 25 11:59:54 crc kubenswrapper[4706]: I1125 11:59:54.817743 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/a9a6207a-78de-492d-8c88-9a1d2a6f703d-pod-info\") pod \"rabbitmq-server-0\" (UID: \"a9a6207a-78de-492d-8c88-9a1d2a6f703d\") " pod="openstack/rabbitmq-server-0" Nov 25 11:59:54 crc kubenswrapper[4706]: I1125 11:59:54.818735 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/a9a6207a-78de-492d-8c88-9a1d2a6f703d-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"a9a6207a-78de-492d-8c88-9a1d2a6f703d\") " pod="openstack/rabbitmq-server-0" Nov 25 11:59:54 crc kubenswrapper[4706]: I1125 11:59:54.819824 4706 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/a9a6207a-78de-492d-8c88-9a1d2a6f703d-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"a9a6207a-78de-492d-8c88-9a1d2a6f703d\") " pod="openstack/rabbitmq-server-0" Nov 25 11:59:54 crc kubenswrapper[4706]: I1125 11:59:54.833443 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t6gcf\" (UniqueName: \"kubernetes.io/projected/a9a6207a-78de-492d-8c88-9a1d2a6f703d-kube-api-access-t6gcf\") pod \"rabbitmq-server-0\" (UID: \"a9a6207a-78de-492d-8c88-9a1d2a6f703d\") " pod="openstack/rabbitmq-server-0" Nov 25 11:59:54 crc kubenswrapper[4706]: I1125 11:59:54.844413 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"rabbitmq-server-0\" (UID: \"a9a6207a-78de-492d-8c88-9a1d2a6f703d\") " pod="openstack/rabbitmq-server-0" Nov 25 11:59:55 crc kubenswrapper[4706]: I1125 11:59:55.027139 4706 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Nov 25 11:59:55 crc kubenswrapper[4706]: I1125 11:59:55.505657 4706 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Nov 25 11:59:55 crc kubenswrapper[4706]: I1125 11:59:55.512564 4706 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Nov 25 11:59:55 crc kubenswrapper[4706]: I1125 11:59:55.743403 4706 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Nov 25 11:59:55 crc kubenswrapper[4706]: I1125 11:59:55.759676 4706 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Nov 25 11:59:55 crc kubenswrapper[4706]: I1125 11:59:55.792962 4706 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Nov 25 11:59:55 crc kubenswrapper[4706]: I1125 11:59:55.794732 4706 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Nov 25 11:59:55 crc kubenswrapper[4706]: I1125 11:59:55.797484 4706 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Nov 25 11:59:55 crc kubenswrapper[4706]: I1125 11:59:55.798000 4706 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Nov 25 11:59:55 crc kubenswrapper[4706]: I1125 11:59:55.798159 4706 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Nov 25 11:59:55 crc kubenswrapper[4706]: I1125 11:59:55.798351 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Nov 25 11:59:55 crc kubenswrapper[4706]: I1125 11:59:55.802423 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-b2nhx" Nov 25 11:59:55 crc kubenswrapper[4706]: I1125 11:59:55.802645 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Nov 25 11:59:55 crc kubenswrapper[4706]: I1125 11:59:55.802876 4706 
reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Nov 25 11:59:55 crc kubenswrapper[4706]: I1125 11:59:55.808656 4706 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Nov 25 11:59:55 crc kubenswrapper[4706]: I1125 11:59:55.838154 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9hdvl\" (UniqueName: \"kubernetes.io/projected/6ea2e87f-dc81-49cc-81a8-e08a8ed11f12-kube-api-access-9hdvl\") pod \"rabbitmq-cell1-server-0\" (UID: \"6ea2e87f-dc81-49cc-81a8-e08a8ed11f12\") " pod="openstack/rabbitmq-cell1-server-0" Nov 25 11:59:55 crc kubenswrapper[4706]: I1125 11:59:55.838231 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/6ea2e87f-dc81-49cc-81a8-e08a8ed11f12-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"6ea2e87f-dc81-49cc-81a8-e08a8ed11f12\") " pod="openstack/rabbitmq-cell1-server-0" Nov 25 11:59:55 crc kubenswrapper[4706]: I1125 11:59:55.838326 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/6ea2e87f-dc81-49cc-81a8-e08a8ed11f12-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"6ea2e87f-dc81-49cc-81a8-e08a8ed11f12\") " pod="openstack/rabbitmq-cell1-server-0" Nov 25 11:59:55 crc kubenswrapper[4706]: I1125 11:59:55.838426 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/6ea2e87f-dc81-49cc-81a8-e08a8ed11f12-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"6ea2e87f-dc81-49cc-81a8-e08a8ed11f12\") " pod="openstack/rabbitmq-cell1-server-0" Nov 25 11:59:55 crc kubenswrapper[4706]: I1125 11:59:55.838463 4706 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/6ea2e87f-dc81-49cc-81a8-e08a8ed11f12-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"6ea2e87f-dc81-49cc-81a8-e08a8ed11f12\") " pod="openstack/rabbitmq-cell1-server-0" Nov 25 11:59:55 crc kubenswrapper[4706]: I1125 11:59:55.838497 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/6ea2e87f-dc81-49cc-81a8-e08a8ed11f12-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"6ea2e87f-dc81-49cc-81a8-e08a8ed11f12\") " pod="openstack/rabbitmq-cell1-server-0" Nov 25 11:59:55 crc kubenswrapper[4706]: I1125 11:59:55.838520 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"6ea2e87f-dc81-49cc-81a8-e08a8ed11f12\") " pod="openstack/rabbitmq-cell1-server-0" Nov 25 11:59:55 crc kubenswrapper[4706]: I1125 11:59:55.838548 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/6ea2e87f-dc81-49cc-81a8-e08a8ed11f12-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"6ea2e87f-dc81-49cc-81a8-e08a8ed11f12\") " pod="openstack/rabbitmq-cell1-server-0" Nov 25 11:59:55 crc kubenswrapper[4706]: I1125 11:59:55.838608 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/6ea2e87f-dc81-49cc-81a8-e08a8ed11f12-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"6ea2e87f-dc81-49cc-81a8-e08a8ed11f12\") " pod="openstack/rabbitmq-cell1-server-0" Nov 25 11:59:55 crc kubenswrapper[4706]: I1125 11:59:55.838635 4706 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/6ea2e87f-dc81-49cc-81a8-e08a8ed11f12-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"6ea2e87f-dc81-49cc-81a8-e08a8ed11f12\") " pod="openstack/rabbitmq-cell1-server-0" Nov 25 11:59:55 crc kubenswrapper[4706]: I1125 11:59:55.838658 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/6ea2e87f-dc81-49cc-81a8-e08a8ed11f12-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"6ea2e87f-dc81-49cc-81a8-e08a8ed11f12\") " pod="openstack/rabbitmq-cell1-server-0" Nov 25 11:59:55 crc kubenswrapper[4706]: I1125 11:59:55.940921 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"6ea2e87f-dc81-49cc-81a8-e08a8ed11f12\") " pod="openstack/rabbitmq-cell1-server-0" Nov 25 11:59:55 crc kubenswrapper[4706]: I1125 11:59:55.940976 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/6ea2e87f-dc81-49cc-81a8-e08a8ed11f12-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"6ea2e87f-dc81-49cc-81a8-e08a8ed11f12\") " pod="openstack/rabbitmq-cell1-server-0" Nov 25 11:59:55 crc kubenswrapper[4706]: I1125 11:59:55.941029 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/6ea2e87f-dc81-49cc-81a8-e08a8ed11f12-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"6ea2e87f-dc81-49cc-81a8-e08a8ed11f12\") " pod="openstack/rabbitmq-cell1-server-0" Nov 25 11:59:55 crc kubenswrapper[4706]: I1125 11:59:55.941052 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/configmap/6ea2e87f-dc81-49cc-81a8-e08a8ed11f12-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"6ea2e87f-dc81-49cc-81a8-e08a8ed11f12\") " pod="openstack/rabbitmq-cell1-server-0" Nov 25 11:59:55 crc kubenswrapper[4706]: I1125 11:59:55.941072 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/6ea2e87f-dc81-49cc-81a8-e08a8ed11f12-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"6ea2e87f-dc81-49cc-81a8-e08a8ed11f12\") " pod="openstack/rabbitmq-cell1-server-0" Nov 25 11:59:55 crc kubenswrapper[4706]: I1125 11:59:55.941113 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9hdvl\" (UniqueName: \"kubernetes.io/projected/6ea2e87f-dc81-49cc-81a8-e08a8ed11f12-kube-api-access-9hdvl\") pod \"rabbitmq-cell1-server-0\" (UID: \"6ea2e87f-dc81-49cc-81a8-e08a8ed11f12\") " pod="openstack/rabbitmq-cell1-server-0" Nov 25 11:59:55 crc kubenswrapper[4706]: I1125 11:59:55.941136 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/6ea2e87f-dc81-49cc-81a8-e08a8ed11f12-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"6ea2e87f-dc81-49cc-81a8-e08a8ed11f12\") " pod="openstack/rabbitmq-cell1-server-0" Nov 25 11:59:55 crc kubenswrapper[4706]: I1125 11:59:55.941181 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/6ea2e87f-dc81-49cc-81a8-e08a8ed11f12-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"6ea2e87f-dc81-49cc-81a8-e08a8ed11f12\") " pod="openstack/rabbitmq-cell1-server-0" Nov 25 11:59:55 crc kubenswrapper[4706]: I1125 11:59:55.941243 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/6ea2e87f-dc81-49cc-81a8-e08a8ed11f12-rabbitmq-erlang-cookie\") pod 
\"rabbitmq-cell1-server-0\" (UID: \"6ea2e87f-dc81-49cc-81a8-e08a8ed11f12\") " pod="openstack/rabbitmq-cell1-server-0" Nov 25 11:59:55 crc kubenswrapper[4706]: I1125 11:59:55.941270 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/6ea2e87f-dc81-49cc-81a8-e08a8ed11f12-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"6ea2e87f-dc81-49cc-81a8-e08a8ed11f12\") " pod="openstack/rabbitmq-cell1-server-0" Nov 25 11:59:55 crc kubenswrapper[4706]: I1125 11:59:55.941319 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/6ea2e87f-dc81-49cc-81a8-e08a8ed11f12-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"6ea2e87f-dc81-49cc-81a8-e08a8ed11f12\") " pod="openstack/rabbitmq-cell1-server-0" Nov 25 11:59:55 crc kubenswrapper[4706]: I1125 11:59:55.941743 4706 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"6ea2e87f-dc81-49cc-81a8-e08a8ed11f12\") device mount path \"/mnt/openstack/pv11\"" pod="openstack/rabbitmq-cell1-server-0" Nov 25 11:59:55 crc kubenswrapper[4706]: I1125 11:59:55.943672 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/6ea2e87f-dc81-49cc-81a8-e08a8ed11f12-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"6ea2e87f-dc81-49cc-81a8-e08a8ed11f12\") " pod="openstack/rabbitmq-cell1-server-0" Nov 25 11:59:55 crc kubenswrapper[4706]: I1125 11:59:55.944085 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/6ea2e87f-dc81-49cc-81a8-e08a8ed11f12-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"6ea2e87f-dc81-49cc-81a8-e08a8ed11f12\") " 
pod="openstack/rabbitmq-cell1-server-0" Nov 25 11:59:55 crc kubenswrapper[4706]: I1125 11:59:55.944108 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/6ea2e87f-dc81-49cc-81a8-e08a8ed11f12-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"6ea2e87f-dc81-49cc-81a8-e08a8ed11f12\") " pod="openstack/rabbitmq-cell1-server-0" Nov 25 11:59:55 crc kubenswrapper[4706]: I1125 11:59:55.946740 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/6ea2e87f-dc81-49cc-81a8-e08a8ed11f12-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"6ea2e87f-dc81-49cc-81a8-e08a8ed11f12\") " pod="openstack/rabbitmq-cell1-server-0" Nov 25 11:59:55 crc kubenswrapper[4706]: I1125 11:59:55.947666 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/6ea2e87f-dc81-49cc-81a8-e08a8ed11f12-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"6ea2e87f-dc81-49cc-81a8-e08a8ed11f12\") " pod="openstack/rabbitmq-cell1-server-0" Nov 25 11:59:55 crc kubenswrapper[4706]: I1125 11:59:55.952335 4706 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="557c84e6-ab5c-40c1-a3e1-68b513874f9b" path="/var/lib/kubelet/pods/557c84e6-ab5c-40c1-a3e1-68b513874f9b/volumes" Nov 25 11:59:55 crc kubenswrapper[4706]: I1125 11:59:55.956277 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/6ea2e87f-dc81-49cc-81a8-e08a8ed11f12-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"6ea2e87f-dc81-49cc-81a8-e08a8ed11f12\") " pod="openstack/rabbitmq-cell1-server-0" Nov 25 11:59:55 crc kubenswrapper[4706]: I1125 11:59:55.958030 4706 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ed6df424-6b86-44a1-8157-ca1f33167065" 
path="/var/lib/kubelet/pods/ed6df424-6b86-44a1-8157-ca1f33167065/volumes" Nov 25 11:59:55 crc kubenswrapper[4706]: I1125 11:59:55.959400 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/6ea2e87f-dc81-49cc-81a8-e08a8ed11f12-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"6ea2e87f-dc81-49cc-81a8-e08a8ed11f12\") " pod="openstack/rabbitmq-cell1-server-0" Nov 25 11:59:55 crc kubenswrapper[4706]: I1125 11:59:55.961996 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/6ea2e87f-dc81-49cc-81a8-e08a8ed11f12-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"6ea2e87f-dc81-49cc-81a8-e08a8ed11f12\") " pod="openstack/rabbitmq-cell1-server-0" Nov 25 11:59:55 crc kubenswrapper[4706]: I1125 11:59:55.965331 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9hdvl\" (UniqueName: \"kubernetes.io/projected/6ea2e87f-dc81-49cc-81a8-e08a8ed11f12-kube-api-access-9hdvl\") pod \"rabbitmq-cell1-server-0\" (UID: \"6ea2e87f-dc81-49cc-81a8-e08a8ed11f12\") " pod="openstack/rabbitmq-cell1-server-0" Nov 25 11:59:55 crc kubenswrapper[4706]: I1125 11:59:55.976686 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/6ea2e87f-dc81-49cc-81a8-e08a8ed11f12-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"6ea2e87f-dc81-49cc-81a8-e08a8ed11f12\") " pod="openstack/rabbitmq-cell1-server-0" Nov 25 11:59:55 crc kubenswrapper[4706]: I1125 11:59:55.996837 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"6ea2e87f-dc81-49cc-81a8-e08a8ed11f12\") " pod="openstack/rabbitmq-cell1-server-0" Nov 25 11:59:56 crc kubenswrapper[4706]: I1125 11:59:56.118455 4706 util.go:30] "No sandbox for pod 
can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Nov 25 11:59:56 crc kubenswrapper[4706]: I1125 11:59:56.518754 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"a9a6207a-78de-492d-8c88-9a1d2a6f703d","Type":"ContainerStarted","Data":"2c142541dcc01820cfe326e978d749cec0eba6cc1eac78a378f941c419ae47bd"} Nov 25 11:59:56 crc kubenswrapper[4706]: I1125 11:59:56.566170 4706 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Nov 25 11:59:57 crc kubenswrapper[4706]: I1125 11:59:57.532306 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"6ea2e87f-dc81-49cc-81a8-e08a8ed11f12","Type":"ContainerStarted","Data":"84d9b14540c1f6620cd843b930e198d31e6a53518afbf14127426bf5ea2a274a"} Nov 25 11:59:58 crc kubenswrapper[4706]: I1125 11:59:58.550725 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"a9a6207a-78de-492d-8c88-9a1d2a6f703d","Type":"ContainerStarted","Data":"405b0d15166403ea1ce5a749ae926d8356a8fac2e09af39d61b3432832a696ce"} Nov 25 11:59:59 crc kubenswrapper[4706]: I1125 11:59:59.561666 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"6ea2e87f-dc81-49cc-81a8-e08a8ed11f12","Type":"ContainerStarted","Data":"9d199e7b84675fe385047dec9097ed09b0ada23ee15c70d716efce250b562877"} Nov 25 12:00:00 crc kubenswrapper[4706]: I1125 12:00:00.159757 4706 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29401200-kx9b7"] Nov 25 12:00:00 crc kubenswrapper[4706]: I1125 12:00:00.161797 4706 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29401200-kx9b7" Nov 25 12:00:00 crc kubenswrapper[4706]: I1125 12:00:00.164172 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Nov 25 12:00:00 crc kubenswrapper[4706]: I1125 12:00:00.164734 4706 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Nov 25 12:00:00 crc kubenswrapper[4706]: I1125 12:00:00.173088 4706 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29401200-kx9b7"] Nov 25 12:00:00 crc kubenswrapper[4706]: I1125 12:00:00.222015 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/6a3962fd-978c-4b10-9dfc-19e83a738f9c-secret-volume\") pod \"collect-profiles-29401200-kx9b7\" (UID: \"6a3962fd-978c-4b10-9dfc-19e83a738f9c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29401200-kx9b7" Nov 25 12:00:00 crc kubenswrapper[4706]: I1125 12:00:00.222074 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bsgmw\" (UniqueName: \"kubernetes.io/projected/6a3962fd-978c-4b10-9dfc-19e83a738f9c-kube-api-access-bsgmw\") pod \"collect-profiles-29401200-kx9b7\" (UID: \"6a3962fd-978c-4b10-9dfc-19e83a738f9c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29401200-kx9b7" Nov 25 12:00:00 crc kubenswrapper[4706]: I1125 12:00:00.222173 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6a3962fd-978c-4b10-9dfc-19e83a738f9c-config-volume\") pod \"collect-profiles-29401200-kx9b7\" (UID: \"6a3962fd-978c-4b10-9dfc-19e83a738f9c\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29401200-kx9b7" Nov 25 12:00:00 crc kubenswrapper[4706]: I1125 12:00:00.323180 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/6a3962fd-978c-4b10-9dfc-19e83a738f9c-secret-volume\") pod \"collect-profiles-29401200-kx9b7\" (UID: \"6a3962fd-978c-4b10-9dfc-19e83a738f9c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29401200-kx9b7" Nov 25 12:00:00 crc kubenswrapper[4706]: I1125 12:00:00.323244 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bsgmw\" (UniqueName: \"kubernetes.io/projected/6a3962fd-978c-4b10-9dfc-19e83a738f9c-kube-api-access-bsgmw\") pod \"collect-profiles-29401200-kx9b7\" (UID: \"6a3962fd-978c-4b10-9dfc-19e83a738f9c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29401200-kx9b7" Nov 25 12:00:00 crc kubenswrapper[4706]: I1125 12:00:00.323337 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6a3962fd-978c-4b10-9dfc-19e83a738f9c-config-volume\") pod \"collect-profiles-29401200-kx9b7\" (UID: \"6a3962fd-978c-4b10-9dfc-19e83a738f9c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29401200-kx9b7" Nov 25 12:00:00 crc kubenswrapper[4706]: I1125 12:00:00.324458 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6a3962fd-978c-4b10-9dfc-19e83a738f9c-config-volume\") pod \"collect-profiles-29401200-kx9b7\" (UID: \"6a3962fd-978c-4b10-9dfc-19e83a738f9c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29401200-kx9b7" Nov 25 12:00:00 crc kubenswrapper[4706]: I1125 12:00:00.329132 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/6a3962fd-978c-4b10-9dfc-19e83a738f9c-secret-volume\") pod \"collect-profiles-29401200-kx9b7\" (UID: \"6a3962fd-978c-4b10-9dfc-19e83a738f9c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29401200-kx9b7" Nov 25 12:00:00 crc kubenswrapper[4706]: I1125 12:00:00.339589 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bsgmw\" (UniqueName: \"kubernetes.io/projected/6a3962fd-978c-4b10-9dfc-19e83a738f9c-kube-api-access-bsgmw\") pod \"collect-profiles-29401200-kx9b7\" (UID: \"6a3962fd-978c-4b10-9dfc-19e83a738f9c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29401200-kx9b7" Nov 25 12:00:00 crc kubenswrapper[4706]: I1125 12:00:00.488452 4706 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-79bd4cc8c9-9s22r"] Nov 25 12:00:00 crc kubenswrapper[4706]: I1125 12:00:00.490386 4706 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-79bd4cc8c9-9s22r" Nov 25 12:00:00 crc kubenswrapper[4706]: I1125 12:00:00.494360 4706 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-edpm-ipam" Nov 25 12:00:00 crc kubenswrapper[4706]: I1125 12:00:00.514907 4706 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-79bd4cc8c9-9s22r"] Nov 25 12:00:00 crc kubenswrapper[4706]: I1125 12:00:00.519716 4706 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29401200-kx9b7" Nov 25 12:00:00 crc kubenswrapper[4706]: I1125 12:00:00.542813 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ccc8ee8d-5b46-49fa-b797-f4ae80cfe5da-ovsdbserver-sb\") pod \"dnsmasq-dns-79bd4cc8c9-9s22r\" (UID: \"ccc8ee8d-5b46-49fa-b797-f4ae80cfe5da\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-9s22r" Nov 25 12:00:00 crc kubenswrapper[4706]: I1125 12:00:00.543120 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cvxln\" (UniqueName: \"kubernetes.io/projected/ccc8ee8d-5b46-49fa-b797-f4ae80cfe5da-kube-api-access-cvxln\") pod \"dnsmasq-dns-79bd4cc8c9-9s22r\" (UID: \"ccc8ee8d-5b46-49fa-b797-f4ae80cfe5da\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-9s22r" Nov 25 12:00:00 crc kubenswrapper[4706]: I1125 12:00:00.543313 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ccc8ee8d-5b46-49fa-b797-f4ae80cfe5da-dns-svc\") pod \"dnsmasq-dns-79bd4cc8c9-9s22r\" (UID: \"ccc8ee8d-5b46-49fa-b797-f4ae80cfe5da\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-9s22r" Nov 25 12:00:00 crc kubenswrapper[4706]: I1125 12:00:00.543363 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ccc8ee8d-5b46-49fa-b797-f4ae80cfe5da-config\") pod \"dnsmasq-dns-79bd4cc8c9-9s22r\" (UID: \"ccc8ee8d-5b46-49fa-b797-f4ae80cfe5da\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-9s22r" Nov 25 12:00:00 crc kubenswrapper[4706]: I1125 12:00:00.543448 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/configmap/ccc8ee8d-5b46-49fa-b797-f4ae80cfe5da-openstack-edpm-ipam\") pod \"dnsmasq-dns-79bd4cc8c9-9s22r\" (UID: \"ccc8ee8d-5b46-49fa-b797-f4ae80cfe5da\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-9s22r" Nov 25 12:00:00 crc kubenswrapper[4706]: I1125 12:00:00.543575 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ccc8ee8d-5b46-49fa-b797-f4ae80cfe5da-ovsdbserver-nb\") pod \"dnsmasq-dns-79bd4cc8c9-9s22r\" (UID: \"ccc8ee8d-5b46-49fa-b797-f4ae80cfe5da\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-9s22r" Nov 25 12:00:00 crc kubenswrapper[4706]: I1125 12:00:00.543679 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ccc8ee8d-5b46-49fa-b797-f4ae80cfe5da-dns-swift-storage-0\") pod \"dnsmasq-dns-79bd4cc8c9-9s22r\" (UID: \"ccc8ee8d-5b46-49fa-b797-f4ae80cfe5da\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-9s22r" Nov 25 12:00:00 crc kubenswrapper[4706]: I1125 12:00:00.644934 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ccc8ee8d-5b46-49fa-b797-f4ae80cfe5da-ovsdbserver-sb\") pod \"dnsmasq-dns-79bd4cc8c9-9s22r\" (UID: \"ccc8ee8d-5b46-49fa-b797-f4ae80cfe5da\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-9s22r" Nov 25 12:00:00 crc kubenswrapper[4706]: I1125 12:00:00.645124 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cvxln\" (UniqueName: \"kubernetes.io/projected/ccc8ee8d-5b46-49fa-b797-f4ae80cfe5da-kube-api-access-cvxln\") pod \"dnsmasq-dns-79bd4cc8c9-9s22r\" (UID: \"ccc8ee8d-5b46-49fa-b797-f4ae80cfe5da\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-9s22r" Nov 25 12:00:00 crc kubenswrapper[4706]: I1125 12:00:00.645219 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" 
(UniqueName: \"kubernetes.io/configmap/ccc8ee8d-5b46-49fa-b797-f4ae80cfe5da-dns-svc\") pod \"dnsmasq-dns-79bd4cc8c9-9s22r\" (UID: \"ccc8ee8d-5b46-49fa-b797-f4ae80cfe5da\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-9s22r" Nov 25 12:00:00 crc kubenswrapper[4706]: I1125 12:00:00.645252 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ccc8ee8d-5b46-49fa-b797-f4ae80cfe5da-config\") pod \"dnsmasq-dns-79bd4cc8c9-9s22r\" (UID: \"ccc8ee8d-5b46-49fa-b797-f4ae80cfe5da\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-9s22r" Nov 25 12:00:00 crc kubenswrapper[4706]: I1125 12:00:00.645313 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/ccc8ee8d-5b46-49fa-b797-f4ae80cfe5da-openstack-edpm-ipam\") pod \"dnsmasq-dns-79bd4cc8c9-9s22r\" (UID: \"ccc8ee8d-5b46-49fa-b797-f4ae80cfe5da\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-9s22r" Nov 25 12:00:00 crc kubenswrapper[4706]: I1125 12:00:00.645373 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ccc8ee8d-5b46-49fa-b797-f4ae80cfe5da-ovsdbserver-nb\") pod \"dnsmasq-dns-79bd4cc8c9-9s22r\" (UID: \"ccc8ee8d-5b46-49fa-b797-f4ae80cfe5da\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-9s22r" Nov 25 12:00:00 crc kubenswrapper[4706]: I1125 12:00:00.645426 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ccc8ee8d-5b46-49fa-b797-f4ae80cfe5da-dns-swift-storage-0\") pod \"dnsmasq-dns-79bd4cc8c9-9s22r\" (UID: \"ccc8ee8d-5b46-49fa-b797-f4ae80cfe5da\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-9s22r" Nov 25 12:00:00 crc kubenswrapper[4706]: I1125 12:00:00.646567 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: 
\"kubernetes.io/configmap/ccc8ee8d-5b46-49fa-b797-f4ae80cfe5da-ovsdbserver-sb\") pod \"dnsmasq-dns-79bd4cc8c9-9s22r\" (UID: \"ccc8ee8d-5b46-49fa-b797-f4ae80cfe5da\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-9s22r" Nov 25 12:00:00 crc kubenswrapper[4706]: I1125 12:00:00.646611 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ccc8ee8d-5b46-49fa-b797-f4ae80cfe5da-dns-swift-storage-0\") pod \"dnsmasq-dns-79bd4cc8c9-9s22r\" (UID: \"ccc8ee8d-5b46-49fa-b797-f4ae80cfe5da\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-9s22r" Nov 25 12:00:00 crc kubenswrapper[4706]: I1125 12:00:00.647230 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ccc8ee8d-5b46-49fa-b797-f4ae80cfe5da-dns-svc\") pod \"dnsmasq-dns-79bd4cc8c9-9s22r\" (UID: \"ccc8ee8d-5b46-49fa-b797-f4ae80cfe5da\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-9s22r" Nov 25 12:00:00 crc kubenswrapper[4706]: I1125 12:00:00.647958 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ccc8ee8d-5b46-49fa-b797-f4ae80cfe5da-config\") pod \"dnsmasq-dns-79bd4cc8c9-9s22r\" (UID: \"ccc8ee8d-5b46-49fa-b797-f4ae80cfe5da\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-9s22r" Nov 25 12:00:00 crc kubenswrapper[4706]: I1125 12:00:00.648706 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/ccc8ee8d-5b46-49fa-b797-f4ae80cfe5da-openstack-edpm-ipam\") pod \"dnsmasq-dns-79bd4cc8c9-9s22r\" (UID: \"ccc8ee8d-5b46-49fa-b797-f4ae80cfe5da\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-9s22r" Nov 25 12:00:00 crc kubenswrapper[4706]: I1125 12:00:00.648833 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ccc8ee8d-5b46-49fa-b797-f4ae80cfe5da-ovsdbserver-nb\") pod 
\"dnsmasq-dns-79bd4cc8c9-9s22r\" (UID: \"ccc8ee8d-5b46-49fa-b797-f4ae80cfe5da\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-9s22r" Nov 25 12:00:00 crc kubenswrapper[4706]: I1125 12:00:00.669171 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cvxln\" (UniqueName: \"kubernetes.io/projected/ccc8ee8d-5b46-49fa-b797-f4ae80cfe5da-kube-api-access-cvxln\") pod \"dnsmasq-dns-79bd4cc8c9-9s22r\" (UID: \"ccc8ee8d-5b46-49fa-b797-f4ae80cfe5da\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-9s22r" Nov 25 12:00:00 crc kubenswrapper[4706]: I1125 12:00:00.812858 4706 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-79bd4cc8c9-9s22r" Nov 25 12:00:01 crc kubenswrapper[4706]: I1125 12:00:01.011844 4706 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29401200-kx9b7"] Nov 25 12:00:01 crc kubenswrapper[4706]: W1125 12:00:01.014429 4706 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6a3962fd_978c_4b10_9dfc_19e83a738f9c.slice/crio-98cc321d24818bb1a375418300b25d20f100cb371ea8795b8e682abf10c5efa2 WatchSource:0}: Error finding container 98cc321d24818bb1a375418300b25d20f100cb371ea8795b8e682abf10c5efa2: Status 404 returned error can't find the container with id 98cc321d24818bb1a375418300b25d20f100cb371ea8795b8e682abf10c5efa2 Nov 25 12:00:01 crc kubenswrapper[4706]: I1125 12:00:01.243405 4706 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-79bd4cc8c9-9s22r"] Nov 25 12:00:01 crc kubenswrapper[4706]: W1125 12:00:01.248867 4706 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podccc8ee8d_5b46_49fa_b797_f4ae80cfe5da.slice/crio-c1dc45c31f6e3f03505f688e33a044cb96a1badb7fbd00c060e330402377d5e8 WatchSource:0}: Error finding container 
c1dc45c31f6e3f03505f688e33a044cb96a1badb7fbd00c060e330402377d5e8: Status 404 returned error can't find the container with id c1dc45c31f6e3f03505f688e33a044cb96a1badb7fbd00c060e330402377d5e8 Nov 25 12:00:01 crc kubenswrapper[4706]: I1125 12:00:01.580815 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-79bd4cc8c9-9s22r" event={"ID":"ccc8ee8d-5b46-49fa-b797-f4ae80cfe5da","Type":"ContainerStarted","Data":"c1dc45c31f6e3f03505f688e33a044cb96a1badb7fbd00c060e330402377d5e8"} Nov 25 12:00:01 crc kubenswrapper[4706]: I1125 12:00:01.582799 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29401200-kx9b7" event={"ID":"6a3962fd-978c-4b10-9dfc-19e83a738f9c","Type":"ContainerStarted","Data":"1531a26ae612faff3acdfdcf02e009f0b100b31157cd5ebab990de2005370a84"} Nov 25 12:00:01 crc kubenswrapper[4706]: I1125 12:00:01.582823 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29401200-kx9b7" event={"ID":"6a3962fd-978c-4b10-9dfc-19e83a738f9c","Type":"ContainerStarted","Data":"98cc321d24818bb1a375418300b25d20f100cb371ea8795b8e682abf10c5efa2"} Nov 25 12:00:02 crc kubenswrapper[4706]: I1125 12:00:02.593159 4706 generic.go:334] "Generic (PLEG): container finished" podID="ccc8ee8d-5b46-49fa-b797-f4ae80cfe5da" containerID="4b42b744f794227f955c967e30a0ccb8dc7f089fce42817e02e89f7e0a3dfaed" exitCode=0 Nov 25 12:00:02 crc kubenswrapper[4706]: I1125 12:00:02.593240 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-79bd4cc8c9-9s22r" event={"ID":"ccc8ee8d-5b46-49fa-b797-f4ae80cfe5da","Type":"ContainerDied","Data":"4b42b744f794227f955c967e30a0ccb8dc7f089fce42817e02e89f7e0a3dfaed"} Nov 25 12:00:02 crc kubenswrapper[4706]: I1125 12:00:02.595607 4706 generic.go:334] "Generic (PLEG): container finished" podID="6a3962fd-978c-4b10-9dfc-19e83a738f9c" 
containerID="1531a26ae612faff3acdfdcf02e009f0b100b31157cd5ebab990de2005370a84" exitCode=0 Nov 25 12:00:02 crc kubenswrapper[4706]: I1125 12:00:02.595676 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29401200-kx9b7" event={"ID":"6a3962fd-978c-4b10-9dfc-19e83a738f9c","Type":"ContainerDied","Data":"1531a26ae612faff3acdfdcf02e009f0b100b31157cd5ebab990de2005370a84"} Nov 25 12:00:03 crc kubenswrapper[4706]: I1125 12:00:03.617884 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-79bd4cc8c9-9s22r" event={"ID":"ccc8ee8d-5b46-49fa-b797-f4ae80cfe5da","Type":"ContainerStarted","Data":"7e431cb9c6bac547fc698b4496940ce1908f0b85b3d947d5dbee648b33a819c9"} Nov 25 12:00:03 crc kubenswrapper[4706]: I1125 12:00:03.618281 4706 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-79bd4cc8c9-9s22r" Nov 25 12:00:03 crc kubenswrapper[4706]: I1125 12:00:03.645448 4706 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-79bd4cc8c9-9s22r" podStartSLOduration=3.6454293460000002 podStartE2EDuration="3.645429346s" podCreationTimestamp="2025-11-25 12:00:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 12:00:03.644714648 +0000 UTC m=+1412.559272029" watchObservedRunningTime="2025-11-25 12:00:03.645429346 +0000 UTC m=+1412.559986717" Nov 25 12:00:03 crc kubenswrapper[4706]: I1125 12:00:03.968159 4706 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29401200-kx9b7" Nov 25 12:00:04 crc kubenswrapper[4706]: I1125 12:00:04.035455 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bsgmw\" (UniqueName: \"kubernetes.io/projected/6a3962fd-978c-4b10-9dfc-19e83a738f9c-kube-api-access-bsgmw\") pod \"6a3962fd-978c-4b10-9dfc-19e83a738f9c\" (UID: \"6a3962fd-978c-4b10-9dfc-19e83a738f9c\") " Nov 25 12:00:04 crc kubenswrapper[4706]: I1125 12:00:04.035851 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/6a3962fd-978c-4b10-9dfc-19e83a738f9c-secret-volume\") pod \"6a3962fd-978c-4b10-9dfc-19e83a738f9c\" (UID: \"6a3962fd-978c-4b10-9dfc-19e83a738f9c\") " Nov 25 12:00:04 crc kubenswrapper[4706]: I1125 12:00:04.036065 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6a3962fd-978c-4b10-9dfc-19e83a738f9c-config-volume\") pod \"6a3962fd-978c-4b10-9dfc-19e83a738f9c\" (UID: \"6a3962fd-978c-4b10-9dfc-19e83a738f9c\") " Nov 25 12:00:04 crc kubenswrapper[4706]: I1125 12:00:04.036610 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6a3962fd-978c-4b10-9dfc-19e83a738f9c-config-volume" (OuterVolumeSpecName: "config-volume") pod "6a3962fd-978c-4b10-9dfc-19e83a738f9c" (UID: "6a3962fd-978c-4b10-9dfc-19e83a738f9c"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 12:00:04 crc kubenswrapper[4706]: I1125 12:00:04.040615 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6a3962fd-978c-4b10-9dfc-19e83a738f9c-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "6a3962fd-978c-4b10-9dfc-19e83a738f9c" (UID: "6a3962fd-978c-4b10-9dfc-19e83a738f9c"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 12:00:04 crc kubenswrapper[4706]: I1125 12:00:04.040693 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6a3962fd-978c-4b10-9dfc-19e83a738f9c-kube-api-access-bsgmw" (OuterVolumeSpecName: "kube-api-access-bsgmw") pod "6a3962fd-978c-4b10-9dfc-19e83a738f9c" (UID: "6a3962fd-978c-4b10-9dfc-19e83a738f9c"). InnerVolumeSpecName "kube-api-access-bsgmw". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 12:00:04 crc kubenswrapper[4706]: I1125 12:00:04.137856 4706 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bsgmw\" (UniqueName: \"kubernetes.io/projected/6a3962fd-978c-4b10-9dfc-19e83a738f9c-kube-api-access-bsgmw\") on node \"crc\" DevicePath \"\"" Nov 25 12:00:04 crc kubenswrapper[4706]: I1125 12:00:04.137888 4706 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/6a3962fd-978c-4b10-9dfc-19e83a738f9c-secret-volume\") on node \"crc\" DevicePath \"\"" Nov 25 12:00:04 crc kubenswrapper[4706]: I1125 12:00:04.137900 4706 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6a3962fd-978c-4b10-9dfc-19e83a738f9c-config-volume\") on node \"crc\" DevicePath \"\"" Nov 25 12:00:04 crc kubenswrapper[4706]: I1125 12:00:04.628417 4706 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29401200-kx9b7" Nov 25 12:00:04 crc kubenswrapper[4706]: I1125 12:00:04.628420 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29401200-kx9b7" event={"ID":"6a3962fd-978c-4b10-9dfc-19e83a738f9c","Type":"ContainerDied","Data":"98cc321d24818bb1a375418300b25d20f100cb371ea8795b8e682abf10c5efa2"} Nov 25 12:00:04 crc kubenswrapper[4706]: I1125 12:00:04.628784 4706 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="98cc321d24818bb1a375418300b25d20f100cb371ea8795b8e682abf10c5efa2" Nov 25 12:00:10 crc kubenswrapper[4706]: I1125 12:00:10.815578 4706 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-79bd4cc8c9-9s22r" Nov 25 12:00:10 crc kubenswrapper[4706]: I1125 12:00:10.888048 4706 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-89c5cd4d5-d789x"] Nov 25 12:00:10 crc kubenswrapper[4706]: I1125 12:00:10.888320 4706 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-89c5cd4d5-d789x" podUID="2fa42f2c-560b-4494-9cce-6389eae6be11" containerName="dnsmasq-dns" containerID="cri-o://7518f11f8c9365e67b6a8e516cb7efa9ec0eabeb14fc12451786d58497e93db6" gracePeriod=10 Nov 25 12:00:11 crc kubenswrapper[4706]: I1125 12:00:11.023937 4706 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-55478c4467-777cf"] Nov 25 12:00:11 crc kubenswrapper[4706]: E1125 12:00:11.024337 4706 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6a3962fd-978c-4b10-9dfc-19e83a738f9c" containerName="collect-profiles" Nov 25 12:00:11 crc kubenswrapper[4706]: I1125 12:00:11.024351 4706 state_mem.go:107] "Deleted CPUSet assignment" podUID="6a3962fd-978c-4b10-9dfc-19e83a738f9c" containerName="collect-profiles" Nov 25 12:00:11 crc kubenswrapper[4706]: I1125 12:00:11.024554 4706 
memory_manager.go:354] "RemoveStaleState removing state" podUID="6a3962fd-978c-4b10-9dfc-19e83a738f9c" containerName="collect-profiles" Nov 25 12:00:11 crc kubenswrapper[4706]: I1125 12:00:11.025514 4706 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-55478c4467-777cf" Nov 25 12:00:11 crc kubenswrapper[4706]: I1125 12:00:11.033127 4706 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-55478c4467-777cf"] Nov 25 12:00:11 crc kubenswrapper[4706]: I1125 12:00:11.075356 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3ab6dcdf-bba1-4c4c-aa91-47a06fd22366-config\") pod \"dnsmasq-dns-55478c4467-777cf\" (UID: \"3ab6dcdf-bba1-4c4c-aa91-47a06fd22366\") " pod="openstack/dnsmasq-dns-55478c4467-777cf" Nov 25 12:00:11 crc kubenswrapper[4706]: I1125 12:00:11.075420 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qm9cp\" (UniqueName: \"kubernetes.io/projected/3ab6dcdf-bba1-4c4c-aa91-47a06fd22366-kube-api-access-qm9cp\") pod \"dnsmasq-dns-55478c4467-777cf\" (UID: \"3ab6dcdf-bba1-4c4c-aa91-47a06fd22366\") " pod="openstack/dnsmasq-dns-55478c4467-777cf" Nov 25 12:00:11 crc kubenswrapper[4706]: I1125 12:00:11.075511 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/3ab6dcdf-bba1-4c4c-aa91-47a06fd22366-dns-swift-storage-0\") pod \"dnsmasq-dns-55478c4467-777cf\" (UID: \"3ab6dcdf-bba1-4c4c-aa91-47a06fd22366\") " pod="openstack/dnsmasq-dns-55478c4467-777cf" Nov 25 12:00:11 crc kubenswrapper[4706]: I1125 12:00:11.075547 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3ab6dcdf-bba1-4c4c-aa91-47a06fd22366-ovsdbserver-sb\") pod 
\"dnsmasq-dns-55478c4467-777cf\" (UID: \"3ab6dcdf-bba1-4c4c-aa91-47a06fd22366\") " pod="openstack/dnsmasq-dns-55478c4467-777cf" Nov 25 12:00:11 crc kubenswrapper[4706]: I1125 12:00:11.075588 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3ab6dcdf-bba1-4c4c-aa91-47a06fd22366-dns-svc\") pod \"dnsmasq-dns-55478c4467-777cf\" (UID: \"3ab6dcdf-bba1-4c4c-aa91-47a06fd22366\") " pod="openstack/dnsmasq-dns-55478c4467-777cf" Nov 25 12:00:11 crc kubenswrapper[4706]: I1125 12:00:11.075606 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/3ab6dcdf-bba1-4c4c-aa91-47a06fd22366-openstack-edpm-ipam\") pod \"dnsmasq-dns-55478c4467-777cf\" (UID: \"3ab6dcdf-bba1-4c4c-aa91-47a06fd22366\") " pod="openstack/dnsmasq-dns-55478c4467-777cf" Nov 25 12:00:11 crc kubenswrapper[4706]: I1125 12:00:11.075622 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3ab6dcdf-bba1-4c4c-aa91-47a06fd22366-ovsdbserver-nb\") pod \"dnsmasq-dns-55478c4467-777cf\" (UID: \"3ab6dcdf-bba1-4c4c-aa91-47a06fd22366\") " pod="openstack/dnsmasq-dns-55478c4467-777cf" Nov 25 12:00:11 crc kubenswrapper[4706]: I1125 12:00:11.177743 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3ab6dcdf-bba1-4c4c-aa91-47a06fd22366-config\") pod \"dnsmasq-dns-55478c4467-777cf\" (UID: \"3ab6dcdf-bba1-4c4c-aa91-47a06fd22366\") " pod="openstack/dnsmasq-dns-55478c4467-777cf" Nov 25 12:00:11 crc kubenswrapper[4706]: I1125 12:00:11.177829 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qm9cp\" (UniqueName: \"kubernetes.io/projected/3ab6dcdf-bba1-4c4c-aa91-47a06fd22366-kube-api-access-qm9cp\") pod 
\"dnsmasq-dns-55478c4467-777cf\" (UID: \"3ab6dcdf-bba1-4c4c-aa91-47a06fd22366\") " pod="openstack/dnsmasq-dns-55478c4467-777cf" Nov 25 12:00:11 crc kubenswrapper[4706]: I1125 12:00:11.177964 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/3ab6dcdf-bba1-4c4c-aa91-47a06fd22366-dns-swift-storage-0\") pod \"dnsmasq-dns-55478c4467-777cf\" (UID: \"3ab6dcdf-bba1-4c4c-aa91-47a06fd22366\") " pod="openstack/dnsmasq-dns-55478c4467-777cf" Nov 25 12:00:11 crc kubenswrapper[4706]: I1125 12:00:11.178007 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3ab6dcdf-bba1-4c4c-aa91-47a06fd22366-ovsdbserver-sb\") pod \"dnsmasq-dns-55478c4467-777cf\" (UID: \"3ab6dcdf-bba1-4c4c-aa91-47a06fd22366\") " pod="openstack/dnsmasq-dns-55478c4467-777cf" Nov 25 12:00:11 crc kubenswrapper[4706]: I1125 12:00:11.178086 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3ab6dcdf-bba1-4c4c-aa91-47a06fd22366-dns-svc\") pod \"dnsmasq-dns-55478c4467-777cf\" (UID: \"3ab6dcdf-bba1-4c4c-aa91-47a06fd22366\") " pod="openstack/dnsmasq-dns-55478c4467-777cf" Nov 25 12:00:11 crc kubenswrapper[4706]: I1125 12:00:11.178134 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3ab6dcdf-bba1-4c4c-aa91-47a06fd22366-ovsdbserver-nb\") pod \"dnsmasq-dns-55478c4467-777cf\" (UID: \"3ab6dcdf-bba1-4c4c-aa91-47a06fd22366\") " pod="openstack/dnsmasq-dns-55478c4467-777cf" Nov 25 12:00:11 crc kubenswrapper[4706]: I1125 12:00:11.178159 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/3ab6dcdf-bba1-4c4c-aa91-47a06fd22366-openstack-edpm-ipam\") pod \"dnsmasq-dns-55478c4467-777cf\" (UID: 
\"3ab6dcdf-bba1-4c4c-aa91-47a06fd22366\") " pod="openstack/dnsmasq-dns-55478c4467-777cf" Nov 25 12:00:11 crc kubenswrapper[4706]: I1125 12:00:11.178917 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3ab6dcdf-bba1-4c4c-aa91-47a06fd22366-config\") pod \"dnsmasq-dns-55478c4467-777cf\" (UID: \"3ab6dcdf-bba1-4c4c-aa91-47a06fd22366\") " pod="openstack/dnsmasq-dns-55478c4467-777cf" Nov 25 12:00:11 crc kubenswrapper[4706]: I1125 12:00:11.179639 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/3ab6dcdf-bba1-4c4c-aa91-47a06fd22366-openstack-edpm-ipam\") pod \"dnsmasq-dns-55478c4467-777cf\" (UID: \"3ab6dcdf-bba1-4c4c-aa91-47a06fd22366\") " pod="openstack/dnsmasq-dns-55478c4467-777cf" Nov 25 12:00:11 crc kubenswrapper[4706]: I1125 12:00:11.179729 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3ab6dcdf-bba1-4c4c-aa91-47a06fd22366-ovsdbserver-sb\") pod \"dnsmasq-dns-55478c4467-777cf\" (UID: \"3ab6dcdf-bba1-4c4c-aa91-47a06fd22366\") " pod="openstack/dnsmasq-dns-55478c4467-777cf" Nov 25 12:00:11 crc kubenswrapper[4706]: I1125 12:00:11.180381 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3ab6dcdf-bba1-4c4c-aa91-47a06fd22366-dns-svc\") pod \"dnsmasq-dns-55478c4467-777cf\" (UID: \"3ab6dcdf-bba1-4c4c-aa91-47a06fd22366\") " pod="openstack/dnsmasq-dns-55478c4467-777cf" Nov 25 12:00:11 crc kubenswrapper[4706]: I1125 12:00:11.180447 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3ab6dcdf-bba1-4c4c-aa91-47a06fd22366-ovsdbserver-nb\") pod \"dnsmasq-dns-55478c4467-777cf\" (UID: \"3ab6dcdf-bba1-4c4c-aa91-47a06fd22366\") " pod="openstack/dnsmasq-dns-55478c4467-777cf" Nov 25 12:00:11 crc 
kubenswrapper[4706]: I1125 12:00:11.180920 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/3ab6dcdf-bba1-4c4c-aa91-47a06fd22366-dns-swift-storage-0\") pod \"dnsmasq-dns-55478c4467-777cf\" (UID: \"3ab6dcdf-bba1-4c4c-aa91-47a06fd22366\") " pod="openstack/dnsmasq-dns-55478c4467-777cf" Nov 25 12:00:11 crc kubenswrapper[4706]: I1125 12:00:11.201460 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qm9cp\" (UniqueName: \"kubernetes.io/projected/3ab6dcdf-bba1-4c4c-aa91-47a06fd22366-kube-api-access-qm9cp\") pod \"dnsmasq-dns-55478c4467-777cf\" (UID: \"3ab6dcdf-bba1-4c4c-aa91-47a06fd22366\") " pod="openstack/dnsmasq-dns-55478c4467-777cf" Nov 25 12:00:11 crc kubenswrapper[4706]: I1125 12:00:11.362141 4706 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-55478c4467-777cf" Nov 25 12:00:14 crc kubenswrapper[4706]: I1125 12:00:11.698548 4706 generic.go:334] "Generic (PLEG): container finished" podID="2fa42f2c-560b-4494-9cce-6389eae6be11" containerID="7518f11f8c9365e67b6a8e516cb7efa9ec0eabeb14fc12451786d58497e93db6" exitCode=0 Nov 25 12:00:14 crc kubenswrapper[4706]: I1125 12:00:11.698644 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-89c5cd4d5-d789x" event={"ID":"2fa42f2c-560b-4494-9cce-6389eae6be11","Type":"ContainerDied","Data":"7518f11f8c9365e67b6a8e516cb7efa9ec0eabeb14fc12451786d58497e93db6"} Nov 25 12:00:14 crc kubenswrapper[4706]: W1125 12:00:11.822825 4706 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3ab6dcdf_bba1_4c4c_aa91_47a06fd22366.slice/crio-848212ec6e0d30a8bf136f35d2a52e4fb3a659b12a0d83ff9417717aaa3e6b31 WatchSource:0}: Error finding container 848212ec6e0d30a8bf136f35d2a52e4fb3a659b12a0d83ff9417717aaa3e6b31: Status 404 returned error can't find the container with id 
848212ec6e0d30a8bf136f35d2a52e4fb3a659b12a0d83ff9417717aaa3e6b31 Nov 25 12:00:14 crc kubenswrapper[4706]: I1125 12:00:11.823529 4706 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-55478c4467-777cf"] Nov 25 12:00:14 crc kubenswrapper[4706]: I1125 12:00:12.623547 4706 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-89c5cd4d5-d789x" podUID="2fa42f2c-560b-4494-9cce-6389eae6be11" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.199:5353: connect: connection refused" Nov 25 12:00:14 crc kubenswrapper[4706]: I1125 12:00:12.708902 4706 generic.go:334] "Generic (PLEG): container finished" podID="3ab6dcdf-bba1-4c4c-aa91-47a06fd22366" containerID="4ac42e6c42cea5608331f8673020cc8930395672476cae0acf5d2ee4837e2049" exitCode=0 Nov 25 12:00:14 crc kubenswrapper[4706]: I1125 12:00:12.708953 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55478c4467-777cf" event={"ID":"3ab6dcdf-bba1-4c4c-aa91-47a06fd22366","Type":"ContainerDied","Data":"4ac42e6c42cea5608331f8673020cc8930395672476cae0acf5d2ee4837e2049"} Nov 25 12:00:14 crc kubenswrapper[4706]: I1125 12:00:12.709011 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55478c4467-777cf" event={"ID":"3ab6dcdf-bba1-4c4c-aa91-47a06fd22366","Type":"ContainerStarted","Data":"848212ec6e0d30a8bf136f35d2a52e4fb3a659b12a0d83ff9417717aaa3e6b31"} Nov 25 12:00:14 crc kubenswrapper[4706]: I1125 12:00:13.720662 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55478c4467-777cf" event={"ID":"3ab6dcdf-bba1-4c4c-aa91-47a06fd22366","Type":"ContainerStarted","Data":"fce8026ca39b812003e04c031063e6027a86c57f8091803726f2edad838e7527"} Nov 25 12:00:14 crc kubenswrapper[4706]: I1125 12:00:13.721004 4706 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-55478c4467-777cf" Nov 25 12:00:14 crc kubenswrapper[4706]: I1125 12:00:13.751702 4706 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-55478c4467-777cf" podStartSLOduration=3.751684175 podStartE2EDuration="3.751684175s" podCreationTimestamp="2025-11-25 12:00:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 12:00:13.74712202 +0000 UTC m=+1422.661679421" watchObservedRunningTime="2025-11-25 12:00:13.751684175 +0000 UTC m=+1422.666241546" Nov 25 12:00:14 crc kubenswrapper[4706]: I1125 12:00:14.675618 4706 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-89c5cd4d5-d789x" Nov 25 12:00:14 crc kubenswrapper[4706]: I1125 12:00:14.730088 4706 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-89c5cd4d5-d789x" Nov 25 12:00:14 crc kubenswrapper[4706]: I1125 12:00:14.730100 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-89c5cd4d5-d789x" event={"ID":"2fa42f2c-560b-4494-9cce-6389eae6be11","Type":"ContainerDied","Data":"a16cdf0325352b68b60183b9b0f477adb2de38423cd622432d5e03a789b197c9"} Nov 25 12:00:14 crc kubenswrapper[4706]: I1125 12:00:14.730201 4706 scope.go:117] "RemoveContainer" containerID="7518f11f8c9365e67b6a8e516cb7efa9ec0eabeb14fc12451786d58497e93db6" Nov 25 12:00:14 crc kubenswrapper[4706]: I1125 12:00:14.746669 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2fa42f2c-560b-4494-9cce-6389eae6be11-config\") pod \"2fa42f2c-560b-4494-9cce-6389eae6be11\" (UID: \"2fa42f2c-560b-4494-9cce-6389eae6be11\") " Nov 25 12:00:14 crc kubenswrapper[4706]: I1125 12:00:14.746740 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zt9v7\" (UniqueName: \"kubernetes.io/projected/2fa42f2c-560b-4494-9cce-6389eae6be11-kube-api-access-zt9v7\") pod 
\"2fa42f2c-560b-4494-9cce-6389eae6be11\" (UID: \"2fa42f2c-560b-4494-9cce-6389eae6be11\") " Nov 25 12:00:14 crc kubenswrapper[4706]: I1125 12:00:14.746809 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2fa42f2c-560b-4494-9cce-6389eae6be11-ovsdbserver-sb\") pod \"2fa42f2c-560b-4494-9cce-6389eae6be11\" (UID: \"2fa42f2c-560b-4494-9cce-6389eae6be11\") " Nov 25 12:00:14 crc kubenswrapper[4706]: I1125 12:00:14.746836 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2fa42f2c-560b-4494-9cce-6389eae6be11-dns-svc\") pod \"2fa42f2c-560b-4494-9cce-6389eae6be11\" (UID: \"2fa42f2c-560b-4494-9cce-6389eae6be11\") " Nov 25 12:00:14 crc kubenswrapper[4706]: I1125 12:00:14.746911 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/2fa42f2c-560b-4494-9cce-6389eae6be11-dns-swift-storage-0\") pod \"2fa42f2c-560b-4494-9cce-6389eae6be11\" (UID: \"2fa42f2c-560b-4494-9cce-6389eae6be11\") " Nov 25 12:00:14 crc kubenswrapper[4706]: I1125 12:00:14.747035 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2fa42f2c-560b-4494-9cce-6389eae6be11-ovsdbserver-nb\") pod \"2fa42f2c-560b-4494-9cce-6389eae6be11\" (UID: \"2fa42f2c-560b-4494-9cce-6389eae6be11\") " Nov 25 12:00:14 crc kubenswrapper[4706]: I1125 12:00:14.752409 4706 scope.go:117] "RemoveContainer" containerID="0fbe29625555e82fec4c94d886c69dd38821e23b2e5893f52416c35186c28850" Nov 25 12:00:14 crc kubenswrapper[4706]: I1125 12:00:14.773722 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2fa42f2c-560b-4494-9cce-6389eae6be11-kube-api-access-zt9v7" (OuterVolumeSpecName: "kube-api-access-zt9v7") pod "2fa42f2c-560b-4494-9cce-6389eae6be11" 
(UID: "2fa42f2c-560b-4494-9cce-6389eae6be11"). InnerVolumeSpecName "kube-api-access-zt9v7". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 12:00:14 crc kubenswrapper[4706]: I1125 12:00:14.829312 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2fa42f2c-560b-4494-9cce-6389eae6be11-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "2fa42f2c-560b-4494-9cce-6389eae6be11" (UID: "2fa42f2c-560b-4494-9cce-6389eae6be11"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 12:00:14 crc kubenswrapper[4706]: I1125 12:00:14.832737 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2fa42f2c-560b-4494-9cce-6389eae6be11-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "2fa42f2c-560b-4494-9cce-6389eae6be11" (UID: "2fa42f2c-560b-4494-9cce-6389eae6be11"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 12:00:14 crc kubenswrapper[4706]: I1125 12:00:14.834520 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2fa42f2c-560b-4494-9cce-6389eae6be11-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "2fa42f2c-560b-4494-9cce-6389eae6be11" (UID: "2fa42f2c-560b-4494-9cce-6389eae6be11"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 12:00:14 crc kubenswrapper[4706]: I1125 12:00:14.840000 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2fa42f2c-560b-4494-9cce-6389eae6be11-config" (OuterVolumeSpecName: "config") pod "2fa42f2c-560b-4494-9cce-6389eae6be11" (UID: "2fa42f2c-560b-4494-9cce-6389eae6be11"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 12:00:14 crc kubenswrapper[4706]: I1125 12:00:14.844063 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2fa42f2c-560b-4494-9cce-6389eae6be11-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "2fa42f2c-560b-4494-9cce-6389eae6be11" (UID: "2fa42f2c-560b-4494-9cce-6389eae6be11"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 12:00:14 crc kubenswrapper[4706]: I1125 12:00:14.849032 4706 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2fa42f2c-560b-4494-9cce-6389eae6be11-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 25 12:00:14 crc kubenswrapper[4706]: I1125 12:00:14.849281 4706 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2fa42f2c-560b-4494-9cce-6389eae6be11-config\") on node \"crc\" DevicePath \"\"" Nov 25 12:00:14 crc kubenswrapper[4706]: I1125 12:00:14.849600 4706 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zt9v7\" (UniqueName: \"kubernetes.io/projected/2fa42f2c-560b-4494-9cce-6389eae6be11-kube-api-access-zt9v7\") on node \"crc\" DevicePath \"\"" Nov 25 12:00:14 crc kubenswrapper[4706]: I1125 12:00:14.849681 4706 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2fa42f2c-560b-4494-9cce-6389eae6be11-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 25 12:00:14 crc kubenswrapper[4706]: I1125 12:00:14.849759 4706 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2fa42f2c-560b-4494-9cce-6389eae6be11-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 25 12:00:14 crc kubenswrapper[4706]: I1125 12:00:14.849863 4706 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: 
\"kubernetes.io/configmap/2fa42f2c-560b-4494-9cce-6389eae6be11-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Nov 25 12:00:15 crc kubenswrapper[4706]: I1125 12:00:15.066564 4706 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-89c5cd4d5-d789x"] Nov 25 12:00:15 crc kubenswrapper[4706]: I1125 12:00:15.074210 4706 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-89c5cd4d5-d789x"] Nov 25 12:00:15 crc kubenswrapper[4706]: I1125 12:00:15.434339 4706 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Nov 25 12:00:15 crc kubenswrapper[4706]: E1125 12:00:15.435014 4706 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2fa42f2c-560b-4494-9cce-6389eae6be11" containerName="dnsmasq-dns" Nov 25 12:00:15 crc kubenswrapper[4706]: I1125 12:00:15.435039 4706 state_mem.go:107] "Deleted CPUSet assignment" podUID="2fa42f2c-560b-4494-9cce-6389eae6be11" containerName="dnsmasq-dns" Nov 25 12:00:15 crc kubenswrapper[4706]: E1125 12:00:15.435056 4706 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2fa42f2c-560b-4494-9cce-6389eae6be11" containerName="init" Nov 25 12:00:15 crc kubenswrapper[4706]: I1125 12:00:15.435068 4706 state_mem.go:107] "Deleted CPUSet assignment" podUID="2fa42f2c-560b-4494-9cce-6389eae6be11" containerName="init" Nov 25 12:00:15 crc kubenswrapper[4706]: I1125 12:00:15.435528 4706 memory_manager.go:354] "RemoveStaleState removing state" podUID="2fa42f2c-560b-4494-9cce-6389eae6be11" containerName="dnsmasq-dns" Nov 25 12:00:15 crc kubenswrapper[4706]: I1125 12:00:15.436617 4706 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Nov 25 12:00:15 crc kubenswrapper[4706]: I1125 12:00:15.436671 4706 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Nov 25 12:00:15 crc kubenswrapper[4706]: I1125 12:00:15.436777 4706 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Nov 25 12:00:15 crc kubenswrapper[4706]: E1125 12:00:15.437477 4706 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Nov 25 12:00:15 crc kubenswrapper[4706]: I1125 12:00:15.437508 4706 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Nov 25 12:00:15 crc kubenswrapper[4706]: E1125 12:00:15.437530 4706 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Nov 25 12:00:15 crc kubenswrapper[4706]: I1125 12:00:15.437546 4706 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Nov 25 12:00:15 crc kubenswrapper[4706]: E1125 12:00:15.437608 4706 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Nov 25 12:00:15 crc kubenswrapper[4706]: I1125 12:00:15.437621 4706 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Nov 25 12:00:15 crc kubenswrapper[4706]: E1125 12:00:15.437654 4706 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Nov 25 12:00:15 crc kubenswrapper[4706]: I1125 12:00:15.437667 4706 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Nov 25 12:00:15 crc kubenswrapper[4706]: E1125 12:00:15.437694 4706 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Nov 25 12:00:15 crc kubenswrapper[4706]: I1125 12:00:15.437706 4706 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Nov 25 12:00:15 crc kubenswrapper[4706]: E1125 12:00:15.437723 4706 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Nov 25 12:00:15 crc kubenswrapper[4706]: I1125 12:00:15.437734 4706 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Nov 25 12:00:15 crc kubenswrapper[4706]: E1125 12:00:15.437759 4706 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Nov 25 12:00:15 crc kubenswrapper[4706]: I1125 12:00:15.437771 4706 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Nov 25 12:00:15 crc kubenswrapper[4706]: I1125 12:00:15.438244 4706 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" containerID="cri-o://86001c3abc077d36ed1fa0c37bb6163896fb9cde28b58affd2f67fb8a024165b" gracePeriod=15 Nov 25 12:00:15 crc kubenswrapper[4706]: I1125 12:00:15.438292 4706 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" containerID="cri-o://c65af8b438f57256d8c22cb34f68922d628338e384ca97d694b0dbf2d41a5e27" gracePeriod=15 Nov 25 12:00:15 crc kubenswrapper[4706]: I1125 12:00:15.438240 4706 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" 
podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" containerID="cri-o://fe85a38abd8df52ad0fbd3dd6b048b8c42390b6064d3601996727dadb3fcbe69" gracePeriod=15 Nov 25 12:00:15 crc kubenswrapper[4706]: I1125 12:00:15.438282 4706 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" containerID="cri-o://24c326f147def477e6dd794576cbdc9aed69f799cc18984f475496748b05eb32" gracePeriod=15 Nov 25 12:00:15 crc kubenswrapper[4706]: I1125 12:00:15.438281 4706 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" containerID="cri-o://db08dd21321e0e49c2bcec934b9c4ca65e93ed3eff5d3d110b0137d37ebe255e" gracePeriod=15 Nov 25 12:00:15 crc kubenswrapper[4706]: I1125 12:00:15.438504 4706 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Nov 25 12:00:15 crc kubenswrapper[4706]: I1125 12:00:15.438550 4706 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Nov 25 12:00:15 crc kubenswrapper[4706]: I1125 12:00:15.438562 4706 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Nov 25 12:00:15 crc kubenswrapper[4706]: I1125 12:00:15.438579 4706 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Nov 25 12:00:15 crc kubenswrapper[4706]: I1125 12:00:15.438598 4706 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" 
containerName="kube-apiserver" Nov 25 12:00:15 crc kubenswrapper[4706]: I1125 12:00:15.438617 4706 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Nov 25 12:00:15 crc kubenswrapper[4706]: I1125 12:00:15.442682 4706 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="f4b27818a5e8e43d0dc095d08835c792" podUID="71bb4a3aecc4ba5b26c4b7318770ce13" Nov 25 12:00:15 crc kubenswrapper[4706]: I1125 12:00:15.503665 4706 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Nov 25 12:00:15 crc kubenswrapper[4706]: I1125 12:00:15.567638 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Nov 25 12:00:15 crc kubenswrapper[4706]: I1125 12:00:15.567716 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Nov 25 12:00:15 crc kubenswrapper[4706]: I1125 12:00:15.567750 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 25 12:00:15 crc kubenswrapper[4706]: I1125 12:00:15.567813 
4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Nov 25 12:00:15 crc kubenswrapper[4706]: I1125 12:00:15.567844 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Nov 25 12:00:15 crc kubenswrapper[4706]: I1125 12:00:15.567928 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 25 12:00:15 crc kubenswrapper[4706]: I1125 12:00:15.567963 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Nov 25 12:00:15 crc kubenswrapper[4706]: I1125 12:00:15.568001 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 25 12:00:15 crc 
kubenswrapper[4706]: I1125 12:00:15.669857 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Nov 25 12:00:15 crc kubenswrapper[4706]: I1125 12:00:15.670291 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Nov 25 12:00:15 crc kubenswrapper[4706]: I1125 12:00:15.670022 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Nov 25 12:00:15 crc kubenswrapper[4706]: I1125 12:00:15.670353 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 25 12:00:15 crc kubenswrapper[4706]: I1125 12:00:15.670403 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 25 12:00:15 crc kubenswrapper[4706]: I1125 12:00:15.670467 4706 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Nov 25 12:00:15 crc kubenswrapper[4706]: I1125 12:00:15.670591 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Nov 25 12:00:15 crc kubenswrapper[4706]: I1125 12:00:15.670627 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Nov 25 12:00:15 crc kubenswrapper[4706]: I1125 12:00:15.670651 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Nov 25 12:00:15 crc kubenswrapper[4706]: I1125 12:00:15.670772 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 25 12:00:15 crc kubenswrapper[4706]: I1125 12:00:15.670793 4706 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Nov 25 12:00:15 crc kubenswrapper[4706]: I1125 12:00:15.670829 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 25 12:00:15 crc kubenswrapper[4706]: I1125 12:00:15.670879 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Nov 25 12:00:15 crc kubenswrapper[4706]: I1125 12:00:15.670939 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Nov 25 12:00:15 crc kubenswrapper[4706]: I1125 12:00:15.670980 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 25 12:00:15 crc kubenswrapper[4706]: I1125 12:00:15.671068 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: 
\"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 25 12:00:15 crc kubenswrapper[4706]: I1125 12:00:15.792654 4706 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Nov 25 12:00:15 crc kubenswrapper[4706]: W1125 12:00:15.820586 4706 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf85e55b1a89d02b0cb034b1ea31ed45a.slice/crio-21aa85fb41395dfe49c9347c52ac2a8b62644d272aae6d63c265f5ef1112bd9c WatchSource:0}: Error finding container 21aa85fb41395dfe49c9347c52ac2a8b62644d272aae6d63c265f5ef1112bd9c: Status 404 returned error can't find the container with id 21aa85fb41395dfe49c9347c52ac2a8b62644d272aae6d63c265f5ef1112bd9c Nov 25 12:00:15 crc kubenswrapper[4706]: E1125 12:00:15.824231 4706 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.13:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.187b3e26e55e0751 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-11-25 12:00:15.823505233 +0000 UTC m=+1424.738062614,LastTimestamp:2025-11-25 12:00:15.823505233 +0000 UTC 
m=+1424.738062614,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Nov 25 12:00:15 crc kubenswrapper[4706]: I1125 12:00:15.938814 4706 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2fa42f2c-560b-4494-9cce-6389eae6be11" path="/var/lib/kubelet/pods/2fa42f2c-560b-4494-9cce-6389eae6be11/volumes" Nov 25 12:00:16 crc kubenswrapper[4706]: I1125 12:00:16.756423 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"b881318ecf37c6c0877dc5bf960a14691cdc03852068e3d3e7e470ddb4562aa3"} Nov 25 12:00:16 crc kubenswrapper[4706]: I1125 12:00:16.756751 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"21aa85fb41395dfe49c9347c52ac2a8b62644d272aae6d63c265f5ef1112bd9c"} Nov 25 12:00:16 crc kubenswrapper[4706]: I1125 12:00:16.758869 4706 generic.go:334] "Generic (PLEG): container finished" podID="c2b01a11-ff6e-4718-9622-3cba2728d492" containerID="c81e514baec80ddbe304ab1081e5f9f6819d7d415ae13c82e6b417787d0d852e" exitCode=0 Nov 25 12:00:16 crc kubenswrapper[4706]: I1125 12:00:16.758926 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"c2b01a11-ff6e-4718-9622-3cba2728d492","Type":"ContainerDied","Data":"c81e514baec80ddbe304ab1081e5f9f6819d7d415ae13c82e6b417787d0d852e"} Nov 25 12:00:16 crc kubenswrapper[4706]: I1125 12:00:16.759635 4706 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.13:6443: connect: connection refused" Nov 25 12:00:16 crc kubenswrapper[4706]: I1125 12:00:16.759831 4706 status_manager.go:851] "Failed to get status for pod" podUID="c2b01a11-ff6e-4718-9622-3cba2728d492" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.13:6443: connect: connection refused" Nov 25 12:00:17 crc kubenswrapper[4706]: I1125 12:00:17.122267 4706 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Nov 25 12:00:17 crc kubenswrapper[4706]: I1125 12:00:17.125216 4706 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Nov 25 12:00:17 crc kubenswrapper[4706]: I1125 12:00:17.126410 4706 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="db08dd21321e0e49c2bcec934b9c4ca65e93ed3eff5d3d110b0137d37ebe255e" exitCode=0 Nov 25 12:00:17 crc kubenswrapper[4706]: I1125 12:00:17.126467 4706 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="fe85a38abd8df52ad0fbd3dd6b048b8c42390b6064d3601996727dadb3fcbe69" exitCode=0 Nov 25 12:00:17 crc kubenswrapper[4706]: I1125 12:00:17.126487 4706 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="24c326f147def477e6dd794576cbdc9aed69f799cc18984f475496748b05eb32" exitCode=0 Nov 25 12:00:17 crc kubenswrapper[4706]: I1125 12:00:17.126501 4706 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" 
containerID="c65af8b438f57256d8c22cb34f68922d628338e384ca97d694b0dbf2d41a5e27" exitCode=2 Nov 25 12:00:17 crc kubenswrapper[4706]: I1125 12:00:17.126542 4706 scope.go:117] "RemoveContainer" containerID="333951d9a31cf3e7c1e98d27f636e2425f87cd082a8a5acae66533a76f5ad206" Nov 25 12:00:17 crc kubenswrapper[4706]: I1125 12:00:17.850659 4706 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/kube-state-metrics-0" podUID="04e7a5d0-b5fe-4a58-b015-339cc1218c6e" containerName="kube-state-metrics" probeResult="failure" output="HTTP probe failed with statuscode: 503" Nov 25 12:00:18 crc kubenswrapper[4706]: I1125 12:00:18.140464 4706 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Nov 25 12:00:18 crc kubenswrapper[4706]: I1125 12:00:18.142176 4706 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.13:6443: connect: connection refused" Nov 25 12:00:18 crc kubenswrapper[4706]: I1125 12:00:18.142385 4706 status_manager.go:851] "Failed to get status for pod" podUID="c2b01a11-ff6e-4718-9622-3cba2728d492" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.13:6443: connect: connection refused" Nov 25 12:00:18 crc kubenswrapper[4706]: I1125 12:00:18.491446 4706 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Nov 25 12:00:18 crc kubenswrapper[4706]: I1125 12:00:18.492199 4706 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.13:6443: connect: connection refused" Nov 25 12:00:18 crc kubenswrapper[4706]: I1125 12:00:18.492677 4706 status_manager.go:851] "Failed to get status for pod" podUID="c2b01a11-ff6e-4718-9622-3cba2728d492" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.13:6443: connect: connection refused" Nov 25 12:00:18 crc kubenswrapper[4706]: I1125 12:00:18.647856 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/c2b01a11-ff6e-4718-9622-3cba2728d492-var-lock\") pod \"c2b01a11-ff6e-4718-9622-3cba2728d492\" (UID: \"c2b01a11-ff6e-4718-9622-3cba2728d492\") " Nov 25 12:00:18 crc kubenswrapper[4706]: I1125 12:00:18.647963 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c2b01a11-ff6e-4718-9622-3cba2728d492-var-lock" (OuterVolumeSpecName: "var-lock") pod "c2b01a11-ff6e-4718-9622-3cba2728d492" (UID: "c2b01a11-ff6e-4718-9622-3cba2728d492"). InnerVolumeSpecName "var-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 25 12:00:18 crc kubenswrapper[4706]: I1125 12:00:18.648100 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/c2b01a11-ff6e-4718-9622-3cba2728d492-kubelet-dir\") pod \"c2b01a11-ff6e-4718-9622-3cba2728d492\" (UID: \"c2b01a11-ff6e-4718-9622-3cba2728d492\") " Nov 25 12:00:18 crc kubenswrapper[4706]: I1125 12:00:18.648183 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c2b01a11-ff6e-4718-9622-3cba2728d492-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "c2b01a11-ff6e-4718-9622-3cba2728d492" (UID: "c2b01a11-ff6e-4718-9622-3cba2728d492"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 25 12:00:18 crc kubenswrapper[4706]: I1125 12:00:18.648219 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c2b01a11-ff6e-4718-9622-3cba2728d492-kube-api-access\") pod \"c2b01a11-ff6e-4718-9622-3cba2728d492\" (UID: \"c2b01a11-ff6e-4718-9622-3cba2728d492\") " Nov 25 12:00:18 crc kubenswrapper[4706]: I1125 12:00:18.648693 4706 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/c2b01a11-ff6e-4718-9622-3cba2728d492-var-lock\") on node \"crc\" DevicePath \"\"" Nov 25 12:00:18 crc kubenswrapper[4706]: I1125 12:00:18.648715 4706 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/c2b01a11-ff6e-4718-9622-3cba2728d492-kubelet-dir\") on node \"crc\" DevicePath \"\"" Nov 25 12:00:18 crc kubenswrapper[4706]: I1125 12:00:18.653954 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c2b01a11-ff6e-4718-9622-3cba2728d492-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod 
"c2b01a11-ff6e-4718-9622-3cba2728d492" (UID: "c2b01a11-ff6e-4718-9622-3cba2728d492"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 12:00:18 crc kubenswrapper[4706]: I1125 12:00:18.750511 4706 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c2b01a11-ff6e-4718-9622-3cba2728d492-kube-api-access\") on node \"crc\" DevicePath \"\"" Nov 25 12:00:18 crc kubenswrapper[4706]: I1125 12:00:18.765941 4706 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Nov 25 12:00:18 crc kubenswrapper[4706]: I1125 12:00:18.766944 4706 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 25 12:00:18 crc kubenswrapper[4706]: I1125 12:00:18.767481 4706 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.13:6443: connect: connection refused" Nov 25 12:00:18 crc kubenswrapper[4706]: I1125 12:00:18.767793 4706 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.13:6443: connect: connection refused" Nov 25 12:00:18 crc kubenswrapper[4706]: I1125 12:00:18.768150 4706 status_manager.go:851] "Failed to get status for pod" podUID="c2b01a11-ff6e-4718-9622-3cba2728d492" pod="openshift-kube-apiserver/installer-9-crc" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.13:6443: connect: connection refused" Nov 25 12:00:18 crc kubenswrapper[4706]: I1125 12:00:18.954273 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Nov 25 12:00:18 crc kubenswrapper[4706]: I1125 12:00:18.954404 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Nov 25 12:00:18 crc kubenswrapper[4706]: I1125 12:00:18.954435 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Nov 25 12:00:18 crc kubenswrapper[4706]: I1125 12:00:18.955046 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 25 12:00:18 crc kubenswrapper[4706]: I1125 12:00:18.955082 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "cert-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 25 12:00:18 crc kubenswrapper[4706]: I1125 12:00:18.955101 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 25 12:00:19 crc kubenswrapper[4706]: I1125 12:00:19.056241 4706 reconciler_common.go:293] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") on node \"crc\" DevicePath \"\"" Nov 25 12:00:19 crc kubenswrapper[4706]: I1125 12:00:19.056273 4706 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") on node \"crc\" DevicePath \"\"" Nov 25 12:00:19 crc kubenswrapper[4706]: I1125 12:00:19.056287 4706 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") on node \"crc\" DevicePath \"\"" Nov 25 12:00:19 crc kubenswrapper[4706]: I1125 12:00:19.155412 4706 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Nov 25 12:00:19 crc kubenswrapper[4706]: I1125 12:00:19.156190 4706 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="86001c3abc077d36ed1fa0c37bb6163896fb9cde28b58affd2f67fb8a024165b" exitCode=0 Nov 25 12:00:19 crc kubenswrapper[4706]: I1125 12:00:19.156368 4706 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 25 12:00:19 crc kubenswrapper[4706]: I1125 12:00:19.156425 4706 scope.go:117] "RemoveContainer" containerID="db08dd21321e0e49c2bcec934b9c4ca65e93ed3eff5d3d110b0137d37ebe255e" Nov 25 12:00:19 crc kubenswrapper[4706]: I1125 12:00:19.161727 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"c2b01a11-ff6e-4718-9622-3cba2728d492","Type":"ContainerDied","Data":"7a346c594a4024432a9a567698dda8d50c74aee300428af8a4dd1f47296286b2"} Nov 25 12:00:19 crc kubenswrapper[4706]: I1125 12:00:19.161769 4706 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7a346c594a4024432a9a567698dda8d50c74aee300428af8a4dd1f47296286b2" Nov 25 12:00:19 crc kubenswrapper[4706]: I1125 12:00:19.161782 4706 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Nov 25 12:00:19 crc kubenswrapper[4706]: I1125 12:00:19.178914 4706 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.13:6443: connect: connection refused" Nov 25 12:00:19 crc kubenswrapper[4706]: I1125 12:00:19.179334 4706 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.13:6443: connect: connection refused" Nov 25 12:00:19 crc kubenswrapper[4706]: I1125 12:00:19.179652 4706 status_manager.go:851] "Failed to get status for pod" podUID="c2b01a11-ff6e-4718-9622-3cba2728d492" pod="openshift-kube-apiserver/installer-9-crc" 
err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.13:6443: connect: connection refused" Nov 25 12:00:19 crc kubenswrapper[4706]: I1125 12:00:19.180484 4706 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.13:6443: connect: connection refused" Nov 25 12:00:19 crc kubenswrapper[4706]: I1125 12:00:19.180789 4706 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.13:6443: connect: connection refused" Nov 25 12:00:19 crc kubenswrapper[4706]: I1125 12:00:19.181101 4706 status_manager.go:851] "Failed to get status for pod" podUID="c2b01a11-ff6e-4718-9622-3cba2728d492" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.13:6443: connect: connection refused" Nov 25 12:00:19 crc kubenswrapper[4706]: I1125 12:00:19.221784 4706 scope.go:117] "RemoveContainer" containerID="fe85a38abd8df52ad0fbd3dd6b048b8c42390b6064d3601996727dadb3fcbe69" Nov 25 12:00:19 crc kubenswrapper[4706]: I1125 12:00:19.269625 4706 scope.go:117] "RemoveContainer" containerID="24c326f147def477e6dd794576cbdc9aed69f799cc18984f475496748b05eb32" Nov 25 12:00:19 crc kubenswrapper[4706]: I1125 12:00:19.309365 4706 scope.go:117] "RemoveContainer" containerID="c65af8b438f57256d8c22cb34f68922d628338e384ca97d694b0dbf2d41a5e27" Nov 25 12:00:19 crc kubenswrapper[4706]: I1125 12:00:19.330154 4706 scope.go:117] "RemoveContainer" 
containerID="86001c3abc077d36ed1fa0c37bb6163896fb9cde28b58affd2f67fb8a024165b" Nov 25 12:00:19 crc kubenswrapper[4706]: I1125 12:00:19.357044 4706 scope.go:117] "RemoveContainer" containerID="ea87e7399f4267877f5e967eb62bd500b646e88ee8ee20a71c3bf7c0941f02a8" Nov 25 12:00:19 crc kubenswrapper[4706]: I1125 12:00:19.414082 4706 scope.go:117] "RemoveContainer" containerID="db08dd21321e0e49c2bcec934b9c4ca65e93ed3eff5d3d110b0137d37ebe255e" Nov 25 12:00:19 crc kubenswrapper[4706]: E1125 12:00:19.414509 4706 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"db08dd21321e0e49c2bcec934b9c4ca65e93ed3eff5d3d110b0137d37ebe255e\": container with ID starting with db08dd21321e0e49c2bcec934b9c4ca65e93ed3eff5d3d110b0137d37ebe255e not found: ID does not exist" containerID="db08dd21321e0e49c2bcec934b9c4ca65e93ed3eff5d3d110b0137d37ebe255e" Nov 25 12:00:19 crc kubenswrapper[4706]: I1125 12:00:19.414557 4706 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"db08dd21321e0e49c2bcec934b9c4ca65e93ed3eff5d3d110b0137d37ebe255e"} err="failed to get container status \"db08dd21321e0e49c2bcec934b9c4ca65e93ed3eff5d3d110b0137d37ebe255e\": rpc error: code = NotFound desc = could not find container \"db08dd21321e0e49c2bcec934b9c4ca65e93ed3eff5d3d110b0137d37ebe255e\": container with ID starting with db08dd21321e0e49c2bcec934b9c4ca65e93ed3eff5d3d110b0137d37ebe255e not found: ID does not exist" Nov 25 12:00:19 crc kubenswrapper[4706]: I1125 12:00:19.414582 4706 scope.go:117] "RemoveContainer" containerID="fe85a38abd8df52ad0fbd3dd6b048b8c42390b6064d3601996727dadb3fcbe69" Nov 25 12:00:19 crc kubenswrapper[4706]: E1125 12:00:19.414966 4706 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fe85a38abd8df52ad0fbd3dd6b048b8c42390b6064d3601996727dadb3fcbe69\": container with ID starting with 
fe85a38abd8df52ad0fbd3dd6b048b8c42390b6064d3601996727dadb3fcbe69 not found: ID does not exist" containerID="fe85a38abd8df52ad0fbd3dd6b048b8c42390b6064d3601996727dadb3fcbe69" Nov 25 12:00:19 crc kubenswrapper[4706]: I1125 12:00:19.415000 4706 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fe85a38abd8df52ad0fbd3dd6b048b8c42390b6064d3601996727dadb3fcbe69"} err="failed to get container status \"fe85a38abd8df52ad0fbd3dd6b048b8c42390b6064d3601996727dadb3fcbe69\": rpc error: code = NotFound desc = could not find container \"fe85a38abd8df52ad0fbd3dd6b048b8c42390b6064d3601996727dadb3fcbe69\": container with ID starting with fe85a38abd8df52ad0fbd3dd6b048b8c42390b6064d3601996727dadb3fcbe69 not found: ID does not exist" Nov 25 12:00:19 crc kubenswrapper[4706]: I1125 12:00:19.415026 4706 scope.go:117] "RemoveContainer" containerID="24c326f147def477e6dd794576cbdc9aed69f799cc18984f475496748b05eb32" Nov 25 12:00:19 crc kubenswrapper[4706]: E1125 12:00:19.415401 4706 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"24c326f147def477e6dd794576cbdc9aed69f799cc18984f475496748b05eb32\": container with ID starting with 24c326f147def477e6dd794576cbdc9aed69f799cc18984f475496748b05eb32 not found: ID does not exist" containerID="24c326f147def477e6dd794576cbdc9aed69f799cc18984f475496748b05eb32" Nov 25 12:00:19 crc kubenswrapper[4706]: I1125 12:00:19.415442 4706 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"24c326f147def477e6dd794576cbdc9aed69f799cc18984f475496748b05eb32"} err="failed to get container status \"24c326f147def477e6dd794576cbdc9aed69f799cc18984f475496748b05eb32\": rpc error: code = NotFound desc = could not find container \"24c326f147def477e6dd794576cbdc9aed69f799cc18984f475496748b05eb32\": container with ID starting with 24c326f147def477e6dd794576cbdc9aed69f799cc18984f475496748b05eb32 not found: ID does not 
exist" Nov 25 12:00:19 crc kubenswrapper[4706]: I1125 12:00:19.415473 4706 scope.go:117] "RemoveContainer" containerID="c65af8b438f57256d8c22cb34f68922d628338e384ca97d694b0dbf2d41a5e27" Nov 25 12:00:19 crc kubenswrapper[4706]: E1125 12:00:19.415762 4706 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c65af8b438f57256d8c22cb34f68922d628338e384ca97d694b0dbf2d41a5e27\": container with ID starting with c65af8b438f57256d8c22cb34f68922d628338e384ca97d694b0dbf2d41a5e27 not found: ID does not exist" containerID="c65af8b438f57256d8c22cb34f68922d628338e384ca97d694b0dbf2d41a5e27" Nov 25 12:00:19 crc kubenswrapper[4706]: I1125 12:00:19.415780 4706 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c65af8b438f57256d8c22cb34f68922d628338e384ca97d694b0dbf2d41a5e27"} err="failed to get container status \"c65af8b438f57256d8c22cb34f68922d628338e384ca97d694b0dbf2d41a5e27\": rpc error: code = NotFound desc = could not find container \"c65af8b438f57256d8c22cb34f68922d628338e384ca97d694b0dbf2d41a5e27\": container with ID starting with c65af8b438f57256d8c22cb34f68922d628338e384ca97d694b0dbf2d41a5e27 not found: ID does not exist" Nov 25 12:00:19 crc kubenswrapper[4706]: I1125 12:00:19.415793 4706 scope.go:117] "RemoveContainer" containerID="86001c3abc077d36ed1fa0c37bb6163896fb9cde28b58affd2f67fb8a024165b" Nov 25 12:00:19 crc kubenswrapper[4706]: E1125 12:00:19.416021 4706 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"86001c3abc077d36ed1fa0c37bb6163896fb9cde28b58affd2f67fb8a024165b\": container with ID starting with 86001c3abc077d36ed1fa0c37bb6163896fb9cde28b58affd2f67fb8a024165b not found: ID does not exist" containerID="86001c3abc077d36ed1fa0c37bb6163896fb9cde28b58affd2f67fb8a024165b" Nov 25 12:00:19 crc kubenswrapper[4706]: I1125 12:00:19.416042 4706 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"86001c3abc077d36ed1fa0c37bb6163896fb9cde28b58affd2f67fb8a024165b"} err="failed to get container status \"86001c3abc077d36ed1fa0c37bb6163896fb9cde28b58affd2f67fb8a024165b\": rpc error: code = NotFound desc = could not find container \"86001c3abc077d36ed1fa0c37bb6163896fb9cde28b58affd2f67fb8a024165b\": container with ID starting with 86001c3abc077d36ed1fa0c37bb6163896fb9cde28b58affd2f67fb8a024165b not found: ID does not exist" Nov 25 12:00:19 crc kubenswrapper[4706]: I1125 12:00:19.416054 4706 scope.go:117] "RemoveContainer" containerID="ea87e7399f4267877f5e967eb62bd500b646e88ee8ee20a71c3bf7c0941f02a8" Nov 25 12:00:19 crc kubenswrapper[4706]: E1125 12:00:19.416315 4706 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ea87e7399f4267877f5e967eb62bd500b646e88ee8ee20a71c3bf7c0941f02a8\": container with ID starting with ea87e7399f4267877f5e967eb62bd500b646e88ee8ee20a71c3bf7c0941f02a8 not found: ID does not exist" containerID="ea87e7399f4267877f5e967eb62bd500b646e88ee8ee20a71c3bf7c0941f02a8" Nov 25 12:00:19 crc kubenswrapper[4706]: I1125 12:00:19.416340 4706 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ea87e7399f4267877f5e967eb62bd500b646e88ee8ee20a71c3bf7c0941f02a8"} err="failed to get container status \"ea87e7399f4267877f5e967eb62bd500b646e88ee8ee20a71c3bf7c0941f02a8\": rpc error: code = NotFound desc = could not find container \"ea87e7399f4267877f5e967eb62bd500b646e88ee8ee20a71c3bf7c0941f02a8\": container with ID starting with ea87e7399f4267877f5e967eb62bd500b646e88ee8ee20a71c3bf7c0941f02a8 not found: ID does not exist" Nov 25 12:00:19 crc kubenswrapper[4706]: I1125 12:00:19.932465 4706 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f4b27818a5e8e43d0dc095d08835c792" path="/var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/volumes" Nov 25 12:00:21 crc 
kubenswrapper[4706]: I1125 12:00:21.364127 4706 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-55478c4467-777cf" Nov 25 12:00:21 crc kubenswrapper[4706]: I1125 12:00:21.365766 4706 status_manager.go:851] "Failed to get status for pod" podUID="c2b01a11-ff6e-4718-9622-3cba2728d492" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.13:6443: connect: connection refused" Nov 25 12:00:21 crc kubenswrapper[4706]: I1125 12:00:21.366218 4706 status_manager.go:851] "Failed to get status for pod" podUID="3ab6dcdf-bba1-4c4c-aa91-47a06fd22366" pod="openstack/dnsmasq-dns-55478c4467-777cf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/pods/dnsmasq-dns-55478c4467-777cf\": dial tcp 38.102.83.13:6443: connect: connection refused" Nov 25 12:00:21 crc kubenswrapper[4706]: I1125 12:00:21.366565 4706 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.13:6443: connect: connection refused" Nov 25 12:00:21 crc kubenswrapper[4706]: I1125 12:00:21.940773 4706 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.13:6443: connect: connection refused" Nov 25 12:00:21 crc kubenswrapper[4706]: I1125 12:00:21.942455 4706 status_manager.go:851] "Failed to get status for pod" podUID="c2b01a11-ff6e-4718-9622-3cba2728d492" pod="openshift-kube-apiserver/installer-9-crc" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.13:6443: connect: connection refused" Nov 25 12:00:21 crc kubenswrapper[4706]: I1125 12:00:21.943217 4706 status_manager.go:851] "Failed to get status for pod" podUID="3ab6dcdf-bba1-4c4c-aa91-47a06fd22366" pod="openstack/dnsmasq-dns-55478c4467-777cf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/pods/dnsmasq-dns-55478c4467-777cf\": dial tcp 38.102.83.13:6443: connect: connection refused" Nov 25 12:00:22 crc kubenswrapper[4706]: E1125 12:00:22.467375 4706 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.13:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.187b3e26e55e0751 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-11-25 12:00:15.823505233 +0000 UTC m=+1424.738062614,LastTimestamp:2025-11-25 12:00:15.823505233 +0000 UTC m=+1424.738062614,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Nov 25 12:00:24 crc kubenswrapper[4706]: E1125 12:00:24.263562 4706 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 
38.102.83.13:6443: connect: connection refused" Nov 25 12:00:24 crc kubenswrapper[4706]: E1125 12:00:24.264185 4706 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.13:6443: connect: connection refused" Nov 25 12:00:24 crc kubenswrapper[4706]: E1125 12:00:24.264588 4706 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.13:6443: connect: connection refused" Nov 25 12:00:24 crc kubenswrapper[4706]: E1125 12:00:24.265105 4706 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.13:6443: connect: connection refused" Nov 25 12:00:24 crc kubenswrapper[4706]: E1125 12:00:24.265625 4706 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.13:6443: connect: connection refused" Nov 25 12:00:24 crc kubenswrapper[4706]: I1125 12:00:24.265652 4706 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease" Nov 25 12:00:24 crc kubenswrapper[4706]: E1125 12:00:24.265906 4706 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.13:6443: connect: connection refused" interval="200ms" Nov 25 12:00:24 crc kubenswrapper[4706]: E1125 12:00:24.466961 4706 controller.go:145] "Failed to ensure lease exists, will retry" err="Get 
\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.13:6443: connect: connection refused" interval="400ms" Nov 25 12:00:24 crc kubenswrapper[4706]: E1125 12:00:24.868972 4706 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.13:6443: connect: connection refused" interval="800ms" Nov 25 12:00:25 crc kubenswrapper[4706]: E1125 12:00:25.670458 4706 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.13:6443: connect: connection refused" interval="1.6s" Nov 25 12:00:26 crc kubenswrapper[4706]: I1125 12:00:26.230405 4706 generic.go:334] "Generic (PLEG): container finished" podID="cdb2d830-fbc9-4336-83b7-0392051670cb" containerID="caeb4d66adfe0318a9d715726ff566dfee8083fce21ac6c0307644f0f428b707" exitCode=1 Nov 25 12:00:26 crc kubenswrapper[4706]: I1125 12:00:26.230586 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-7d76b4f6c7-xxkgj" event={"ID":"cdb2d830-fbc9-4336-83b7-0392051670cb","Type":"ContainerDied","Data":"caeb4d66adfe0318a9d715726ff566dfee8083fce21ac6c0307644f0f428b707"} Nov 25 12:00:26 crc kubenswrapper[4706]: I1125 12:00:26.231587 4706 scope.go:117] "RemoveContainer" containerID="caeb4d66adfe0318a9d715726ff566dfee8083fce21ac6c0307644f0f428b707" Nov 25 12:00:26 crc kubenswrapper[4706]: I1125 12:00:26.231734 4706 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 
38.102.83.13:6443: connect: connection refused" Nov 25 12:00:26 crc kubenswrapper[4706]: I1125 12:00:26.232190 4706 status_manager.go:851] "Failed to get status for pod" podUID="cdb2d830-fbc9-4336-83b7-0392051670cb" pod="metallb-system/metallb-operator-controller-manager-7d76b4f6c7-xxkgj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/metallb-system/pods/metallb-operator-controller-manager-7d76b4f6c7-xxkgj\": dial tcp 38.102.83.13:6443: connect: connection refused" Nov 25 12:00:26 crc kubenswrapper[4706]: I1125 12:00:26.232700 4706 status_manager.go:851] "Failed to get status for pod" podUID="c2b01a11-ff6e-4718-9622-3cba2728d492" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.13:6443: connect: connection refused" Nov 25 12:00:26 crc kubenswrapper[4706]: I1125 12:00:26.233120 4706 status_manager.go:851] "Failed to get status for pod" podUID="3ab6dcdf-bba1-4c4c-aa91-47a06fd22366" pod="openstack/dnsmasq-dns-55478c4467-777cf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/pods/dnsmasq-dns-55478c4467-777cf\": dial tcp 38.102.83.13:6443: connect: connection refused" Nov 25 12:00:27 crc kubenswrapper[4706]: I1125 12:00:27.244131 4706 generic.go:334] "Generic (PLEG): container finished" podID="cdb2d830-fbc9-4336-83b7-0392051670cb" containerID="ab70ce8aca25b2944e1164b6f8280f1185501f4e0e1177f60e946980080ac735" exitCode=1 Nov 25 12:00:27 crc kubenswrapper[4706]: I1125 12:00:27.244260 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-7d76b4f6c7-xxkgj" event={"ID":"cdb2d830-fbc9-4336-83b7-0392051670cb","Type":"ContainerDied","Data":"ab70ce8aca25b2944e1164b6f8280f1185501f4e0e1177f60e946980080ac735"} Nov 25 12:00:27 crc kubenswrapper[4706]: I1125 12:00:27.244693 4706 scope.go:117] "RemoveContainer" 
containerID="caeb4d66adfe0318a9d715726ff566dfee8083fce21ac6c0307644f0f428b707" Nov 25 12:00:27 crc kubenswrapper[4706]: I1125 12:00:27.245493 4706 scope.go:117] "RemoveContainer" containerID="ab70ce8aca25b2944e1164b6f8280f1185501f4e0e1177f60e946980080ac735" Nov 25 12:00:27 crc kubenswrapper[4706]: I1125 12:00:27.245521 4706 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.13:6443: connect: connection refused" Nov 25 12:00:27 crc kubenswrapper[4706]: E1125 12:00:27.245956 4706 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=manager pod=metallb-operator-controller-manager-7d76b4f6c7-xxkgj_metallb-system(cdb2d830-fbc9-4336-83b7-0392051670cb)\"" pod="metallb-system/metallb-operator-controller-manager-7d76b4f6c7-xxkgj" podUID="cdb2d830-fbc9-4336-83b7-0392051670cb" Nov 25 12:00:27 crc kubenswrapper[4706]: I1125 12:00:27.245939 4706 status_manager.go:851] "Failed to get status for pod" podUID="cdb2d830-fbc9-4336-83b7-0392051670cb" pod="metallb-system/metallb-operator-controller-manager-7d76b4f6c7-xxkgj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/metallb-system/pods/metallb-operator-controller-manager-7d76b4f6c7-xxkgj\": dial tcp 38.102.83.13:6443: connect: connection refused" Nov 25 12:00:27 crc kubenswrapper[4706]: I1125 12:00:27.246632 4706 status_manager.go:851] "Failed to get status for pod" podUID="c2b01a11-ff6e-4718-9622-3cba2728d492" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.13:6443: connect: connection refused" Nov 25 12:00:27 crc 
kubenswrapper[4706]: I1125 12:00:27.247100 4706 status_manager.go:851] "Failed to get status for pod" podUID="3ab6dcdf-bba1-4c4c-aa91-47a06fd22366" pod="openstack/dnsmasq-dns-55478c4467-777cf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/pods/dnsmasq-dns-55478c4467-777cf\": dial tcp 38.102.83.13:6443: connect: connection refused" Nov 25 12:00:27 crc kubenswrapper[4706]: E1125 12:00:27.272016 4706 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.13:6443: connect: connection refused" interval="3.2s" Nov 25 12:00:27 crc kubenswrapper[4706]: I1125 12:00:27.293669 4706 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-controller-manager-7d76b4f6c7-xxkgj" Nov 25 12:00:27 crc kubenswrapper[4706]: I1125 12:00:27.850811 4706 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/kube-state-metrics-0" podUID="04e7a5d0-b5fe-4a58-b015-339cc1218c6e" containerName="kube-state-metrics" probeResult="failure" output="HTTP probe failed with statuscode: 503" Nov 25 12:00:28 crc kubenswrapper[4706]: I1125 12:00:28.258234 4706 scope.go:117] "RemoveContainer" containerID="ab70ce8aca25b2944e1164b6f8280f1185501f4e0e1177f60e946980080ac735" Nov 25 12:00:28 crc kubenswrapper[4706]: E1125 12:00:28.258676 4706 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=manager pod=metallb-operator-controller-manager-7d76b4f6c7-xxkgj_metallb-system(cdb2d830-fbc9-4336-83b7-0392051670cb)\"" pod="metallb-system/metallb-operator-controller-manager-7d76b4f6c7-xxkgj" podUID="cdb2d830-fbc9-4336-83b7-0392051670cb" Nov 25 12:00:28 crc kubenswrapper[4706]: I1125 12:00:28.259043 4706 status_manager.go:851] "Failed to get status for pod" 
podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.13:6443: connect: connection refused" Nov 25 12:00:28 crc kubenswrapper[4706]: I1125 12:00:28.259784 4706 status_manager.go:851] "Failed to get status for pod" podUID="cdb2d830-fbc9-4336-83b7-0392051670cb" pod="metallb-system/metallb-operator-controller-manager-7d76b4f6c7-xxkgj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/metallb-system/pods/metallb-operator-controller-manager-7d76b4f6c7-xxkgj\": dial tcp 38.102.83.13:6443: connect: connection refused" Nov 25 12:00:28 crc kubenswrapper[4706]: I1125 12:00:28.260363 4706 status_manager.go:851] "Failed to get status for pod" podUID="c2b01a11-ff6e-4718-9622-3cba2728d492" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.13:6443: connect: connection refused" Nov 25 12:00:28 crc kubenswrapper[4706]: I1125 12:00:28.260814 4706 status_manager.go:851] "Failed to get status for pod" podUID="3ab6dcdf-bba1-4c4c-aa91-47a06fd22366" pod="openstack/dnsmasq-dns-55478c4467-777cf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/pods/dnsmasq-dns-55478c4467-777cf\": dial tcp 38.102.83.13:6443: connect: connection refused" Nov 25 12:00:29 crc kubenswrapper[4706]: I1125 12:00:29.278095 4706 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Nov 25 12:00:29 crc kubenswrapper[4706]: I1125 12:00:29.278527 4706 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" containerID="83b1d9c60793e3e0b5943d7cccd50656df78c4655b84e12c8dd1ba7d99a7990d" exitCode=1 
Nov 25 12:00:29 crc kubenswrapper[4706]: I1125 12:00:29.278579 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"83b1d9c60793e3e0b5943d7cccd50656df78c4655b84e12c8dd1ba7d99a7990d"} Nov 25 12:00:29 crc kubenswrapper[4706]: I1125 12:00:29.279608 4706 scope.go:117] "RemoveContainer" containerID="83b1d9c60793e3e0b5943d7cccd50656df78c4655b84e12c8dd1ba7d99a7990d" Nov 25 12:00:29 crc kubenswrapper[4706]: I1125 12:00:29.280008 4706 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.13:6443: connect: connection refused" Nov 25 12:00:29 crc kubenswrapper[4706]: I1125 12:00:29.280644 4706 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.13:6443: connect: connection refused" Nov 25 12:00:29 crc kubenswrapper[4706]: I1125 12:00:29.281549 4706 status_manager.go:851] "Failed to get status for pod" podUID="cdb2d830-fbc9-4336-83b7-0392051670cb" pod="metallb-system/metallb-operator-controller-manager-7d76b4f6c7-xxkgj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/metallb-system/pods/metallb-operator-controller-manager-7d76b4f6c7-xxkgj\": dial tcp 38.102.83.13:6443: connect: connection refused" Nov 25 12:00:29 crc kubenswrapper[4706]: I1125 12:00:29.283805 4706 status_manager.go:851] "Failed to get status for pod" podUID="c2b01a11-ff6e-4718-9622-3cba2728d492" pod="openshift-kube-apiserver/installer-9-crc" 
err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.13:6443: connect: connection refused" Nov 25 12:00:29 crc kubenswrapper[4706]: I1125 12:00:29.284662 4706 status_manager.go:851] "Failed to get status for pod" podUID="3ab6dcdf-bba1-4c4c-aa91-47a06fd22366" pod="openstack/dnsmasq-dns-55478c4467-777cf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/pods/dnsmasq-dns-55478c4467-777cf\": dial tcp 38.102.83.13:6443: connect: connection refused" Nov 25 12:00:29 crc kubenswrapper[4706]: I1125 12:00:29.921716 4706 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 25 12:00:29 crc kubenswrapper[4706]: I1125 12:00:29.926537 4706 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.13:6443: connect: connection refused" Nov 25 12:00:29 crc kubenswrapper[4706]: I1125 12:00:29.927433 4706 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.13:6443: connect: connection refused" Nov 25 12:00:29 crc kubenswrapper[4706]: I1125 12:00:29.928229 4706 status_manager.go:851] "Failed to get status for pod" podUID="cdb2d830-fbc9-4336-83b7-0392051670cb" pod="metallb-system/metallb-operator-controller-manager-7d76b4f6c7-xxkgj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/metallb-system/pods/metallb-operator-controller-manager-7d76b4f6c7-xxkgj\": dial tcp 38.102.83.13:6443: connect: 
connection refused" Nov 25 12:00:29 crc kubenswrapper[4706]: I1125 12:00:29.928798 4706 status_manager.go:851] "Failed to get status for pod" podUID="c2b01a11-ff6e-4718-9622-3cba2728d492" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.13:6443: connect: connection refused" Nov 25 12:00:29 crc kubenswrapper[4706]: I1125 12:00:29.928984 4706 status_manager.go:851] "Failed to get status for pod" podUID="3ab6dcdf-bba1-4c4c-aa91-47a06fd22366" pod="openstack/dnsmasq-dns-55478c4467-777cf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/pods/dnsmasq-dns-55478c4467-777cf\": dial tcp 38.102.83.13:6443: connect: connection refused" Nov 25 12:00:29 crc kubenswrapper[4706]: I1125 12:00:29.945987 4706 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="ce0e2e75-834b-46fb-bc84-229e60f904b1" Nov 25 12:00:29 crc kubenswrapper[4706]: I1125 12:00:29.946060 4706 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="ce0e2e75-834b-46fb-bc84-229e60f904b1" Nov 25 12:00:29 crc kubenswrapper[4706]: E1125 12:00:29.946554 4706 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.13:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 25 12:00:29 crc kubenswrapper[4706]: I1125 12:00:29.947153 4706 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 25 12:00:29 crc kubenswrapper[4706]: E1125 12:00:29.968139 4706 desired_state_of_world_populator.go:312] "Error processing volume" err="error processing PVC openshift-image-registry/crc-image-registry-storage: failed to fetch PVC from API server: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/persistentvolumeclaims/crc-image-registry-storage\": dial tcp 38.102.83.13:6443: connect: connection refused" pod="openshift-image-registry/image-registry-66df7c8f76-2csd2" volumeName="registry-storage" Nov 25 12:00:30 crc kubenswrapper[4706]: I1125 12:00:30.016349 4706 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 25 12:00:30 crc kubenswrapper[4706]: I1125 12:00:30.290796 4706 generic.go:334] "Generic (PLEG): container finished" podID="71bb4a3aecc4ba5b26c4b7318770ce13" containerID="607eb7a26498305a135402a5d05ed6bfbf6faf4e37afb7b842187b766cc6671e" exitCode=0 Nov 25 12:00:30 crc kubenswrapper[4706]: I1125 12:00:30.290869 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerDied","Data":"607eb7a26498305a135402a5d05ed6bfbf6faf4e37afb7b842187b766cc6671e"} Nov 25 12:00:30 crc kubenswrapper[4706]: I1125 12:00:30.290896 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"387335b7e50d4f3ad1288e7b819d9fc1b6237af0f42343ba85ceffc0d24a0a6d"} Nov 25 12:00:30 crc kubenswrapper[4706]: I1125 12:00:30.291151 4706 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="ce0e2e75-834b-46fb-bc84-229e60f904b1" Nov 25 12:00:30 crc kubenswrapper[4706]: I1125 12:00:30.291163 4706 mirror_client.go:130] "Deleting a 
mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="ce0e2e75-834b-46fb-bc84-229e60f904b1" Nov 25 12:00:30 crc kubenswrapper[4706]: E1125 12:00:30.291600 4706 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.13:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 25 12:00:30 crc kubenswrapper[4706]: I1125 12:00:30.291942 4706 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.13:6443: connect: connection refused" Nov 25 12:00:30 crc kubenswrapper[4706]: I1125 12:00:30.292251 4706 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.13:6443: connect: connection refused" Nov 25 12:00:30 crc kubenswrapper[4706]: I1125 12:00:30.292496 4706 status_manager.go:851] "Failed to get status for pod" podUID="c2b01a11-ff6e-4718-9622-3cba2728d492" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.13:6443: connect: connection refused" Nov 25 12:00:30 crc kubenswrapper[4706]: I1125 12:00:30.292754 4706 status_manager.go:851] "Failed to get status for pod" podUID="cdb2d830-fbc9-4336-83b7-0392051670cb" pod="metallb-system/metallb-operator-controller-manager-7d76b4f6c7-xxkgj" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/metallb-system/pods/metallb-operator-controller-manager-7d76b4f6c7-xxkgj\": dial tcp 38.102.83.13:6443: connect: connection refused" Nov 25 12:00:30 crc kubenswrapper[4706]: I1125 12:00:30.292969 4706 status_manager.go:851] "Failed to get status for pod" podUID="3ab6dcdf-bba1-4c4c-aa91-47a06fd22366" pod="openstack/dnsmasq-dns-55478c4467-777cf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/pods/dnsmasq-dns-55478c4467-777cf\": dial tcp 38.102.83.13:6443: connect: connection refused" Nov 25 12:00:30 crc kubenswrapper[4706]: I1125 12:00:30.302161 4706 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Nov 25 12:00:30 crc kubenswrapper[4706]: I1125 12:00:30.303434 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"974036435db73d96e085515bc74bf3f1f8548952748a0b190afc75921a7da26d"} Nov 25 12:00:30 crc kubenswrapper[4706]: I1125 12:00:30.304479 4706 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.13:6443: connect: connection refused" Nov 25 12:00:30 crc kubenswrapper[4706]: I1125 12:00:30.304967 4706 status_manager.go:851] "Failed to get status for pod" podUID="c2b01a11-ff6e-4718-9622-3cba2728d492" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.13:6443: connect: connection refused" Nov 25 12:00:30 crc kubenswrapper[4706]: 
I1125 12:00:30.305269 4706 status_manager.go:851] "Failed to get status for pod" podUID="cdb2d830-fbc9-4336-83b7-0392051670cb" pod="metallb-system/metallb-operator-controller-manager-7d76b4f6c7-xxkgj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/metallb-system/pods/metallb-operator-controller-manager-7d76b4f6c7-xxkgj\": dial tcp 38.102.83.13:6443: connect: connection refused" Nov 25 12:00:30 crc kubenswrapper[4706]: I1125 12:00:30.306492 4706 status_manager.go:851] "Failed to get status for pod" podUID="3ab6dcdf-bba1-4c4c-aa91-47a06fd22366" pod="openstack/dnsmasq-dns-55478c4467-777cf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/pods/dnsmasq-dns-55478c4467-777cf\": dial tcp 38.102.83.13:6443: connect: connection refused" Nov 25 12:00:30 crc kubenswrapper[4706]: I1125 12:00:30.306878 4706 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.13:6443: connect: connection refused" Nov 25 12:00:30 crc kubenswrapper[4706]: I1125 12:00:30.307011 4706 generic.go:334] "Generic (PLEG): container finished" podID="a9a6207a-78de-492d-8c88-9a1d2a6f703d" containerID="405b0d15166403ea1ce5a749ae926d8356a8fac2e09af39d61b3432832a696ce" exitCode=0 Nov 25 12:00:30 crc kubenswrapper[4706]: I1125 12:00:30.307073 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"a9a6207a-78de-492d-8c88-9a1d2a6f703d","Type":"ContainerDied","Data":"405b0d15166403ea1ce5a749ae926d8356a8fac2e09af39d61b3432832a696ce"} Nov 25 12:00:30 crc kubenswrapper[4706]: I1125 12:00:30.307750 4706 status_manager.go:851] "Failed to get status for pod" podUID="3ab6dcdf-bba1-4c4c-aa91-47a06fd22366" pod="openstack/dnsmasq-dns-55478c4467-777cf" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/pods/dnsmasq-dns-55478c4467-777cf\": dial tcp 38.102.83.13:6443: connect: connection refused" Nov 25 12:00:30 crc kubenswrapper[4706]: I1125 12:00:30.307963 4706 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.13:6443: connect: connection refused" Nov 25 12:00:30 crc kubenswrapper[4706]: I1125 12:00:30.308184 4706 status_manager.go:851] "Failed to get status for pod" podUID="a9a6207a-78de-492d-8c88-9a1d2a6f703d" pod="openstack/rabbitmq-server-0" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/pods/rabbitmq-server-0\": dial tcp 38.102.83.13:6443: connect: connection refused" Nov 25 12:00:30 crc kubenswrapper[4706]: I1125 12:00:30.308763 4706 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.13:6443: connect: connection refused" Nov 25 12:00:30 crc kubenswrapper[4706]: I1125 12:00:30.309329 4706 status_manager.go:851] "Failed to get status for pod" podUID="cdb2d830-fbc9-4336-83b7-0392051670cb" pod="metallb-system/metallb-operator-controller-manager-7d76b4f6c7-xxkgj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/metallb-system/pods/metallb-operator-controller-manager-7d76b4f6c7-xxkgj\": dial tcp 38.102.83.13:6443: connect: connection refused" Nov 25 12:00:30 crc kubenswrapper[4706]: I1125 12:00:30.309542 4706 status_manager.go:851] "Failed to get status for pod" podUID="c2b01a11-ff6e-4718-9622-3cba2728d492" 
pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.13:6443: connect: connection refused" Nov 25 12:00:30 crc kubenswrapper[4706]: E1125 12:00:30.375840 4706 desired_state_of_world_populator.go:312] "Error processing volume" err="error processing PVC openstack/persistence-rabbitmq-server-0: failed to fetch PVC from API server: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/persistentvolumeclaims/persistence-rabbitmq-server-0\": dial tcp 38.102.83.13:6443: connect: connection refused" pod="openstack/rabbitmq-server-0" volumeName="persistence" Nov 25 12:00:30 crc kubenswrapper[4706]: E1125 12:00:30.473012 4706 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.13:6443: connect: connection refused" interval="6.4s" Nov 25 12:00:31 crc kubenswrapper[4706]: I1125 12:00:31.320585 4706 generic.go:334] "Generic (PLEG): container finished" podID="6ea2e87f-dc81-49cc-81a8-e08a8ed11f12" containerID="9d199e7b84675fe385047dec9097ed09b0ada23ee15c70d716efce250b562877" exitCode=0 Nov 25 12:00:31 crc kubenswrapper[4706]: I1125 12:00:31.321968 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"6ea2e87f-dc81-49cc-81a8-e08a8ed11f12","Type":"ContainerDied","Data":"9d199e7b84675fe385047dec9097ed09b0ada23ee15c70d716efce250b562877"} Nov 25 12:00:31 crc kubenswrapper[4706]: I1125 12:00:31.326417 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"99459e9fd121d318c09fe37a5fa20e3c36970c527d2890ba36852fa350cfe8d8"} Nov 25 12:00:31 crc kubenswrapper[4706]: I1125 12:00:31.326464 4706 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"9107e89bcb2c42ceb8e2fee10a27a7cdb0aab990b5e706c432fa10f93141b73c"} Nov 25 12:00:31 crc kubenswrapper[4706]: I1125 12:00:31.326479 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"102ce06989bb0e8115c18d5b60586efc953cda13a94d8ed9515c63fb8f977a7a"} Nov 25 12:00:31 crc kubenswrapper[4706]: I1125 12:00:31.333618 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"a9a6207a-78de-492d-8c88-9a1d2a6f703d","Type":"ContainerStarted","Data":"11a201bdd5da84925bd682b80b5a3e25a73cc17b7aaf1a3319fdb018bc0ed560"} Nov 25 12:00:31 crc kubenswrapper[4706]: I1125 12:00:31.333834 4706 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Nov 25 12:00:32 crc kubenswrapper[4706]: I1125 12:00:32.350443 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"6ea2e87f-dc81-49cc-81a8-e08a8ed11f12","Type":"ContainerStarted","Data":"98f434a7d805300ced8646ea17ae8c5abb50612fff58ee643caaf62bcf3bb4de"} Nov 25 12:00:32 crc kubenswrapper[4706]: I1125 12:00:32.350991 4706 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Nov 25 12:00:32 crc kubenswrapper[4706]: I1125 12:00:32.354248 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"c69bcfd2fe6ca881f9aadacf941e0f5b5763f7019b5cd773da716d746a6530d0"} Nov 25 12:00:32 crc kubenswrapper[4706]: I1125 12:00:32.354313 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" 
event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"b289c173b1a0374abe37845a02c0e60d76333fc1c02ce7544b857412da1a525a"} Nov 25 12:00:32 crc kubenswrapper[4706]: I1125 12:00:32.354716 4706 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="ce0e2e75-834b-46fb-bc84-229e60f904b1" Nov 25 12:00:32 crc kubenswrapper[4706]: I1125 12:00:32.354751 4706 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="ce0e2e75-834b-46fb-bc84-229e60f904b1" Nov 25 12:00:33 crc kubenswrapper[4706]: I1125 12:00:33.770844 4706 scope.go:117] "RemoveContainer" containerID="e103b920c3e3166a3cec4818cbdc4804339d57762b5c16546942f4fc4d6c3c61" Nov 25 12:00:34 crc kubenswrapper[4706]: I1125 12:00:34.947278 4706 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 25 12:00:34 crc kubenswrapper[4706]: I1125 12:00:34.947615 4706 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 25 12:00:34 crc kubenswrapper[4706]: I1125 12:00:34.955385 4706 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 25 12:00:37 crc kubenswrapper[4706]: I1125 12:00:37.364184 4706 kubelet.go:1914] "Deleted mirror pod because it is outdated" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 25 12:00:37 crc kubenswrapper[4706]: I1125 12:00:37.400210 4706 generic.go:334] "Generic (PLEG): container finished" podID="063b2f44-faa1-4a58-b77b-f2140f569b01" containerID="49818e0aa017978b9575f26dea8f4372beabc3340d17d74cd665f3be1e9757ce" exitCode=1 Nov 25 12:00:37 crc kubenswrapper[4706]: I1125 12:00:37.400258 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-fd75fd47d-2tmzq" 
event={"ID":"063b2f44-faa1-4a58-b77b-f2140f569b01","Type":"ContainerDied","Data":"49818e0aa017978b9575f26dea8f4372beabc3340d17d74cd665f3be1e9757ce"} Nov 25 12:00:37 crc kubenswrapper[4706]: I1125 12:00:37.400933 4706 scope.go:117] "RemoveContainer" containerID="49818e0aa017978b9575f26dea8f4372beabc3340d17d74cd665f3be1e9757ce" Nov 25 12:00:37 crc kubenswrapper[4706]: I1125 12:00:37.402700 4706 generic.go:334] "Generic (PLEG): container finished" podID="6c41fff9-feeb-4311-a7ce-7da3a71b3e9c" containerID="68996614537b4d8b8f9cf530cc12d048f8db2259bff6001bebd61362965c380d" exitCode=1 Nov 25 12:00:37 crc kubenswrapper[4706]: I1125 12:00:37.402780 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-748dc6576f-nf6gr" event={"ID":"6c41fff9-feeb-4311-a7ce-7da3a71b3e9c","Type":"ContainerDied","Data":"68996614537b4d8b8f9cf530cc12d048f8db2259bff6001bebd61362965c380d"} Nov 25 12:00:37 crc kubenswrapper[4706]: I1125 12:00:37.403612 4706 scope.go:117] "RemoveContainer" containerID="68996614537b4d8b8f9cf530cc12d048f8db2259bff6001bebd61362965c380d" Nov 25 12:00:37 crc kubenswrapper[4706]: I1125 12:00:37.405190 4706 generic.go:334] "Generic (PLEG): container finished" podID="1c035858-a349-4415-8a5d-f3f2edb7c84e" containerID="b5668e24c52cbb8f3ecf02f7fbbebb42713a3ff64e9d059836e36053d49db4a1" exitCode=1 Nov 25 12:00:37 crc kubenswrapper[4706]: I1125 12:00:37.405271 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-79556f57fc-f47gl" event={"ID":"1c035858-a349-4415-8a5d-f3f2edb7c84e","Type":"ContainerDied","Data":"b5668e24c52cbb8f3ecf02f7fbbebb42713a3ff64e9d059836e36053d49db4a1"} Nov 25 12:00:37 crc kubenswrapper[4706]: I1125 12:00:37.406722 4706 scope.go:117] "RemoveContainer" containerID="b5668e24c52cbb8f3ecf02f7fbbebb42713a3ff64e9d059836e36053d49db4a1" Nov 25 12:00:37 crc kubenswrapper[4706]: I1125 12:00:37.407226 4706 generic.go:334] "Generic (PLEG): container 
finished" podID="d256078e-afd5-4218-ad5c-d5211eb846a8" containerID="f598571c9af3c528456b4d48c688d467bb4a6bd6f39e79cfac7762152ff566a9" exitCode=1 Nov 25 12:00:37 crc kubenswrapper[4706]: I1125 12:00:37.407263 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-5cb74df96-8rlr7" event={"ID":"d256078e-afd5-4218-ad5c-d5211eb846a8","Type":"ContainerDied","Data":"f598571c9af3c528456b4d48c688d467bb4a6bd6f39e79cfac7762152ff566a9"} Nov 25 12:00:37 crc kubenswrapper[4706]: I1125 12:00:37.408061 4706 scope.go:117] "RemoveContainer" containerID="f598571c9af3c528456b4d48c688d467bb4a6bd6f39e79cfac7762152ff566a9" Nov 25 12:00:37 crc kubenswrapper[4706]: I1125 12:00:37.409753 4706 generic.go:334] "Generic (PLEG): container finished" podID="e318ee27-6b61-4c03-b697-782b25461b09" containerID="3ff4e5f3eae0eb946dff910e13d82ce4a133911ccc1ff40a91d57e525b023640" exitCode=1 Nov 25 12:00:37 crc kubenswrapper[4706]: I1125 12:00:37.409801 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-544b9bb9-qg7kk" event={"ID":"e318ee27-6b61-4c03-b697-782b25461b09","Type":"ContainerDied","Data":"3ff4e5f3eae0eb946dff910e13d82ce4a133911ccc1ff40a91d57e525b023640"} Nov 25 12:00:37 crc kubenswrapper[4706]: I1125 12:00:37.410475 4706 scope.go:117] "RemoveContainer" containerID="3ff4e5f3eae0eb946dff910e13d82ce4a133911ccc1ff40a91d57e525b023640" Nov 25 12:00:37 crc kubenswrapper[4706]: I1125 12:00:37.412193 4706 generic.go:334] "Generic (PLEG): container finished" podID="6b8e15c0-a70f-4b4c-8836-a2c4e7b23f60" containerID="727cae160d2cb4b5f6c7224c124e4155d9df0a57e91d16999aed01ca19639ca4" exitCode=1 Nov 25 12:00:37 crc kubenswrapper[4706]: I1125 12:00:37.412252 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-864885998-9s7hm" 
event={"ID":"6b8e15c0-a70f-4b4c-8836-a2c4e7b23f60","Type":"ContainerDied","Data":"727cae160d2cb4b5f6c7224c124e4155d9df0a57e91d16999aed01ca19639ca4"} Nov 25 12:00:37 crc kubenswrapper[4706]: I1125 12:00:37.412694 4706 scope.go:117] "RemoveContainer" containerID="727cae160d2cb4b5f6c7224c124e4155d9df0a57e91d16999aed01ca19639ca4" Nov 25 12:00:37 crc kubenswrapper[4706]: I1125 12:00:37.414095 4706 generic.go:334] "Generic (PLEG): container finished" podID="eab1279c-c99a-450e-887b-d246a2ff01aa" containerID="6e494fc4eee18671df20af8ca16e5f73ab527d03d690991a98dcaed58360434d" exitCode=1 Nov 25 12:00:37 crc kubenswrapper[4706]: I1125 12:00:37.414160 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-5db546f9d9-k7crl" event={"ID":"eab1279c-c99a-450e-887b-d246a2ff01aa","Type":"ContainerDied","Data":"6e494fc4eee18671df20af8ca16e5f73ab527d03d690991a98dcaed58360434d"} Nov 25 12:00:37 crc kubenswrapper[4706]: I1125 12:00:37.414592 4706 scope.go:117] "RemoveContainer" containerID="6e494fc4eee18671df20af8ca16e5f73ab527d03d690991a98dcaed58360434d" Nov 25 12:00:37 crc kubenswrapper[4706]: I1125 12:00:37.419251 4706 generic.go:334] "Generic (PLEG): container finished" podID="ee655c82-6748-4bba-9da4-dcf73e0cff37" containerID="312041d5294c4c4b83b3c55de78ab9601ca611ae7d1a7c6a837f2c832f489f4d" exitCode=1 Nov 25 12:00:37 crc kubenswrapper[4706]: I1125 12:00:37.419312 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-79856dc55c-4bsmv" event={"ID":"ee655c82-6748-4bba-9da4-dcf73e0cff37","Type":"ContainerDied","Data":"312041d5294c4c4b83b3c55de78ab9601ca611ae7d1a7c6a837f2c832f489f4d"} Nov 25 12:00:37 crc kubenswrapper[4706]: I1125 12:00:37.419667 4706 scope.go:117] "RemoveContainer" containerID="312041d5294c4c4b83b3c55de78ab9601ca611ae7d1a7c6a837f2c832f489f4d" Nov 25 12:00:37 crc kubenswrapper[4706]: I1125 12:00:37.422419 4706 generic.go:334] "Generic (PLEG): 
container finished" podID="a7a52f28-6bc4-481d-8513-16dbb7b37ae1" containerID="77644a2d6098f260cde2c4b6551e02ad0c9a9044dcbb8ac87b2c7404dbfc82b3" exitCode=1 Nov 25 12:00:37 crc kubenswrapper[4706]: I1125 12:00:37.422487 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-567f98c9d-8p5t2" event={"ID":"a7a52f28-6bc4-481d-8513-16dbb7b37ae1","Type":"ContainerDied","Data":"77644a2d6098f260cde2c4b6551e02ad0c9a9044dcbb8ac87b2c7404dbfc82b3"} Nov 25 12:00:37 crc kubenswrapper[4706]: I1125 12:00:37.422751 4706 scope.go:117] "RemoveContainer" containerID="77644a2d6098f260cde2c4b6551e02ad0c9a9044dcbb8ac87b2c7404dbfc82b3" Nov 25 12:00:37 crc kubenswrapper[4706]: I1125 12:00:37.430405 4706 generic.go:334] "Generic (PLEG): container finished" podID="9e5a3424-dd89-4411-872f-70447506cf73" containerID="3202771902bb36a6847af0f308ec82e7314352f70b8b6e811ceb53ce40e0f466" exitCode=1 Nov 25 12:00:37 crc kubenswrapper[4706]: I1125 12:00:37.430903 4706 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="ce0e2e75-834b-46fb-bc84-229e60f904b1" Nov 25 12:00:37 crc kubenswrapper[4706]: I1125 12:00:37.431013 4706 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="ce0e2e75-834b-46fb-bc84-229e60f904b1" Nov 25 12:00:37 crc kubenswrapper[4706]: I1125 12:00:37.431290 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-5bfcdc958c-l4m6r" event={"ID":"9e5a3424-dd89-4411-872f-70447506cf73","Type":"ContainerDied","Data":"3202771902bb36a6847af0f308ec82e7314352f70b8b6e811ceb53ce40e0f466"} Nov 25 12:00:37 crc kubenswrapper[4706]: I1125 12:00:37.431762 4706 scope.go:117] "RemoveContainer" containerID="3202771902bb36a6847af0f308ec82e7314352f70b8b6e811ceb53ce40e0f466" Nov 25 12:00:37 crc kubenswrapper[4706]: I1125 12:00:37.432524 4706 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 25 12:00:37 crc kubenswrapper[4706]: I1125 12:00:37.472152 4706 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 25 12:00:37 crc kubenswrapper[4706]: I1125 12:00:37.486895 4706 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="94660c75-e7e7-468b-b52c-5a097a781232" Nov 25 12:00:37 crc kubenswrapper[4706]: I1125 12:00:37.851563 4706 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/kube-state-metrics-0" podUID="04e7a5d0-b5fe-4a58-b015-339cc1218c6e" containerName="kube-state-metrics" probeResult="failure" output="HTTP probe failed with statuscode: 503" Nov 25 12:00:37 crc kubenswrapper[4706]: I1125 12:00:37.851846 4706 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/kube-state-metrics-0" Nov 25 12:00:37 crc kubenswrapper[4706]: I1125 12:00:37.852513 4706 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="kube-state-metrics" containerStatusID={"Type":"cri-o","ID":"a43b93079f480147c92a5dbde6cde7fc167fb5a7be0101bce13f968d8af9b936"} pod="openstack/kube-state-metrics-0" containerMessage="Container kube-state-metrics failed liveness probe, will be restarted" Nov 25 12:00:37 crc kubenswrapper[4706]: I1125 12:00:37.852545 4706 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/kube-state-metrics-0" podUID="04e7a5d0-b5fe-4a58-b015-339cc1218c6e" containerName="kube-state-metrics" containerID="cri-o://a43b93079f480147c92a5dbde6cde7fc167fb5a7be0101bce13f968d8af9b936" gracePeriod=30 Nov 25 12:00:38 crc kubenswrapper[4706]: I1125 12:00:38.441735 4706 generic.go:334] "Generic (PLEG): container finished" podID="5726a389-32eb-4f0c-938b-6f2ddbb762e7" 
containerID="a4f76f11e3a12d3ed74cd38d05e887277ad85a13b0e7f5c7c2a40389bbde69f2" exitCode=1 Nov 25 12:00:38 crc kubenswrapper[4706]: I1125 12:00:38.441796 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-x9x4q" event={"ID":"5726a389-32eb-4f0c-938b-6f2ddbb762e7","Type":"ContainerDied","Data":"a4f76f11e3a12d3ed74cd38d05e887277ad85a13b0e7f5c7c2a40389bbde69f2"} Nov 25 12:00:38 crc kubenswrapper[4706]: I1125 12:00:38.442682 4706 scope.go:117] "RemoveContainer" containerID="a4f76f11e3a12d3ed74cd38d05e887277ad85a13b0e7f5c7c2a40389bbde69f2" Nov 25 12:00:38 crc kubenswrapper[4706]: I1125 12:00:38.448276 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-748dc6576f-nf6gr" event={"ID":"6c41fff9-feeb-4311-a7ce-7da3a71b3e9c","Type":"ContainerStarted","Data":"8bcc6c66d2003de20e3894ed5e4c0c7fa24621413e086dd790686ba63d835134"} Nov 25 12:00:38 crc kubenswrapper[4706]: I1125 12:00:38.471934 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-5bfcdc958c-l4m6r" event={"ID":"9e5a3424-dd89-4411-872f-70447506cf73","Type":"ContainerStarted","Data":"18f0cfcff6c07f2ca4cccd7935e7fdd089c5403b99b18d48a7835dbcfb895cec"} Nov 25 12:00:38 crc kubenswrapper[4706]: I1125 12:00:38.473224 4706 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ironic-operator-controller-manager-5bfcdc958c-l4m6r" Nov 25 12:00:38 crc kubenswrapper[4706]: I1125 12:00:38.482677 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-5cb74df96-8rlr7" event={"ID":"d256078e-afd5-4218-ad5c-d5211eb846a8","Type":"ContainerStarted","Data":"209957f565192be98f1e8b4bdd7d1ccfe807f772622ec1f9ea9f192faaa0eec4"} Nov 25 12:00:38 crc kubenswrapper[4706]: I1125 12:00:38.483711 4706 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack-operators/test-operator-controller-manager-5cb74df96-8rlr7" Nov 25 12:00:38 crc kubenswrapper[4706]: I1125 12:00:38.494888 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-fd75fd47d-2tmzq" event={"ID":"063b2f44-faa1-4a58-b77b-f2140f569b01","Type":"ContainerStarted","Data":"382e456c6fbd763bd7078807a9f97276eee0e98a5f9e81429cf721d7d43cbf64"} Nov 25 12:00:38 crc kubenswrapper[4706]: I1125 12:00:38.495554 4706 scope.go:117] "RemoveContainer" containerID="382e456c6fbd763bd7078807a9f97276eee0e98a5f9e81429cf721d7d43cbf64" Nov 25 12:00:38 crc kubenswrapper[4706]: E1125 12:00:38.495846 4706 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=manager pod=octavia-operator-controller-manager-fd75fd47d-2tmzq_openstack-operators(063b2f44-faa1-4a58-b77b-f2140f569b01)\"" pod="openstack-operators/octavia-operator-controller-manager-fd75fd47d-2tmzq" podUID="063b2f44-faa1-4a58-b77b-f2140f569b01" Nov 25 12:00:38 crc kubenswrapper[4706]: I1125 12:00:38.510419 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-864885998-9s7hm" event={"ID":"6b8e15c0-a70f-4b4c-8836-a2c4e7b23f60","Type":"ContainerStarted","Data":"98356e4566939db6aa79c8b5c2952865d0a73175246366956905475dff958f76"} Nov 25 12:00:38 crc kubenswrapper[4706]: I1125 12:00:38.511027 4706 scope.go:117] "RemoveContainer" containerID="98356e4566939db6aa79c8b5c2952865d0a73175246366956905475dff958f76" Nov 25 12:00:38 crc kubenswrapper[4706]: E1125 12:00:38.511256 4706 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=manager pod=watcher-operator-controller-manager-864885998-9s7hm_openstack-operators(6b8e15c0-a70f-4b4c-8836-a2c4e7b23f60)\"" 
pod="openstack-operators/watcher-operator-controller-manager-864885998-9s7hm" podUID="6b8e15c0-a70f-4b4c-8836-a2c4e7b23f60" Nov 25 12:00:38 crc kubenswrapper[4706]: I1125 12:00:38.521737 4706 generic.go:334] "Generic (PLEG): container finished" podID="04e7a5d0-b5fe-4a58-b015-339cc1218c6e" containerID="a43b93079f480147c92a5dbde6cde7fc167fb5a7be0101bce13f968d8af9b936" exitCode=2 Nov 25 12:00:38 crc kubenswrapper[4706]: I1125 12:00:38.521817 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"04e7a5d0-b5fe-4a58-b015-339cc1218c6e","Type":"ContainerDied","Data":"a43b93079f480147c92a5dbde6cde7fc167fb5a7be0101bce13f968d8af9b936"} Nov 25 12:00:38 crc kubenswrapper[4706]: I1125 12:00:38.522050 4706 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="ce0e2e75-834b-46fb-bc84-229e60f904b1" Nov 25 12:00:38 crc kubenswrapper[4706]: I1125 12:00:38.522065 4706 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="ce0e2e75-834b-46fb-bc84-229e60f904b1" Nov 25 12:00:38 crc kubenswrapper[4706]: I1125 12:00:38.564351 4706 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="94660c75-e7e7-468b-b52c-5a097a781232" Nov 25 12:00:38 crc kubenswrapper[4706]: I1125 12:00:38.747147 4706 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack-operators/openstack-baremetal-operator-controller-manager-544b9bb9-qg7kk" Nov 25 12:00:38 crc kubenswrapper[4706]: I1125 12:00:38.747215 4706 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-baremetal-operator-controller-manager-544b9bb9-qg7kk" Nov 25 12:00:39 crc kubenswrapper[4706]: I1125 12:00:39.117341 4706 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 25 12:00:39 crc kubenswrapper[4706]: I1125 12:00:39.117637 4706 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" start-of-body= Nov 25 12:00:39 crc kubenswrapper[4706]: I1125 12:00:39.117754 4706 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" Nov 25 12:00:39 crc kubenswrapper[4706]: I1125 12:00:39.391703 4706 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/openstack-operator-controller-manager-9cb9fb586-5854z" podUID="2a90e9e4-814b-4c09-a6d3-f7ad3792f6b1" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.90:8081/readyz\": dial tcp 10.217.0.90:8081: connect: connection refused" Nov 25 12:00:39 crc kubenswrapper[4706]: I1125 12:00:39.391767 4706 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/openstack-operator-controller-manager-9cb9fb586-5854z" podUID="2a90e9e4-814b-4c09-a6d3-f7ad3792f6b1" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.90:8081/healthz\": dial tcp 10.217.0.90:8081: connect: connection refused" Nov 25 12:00:39 crc kubenswrapper[4706]: I1125 12:00:39.535460 4706 generic.go:334] "Generic (PLEG): container finished" podID="3c582966-ab32-499d-8f1c-95c942dd6bb4" containerID="1c4344b8b04c4ceec82bad456d74fd47040eef6a9f76f1d60a95a4a90b0fdad9" exitCode=1 Nov 25 12:00:39 crc kubenswrapper[4706]: I1125 12:00:39.535581 4706 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openstack-operators/neutron-operator-controller-manager-7c57c8bbc4-tfn29" event={"ID":"3c582966-ab32-499d-8f1c-95c942dd6bb4","Type":"ContainerDied","Data":"1c4344b8b04c4ceec82bad456d74fd47040eef6a9f76f1d60a95a4a90b0fdad9"} Nov 25 12:00:39 crc kubenswrapper[4706]: I1125 12:00:39.536504 4706 scope.go:117] "RemoveContainer" containerID="1c4344b8b04c4ceec82bad456d74fd47040eef6a9f76f1d60a95a4a90b0fdad9" Nov 25 12:00:39 crc kubenswrapper[4706]: I1125 12:00:39.538849 4706 generic.go:334] "Generic (PLEG): container finished" podID="eab1279c-c99a-450e-887b-d246a2ff01aa" containerID="5aa2d062bee571f40f50fb1d425672051c914cfcd57df5100254b6b32c8ee09c" exitCode=1 Nov 25 12:00:39 crc kubenswrapper[4706]: I1125 12:00:39.538934 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-5db546f9d9-k7crl" event={"ID":"eab1279c-c99a-450e-887b-d246a2ff01aa","Type":"ContainerDied","Data":"5aa2d062bee571f40f50fb1d425672051c914cfcd57df5100254b6b32c8ee09c"} Nov 25 12:00:39 crc kubenswrapper[4706]: I1125 12:00:39.539004 4706 scope.go:117] "RemoveContainer" containerID="6e494fc4eee18671df20af8ca16e5f73ab527d03d690991a98dcaed58360434d" Nov 25 12:00:39 crc kubenswrapper[4706]: I1125 12:00:39.539397 4706 scope.go:117] "RemoveContainer" containerID="5aa2d062bee571f40f50fb1d425672051c914cfcd57df5100254b6b32c8ee09c" Nov 25 12:00:39 crc kubenswrapper[4706]: E1125 12:00:39.539618 4706 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=manager pod=placement-operator-controller-manager-5db546f9d9-k7crl_openstack-operators(eab1279c-c99a-450e-887b-d246a2ff01aa)\"" pod="openstack-operators/placement-operator-controller-manager-5db546f9d9-k7crl" podUID="eab1279c-c99a-450e-887b-d246a2ff01aa" Nov 25 12:00:39 crc kubenswrapper[4706]: I1125 12:00:39.541771 4706 generic.go:334] "Generic (PLEG): container finished" 
podID="4857e509-acac-422c-87e8-2662708da599" containerID="a3fab4850794bd28ca3ba88d877ddf98f3e4822e0f4620b74501334d09426807" exitCode=1 Nov 25 12:00:39 crc kubenswrapper[4706]: I1125 12:00:39.541910 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-68b95954c9-t6c78" event={"ID":"4857e509-acac-422c-87e8-2662708da599","Type":"ContainerDied","Data":"a3fab4850794bd28ca3ba88d877ddf98f3e4822e0f4620b74501334d09426807"} Nov 25 12:00:39 crc kubenswrapper[4706]: I1125 12:00:39.542455 4706 scope.go:117] "RemoveContainer" containerID="a3fab4850794bd28ca3ba88d877ddf98f3e4822e0f4620b74501334d09426807" Nov 25 12:00:39 crc kubenswrapper[4706]: I1125 12:00:39.545759 4706 generic.go:334] "Generic (PLEG): container finished" podID="9e5a3424-dd89-4411-872f-70447506cf73" containerID="18f0cfcff6c07f2ca4cccd7935e7fdd089c5403b99b18d48a7835dbcfb895cec" exitCode=1 Nov 25 12:00:39 crc kubenswrapper[4706]: I1125 12:00:39.545922 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-5bfcdc958c-l4m6r" event={"ID":"9e5a3424-dd89-4411-872f-70447506cf73","Type":"ContainerDied","Data":"18f0cfcff6c07f2ca4cccd7935e7fdd089c5403b99b18d48a7835dbcfb895cec"} Nov 25 12:00:39 crc kubenswrapper[4706]: I1125 12:00:39.546783 4706 scope.go:117] "RemoveContainer" containerID="18f0cfcff6c07f2ca4cccd7935e7fdd089c5403b99b18d48a7835dbcfb895cec" Nov 25 12:00:39 crc kubenswrapper[4706]: E1125 12:00:39.547334 4706 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=manager pod=ironic-operator-controller-manager-5bfcdc958c-l4m6r_openstack-operators(9e5a3424-dd89-4411-872f-70447506cf73)\"" pod="openstack-operators/ironic-operator-controller-manager-5bfcdc958c-l4m6r" podUID="9e5a3424-dd89-4411-872f-70447506cf73" Nov 25 12:00:39 crc kubenswrapper[4706]: I1125 12:00:39.551610 4706 
generic.go:334] "Generic (PLEG): container finished" podID="2df5f121-0564-4647-acf6-d09283ff5a94" containerID="e44190ab5cbcff354325f815ddbbd371958307bff570571987c74582e97363b1" exitCode=1 Nov 25 12:00:39 crc kubenswrapper[4706]: I1125 12:00:39.551716 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-operator-5789f9b844-cfvkd" event={"ID":"2df5f121-0564-4647-acf6-d09283ff5a94","Type":"ContainerDied","Data":"e44190ab5cbcff354325f815ddbbd371958307bff570571987c74582e97363b1"} Nov 25 12:00:39 crc kubenswrapper[4706]: I1125 12:00:39.552404 4706 scope.go:117] "RemoveContainer" containerID="e44190ab5cbcff354325f815ddbbd371958307bff570571987c74582e97363b1" Nov 25 12:00:39 crc kubenswrapper[4706]: I1125 12:00:39.555119 4706 generic.go:334] "Generic (PLEG): container finished" podID="9fa65252-7bf5-4e83-beb7-dfcfa63db10d" containerID="126cca5a246b8e52e5ac0d4a31f6fa218a7942f9dad0193ce336826b864a793e" exitCode=1 Nov 25 12:00:39 crc kubenswrapper[4706]: I1125 12:00:39.555170 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-7d695c9b56-hqsp5" event={"ID":"9fa65252-7bf5-4e83-beb7-dfcfa63db10d","Type":"ContainerDied","Data":"126cca5a246b8e52e5ac0d4a31f6fa218a7942f9dad0193ce336826b864a793e"} Nov 25 12:00:39 crc kubenswrapper[4706]: I1125 12:00:39.555711 4706 scope.go:117] "RemoveContainer" containerID="126cca5a246b8e52e5ac0d4a31f6fa218a7942f9dad0193ce336826b864a793e" Nov 25 12:00:39 crc kubenswrapper[4706]: I1125 12:00:39.559382 4706 generic.go:334] "Generic (PLEG): container finished" podID="063b2f44-faa1-4a58-b77b-f2140f569b01" containerID="382e456c6fbd763bd7078807a9f97276eee0e98a5f9e81429cf721d7d43cbf64" exitCode=1 Nov 25 12:00:39 crc kubenswrapper[4706]: I1125 12:00:39.559448 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-fd75fd47d-2tmzq" 
event={"ID":"063b2f44-faa1-4a58-b77b-f2140f569b01","Type":"ContainerDied","Data":"382e456c6fbd763bd7078807a9f97276eee0e98a5f9e81429cf721d7d43cbf64"} Nov 25 12:00:39 crc kubenswrapper[4706]: I1125 12:00:39.560216 4706 scope.go:117] "RemoveContainer" containerID="382e456c6fbd763bd7078807a9f97276eee0e98a5f9e81429cf721d7d43cbf64" Nov 25 12:00:39 crc kubenswrapper[4706]: E1125 12:00:39.560548 4706 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=manager pod=octavia-operator-controller-manager-fd75fd47d-2tmzq_openstack-operators(063b2f44-faa1-4a58-b77b-f2140f569b01)\"" pod="openstack-operators/octavia-operator-controller-manager-fd75fd47d-2tmzq" podUID="063b2f44-faa1-4a58-b77b-f2140f569b01" Nov 25 12:00:39 crc kubenswrapper[4706]: I1125 12:00:39.566762 4706 generic.go:334] "Generic (PLEG): container finished" podID="04e7a5d0-b5fe-4a58-b015-339cc1218c6e" containerID="79f9d89704437bc055794a07cdf075a5908c012c2571fa608b8963523579851c" exitCode=1 Nov 25 12:00:39 crc kubenswrapper[4706]: I1125 12:00:39.566885 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"04e7a5d0-b5fe-4a58-b015-339cc1218c6e","Type":"ContainerDied","Data":"79f9d89704437bc055794a07cdf075a5908c012c2571fa608b8963523579851c"} Nov 25 12:00:39 crc kubenswrapper[4706]: I1125 12:00:39.567431 4706 scope.go:117] "RemoveContainer" containerID="79f9d89704437bc055794a07cdf075a5908c012c2571fa608b8963523579851c" Nov 25 12:00:39 crc kubenswrapper[4706]: I1125 12:00:39.575650 4706 generic.go:334] "Generic (PLEG): container finished" podID="2a90e9e4-814b-4c09-a6d3-f7ad3792f6b1" containerID="1e58195af2efe7fbff79413b9c95bbeec15ed12b8f39f76667ab5de3c4ffdf54" exitCode=1 Nov 25 12:00:39 crc kubenswrapper[4706]: I1125 12:00:39.575789 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/openstack-operator-controller-manager-9cb9fb586-5854z" event={"ID":"2a90e9e4-814b-4c09-a6d3-f7ad3792f6b1","Type":"ContainerDied","Data":"1e58195af2efe7fbff79413b9c95bbeec15ed12b8f39f76667ab5de3c4ffdf54"} Nov 25 12:00:39 crc kubenswrapper[4706]: I1125 12:00:39.576977 4706 scope.go:117] "RemoveContainer" containerID="1e58195af2efe7fbff79413b9c95bbeec15ed12b8f39f76667ab5de3c4ffdf54" Nov 25 12:00:39 crc kubenswrapper[4706]: I1125 12:00:39.604961 4706 generic.go:334] "Generic (PLEG): container finished" podID="23155e14-a775-48c5-adf9-55dcfd008040" containerID="c2c4e1bb27ca7d9c5c5b1c7f8f4ed76c65b60e421b1f9b74443af46355e7dbac" exitCode=1 Nov 25 12:00:39 crc kubenswrapper[4706]: I1125 12:00:39.605018 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-86dc4d89c8-jh5hc" event={"ID":"23155e14-a775-48c5-adf9-55dcfd008040","Type":"ContainerDied","Data":"c2c4e1bb27ca7d9c5c5b1c7f8f4ed76c65b60e421b1f9b74443af46355e7dbac"} Nov 25 12:00:39 crc kubenswrapper[4706]: I1125 12:00:39.605645 4706 scope.go:117] "RemoveContainer" containerID="c2c4e1bb27ca7d9c5c5b1c7f8f4ed76c65b60e421b1f9b74443af46355e7dbac" Nov 25 12:00:39 crc kubenswrapper[4706]: I1125 12:00:39.616044 4706 generic.go:334] "Generic (PLEG): container finished" podID="1c035858-a349-4415-8a5d-f3f2edb7c84e" containerID="bbcbd5e92b3c8020116399644b123c6a0ecf44834665b167b35151fb974c3f10" exitCode=1 Nov 25 12:00:39 crc kubenswrapper[4706]: I1125 12:00:39.616125 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-79556f57fc-f47gl" event={"ID":"1c035858-a349-4415-8a5d-f3f2edb7c84e","Type":"ContainerDied","Data":"bbcbd5e92b3c8020116399644b123c6a0ecf44834665b167b35151fb974c3f10"} Nov 25 12:00:39 crc kubenswrapper[4706]: I1125 12:00:39.616835 4706 scope.go:117] "RemoveContainer" containerID="bbcbd5e92b3c8020116399644b123c6a0ecf44834665b167b35151fb974c3f10" Nov 25 12:00:39 crc 
kubenswrapper[4706]: E1125 12:00:39.617112 4706 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=manager pod=nova-operator-controller-manager-79556f57fc-f47gl_openstack-operators(1c035858-a349-4415-8a5d-f3f2edb7c84e)\"" pod="openstack-operators/nova-operator-controller-manager-79556f57fc-f47gl" podUID="1c035858-a349-4415-8a5d-f3f2edb7c84e" Nov 25 12:00:39 crc kubenswrapper[4706]: I1125 12:00:39.638877 4706 generic.go:334] "Generic (PLEG): container finished" podID="e204aa88-c108-491e-9a73-2fca5c2ef15c" containerID="827f838f0fc8d981651efe078b754d226e3c5f8443dbf18eec0c9b627c35c189" exitCode=1 Nov 25 12:00:39 crc kubenswrapper[4706]: I1125 12:00:39.639010 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-d5cc86f4b-rfz7f" event={"ID":"e204aa88-c108-491e-9a73-2fca5c2ef15c","Type":"ContainerDied","Data":"827f838f0fc8d981651efe078b754d226e3c5f8443dbf18eec0c9b627c35c189"} Nov 25 12:00:39 crc kubenswrapper[4706]: I1125 12:00:39.639883 4706 scope.go:117] "RemoveContainer" containerID="827f838f0fc8d981651efe078b754d226e3c5f8443dbf18eec0c9b627c35c189" Nov 25 12:00:39 crc kubenswrapper[4706]: I1125 12:00:39.642361 4706 scope.go:117] "RemoveContainer" containerID="3202771902bb36a6847af0f308ec82e7314352f70b8b6e811ceb53ce40e0f466" Nov 25 12:00:39 crc kubenswrapper[4706]: I1125 12:00:39.645182 4706 generic.go:334] "Generic (PLEG): container finished" podID="6b8e15c0-a70f-4b4c-8836-a2c4e7b23f60" containerID="98356e4566939db6aa79c8b5c2952865d0a73175246366956905475dff958f76" exitCode=1 Nov 25 12:00:39 crc kubenswrapper[4706]: I1125 12:00:39.645257 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-864885998-9s7hm" 
event={"ID":"6b8e15c0-a70f-4b4c-8836-a2c4e7b23f60","Type":"ContainerDied","Data":"98356e4566939db6aa79c8b5c2952865d0a73175246366956905475dff958f76"} Nov 25 12:00:39 crc kubenswrapper[4706]: I1125 12:00:39.646061 4706 scope.go:117] "RemoveContainer" containerID="98356e4566939db6aa79c8b5c2952865d0a73175246366956905475dff958f76" Nov 25 12:00:39 crc kubenswrapper[4706]: E1125 12:00:39.646427 4706 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=manager pod=watcher-operator-controller-manager-864885998-9s7hm_openstack-operators(6b8e15c0-a70f-4b4c-8836-a2c4e7b23f60)\"" pod="openstack-operators/watcher-operator-controller-manager-864885998-9s7hm" podUID="6b8e15c0-a70f-4b4c-8836-a2c4e7b23f60" Nov 25 12:00:39 crc kubenswrapper[4706]: I1125 12:00:39.652946 4706 generic.go:334] "Generic (PLEG): container finished" podID="c6de3b19-c207-4c00-8350-de810fb1f555" containerID="d1cebeba280b3a9494646903e1229d60dc042d5fd7291dc89497ceb5c203f034" exitCode=1 Nov 25 12:00:39 crc kubenswrapper[4706]: I1125 12:00:39.653012 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-774b86978c-9bz4f" event={"ID":"c6de3b19-c207-4c00-8350-de810fb1f555","Type":"ContainerDied","Data":"d1cebeba280b3a9494646903e1229d60dc042d5fd7291dc89497ceb5c203f034"} Nov 25 12:00:39 crc kubenswrapper[4706]: I1125 12:00:39.653807 4706 scope.go:117] "RemoveContainer" containerID="d1cebeba280b3a9494646903e1229d60dc042d5fd7291dc89497ceb5c203f034" Nov 25 12:00:39 crc kubenswrapper[4706]: I1125 12:00:39.656076 4706 generic.go:334] "Generic (PLEG): container finished" podID="72bbe536-121d-47c0-b473-2974b238f271" containerID="b546f8f61c11277a9e3ec051e9d83bfbe0186407b7fd51031bc317fe61e2643b" exitCode=1 Nov 25 12:00:39 crc kubenswrapper[4706]: I1125 12:00:39.656143 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/horizon-operator-controller-manager-68c9694994-zx4v6" event={"ID":"72bbe536-121d-47c0-b473-2974b238f271","Type":"ContainerDied","Data":"b546f8f61c11277a9e3ec051e9d83bfbe0186407b7fd51031bc317fe61e2643b"} Nov 25 12:00:39 crc kubenswrapper[4706]: I1125 12:00:39.656699 4706 scope.go:117] "RemoveContainer" containerID="b546f8f61c11277a9e3ec051e9d83bfbe0186407b7fd51031bc317fe61e2643b" Nov 25 12:00:39 crc kubenswrapper[4706]: I1125 12:00:39.659905 4706 generic.go:334] "Generic (PLEG): container finished" podID="a7a52f28-6bc4-481d-8513-16dbb7b37ae1" containerID="2f9e63b9b2b55d5cbd2f8d076fdd74fe65c68c1401d54183033d597c9e0ca237" exitCode=1 Nov 25 12:00:39 crc kubenswrapper[4706]: I1125 12:00:39.659977 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-567f98c9d-8p5t2" event={"ID":"a7a52f28-6bc4-481d-8513-16dbb7b37ae1","Type":"ContainerDied","Data":"2f9e63b9b2b55d5cbd2f8d076fdd74fe65c68c1401d54183033d597c9e0ca237"} Nov 25 12:00:39 crc kubenswrapper[4706]: I1125 12:00:39.660627 4706 scope.go:117] "RemoveContainer" containerID="2f9e63b9b2b55d5cbd2f8d076fdd74fe65c68c1401d54183033d597c9e0ca237" Nov 25 12:00:39 crc kubenswrapper[4706]: E1125 12:00:39.660905 4706 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=manager pod=telemetry-operator-controller-manager-567f98c9d-8p5t2_openstack-operators(a7a52f28-6bc4-481d-8513-16dbb7b37ae1)\"" pod="openstack-operators/telemetry-operator-controller-manager-567f98c9d-8p5t2" podUID="a7a52f28-6bc4-481d-8513-16dbb7b37ae1" Nov 25 12:00:39 crc kubenswrapper[4706]: I1125 12:00:39.664971 4706 generic.go:334] "Generic (PLEG): container finished" podID="61b1ec50-3228-43bc-bb09-d74a7f02be52" containerID="fb68eae3767f5e42de2dc8e408ae9722d3ce773a6ebbed0bfcd8c3393c4e1608" exitCode=1 Nov 25 12:00:39 crc kubenswrapper[4706]: I1125 12:00:39.665050 
4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-66cf5c67ff-nc6f7" event={"ID":"61b1ec50-3228-43bc-bb09-d74a7f02be52","Type":"ContainerDied","Data":"fb68eae3767f5e42de2dc8e408ae9722d3ce773a6ebbed0bfcd8c3393c4e1608"} Nov 25 12:00:39 crc kubenswrapper[4706]: I1125 12:00:39.666230 4706 scope.go:117] "RemoveContainer" containerID="fb68eae3767f5e42de2dc8e408ae9722d3ce773a6ebbed0bfcd8c3393c4e1608" Nov 25 12:00:39 crc kubenswrapper[4706]: I1125 12:00:39.672896 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-544b9bb9-qg7kk" event={"ID":"e318ee27-6b61-4c03-b697-782b25461b09","Type":"ContainerStarted","Data":"32ede108855b2484424491bbebf700c8830ece7ed9e24fab0086d9f3b9114cf3"} Nov 25 12:00:39 crc kubenswrapper[4706]: I1125 12:00:39.673914 4706 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-baremetal-operator-controller-manager-544b9bb9-qg7kk" Nov 25 12:00:39 crc kubenswrapper[4706]: I1125 12:00:39.676893 4706 generic.go:334] "Generic (PLEG): container finished" podID="a0668604-b184-4265-b9af-fc6f526d8351" containerID="bcd613173c6ad5d898feaae3fdc682a81d560c9a5c1a5577993fb3dd790cd961" exitCode=1 Nov 25 12:00:39 crc kubenswrapper[4706]: I1125 12:00:39.676957 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-6fdc4fcf86-rwbvj" event={"ID":"a0668604-b184-4265-b9af-fc6f526d8351","Type":"ContainerDied","Data":"bcd613173c6ad5d898feaae3fdc682a81d560c9a5c1a5577993fb3dd790cd961"} Nov 25 12:00:39 crc kubenswrapper[4706]: I1125 12:00:39.677653 4706 scope.go:117] "RemoveContainer" containerID="bcd613173c6ad5d898feaae3fdc682a81d560c9a5c1a5577993fb3dd790cd961" Nov 25 12:00:39 crc kubenswrapper[4706]: I1125 12:00:39.682438 4706 generic.go:334] "Generic (PLEG): container finished" podID="ee655c82-6748-4bba-9da4-dcf73e0cff37" 
containerID="e47b18a47a3c07e2621e6d16d464c800ca4775ecfde041d46f44c4816bbb48a8" exitCode=1 Nov 25 12:00:39 crc kubenswrapper[4706]: I1125 12:00:39.682498 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-79856dc55c-4bsmv" event={"ID":"ee655c82-6748-4bba-9da4-dcf73e0cff37","Type":"ContainerDied","Data":"e47b18a47a3c07e2621e6d16d464c800ca4775ecfde041d46f44c4816bbb48a8"} Nov 25 12:00:39 crc kubenswrapper[4706]: I1125 12:00:39.682958 4706 scope.go:117] "RemoveContainer" containerID="e47b18a47a3c07e2621e6d16d464c800ca4775ecfde041d46f44c4816bbb48a8" Nov 25 12:00:39 crc kubenswrapper[4706]: E1125 12:00:39.683172 4706 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=manager pod=cinder-operator-controller-manager-79856dc55c-4bsmv_openstack-operators(ee655c82-6748-4bba-9da4-dcf73e0cff37)\"" pod="openstack-operators/cinder-operator-controller-manager-79856dc55c-4bsmv" podUID="ee655c82-6748-4bba-9da4-dcf73e0cff37" Nov 25 12:00:39 crc kubenswrapper[4706]: I1125 12:00:39.688287 4706 generic.go:334] "Generic (PLEG): container finished" podID="5726a389-32eb-4f0c-938b-6f2ddbb762e7" containerID="5e7d77c1809cd4777b6b38468940c6d796f1de3c3476a6a7453212e68d632afa" exitCode=1 Nov 25 12:00:39 crc kubenswrapper[4706]: I1125 12:00:39.688996 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-x9x4q" event={"ID":"5726a389-32eb-4f0c-938b-6f2ddbb762e7","Type":"ContainerDied","Data":"5e7d77c1809cd4777b6b38468940c6d796f1de3c3476a6a7453212e68d632afa"} Nov 25 12:00:39 crc kubenswrapper[4706]: I1125 12:00:39.690215 4706 scope.go:117] "RemoveContainer" containerID="5e7d77c1809cd4777b6b38468940c6d796f1de3c3476a6a7453212e68d632afa" Nov 25 12:00:39 crc kubenswrapper[4706]: E1125 12:00:39.690499 4706 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"operator\" with CrashLoopBackOff: \"back-off 10s restarting failed container=operator pod=rabbitmq-cluster-operator-manager-668c99d594-x9x4q_openstack-operators(5726a389-32eb-4f0c-938b-6f2ddbb762e7)\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-x9x4q" podUID="5726a389-32eb-4f0c-938b-6f2ddbb762e7" Nov 25 12:00:39 crc kubenswrapper[4706]: I1125 12:00:39.698043 4706 generic.go:334] "Generic (PLEG): container finished" podID="62e72e86-38e3-4acc-8aa1-664684f27760" containerID="7751303e456ce800516134fda61041e032417ff18b2955ce0bcf84b88c2a204d" exitCode=1 Nov 25 12:00:39 crc kubenswrapper[4706]: I1125 12:00:39.698130 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-cb6c4fdb7-bpcjw" event={"ID":"62e72e86-38e3-4acc-8aa1-664684f27760","Type":"ContainerDied","Data":"7751303e456ce800516134fda61041e032417ff18b2955ce0bcf84b88c2a204d"} Nov 25 12:00:39 crc kubenswrapper[4706]: I1125 12:00:39.698857 4706 scope.go:117] "RemoveContainer" containerID="7751303e456ce800516134fda61041e032417ff18b2955ce0bcf84b88c2a204d" Nov 25 12:00:39 crc kubenswrapper[4706]: I1125 12:00:39.702825 4706 generic.go:334] "Generic (PLEG): container finished" podID="6c41fff9-feeb-4311-a7ce-7da3a71b3e9c" containerID="8bcc6c66d2003de20e3894ed5e4c0c7fa24621413e086dd790686ba63d835134" exitCode=1 Nov 25 12:00:39 crc kubenswrapper[4706]: I1125 12:00:39.702876 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-748dc6576f-nf6gr" event={"ID":"6c41fff9-feeb-4311-a7ce-7da3a71b3e9c","Type":"ContainerDied","Data":"8bcc6c66d2003de20e3894ed5e4c0c7fa24621413e086dd790686ba63d835134"} Nov 25 12:00:39 crc kubenswrapper[4706]: I1125 12:00:39.703318 4706 scope.go:117] "RemoveContainer" containerID="8bcc6c66d2003de20e3894ed5e4c0c7fa24621413e086dd790686ba63d835134" Nov 25 12:00:39 crc kubenswrapper[4706]: E1125 12:00:39.703554 4706 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=manager pod=keystone-operator-controller-manager-748dc6576f-nf6gr_openstack-operators(6c41fff9-feeb-4311-a7ce-7da3a71b3e9c)\"" pod="openstack-operators/keystone-operator-controller-manager-748dc6576f-nf6gr" podUID="6c41fff9-feeb-4311-a7ce-7da3a71b3e9c" Nov 25 12:00:39 crc kubenswrapper[4706]: I1125 12:00:39.710822 4706 generic.go:334] "Generic (PLEG): container finished" podID="70fa0d16-065a-463f-8198-06a03414a128" containerID="c3ecece762956e22daadc0e6916cc065ea577f8be51b73cfea13e64948dd4ecc" exitCode=1 Nov 25 12:00:39 crc kubenswrapper[4706]: I1125 12:00:39.711412 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-58bb8d67cc-fslzs" event={"ID":"70fa0d16-065a-463f-8198-06a03414a128","Type":"ContainerDied","Data":"c3ecece762956e22daadc0e6916cc065ea577f8be51b73cfea13e64948dd4ecc"} Nov 25 12:00:39 crc kubenswrapper[4706]: I1125 12:00:39.712003 4706 scope.go:117] "RemoveContainer" containerID="c3ecece762956e22daadc0e6916cc065ea577f8be51b73cfea13e64948dd4ecc" Nov 25 12:00:39 crc kubenswrapper[4706]: I1125 12:00:39.712338 4706 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="ce0e2e75-834b-46fb-bc84-229e60f904b1" Nov 25 12:00:39 crc kubenswrapper[4706]: I1125 12:00:39.712359 4706 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="ce0e2e75-834b-46fb-bc84-229e60f904b1" Nov 25 12:00:39 crc kubenswrapper[4706]: I1125 12:00:39.882971 4706 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="94660c75-e7e7-468b-b52c-5a097a781232" Nov 25 12:00:39 crc kubenswrapper[4706]: I1125 12:00:39.941130 4706 scope.go:117] 
"RemoveContainer" containerID="49818e0aa017978b9575f26dea8f4372beabc3340d17d74cd665f3be1e9757ce" Nov 25 12:00:40 crc kubenswrapper[4706]: I1125 12:00:40.015547 4706 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 25 12:00:40 crc kubenswrapper[4706]: I1125 12:00:40.059136 4706 scope.go:117] "RemoveContainer" containerID="a43b93079f480147c92a5dbde6cde7fc167fb5a7be0101bce13f968d8af9b936" Nov 25 12:00:40 crc kubenswrapper[4706]: I1125 12:00:40.127213 4706 scope.go:117] "RemoveContainer" containerID="b5668e24c52cbb8f3ecf02f7fbbebb42713a3ff64e9d059836e36053d49db4a1" Nov 25 12:00:40 crc kubenswrapper[4706]: I1125 12:00:40.251951 4706 scope.go:117] "RemoveContainer" containerID="727cae160d2cb4b5f6c7224c124e4155d9df0a57e91d16999aed01ca19639ca4" Nov 25 12:00:40 crc kubenswrapper[4706]: I1125 12:00:40.488719 4706 scope.go:117] "RemoveContainer" containerID="77644a2d6098f260cde2c4b6551e02ad0c9a9044dcbb8ac87b2c7404dbfc82b3" Nov 25 12:00:40 crc kubenswrapper[4706]: I1125 12:00:40.685429 4706 scope.go:117] "RemoveContainer" containerID="312041d5294c4c4b83b3c55de78ab9601ca611ae7d1a7c6a837f2c832f489f4d" Nov 25 12:00:40 crc kubenswrapper[4706]: I1125 12:00:40.726023 4706 generic.go:334] "Generic (PLEG): container finished" podID="70fa0d16-065a-463f-8198-06a03414a128" containerID="74dfeb763e6886a59407a60e645fbd45baddd281cbe2f7f8ee80d31cf7b1d8b3" exitCode=1 Nov 25 12:00:40 crc kubenswrapper[4706]: I1125 12:00:40.726102 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-58bb8d67cc-fslzs" event={"ID":"70fa0d16-065a-463f-8198-06a03414a128","Type":"ContainerDied","Data":"74dfeb763e6886a59407a60e645fbd45baddd281cbe2f7f8ee80d31cf7b1d8b3"} Nov 25 12:00:40 crc kubenswrapper[4706]: I1125 12:00:40.727521 4706 scope.go:117] "RemoveContainer" containerID="74dfeb763e6886a59407a60e645fbd45baddd281cbe2f7f8ee80d31cf7b1d8b3" Nov 25 12:00:40 crc 
kubenswrapper[4706]: E1125 12:00:40.728156 4706 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=manager pod=manila-operator-controller-manager-58bb8d67cc-fslzs_openstack-operators(70fa0d16-065a-463f-8198-06a03414a128)\"" pod="openstack-operators/manila-operator-controller-manager-58bb8d67cc-fslzs" podUID="70fa0d16-065a-463f-8198-06a03414a128" Nov 25 12:00:40 crc kubenswrapper[4706]: I1125 12:00:40.729212 4706 generic.go:334] "Generic (PLEG): container finished" podID="3c582966-ab32-499d-8f1c-95c942dd6bb4" containerID="e0db74fe9e90de1fff19ec89cdc16a0e70b6747ee06ce85ceffd06d4ea07161f" exitCode=1 Nov 25 12:00:40 crc kubenswrapper[4706]: I1125 12:00:40.729270 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-7c57c8bbc4-tfn29" event={"ID":"3c582966-ab32-499d-8f1c-95c942dd6bb4","Type":"ContainerDied","Data":"e0db74fe9e90de1fff19ec89cdc16a0e70b6747ee06ce85ceffd06d4ea07161f"} Nov 25 12:00:40 crc kubenswrapper[4706]: I1125 12:00:40.729624 4706 scope.go:117] "RemoveContainer" containerID="e0db74fe9e90de1fff19ec89cdc16a0e70b6747ee06ce85ceffd06d4ea07161f" Nov 25 12:00:40 crc kubenswrapper[4706]: E1125 12:00:40.729835 4706 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=manager pod=neutron-operator-controller-manager-7c57c8bbc4-tfn29_openstack-operators(3c582966-ab32-499d-8f1c-95c942dd6bb4)\"" pod="openstack-operators/neutron-operator-controller-manager-7c57c8bbc4-tfn29" podUID="3c582966-ab32-499d-8f1c-95c942dd6bb4" Nov 25 12:00:40 crc kubenswrapper[4706]: I1125 12:00:40.732619 4706 generic.go:334] "Generic (PLEG): container finished" podID="4857e509-acac-422c-87e8-2662708da599" containerID="4f8d05659443c7ea56ca378c2a6695d32450f0bc3c798529e4dab6468c1cb7ce" exitCode=1 Nov 25 
12:00:40 crc kubenswrapper[4706]: I1125 12:00:40.732669 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-68b95954c9-t6c78" event={"ID":"4857e509-acac-422c-87e8-2662708da599","Type":"ContainerDied","Data":"4f8d05659443c7ea56ca378c2a6695d32450f0bc3c798529e4dab6468c1cb7ce"}
Nov 25 12:00:40 crc kubenswrapper[4706]: I1125 12:00:40.733011 4706 scope.go:117] "RemoveContainer" containerID="4f8d05659443c7ea56ca378c2a6695d32450f0bc3c798529e4dab6468c1cb7ce"
Nov 25 12:00:40 crc kubenswrapper[4706]: E1125 12:00:40.733221 4706 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=manager pod=glance-operator-controller-manager-68b95954c9-t6c78_openstack-operators(4857e509-acac-422c-87e8-2662708da599)\"" pod="openstack-operators/glance-operator-controller-manager-68b95954c9-t6c78" podUID="4857e509-acac-422c-87e8-2662708da599"
Nov 25 12:00:40 crc kubenswrapper[4706]: I1125 12:00:40.735710 4706 generic.go:334] "Generic (PLEG): container finished" podID="2a90e9e4-814b-4c09-a6d3-f7ad3792f6b1" containerID="61f3af2c32e758c04c0727c9990134586c7e8ecac7c2bc6b783202602f918a79" exitCode=1
Nov 25 12:00:40 crc kubenswrapper[4706]: I1125 12:00:40.735767 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-9cb9fb586-5854z" event={"ID":"2a90e9e4-814b-4c09-a6d3-f7ad3792f6b1","Type":"ContainerDied","Data":"61f3af2c32e758c04c0727c9990134586c7e8ecac7c2bc6b783202602f918a79"}
Nov 25 12:00:40 crc kubenswrapper[4706]: I1125 12:00:40.736176 4706 scope.go:117] "RemoveContainer" containerID="61f3af2c32e758c04c0727c9990134586c7e8ecac7c2bc6b783202602f918a79"
Nov 25 12:00:40 crc kubenswrapper[4706]: E1125 12:00:40.736443 4706 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=manager pod=openstack-operator-controller-manager-9cb9fb586-5854z_openstack-operators(2a90e9e4-814b-4c09-a6d3-f7ad3792f6b1)\"" pod="openstack-operators/openstack-operator-controller-manager-9cb9fb586-5854z" podUID="2a90e9e4-814b-4c09-a6d3-f7ad3792f6b1"
Nov 25 12:00:40 crc kubenswrapper[4706]: I1125 12:00:40.738914 4706 generic.go:334] "Generic (PLEG): container finished" podID="23155e14-a775-48c5-adf9-55dcfd008040" containerID="ef7e5f61a61bf7a3cf1b053affdda0bf46af30ce0bda52a6bec7632d6440e6fa" exitCode=1
Nov 25 12:00:40 crc kubenswrapper[4706]: I1125 12:00:40.738990 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-86dc4d89c8-jh5hc" event={"ID":"23155e14-a775-48c5-adf9-55dcfd008040","Type":"ContainerDied","Data":"ef7e5f61a61bf7a3cf1b053affdda0bf46af30ce0bda52a6bec7632d6440e6fa"}
Nov 25 12:00:40 crc kubenswrapper[4706]: I1125 12:00:40.739358 4706 scope.go:117] "RemoveContainer" containerID="ef7e5f61a61bf7a3cf1b053affdda0bf46af30ce0bda52a6bec7632d6440e6fa"
Nov 25 12:00:40 crc kubenswrapper[4706]: E1125 12:00:40.739566 4706 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=manager pod=barbican-operator-controller-manager-86dc4d89c8-jh5hc_openstack-operators(23155e14-a775-48c5-adf9-55dcfd008040)\"" pod="openstack-operators/barbican-operator-controller-manager-86dc4d89c8-jh5hc" podUID="23155e14-a775-48c5-adf9-55dcfd008040"
Nov 25 12:00:40 crc kubenswrapper[4706]: I1125 12:00:40.742039 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-operator-5789f9b844-cfvkd" event={"ID":"2df5f121-0564-4647-acf6-d09283ff5a94","Type":"ContainerStarted","Data":"3575d61af74d236fb2ab5eba179feda49a7ad65fb5e54e028b8a888a71c52c6a"}
Nov 25 12:00:40 crc kubenswrapper[4706]: I1125 12:00:40.742208 4706 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-operator-5789f9b844-cfvkd"
Nov 25 12:00:40 crc kubenswrapper[4706]: I1125 12:00:40.744688 4706 generic.go:334] "Generic (PLEG): container finished" podID="9fa65252-7bf5-4e83-beb7-dfcfa63db10d" containerID="aadf818856cf40cc5bb27311e2a0e5af68a351235bd2ff78ace96e5175dcbaae" exitCode=1
Nov 25 12:00:40 crc kubenswrapper[4706]: I1125 12:00:40.744747 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-7d695c9b56-hqsp5" event={"ID":"9fa65252-7bf5-4e83-beb7-dfcfa63db10d","Type":"ContainerDied","Data":"aadf818856cf40cc5bb27311e2a0e5af68a351235bd2ff78ace96e5175dcbaae"}
Nov 25 12:00:40 crc kubenswrapper[4706]: I1125 12:00:40.745085 4706 scope.go:117] "RemoveContainer" containerID="aadf818856cf40cc5bb27311e2a0e5af68a351235bd2ff78ace96e5175dcbaae"
Nov 25 12:00:40 crc kubenswrapper[4706]: E1125 12:00:40.745336 4706 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=manager pod=designate-operator-controller-manager-7d695c9b56-hqsp5_openstack-operators(9fa65252-7bf5-4e83-beb7-dfcfa63db10d)\"" pod="openstack-operators/designate-operator-controller-manager-7d695c9b56-hqsp5" podUID="9fa65252-7bf5-4e83-beb7-dfcfa63db10d"
Nov 25 12:00:40 crc kubenswrapper[4706]: I1125 12:00:40.747777 4706 generic.go:334] "Generic (PLEG): container finished" podID="a0668604-b184-4265-b9af-fc6f526d8351" containerID="a6a63ee316ee8f6c1c0dc3e603be4df7625b7a40b6eb74aa3998c132daaae571" exitCode=1
Nov 25 12:00:40 crc kubenswrapper[4706]: I1125 12:00:40.747823 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-6fdc4fcf86-rwbvj" event={"ID":"a0668604-b184-4265-b9af-fc6f526d8351","Type":"ContainerDied","Data":"a6a63ee316ee8f6c1c0dc3e603be4df7625b7a40b6eb74aa3998c132daaae571"}
Nov 25 12:00:40 crc kubenswrapper[4706]: I1125 12:00:40.748434 4706 scope.go:117] "RemoveContainer" containerID="a6a63ee316ee8f6c1c0dc3e603be4df7625b7a40b6eb74aa3998c132daaae571"
Nov 25 12:00:40 crc kubenswrapper[4706]: E1125 12:00:40.748746 4706 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=manager pod=swift-operator-controller-manager-6fdc4fcf86-rwbvj_openstack-operators(a0668604-b184-4265-b9af-fc6f526d8351)\"" pod="openstack-operators/swift-operator-controller-manager-6fdc4fcf86-rwbvj" podUID="a0668604-b184-4265-b9af-fc6f526d8351"
Nov 25 12:00:40 crc kubenswrapper[4706]: I1125 12:00:40.759034 4706 scope.go:117] "RemoveContainer" containerID="18f0cfcff6c07f2ca4cccd7935e7fdd089c5403b99b18d48a7835dbcfb895cec"
Nov 25 12:00:40 crc kubenswrapper[4706]: E1125 12:00:40.759430 4706 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=manager pod=ironic-operator-controller-manager-5bfcdc958c-l4m6r_openstack-operators(9e5a3424-dd89-4411-872f-70447506cf73)\"" pod="openstack-operators/ironic-operator-controller-manager-5bfcdc958c-l4m6r" podUID="9e5a3424-dd89-4411-872f-70447506cf73"
Nov 25 12:00:40 crc kubenswrapper[4706]: I1125 12:00:40.761149 4706 generic.go:334] "Generic (PLEG): container finished" podID="c6de3b19-c207-4c00-8350-de810fb1f555" containerID="53384e10a33d567f69a8ca7eb18df18ae3c2e018916094498dc1e9c70ae6b819" exitCode=1
Nov 25 12:00:40 crc kubenswrapper[4706]: I1125 12:00:40.761213 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-774b86978c-9bz4f" event={"ID":"c6de3b19-c207-4c00-8350-de810fb1f555","Type":"ContainerDied","Data":"53384e10a33d567f69a8ca7eb18df18ae3c2e018916094498dc1e9c70ae6b819"}
Nov 25 12:00:40 crc kubenswrapper[4706]: I1125 12:00:40.762205 4706 scope.go:117] "RemoveContainer" containerID="53384e10a33d567f69a8ca7eb18df18ae3c2e018916094498dc1e9c70ae6b819"
Nov 25 12:00:40 crc kubenswrapper[4706]: E1125 12:00:40.762503 4706 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=manager pod=heat-operator-controller-manager-774b86978c-9bz4f_openstack-operators(c6de3b19-c207-4c00-8350-de810fb1f555)\"" pod="openstack-operators/heat-operator-controller-manager-774b86978c-9bz4f" podUID="c6de3b19-c207-4c00-8350-de810fb1f555"
Nov 25 12:00:40 crc kubenswrapper[4706]: I1125 12:00:40.778228 4706 generic.go:334] "Generic (PLEG): container finished" podID="72bbe536-121d-47c0-b473-2974b238f271" containerID="ef2d657f5558b3ac852d69ea5b513db79b4302184287ea5c0382451833e899ff" exitCode=1
Nov 25 12:00:40 crc kubenswrapper[4706]: I1125 12:00:40.778469 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-68c9694994-zx4v6" event={"ID":"72bbe536-121d-47c0-b473-2974b238f271","Type":"ContainerDied","Data":"ef2d657f5558b3ac852d69ea5b513db79b4302184287ea5c0382451833e899ff"}
Nov 25 12:00:40 crc kubenswrapper[4706]: I1125 12:00:40.779562 4706 scope.go:117] "RemoveContainer" containerID="ef2d657f5558b3ac852d69ea5b513db79b4302184287ea5c0382451833e899ff"
Nov 25 12:00:40 crc kubenswrapper[4706]: E1125 12:00:40.779900 4706 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=manager pod=horizon-operator-controller-manager-68c9694994-zx4v6_openstack-operators(72bbe536-121d-47c0-b473-2974b238f271)\"" pod="openstack-operators/horizon-operator-controller-manager-68c9694994-zx4v6" podUID="72bbe536-121d-47c0-b473-2974b238f271"
Nov 25 12:00:40 crc kubenswrapper[4706]: I1125 12:00:40.786036 4706 generic.go:334] "Generic (PLEG): container finished" podID="62e72e86-38e3-4acc-8aa1-664684f27760" containerID="bd4b32407fc1b555b8978b1e64da816941a645cd5f67a6ce935b7e5ca0e50e13" exitCode=1
Nov 25 12:00:40 crc kubenswrapper[4706]: I1125 12:00:40.786134 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-cb6c4fdb7-bpcjw" event={"ID":"62e72e86-38e3-4acc-8aa1-664684f27760","Type":"ContainerDied","Data":"bd4b32407fc1b555b8978b1e64da816941a645cd5f67a6ce935b7e5ca0e50e13"}
Nov 25 12:00:40 crc kubenswrapper[4706]: I1125 12:00:40.786774 4706 scope.go:117] "RemoveContainer" containerID="bd4b32407fc1b555b8978b1e64da816941a645cd5f67a6ce935b7e5ca0e50e13"
Nov 25 12:00:40 crc kubenswrapper[4706]: E1125 12:00:40.787085 4706 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=manager pod=mariadb-operator-controller-manager-cb6c4fdb7-bpcjw_openstack-operators(62e72e86-38e3-4acc-8aa1-664684f27760)\"" pod="openstack-operators/mariadb-operator-controller-manager-cb6c4fdb7-bpcjw" podUID="62e72e86-38e3-4acc-8aa1-664684f27760"
Nov 25 12:00:40 crc kubenswrapper[4706]: I1125 12:00:40.791786 4706 generic.go:334] "Generic (PLEG): container finished" podID="61b1ec50-3228-43bc-bb09-d74a7f02be52" containerID="45575b580dd3604071bbfe6d7478f5cce4c5b94c9bd593825660f95dafda6d8f" exitCode=1
Nov 25 12:00:40 crc kubenswrapper[4706]: I1125 12:00:40.791838 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-66cf5c67ff-nc6f7" event={"ID":"61b1ec50-3228-43bc-bb09-d74a7f02be52","Type":"ContainerDied","Data":"45575b580dd3604071bbfe6d7478f5cce4c5b94c9bd593825660f95dafda6d8f"}
Nov 25 12:00:40 crc kubenswrapper[4706]: I1125 12:00:40.797448 4706 generic.go:334] "Generic (PLEG): container finished" podID="e204aa88-c108-491e-9a73-2fca5c2ef15c" containerID="0d57fe1921c6d00af0f49dc1ab2240ace7cb30580498b0eb194a6acc0908dbdc" exitCode=1
Nov 25 12:00:40 crc kubenswrapper[4706]: I1125 12:00:40.797518 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-d5cc86f4b-rfz7f" event={"ID":"e204aa88-c108-491e-9a73-2fca5c2ef15c","Type":"ContainerDied","Data":"0d57fe1921c6d00af0f49dc1ab2240ace7cb30580498b0eb194a6acc0908dbdc"}
Nov 25 12:00:40 crc kubenswrapper[4706]: I1125 12:00:40.797808 4706 scope.go:117] "RemoveContainer" containerID="45575b580dd3604071bbfe6d7478f5cce4c5b94c9bd593825660f95dafda6d8f"
Nov 25 12:00:40 crc kubenswrapper[4706]: E1125 12:00:40.798105 4706 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=manager pod=ovn-operator-controller-manager-66cf5c67ff-nc6f7_openstack-operators(61b1ec50-3228-43bc-bb09-d74a7f02be52)\"" pod="openstack-operators/ovn-operator-controller-manager-66cf5c67ff-nc6f7" podUID="61b1ec50-3228-43bc-bb09-d74a7f02be52"
Nov 25 12:00:40 crc kubenswrapper[4706]: I1125 12:00:40.798564 4706 scope.go:117] "RemoveContainer" containerID="0d57fe1921c6d00af0f49dc1ab2240ace7cb30580498b0eb194a6acc0908dbdc"
Nov 25 12:00:40 crc kubenswrapper[4706]: E1125 12:00:40.798900 4706 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=manager pod=infra-operator-controller-manager-d5cc86f4b-rfz7f_openstack-operators(e204aa88-c108-491e-9a73-2fca5c2ef15c)\"" pod="openstack-operators/infra-operator-controller-manager-d5cc86f4b-rfz7f" podUID="e204aa88-c108-491e-9a73-2fca5c2ef15c"
Nov 25 12:00:40 crc kubenswrapper[4706]: I1125 12:00:40.806590 4706 scope.go:117] "RemoveContainer" containerID="8bcc6c66d2003de20e3894ed5e4c0c7fa24621413e086dd790686ba63d835134"
Nov 25 12:00:40 crc kubenswrapper[4706]: E1125 12:00:40.806842 4706 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=manager pod=keystone-operator-controller-manager-748dc6576f-nf6gr_openstack-operators(6c41fff9-feeb-4311-a7ce-7da3a71b3e9c)\"" pod="openstack-operators/keystone-operator-controller-manager-748dc6576f-nf6gr" podUID="6c41fff9-feeb-4311-a7ce-7da3a71b3e9c"
Nov 25 12:00:40 crc kubenswrapper[4706]: I1125 12:00:40.832573 4706 scope.go:117] "RemoveContainer" containerID="a4f76f11e3a12d3ed74cd38d05e887277ad85a13b0e7f5c7c2a40389bbde69f2"
Nov 25 12:00:40 crc kubenswrapper[4706]: I1125 12:00:40.988470 4706 scope.go:117] "RemoveContainer" containerID="68996614537b4d8b8f9cf530cc12d048f8db2259bff6001bebd61362965c380d"
Nov 25 12:00:41 crc kubenswrapper[4706]: I1125 12:00:41.115868 4706 scope.go:117] "RemoveContainer" containerID="c3ecece762956e22daadc0e6916cc065ea577f8be51b73cfea13e64948dd4ecc"
Nov 25 12:00:41 crc kubenswrapper[4706]: I1125 12:00:41.196276 4706 scope.go:117] "RemoveContainer" containerID="1c4344b8b04c4ceec82bad456d74fd47040eef6a9f76f1d60a95a4a90b0fdad9"
Nov 25 12:00:41 crc kubenswrapper[4706]: I1125 12:00:41.329038 4706 scope.go:117] "RemoveContainer" containerID="a3fab4850794bd28ca3ba88d877ddf98f3e4822e0f4620b74501334d09426807"
Nov 25 12:00:41 crc kubenswrapper[4706]: I1125 12:00:41.423093 4706 scope.go:117] "RemoveContainer" containerID="1e58195af2efe7fbff79413b9c95bbeec15ed12b8f39f76667ab5de3c4ffdf54"
Nov 25 12:00:41 crc kubenswrapper[4706]: I1125 12:00:41.499207 4706 scope.go:117] "RemoveContainer" containerID="c2c4e1bb27ca7d9c5c5b1c7f8f4ed76c65b60e421b1f9b74443af46355e7dbac"
Nov 25 12:00:41 crc kubenswrapper[4706]: I1125 12:00:41.534146 4706 scope.go:117] "RemoveContainer" containerID="126cca5a246b8e52e5ac0d4a31f6fa218a7942f9dad0193ce336826b864a793e"
Nov 25 12:00:41 crc kubenswrapper[4706]: I1125 12:00:41.613138 4706 scope.go:117] "RemoveContainer" containerID="bcd613173c6ad5d898feaae3fdc682a81d560c9a5c1a5577993fb3dd790cd961"
Nov 25 12:00:41 crc kubenswrapper[4706]: I1125 12:00:41.732437 4706 scope.go:117] "RemoveContainer" containerID="d1cebeba280b3a9494646903e1229d60dc042d5fd7291dc89497ceb5c203f034"
Nov 25 12:00:41 crc kubenswrapper[4706]: I1125 12:00:41.793421 4706 scope.go:117] "RemoveContainer" containerID="b546f8f61c11277a9e3ec051e9d83bfbe0186407b7fd51031bc317fe61e2643b"
Nov 25 12:00:41 crc kubenswrapper[4706]: I1125 12:00:41.841097 4706 generic.go:334] "Generic (PLEG): container finished" podID="04e7a5d0-b5fe-4a58-b015-339cc1218c6e" containerID="b2110b017c561be6a8594dfbd82ff8886504d9605fbdb38f1ae9c06d61eaa857" exitCode=1
Nov 25 12:00:41 crc kubenswrapper[4706]: I1125 12:00:41.841116 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"04e7a5d0-b5fe-4a58-b015-339cc1218c6e","Type":"ContainerDied","Data":"b2110b017c561be6a8594dfbd82ff8886504d9605fbdb38f1ae9c06d61eaa857"}
Nov 25 12:00:41 crc kubenswrapper[4706]: I1125 12:00:41.841769 4706 scope.go:117] "RemoveContainer" containerID="b2110b017c561be6a8594dfbd82ff8886504d9605fbdb38f1ae9c06d61eaa857"
Nov 25 12:00:41 crc kubenswrapper[4706]: E1125 12:00:41.842109 4706 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-state-metrics\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-state-metrics pod=kube-state-metrics-0_openstack(04e7a5d0-b5fe-4a58-b015-339cc1218c6e)\"" pod="openstack/kube-state-metrics-0" podUID="04e7a5d0-b5fe-4a58-b015-339cc1218c6e"
Nov 25 12:00:41 crc kubenswrapper[4706]: I1125 12:00:41.934355 4706 scope.go:117] "RemoveContainer" containerID="7751303e456ce800516134fda61041e032417ff18b2955ce0bcf84b88c2a204d"
Nov 25 12:00:42 crc kubenswrapper[4706]: I1125 12:00:42.174607 4706 scope.go:117] "RemoveContainer" containerID="fb68eae3767f5e42de2dc8e408ae9722d3ce773a6ebbed0bfcd8c3393c4e1608"
Nov 25 12:00:42 crc kubenswrapper[4706]: I1125 12:00:42.419956 4706 scope.go:117] "RemoveContainer" containerID="827f838f0fc8d981651efe078b754d226e3c5f8443dbf18eec0c9b627c35c189"
Nov 25 12:00:42 crc kubenswrapper[4706]: I1125 12:00:42.675993 4706 scope.go:117] "RemoveContainer" containerID="79f9d89704437bc055794a07cdf075a5908c012c2571fa608b8963523579851c"
Nov 25 12:00:42 crc kubenswrapper[4706]: I1125 12:00:42.879630 4706 scope.go:117] "RemoveContainer" containerID="b2110b017c561be6a8594dfbd82ff8886504d9605fbdb38f1ae9c06d61eaa857"
Nov 25 12:00:42 crc kubenswrapper[4706]: E1125 12:00:42.881692 4706 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-state-metrics\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-state-metrics pod=kube-state-metrics-0_openstack(04e7a5d0-b5fe-4a58-b015-339cc1218c6e)\"" pod="openstack/kube-state-metrics-0" podUID="04e7a5d0-b5fe-4a58-b015-339cc1218c6e"
Nov 25 12:00:42 crc kubenswrapper[4706]: I1125 12:00:42.923026 4706 scope.go:117] "RemoveContainer" containerID="ab70ce8aca25b2944e1164b6f8280f1185501f4e0e1177f60e946980080ac735"
Nov 25 12:00:43 crc kubenswrapper[4706]: I1125 12:00:43.892650 4706 generic.go:334] "Generic (PLEG): container finished" podID="cdb2d830-fbc9-4336-83b7-0392051670cb" containerID="0c8124275bdfdf469c0e067b64968e66e892c3e8a689b45338d017de75edaab8" exitCode=1
Nov 25 12:00:43 crc kubenswrapper[4706]: I1125 12:00:43.892716 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-7d76b4f6c7-xxkgj" event={"ID":"cdb2d830-fbc9-4336-83b7-0392051670cb","Type":"ContainerDied","Data":"0c8124275bdfdf469c0e067b64968e66e892c3e8a689b45338d017de75edaab8"}
Nov 25 12:00:43 crc kubenswrapper[4706]: I1125 12:00:43.893015 4706 scope.go:117] "RemoveContainer" containerID="ab70ce8aca25b2944e1164b6f8280f1185501f4e0e1177f60e946980080ac735"
Nov 25 12:00:43 crc kubenswrapper[4706]: I1125 12:00:43.893794 4706 scope.go:117] "RemoveContainer" containerID="0c8124275bdfdf469c0e067b64968e66e892c3e8a689b45338d017de75edaab8"
Nov 25 12:00:43 crc kubenswrapper[4706]: E1125 12:00:43.894148 4706 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 20s restarting failed container=manager pod=metallb-operator-controller-manager-7d76b4f6c7-xxkgj_metallb-system(cdb2d830-fbc9-4336-83b7-0392051670cb)\"" pod="metallb-system/metallb-operator-controller-manager-7d76b4f6c7-xxkgj" podUID="cdb2d830-fbc9-4336-83b7-0392051670cb"
Nov 25 12:00:44 crc kubenswrapper[4706]: I1125 12:00:44.659242 4706 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/barbican-operator-controller-manager-86dc4d89c8-jh5hc"
Nov 25 12:00:44 crc kubenswrapper[4706]: I1125 12:00:44.660401 4706 scope.go:117] "RemoveContainer" containerID="ef7e5f61a61bf7a3cf1b053affdda0bf46af30ce0bda52a6bec7632d6440e6fa"
Nov 25 12:00:44 crc kubenswrapper[4706]: E1125 12:00:44.660940 4706 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=manager pod=barbican-operator-controller-manager-86dc4d89c8-jh5hc_openstack-operators(23155e14-a775-48c5-adf9-55dcfd008040)\"" pod="openstack-operators/barbican-operator-controller-manager-86dc4d89c8-jh5hc" podUID="23155e14-a775-48c5-adf9-55dcfd008040"
Nov 25 12:00:44 crc kubenswrapper[4706]: I1125 12:00:44.681811 4706 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/cinder-operator-controller-manager-79856dc55c-4bsmv"
Nov 25 12:00:44 crc kubenswrapper[4706]: I1125 12:00:44.682868 4706 scope.go:117] "RemoveContainer" containerID="e47b18a47a3c07e2621e6d16d464c800ca4775ecfde041d46f44c4816bbb48a8"
Nov 25 12:00:44 crc kubenswrapper[4706]: E1125 12:00:44.683266 4706 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=manager pod=cinder-operator-controller-manager-79856dc55c-4bsmv_openstack-operators(ee655c82-6748-4bba-9da4-dcf73e0cff37)\"" pod="openstack-operators/cinder-operator-controller-manager-79856dc55c-4bsmv" podUID="ee655c82-6748-4bba-9da4-dcf73e0cff37"
Nov 25 12:00:44 crc kubenswrapper[4706]: I1125 12:00:44.690894 4706 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/designate-operator-controller-manager-7d695c9b56-hqsp5"
Nov 25 12:00:44 crc kubenswrapper[4706]: I1125 12:00:44.691697 4706 scope.go:117] "RemoveContainer" containerID="aadf818856cf40cc5bb27311e2a0e5af68a351235bd2ff78ace96e5175dcbaae"
Nov 25 12:00:44 crc kubenswrapper[4706]: E1125 12:00:44.692027 4706 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=manager pod=designate-operator-controller-manager-7d695c9b56-hqsp5_openstack-operators(9fa65252-7bf5-4e83-beb7-dfcfa63db10d)\"" pod="openstack-operators/designate-operator-controller-manager-7d695c9b56-hqsp5" podUID="9fa65252-7bf5-4e83-beb7-dfcfa63db10d"
Nov 25 12:00:44 crc kubenswrapper[4706]: I1125 12:00:44.765416 4706 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/heat-operator-controller-manager-774b86978c-9bz4f"
Nov 25 12:00:44 crc kubenswrapper[4706]: I1125 12:00:44.766503 4706 scope.go:117] "RemoveContainer" containerID="53384e10a33d567f69a8ca7eb18df18ae3c2e018916094498dc1e9c70ae6b819"
Nov 25 12:00:44 crc kubenswrapper[4706]: E1125 12:00:44.766988 4706 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=manager pod=heat-operator-controller-manager-774b86978c-9bz4f_openstack-operators(c6de3b19-c207-4c00-8350-de810fb1f555)\"" pod="openstack-operators/heat-operator-controller-manager-774b86978c-9bz4f" podUID="c6de3b19-c207-4c00-8350-de810fb1f555"
Nov 25 12:00:44 crc kubenswrapper[4706]: I1125 12:00:44.863370 4706 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/horizon-operator-controller-manager-68c9694994-zx4v6"
Nov 25 12:00:44 crc kubenswrapper[4706]: I1125 12:00:44.864751 4706 scope.go:117] "RemoveContainer" containerID="ef2d657f5558b3ac852d69ea5b513db79b4302184287ea5c0382451833e899ff"
Nov 25 12:00:44 crc kubenswrapper[4706]: E1125 12:00:44.865187 4706 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=manager pod=horizon-operator-controller-manager-68c9694994-zx4v6_openstack-operators(72bbe536-121d-47c0-b473-2974b238f271)\"" pod="openstack-operators/horizon-operator-controller-manager-68c9694994-zx4v6" podUID="72bbe536-121d-47c0-b473-2974b238f271"
Nov 25 12:00:44 crc kubenswrapper[4706]: I1125 12:00:44.965689 4706 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/infra-operator-controller-manager-d5cc86f4b-rfz7f"
Nov 25 12:00:44 crc kubenswrapper[4706]: I1125 12:00:44.966660 4706 scope.go:117] "RemoveContainer" containerID="0d57fe1921c6d00af0f49dc1ab2240ace7cb30580498b0eb194a6acc0908dbdc"
Nov 25 12:00:44 crc kubenswrapper[4706]: E1125 12:00:44.967204 4706 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=manager pod=infra-operator-controller-manager-d5cc86f4b-rfz7f_openstack-operators(e204aa88-c108-491e-9a73-2fca5c2ef15c)\"" pod="openstack-operators/infra-operator-controller-manager-d5cc86f4b-rfz7f" podUID="e204aa88-c108-491e-9a73-2fca5c2ef15c"
Nov 25 12:00:44 crc kubenswrapper[4706]: I1125 12:00:44.983421 4706 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/keystone-operator-controller-manager-748dc6576f-nf6gr"
Nov 25 12:00:44 crc kubenswrapper[4706]: I1125 12:00:44.984424 4706 scope.go:117] "RemoveContainer" containerID="8bcc6c66d2003de20e3894ed5e4c0c7fa24621413e086dd790686ba63d835134"
Nov 25 12:00:44 crc kubenswrapper[4706]: E1125 12:00:44.984898 4706 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=manager pod=keystone-operator-controller-manager-748dc6576f-nf6gr_openstack-operators(6c41fff9-feeb-4311-a7ce-7da3a71b3e9c)\"" pod="openstack-operators/keystone-operator-controller-manager-748dc6576f-nf6gr" podUID="6c41fff9-feeb-4311-a7ce-7da3a71b3e9c"
Nov 25 12:00:45 crc kubenswrapper[4706]: I1125 12:00:45.033670 4706 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0"
Nov 25 12:00:45 crc kubenswrapper[4706]: I1125 12:00:45.046269 4706 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/glance-operator-controller-manager-68b95954c9-t6c78"
Nov 25 12:00:45 crc kubenswrapper[4706]: I1125 12:00:45.047101 4706 scope.go:117] "RemoveContainer" containerID="4f8d05659443c7ea56ca378c2a6695d32450f0bc3c798529e4dab6468c1cb7ce"
Nov 25 12:00:45 crc kubenswrapper[4706]: E1125 12:00:45.047399 4706 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=manager pod=glance-operator-controller-manager-68b95954c9-t6c78_openstack-operators(4857e509-acac-422c-87e8-2662708da599)\"" pod="openstack-operators/glance-operator-controller-manager-68b95954c9-t6c78" podUID="4857e509-acac-422c-87e8-2662708da599"
Nov 25 12:00:45 crc kubenswrapper[4706]: I1125 12:00:45.134147 4706 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/manila-operator-controller-manager-58bb8d67cc-fslzs"
Nov 25 12:00:45 crc kubenswrapper[4706]: I1125 12:00:45.135363 4706 scope.go:117] "RemoveContainer" containerID="74dfeb763e6886a59407a60e645fbd45baddd281cbe2f7f8ee80d31cf7b1d8b3"
Nov 25 12:00:45 crc kubenswrapper[4706]: E1125 12:00:45.135720 4706 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=manager pod=manila-operator-controller-manager-58bb8d67cc-fslzs_openstack-operators(70fa0d16-065a-463f-8198-06a03414a128)\"" pod="openstack-operators/manila-operator-controller-manager-58bb8d67cc-fslzs" podUID="70fa0d16-065a-463f-8198-06a03414a128"
Nov 25 12:00:45 crc kubenswrapper[4706]: I1125 12:00:45.169372 4706 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/mariadb-operator-controller-manager-cb6c4fdb7-bpcjw"
Nov 25 12:00:45 crc kubenswrapper[4706]: I1125 12:00:45.170528 4706 scope.go:117] "RemoveContainer" containerID="bd4b32407fc1b555b8978b1e64da816941a645cd5f67a6ce935b7e5ca0e50e13"
Nov 25 12:00:45 crc kubenswrapper[4706]: E1125 12:00:45.170823 4706 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=manager pod=mariadb-operator-controller-manager-cb6c4fdb7-bpcjw_openstack-operators(62e72e86-38e3-4acc-8aa1-664684f27760)\"" pod="openstack-operators/mariadb-operator-controller-manager-cb6c4fdb7-bpcjw" podUID="62e72e86-38e3-4acc-8aa1-664684f27760"
Nov 25 12:00:45 crc kubenswrapper[4706]: I1125 12:00:45.209446 4706 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/neutron-operator-controller-manager-7c57c8bbc4-tfn29"
Nov 25 12:00:45 crc kubenswrapper[4706]: I1125 12:00:45.210755 4706 scope.go:117] "RemoveContainer" containerID="e0db74fe9e90de1fff19ec89cdc16a0e70b6747ee06ce85ceffd06d4ea07161f"
Nov 25 12:00:45 crc kubenswrapper[4706]: E1125 12:00:45.211190 4706 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=manager pod=neutron-operator-controller-manager-7c57c8bbc4-tfn29_openstack-operators(3c582966-ab32-499d-8f1c-95c942dd6bb4)\"" pod="openstack-operators/neutron-operator-controller-manager-7c57c8bbc4-tfn29" podUID="3c582966-ab32-499d-8f1c-95c942dd6bb4"
Nov 25 12:00:45 crc kubenswrapper[4706]: I1125 12:00:45.216086 4706 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/placement-operator-controller-manager-5db546f9d9-k7crl"
Nov 25 12:00:45 crc kubenswrapper[4706]: I1125 12:00:45.217333 4706 scope.go:117] "RemoveContainer" containerID="5aa2d062bee571f40f50fb1d425672051c914cfcd57df5100254b6b32c8ee09c"
Nov 25 12:00:45 crc kubenswrapper[4706]: E1125 12:00:45.217935 4706 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=manager pod=placement-operator-controller-manager-5db546f9d9-k7crl_openstack-operators(eab1279c-c99a-450e-887b-d246a2ff01aa)\"" pod="openstack-operators/placement-operator-controller-manager-5db546f9d9-k7crl" podUID="eab1279c-c99a-450e-887b-d246a2ff01aa"
Nov 25 12:00:45 crc kubenswrapper[4706]: I1125 12:00:45.338758 4706 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/nova-operator-controller-manager-79556f57fc-f47gl"
Nov 25 12:00:45 crc kubenswrapper[4706]: I1125 12:00:45.339426 4706 scope.go:117] "RemoveContainer" containerID="bbcbd5e92b3c8020116399644b123c6a0ecf44834665b167b35151fb974c3f10"
Nov 25 12:00:45 crc kubenswrapper[4706]: E1125 12:00:45.339678 4706 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=manager pod=nova-operator-controller-manager-79556f57fc-f47gl_openstack-operators(1c035858-a349-4415-8a5d-f3f2edb7c84e)\"" pod="openstack-operators/nova-operator-controller-manager-79556f57fc-f47gl" podUID="1c035858-a349-4415-8a5d-f3f2edb7c84e"
Nov 25 12:00:45 crc kubenswrapper[4706]: I1125 12:00:45.375546 4706 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/octavia-operator-controller-manager-fd75fd47d-2tmzq"
Nov 25 12:00:45 crc kubenswrapper[4706]: I1125 12:00:45.376209 4706 scope.go:117] "RemoveContainer" containerID="382e456c6fbd763bd7078807a9f97276eee0e98a5f9e81429cf721d7d43cbf64"
Nov 25 12:00:45 crc kubenswrapper[4706]: E1125 12:00:45.376616 4706 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=manager pod=octavia-operator-controller-manager-fd75fd47d-2tmzq_openstack-operators(063b2f44-faa1-4a58-b77b-f2140f569b01)\"" pod="openstack-operators/octavia-operator-controller-manager-fd75fd47d-2tmzq" podUID="063b2f44-faa1-4a58-b77b-f2140f569b01"
Nov 25 12:00:45 crc kubenswrapper[4706]: I1125 12:00:45.391411 4706 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ovn-operator-controller-manager-66cf5c67ff-nc6f7"
Nov 25 12:00:45 crc kubenswrapper[4706]: I1125 12:00:45.392169 4706 scope.go:117] "RemoveContainer" containerID="45575b580dd3604071bbfe6d7478f5cce4c5b94c9bd593825660f95dafda6d8f"
Nov 25 12:00:45 crc kubenswrapper[4706]: E1125 12:00:45.392494 4706 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=manager pod=ovn-operator-controller-manager-66cf5c67ff-nc6f7_openstack-operators(61b1ec50-3228-43bc-bb09-d74a7f02be52)\"" pod="openstack-operators/ovn-operator-controller-manager-66cf5c67ff-nc6f7" podUID="61b1ec50-3228-43bc-bb09-d74a7f02be52"
Nov 25 12:00:45 crc kubenswrapper[4706]: I1125 12:00:45.542410 4706 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/swift-operator-controller-manager-6fdc4fcf86-rwbvj"
Nov 25 12:00:45 crc kubenswrapper[4706]: I1125 12:00:45.543331 4706 scope.go:117] "RemoveContainer" containerID="a6a63ee316ee8f6c1c0dc3e603be4df7625b7a40b6eb74aa3998c132daaae571"
Nov 25 12:00:45 crc kubenswrapper[4706]: E1125 12:00:45.543697 4706 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=manager pod=swift-operator-controller-manager-6fdc4fcf86-rwbvj_openstack-operators(a0668604-b184-4265-b9af-fc6f526d8351)\"" pod="openstack-operators/swift-operator-controller-manager-6fdc4fcf86-rwbvj" podUID="a0668604-b184-4265-b9af-fc6f526d8351"
Nov 25 12:00:45 crc kubenswrapper[4706]: I1125 12:00:45.593320 4706 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/telemetry-operator-controller-manager-567f98c9d-8p5t2"
Nov 25 12:00:45 crc kubenswrapper[4706]: I1125 12:00:45.594457 4706 scope.go:117] "RemoveContainer" containerID="2f9e63b9b2b55d5cbd2f8d076fdd74fe65c68c1401d54183033d597c9e0ca237"
Nov 25 12:00:45 crc kubenswrapper[4706]: E1125 12:00:45.594841 4706 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=manager pod=telemetry-operator-controller-manager-567f98c9d-8p5t2_openstack-operators(a7a52f28-6bc4-481d-8513-16dbb7b37ae1)\"" pod="openstack-operators/telemetry-operator-controller-manager-567f98c9d-8p5t2" podUID="a7a52f28-6bc4-481d-8513-16dbb7b37ae1"
Nov 25 12:00:45 crc kubenswrapper[4706]: I1125 12:00:45.630887 4706 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/test-operator-controller-manager-5cb74df96-8rlr7"
Nov 25 12:00:45 crc kubenswrapper[4706]: I1125 12:00:45.756583 4706 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/watcher-operator-controller-manager-864885998-9s7hm"
Nov 25 12:00:45 crc kubenswrapper[4706]: I1125 12:00:45.757315 4706 scope.go:117] "RemoveContainer" containerID="98356e4566939db6aa79c8b5c2952865d0a73175246366956905475dff958f76"
Nov 25 12:00:45 crc kubenswrapper[4706]: E1125 12:00:45.757609 4706 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=manager pod=watcher-operator-controller-manager-864885998-9s7hm_openstack-operators(6b8e15c0-a70f-4b4c-8836-a2c4e7b23f60)\"" pod="openstack-operators/watcher-operator-controller-manager-864885998-9s7hm" podUID="6b8e15c0-a70f-4b4c-8836-a2c4e7b23f60"
Nov 25 12:00:46 crc kubenswrapper[4706]: I1125 12:00:46.122625 4706 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0"
Nov 25 12:00:47 crc kubenswrapper[4706]: I1125 12:00:47.051226 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session"
Nov 25 12:00:47 crc kubenswrapper[4706]: I1125 12:00:47.244422 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert"
Nov 25 12:00:47 crc kubenswrapper[4706]: I1125 12:00:47.293383 4706 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-controller-manager-7d76b4f6c7-xxkgj"
Nov 25 12:00:47 crc kubenswrapper[4706]: I1125 12:00:47.294279 4706 scope.go:117] "RemoveContainer" containerID="0c8124275bdfdf469c0e067b64968e66e892c3e8a689b45338d017de75edaab8"
Nov 25 12:00:47 crc kubenswrapper[4706]: E1125 12:00:47.294639 4706 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 20s restarting failed container=manager pod=metallb-operator-controller-manager-7d76b4f6c7-xxkgj_metallb-system(cdb2d830-fbc9-4336-83b7-0392051670cb)\"" pod="metallb-system/metallb-operator-controller-manager-7d76b4f6c7-xxkgj" podUID="cdb2d830-fbc9-4336-83b7-0392051670cb"
Nov 25 12:00:47 crc kubenswrapper[4706]: I1125 12:00:47.668934 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection"
Nov 25 12:00:47 crc kubenswrapper[4706]: I1125 12:00:47.680088 4706 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-webhook-server-cert"
Nov 25 12:00:47 crc kubenswrapper[4706]: I1125 12:00:47.845211 4706 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0"
Nov 25 12:00:47 crc kubenswrapper[4706]: I1125 12:00:47.845261 4706 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/kube-state-metrics-0"
Nov 25 12:00:47 crc kubenswrapper[4706]: I1125 12:00:47.845954 4706 scope.go:117] "RemoveContainer" containerID="b2110b017c561be6a8594dfbd82ff8886504d9605fbdb38f1ae9c06d61eaa857"
Nov 25 12:00:47 crc kubenswrapper[4706]: E1125 12:00:47.846223 4706 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-state-metrics\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-state-metrics pod=kube-state-metrics-0_openstack(04e7a5d0-b5fe-4a58-b015-339cc1218c6e)\"" pod="openstack/kube-state-metrics-0" podUID="04e7a5d0-b5fe-4a58-b015-339cc1218c6e"
Nov 25 12:00:47 crc kubenswrapper[4706]: I1125 12:00:47.879463 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data"
Nov 25 12:00:48 crc kubenswrapper[4706]: I1125 12:00:48.070576 4706 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle"
Nov 25 12:00:48 crc kubenswrapper[4706]: I1125 12:00:48.189379 4706 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config"
Nov 25 12:00:48 crc kubenswrapper[4706]: I1125 12:00:48.497787 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config"
Nov 25 12:00:48 crc kubenswrapper[4706]: I1125 12:00:48.506880 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-handler-dockercfg-gsml7"
Nov 25 12:00:48 crc kubenswrapper[4706]: I1125 12:00:48.573486 4706 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-dockercfg-7cdjf"
Nov 25 12:00:48 crc kubenswrapper[4706]: I1125 12:00:48.583334 4706 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt"
Nov 25 12:00:48 crc kubenswrapper[4706]: I1125 12:00:48.748844 4706 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-svc"
Nov 25 12:00:48 crc kubenswrapper[4706]: I1125 12:00:48.756159 4706 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-baremetal-operator-controller-manager-544b9bb9-qg7kk"
Nov 25 12:00:48 crc kubenswrapper[4706]: I1125 12:00:48.813260 4706 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt"
Nov 25 12:00:48 crc kubenswrapper[4706]: I1125 12:00:48.849765 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error"
Nov 25 12:00:48 crc kubenswrapper[4706]: I1125 12:00:48.889563 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6"
Nov 25 12:00:48 crc kubenswrapper[4706]: I1125 12:00:48.969729 4706 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"kube-root-ca.crt"
Nov 25 12:00:49 crc kubenswrapper[4706]: I1125 12:00:49.075349
4706 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Nov 25 12:00:49 crc kubenswrapper[4706]: I1125 12:00:49.090012 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"kube-state-metrics-tls-config" Nov 25 12:00:49 crc kubenswrapper[4706]: I1125 12:00:49.110756 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Nov 25 12:00:49 crc kubenswrapper[4706]: I1125 12:00:49.116985 4706 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" start-of-body= Nov 25 12:00:49 crc kubenswrapper[4706]: I1125 12:00:49.117036 4706 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" Nov 25 12:00:49 crc kubenswrapper[4706]: I1125 12:00:49.138395 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Nov 25 12:00:49 crc kubenswrapper[4706]: I1125 12:00:49.184770 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-nb-dockercfg-bnpw5" Nov 25 12:00:49 crc kubenswrapper[4706]: I1125 12:00:49.212650 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Nov 25 12:00:49 crc kubenswrapper[4706]: I1125 12:00:49.281038 4706 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-authentication-operator"/"kube-root-ca.crt" Nov 25 12:00:49 crc kubenswrapper[4706]: I1125 12:00:49.302952 4706 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Nov 25 12:00:49 crc kubenswrapper[4706]: I1125 12:00:49.364694 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Nov 25 12:00:49 crc kubenswrapper[4706]: I1125 12:00:49.389813 4706 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"kube-root-ca.crt" Nov 25 12:00:49 crc kubenswrapper[4706]: I1125 12:00:49.390879 4706 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-manager-9cb9fb586-5854z" Nov 25 12:00:49 crc kubenswrapper[4706]: I1125 12:00:49.391380 4706 scope.go:117] "RemoveContainer" containerID="61f3af2c32e758c04c0727c9990134586c7e8ecac7c2bc6b783202602f918a79" Nov 25 12:00:49 crc kubenswrapper[4706]: E1125 12:00:49.391598 4706 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=manager pod=openstack-operator-controller-manager-9cb9fb586-5854z_openstack-operators(2a90e9e4-814b-4c09-a6d3-f7ad3792f6b1)\"" pod="openstack-operators/openstack-operator-controller-manager-9cb9fb586-5854z" podUID="2a90e9e4-814b-4c09-a6d3-f7ad3792f6b1" Nov 25 12:00:49 crc kubenswrapper[4706]: I1125 12:00:49.405917 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovnnorthd-ovndbs" Nov 25 12:00:49 crc kubenswrapper[4706]: I1125 12:00:49.447460 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-public-svc" Nov 25 12:00:49 crc kubenswrapper[4706]: I1125 12:00:49.549937 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Nov 25 
12:00:49 crc kubenswrapper[4706]: I1125 12:00:49.637526 4706 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Nov 25 12:00:49 crc kubenswrapper[4706]: I1125 12:00:49.662083 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Nov 25 12:00:49 crc kubenswrapper[4706]: I1125 12:00:49.760814 4706 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Nov 25 12:00:49 crc kubenswrapper[4706]: I1125 12:00:49.798311 4706 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"openshift-service-ca.crt" Nov 25 12:00:49 crc kubenswrapper[4706]: I1125 12:00:49.824537 4706 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Nov 25 12:00:49 crc kubenswrapper[4706]: I1125 12:00:49.866627 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Nov 25 12:00:49 crc kubenswrapper[4706]: I1125 12:00:49.966448 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Nov 25 12:00:49 crc kubenswrapper[4706]: I1125 12:00:49.966573 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Nov 25 12:00:50 crc kubenswrapper[4706]: I1125 12:00:50.013989 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"cinder-operator-controller-manager-dockercfg-vdzbk" Nov 25 12:00:50 crc kubenswrapper[4706]: I1125 12:00:50.078657 4706 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Nov 25 12:00:50 crc kubenswrapper[4706]: I1125 12:00:50.163747 4706 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Nov 25 12:00:50 crc kubenswrapper[4706]: I1125 12:00:50.415874 4706 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Nov 25 12:00:50 crc kubenswrapper[4706]: I1125 12:00:50.555593 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Nov 25 12:00:50 crc kubenswrapper[4706]: I1125 12:00:50.583813 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Nov 25 12:00:50 crc kubenswrapper[4706]: I1125 12:00:50.614722 4706 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-scripts" Nov 25 12:00:50 crc kubenswrapper[4706]: I1125 12:00:50.667669 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Nov 25 12:00:50 crc kubenswrapper[4706]: I1125 12:00:50.710965 4706 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Nov 25 12:00:50 crc kubenswrapper[4706]: I1125 12:00:50.750639 4706 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Nov 25 12:00:50 crc kubenswrapper[4706]: I1125 12:00:50.766998 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Nov 25 12:00:50 crc kubenswrapper[4706]: I1125 12:00:50.799274 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"swift-operator-controller-manager-dockercfg-zg5d7" Nov 25 12:00:50 crc kubenswrapper[4706]: I1125 12:00:50.835956 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-internal-svc" Nov 25 12:00:50 crc kubenswrapper[4706]: I1125 12:00:50.922122 4706 scope.go:117] "RemoveContainer" 
containerID="18f0cfcff6c07f2ca4cccd7935e7fdd089c5403b99b18d48a7835dbcfb895cec" Nov 25 12:00:50 crc kubenswrapper[4706]: I1125 12:00:50.936527 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-whr6h" Nov 25 12:00:50 crc kubenswrapper[4706]: I1125 12:00:50.946369 4706 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Nov 25 12:00:50 crc kubenswrapper[4706]: I1125 12:00:50.952261 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Nov 25 12:00:50 crc kubenswrapper[4706]: I1125 12:00:50.954639 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Nov 25 12:00:51 crc kubenswrapper[4706]: I1125 12:00:51.016768 4706 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config" Nov 25 12:00:51 crc kubenswrapper[4706]: I1125 12:00:51.073035 4706 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Nov 25 12:00:51 crc kubenswrapper[4706]: I1125 12:00:51.087480 4706 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Nov 25 12:00:51 crc kubenswrapper[4706]: I1125 12:00:51.120449 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-index-dockercfg-lk58c" Nov 25 12:00:51 crc kubenswrapper[4706]: I1125 12:00:51.139032 4706 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Nov 25 12:00:51 crc kubenswrapper[4706]: I1125 12:00:51.162346 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Nov 25 12:00:51 crc kubenswrapper[4706]: I1125 12:00:51.184893 
4706 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"heat-operator-controller-manager-dockercfg-wdhpk" Nov 25 12:00:51 crc kubenswrapper[4706]: I1125 12:00:51.241239 4706 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Nov 25 12:00:51 crc kubenswrapper[4706]: I1125 12:00:51.264213 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Nov 25 12:00:51 crc kubenswrapper[4706]: I1125 12:00:51.286023 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"default-dockercfg-z7mtb" Nov 25 12:00:51 crc kubenswrapper[4706]: I1125 12:00:51.315713 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Nov 25 12:00:51 crc kubenswrapper[4706]: I1125 12:00:51.332685 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Nov 25 12:00:51 crc kubenswrapper[4706]: I1125 12:00:51.405059 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Nov 25 12:00:51 crc kubenswrapper[4706]: I1125 12:00:51.405579 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Nov 25 12:00:51 crc kubenswrapper[4706]: I1125 12:00:51.422333 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"mariadb-operator-controller-manager-dockercfg-ztnhk" Nov 25 12:00:51 crc kubenswrapper[4706]: I1125 12:00:51.516466 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Nov 25 12:00:51 crc kubenswrapper[4706]: I1125 12:00:51.578771 4706 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-kube-storage-version-migrator-operator"/"config" Nov 25 12:00:51 crc kubenswrapper[4706]: I1125 12:00:51.584235 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-memcached-svc" Nov 25 12:00:51 crc kubenswrapper[4706]: I1125 12:00:51.598502 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Nov 25 12:00:51 crc kubenswrapper[4706]: I1125 12:00:51.644113 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Nov 25 12:00:51 crc kubenswrapper[4706]: I1125 12:00:51.703350 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Nov 25 12:00:51 crc kubenswrapper[4706]: I1125 12:00:51.718032 4706 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Nov 25 12:00:51 crc kubenswrapper[4706]: I1125 12:00:51.749219 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Nov 25 12:00:51 crc kubenswrapper[4706]: I1125 12:00:51.757906 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-b2nhx" Nov 25 12:00:51 crc kubenswrapper[4706]: I1125 12:00:51.790850 4706 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-certs-secret" Nov 25 12:00:51 crc kubenswrapper[4706]: I1125 12:00:51.836738 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-manager-dockercfg-ljpcz" Nov 25 12:00:51 crc kubenswrapper[4706]: I1125 12:00:51.863549 4706 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Nov 25 12:00:51 crc kubenswrapper[4706]: I1125 
12:00:51.941788 4706 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"metallb-excludel2" Nov 25 12:00:51 crc kubenswrapper[4706]: I1125 12:00:51.942159 4706 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-dockercfg-n25zr" Nov 25 12:00:51 crc kubenswrapper[4706]: I1125 12:00:51.945377 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Nov 25 12:00:51 crc kubenswrapper[4706]: I1125 12:00:51.961475 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Nov 25 12:00:51 crc kubenswrapper[4706]: I1125 12:00:51.963830 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Nov 25 12:00:51 crc kubenswrapper[4706]: I1125 12:00:51.974168 4706 scope.go:117] "RemoveContainer" containerID="5e7d77c1809cd4777b6b38468940c6d796f1de3c3476a6a7453212e68d632afa" Nov 25 12:00:51 crc kubenswrapper[4706]: I1125 12:00:51.990459 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-5bfcdc958c-l4m6r" event={"ID":"9e5a3424-dd89-4411-872f-70447506cf73","Type":"ContainerStarted","Data":"b10ced0f9e57e269f55286fc9787deb23da4c3698fdc69ccd0ae117103966d07"} Nov 25 12:00:51 crc kubenswrapper[4706]: I1125 12:00:51.990746 4706 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ironic-operator-controller-manager-5bfcdc958c-l4m6r" Nov 25 12:00:52 crc kubenswrapper[4706]: I1125 12:00:52.043633 4706 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"openshift-service-ca.crt" Nov 25 12:00:52 crc kubenswrapper[4706]: I1125 12:00:52.064230 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Nov 25 12:00:52 crc kubenswrapper[4706]: I1125 
12:00:52.073929 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-keystone-listener-config-data" Nov 25 12:00:52 crc kubenswrapper[4706]: I1125 12:00:52.089364 4706 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Nov 25 12:00:52 crc kubenswrapper[4706]: I1125 12:00:52.133357 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Nov 25 12:00:52 crc kubenswrapper[4706]: I1125 12:00:52.134649 4706 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Nov 25 12:00:52 crc kubenswrapper[4706]: I1125 12:00:52.147563 4706 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-memberlist" Nov 25 12:00:52 crc kubenswrapper[4706]: I1125 12:00:52.188897 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Nov 25 12:00:52 crc kubenswrapper[4706]: I1125 12:00:52.294563 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Nov 25 12:00:52 crc kubenswrapper[4706]: I1125 12:00:52.309043 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Nov 25 12:00:52 crc kubenswrapper[4706]: I1125 12:00:52.357579 4706 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Nov 25 12:00:52 crc kubenswrapper[4706]: I1125 12:00:52.358557 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Nov 25 12:00:52 crc kubenswrapper[4706]: I1125 12:00:52.373369 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Nov 25 12:00:52 crc 
kubenswrapper[4706]: I1125 12:00:52.389977 4706 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Nov 25 12:00:52 crc kubenswrapper[4706]: I1125 12:00:52.394451 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Nov 25 12:00:52 crc kubenswrapper[4706]: I1125 12:00:52.434177 4706 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Nov 25 12:00:52 crc kubenswrapper[4706]: I1125 12:00:52.480521 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Nov 25 12:00:52 crc kubenswrapper[4706]: I1125 12:00:52.525285 4706 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"horizon-config-data" Nov 25 12:00:52 crc kubenswrapper[4706]: I1125 12:00:52.574201 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"memcached-memcached-dockercfg-qnhsx" Nov 25 12:00:52 crc kubenswrapper[4706]: I1125 12:00:52.675185 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Nov 25 12:00:52 crc kubenswrapper[4706]: I1125 12:00:52.709727 4706 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Nov 25 12:00:52 crc kubenswrapper[4706]: I1125 12:00:52.765640 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Nov 25 12:00:52 crc kubenswrapper[4706]: I1125 12:00:52.799197 4706 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Nov 25 12:00:52 crc kubenswrapper[4706]: I1125 12:00:52.878164 4706 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Nov 25 12:00:52 crc 
kubenswrapper[4706]: I1125 12:00:52.886376 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Nov 25 12:00:52 crc kubenswrapper[4706]: I1125 12:00:52.940875 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Nov 25 12:00:52 crc kubenswrapper[4706]: I1125 12:00:52.957254 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Nov 25 12:00:53 crc kubenswrapper[4706]: I1125 12:00:53.001559 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-x9x4q" event={"ID":"5726a389-32eb-4f0c-938b-6f2ddbb762e7","Type":"ContainerStarted","Data":"eba63631d0c24d28e0abc541766c4088d2247f90cf9591d7300840348db21129"} Nov 25 12:00:53 crc kubenswrapper[4706]: I1125 12:00:53.004072 4706 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-scripts" Nov 25 12:00:53 crc kubenswrapper[4706]: I1125 12:00:53.009661 4706 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns" Nov 25 12:00:53 crc kubenswrapper[4706]: I1125 12:00:53.031952 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Nov 25 12:00:53 crc kubenswrapper[4706]: I1125 12:00:53.052946 4706 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Nov 25 12:00:53 crc kubenswrapper[4706]: I1125 12:00:53.058190 4706 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-swift-storage-0" Nov 25 12:00:53 crc kubenswrapper[4706]: I1125 12:00:53.093130 4706 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-operator-5789f9b844-cfvkd" Nov 25 12:00:53 crc kubenswrapper[4706]: I1125 
12:00:53.143425 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ovn-operator-controller-manager-dockercfg-q2ntn" Nov 25 12:00:53 crc kubenswrapper[4706]: I1125 12:00:53.209441 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Nov 25 12:00:53 crc kubenswrapper[4706]: I1125 12:00:53.213903 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-worker-config-data" Nov 25 12:00:53 crc kubenswrapper[4706]: I1125 12:00:53.247718 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Nov 25 12:00:53 crc kubenswrapper[4706]: I1125 12:00:53.279918 4706 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Nov 25 12:00:53 crc kubenswrapper[4706]: I1125 12:00:53.283522 4706 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Nov 25 12:00:53 crc kubenswrapper[4706]: I1125 12:00:53.314285 4706 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openshift-service-ca.crt" Nov 25 12:00:53 crc kubenswrapper[4706]: I1125 12:00:53.330877 4706 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Nov 25 12:00:53 crc kubenswrapper[4706]: I1125 12:00:53.358553 4706 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config-data" Nov 25 12:00:53 crc kubenswrapper[4706]: I1125 12:00:53.412238 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Nov 25 12:00:53 crc kubenswrapper[4706]: I1125 12:00:53.423898 4706 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" 
Nov 25 12:00:53 crc kubenswrapper[4706]: I1125 12:00:53.439130 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-horizon-svc" Nov 25 12:00:53 crc kubenswrapper[4706]: I1125 12:00:53.439219 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-sb-dockercfg-bg9qd" Nov 25 12:00:53 crc kubenswrapper[4706]: I1125 12:00:53.515279 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"keystone-operator-controller-manager-dockercfg-sklr8" Nov 25 12:00:53 crc kubenswrapper[4706]: I1125 12:00:53.566113 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Nov 25 12:00:53 crc kubenswrapper[4706]: I1125 12:00:53.569832 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Nov 25 12:00:53 crc kubenswrapper[4706]: I1125 12:00:53.587182 4706 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-scripts" Nov 25 12:00:53 crc kubenswrapper[4706]: I1125 12:00:53.590520 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"barbican-operator-controller-manager-dockercfg-lfvgq" Nov 25 12:00:53 crc kubenswrapper[4706]: I1125 12:00:53.644670 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Nov 25 12:00:53 crc kubenswrapper[4706]: I1125 12:00:53.695901 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-nb-ovndbs" Nov 25 12:00:53 crc kubenswrapper[4706]: I1125 12:00:53.741972 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Nov 25 12:00:53 crc kubenswrapper[4706]: I1125 12:00:53.748493 4706 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Nov 25 12:00:53 crc kubenswrapper[4706]: I1125 12:00:53.758241 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Nov 25 12:00:53 crc kubenswrapper[4706]: I1125 12:00:53.778962 4706 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Nov 25 12:00:53 crc kubenswrapper[4706]: I1125 12:00:53.834459 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Nov 25 12:00:53 crc kubenswrapper[4706]: I1125 12:00:53.881460 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Nov 25 12:00:53 crc kubenswrapper[4706]: I1125 12:00:53.910195 4706 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Nov 25 12:00:53 crc kubenswrapper[4706]: I1125 12:00:53.925112 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"manila-operator-controller-manager-dockercfg-gnddp" Nov 25 12:00:53 crc kubenswrapper[4706]: I1125 12:00:53.941908 4706 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Nov 25 12:00:54 crc kubenswrapper[4706]: I1125 12:00:54.030026 4706 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Nov 25 12:00:54 crc kubenswrapper[4706]: I1125 12:00:54.084561 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncontroller-ovncontroller-dockercfg-nf5qj" Nov 25 12:00:54 crc kubenswrapper[4706]: I1125 12:00:54.117966 4706 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Nov 25 12:00:54 crc 
kubenswrapper[4706]: I1125 12:00:54.211448 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Nov 25 12:00:54 crc kubenswrapper[4706]: I1125 12:00:54.211480 4706 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Nov 25 12:00:54 crc kubenswrapper[4706]: I1125 12:00:54.244888 4706 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Nov 25 12:00:54 crc kubenswrapper[4706]: I1125 12:00:54.261367 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Nov 25 12:00:54 crc kubenswrapper[4706]: I1125 12:00:54.333931 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Nov 25 12:00:54 crc kubenswrapper[4706]: I1125 12:00:54.337103 4706 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Nov 25 12:00:54 crc kubenswrapper[4706]: I1125 12:00:54.343243 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Nov 25 12:00:54 crc kubenswrapper[4706]: I1125 12:00:54.365623 4706 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-certs-secret" Nov 25 12:00:54 crc kubenswrapper[4706]: I1125 12:00:54.366018 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Nov 25 12:00:54 crc kubenswrapper[4706]: I1125 12:00:54.385314 4706 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Nov 25 12:00:54 crc kubenswrapper[4706]: I1125 12:00:54.436639 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstackclient-openstackclient-dockercfg-bsbgm" Nov 25 12:00:54 crc 
kubenswrapper[4706]: I1125 12:00:54.447943 4706 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Nov 25 12:00:54 crc kubenswrapper[4706]: I1125 12:00:54.454786 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Nov 25 12:00:54 crc kubenswrapper[4706]: I1125 12:00:54.513062 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Nov 25 12:00:54 crc kubenswrapper[4706]: I1125 12:00:54.524311 4706 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data" Nov 25 12:00:54 crc kubenswrapper[4706]: I1125 12:00:54.535832 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"designate-operator-controller-manager-dockercfg-8v89s" Nov 25 12:00:54 crc kubenswrapper[4706]: I1125 12:00:54.538909 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Nov 25 12:00:54 crc kubenswrapper[4706]: I1125 12:00:54.545148 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Nov 25 12:00:54 crc kubenswrapper[4706]: I1125 12:00:54.613453 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Nov 25 12:00:54 crc kubenswrapper[4706]: I1125 12:00:54.658975 4706 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack-operators/barbican-operator-controller-manager-86dc4d89c8-jh5hc" Nov 25 12:00:54 crc kubenswrapper[4706]: I1125 12:00:54.659749 4706 scope.go:117] "RemoveContainer" containerID="ef7e5f61a61bf7a3cf1b053affdda0bf46af30ce0bda52a6bec7632d6440e6fa" Nov 25 12:00:54 crc 
kubenswrapper[4706]: I1125 12:00:54.682066 4706 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack-operators/cinder-operator-controller-manager-79856dc55c-4bsmv" Nov 25 12:00:54 crc kubenswrapper[4706]: I1125 12:00:54.683046 4706 scope.go:117] "RemoveContainer" containerID="e47b18a47a3c07e2621e6d16d464c800ca4775ecfde041d46f44c4816bbb48a8" Nov 25 12:00:54 crc kubenswrapper[4706]: I1125 12:00:54.691506 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-5bbq6" Nov 25 12:00:54 crc kubenswrapper[4706]: I1125 12:00:54.691750 4706 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack-operators/designate-operator-controller-manager-7d695c9b56-hqsp5" Nov 25 12:00:54 crc kubenswrapper[4706]: I1125 12:00:54.692445 4706 scope.go:117] "RemoveContainer" containerID="aadf818856cf40cc5bb27311e2a0e5af68a351235bd2ff78ace96e5175dcbaae" Nov 25 12:00:54 crc kubenswrapper[4706]: I1125 12:00:54.734758 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-operator-dockercfg-675cp" Nov 25 12:00:54 crc kubenswrapper[4706]: I1125 12:00:54.735732 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Nov 25 12:00:54 crc kubenswrapper[4706]: I1125 12:00:54.766003 4706 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack-operators/heat-operator-controller-manager-774b86978c-9bz4f" Nov 25 12:00:54 crc kubenswrapper[4706]: I1125 12:00:54.766787 4706 scope.go:117] "RemoveContainer" containerID="53384e10a33d567f69a8ca7eb18df18ae3c2e018916094498dc1e9c70ae6b819" Nov 25 12:00:54 crc kubenswrapper[4706]: I1125 12:00:54.773703 4706 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Nov 25 12:00:54 crc kubenswrapper[4706]: I1125 12:00:54.784790 4706 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-apiserver"/"serving-cert" Nov 25 12:00:54 crc kubenswrapper[4706]: I1125 12:00:54.837883 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Nov 25 12:00:54 crc kubenswrapper[4706]: I1125 12:00:54.868943 4706 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack-operators/horizon-operator-controller-manager-68c9694994-zx4v6" Nov 25 12:00:54 crc kubenswrapper[4706]: I1125 12:00:54.869758 4706 scope.go:117] "RemoveContainer" containerID="ef2d657f5558b3ac852d69ea5b513db79b4302184287ea5c0382451833e899ff" Nov 25 12:00:54 crc kubenswrapper[4706]: I1125 12:00:54.908143 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Nov 25 12:00:54 crc kubenswrapper[4706]: I1125 12:00:54.926092 4706 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Nov 25 12:00:54 crc kubenswrapper[4706]: I1125 12:00:54.927438 4706 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Nov 25 12:00:54 crc kubenswrapper[4706]: I1125 12:00:54.954628 4706 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Nov 25 12:00:54 crc kubenswrapper[4706]: I1125 12:00:54.965209 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Nov 25 12:00:54 crc kubenswrapper[4706]: I1125 12:00:54.965276 4706 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack-operators/infra-operator-controller-manager-d5cc86f4b-rfz7f" Nov 25 12:00:54 crc kubenswrapper[4706]: I1125 12:00:54.965928 4706 scope.go:117] "RemoveContainer" containerID="0d57fe1921c6d00af0f49dc1ab2240ace7cb30580498b0eb194a6acc0908dbdc" Nov 25 12:00:54 crc 
kubenswrapper[4706]: I1125 12:00:54.976458 4706 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Nov 25 12:00:54 crc kubenswrapper[4706]: I1125 12:00:54.981787 4706 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"kube-root-ca.crt" Nov 25 12:00:54 crc kubenswrapper[4706]: I1125 12:00:54.983651 4706 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack-operators/keystone-operator-controller-manager-748dc6576f-nf6gr" Nov 25 12:00:54 crc kubenswrapper[4706]: I1125 12:00:54.984245 4706 scope.go:117] "RemoveContainer" containerID="8bcc6c66d2003de20e3894ed5e4c0c7fa24621413e086dd790686ba63d835134" Nov 25 12:00:54 crc kubenswrapper[4706]: I1125 12:00:54.997191 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovn-metrics" Nov 25 12:00:54 crc kubenswrapper[4706]: I1125 12:00:54.998520 4706 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Nov 25 12:00:54 crc kubenswrapper[4706]: I1125 12:00:54.998828 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"horizon" Nov 25 12:00:55 crc kubenswrapper[4706]: I1125 12:00:55.015070 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"neutron-operator-controller-manager-dockercfg-mbbvh" Nov 25 12:00:55 crc kubenswrapper[4706]: I1125 12:00:55.046180 4706 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack-operators/glance-operator-controller-manager-68b95954c9-t6c78" Nov 25 12:00:55 crc kubenswrapper[4706]: I1125 12:00:55.046989 4706 scope.go:117] "RemoveContainer" containerID="4f8d05659443c7ea56ca378c2a6695d32450f0bc3c798529e4dab6468c1cb7ce" Nov 25 12:00:55 crc kubenswrapper[4706]: I1125 12:00:55.094131 4706 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-scripts" Nov 
25 12:00:55 crc kubenswrapper[4706]: I1125 12:00:55.106361 4706 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Nov 25 12:00:55 crc kubenswrapper[4706]: I1125 12:00:55.131757 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Nov 25 12:00:55 crc kubenswrapper[4706]: I1125 12:00:55.134281 4706 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack-operators/manila-operator-controller-manager-58bb8d67cc-fslzs" Nov 25 12:00:55 crc kubenswrapper[4706]: I1125 12:00:55.134978 4706 scope.go:117] "RemoveContainer" containerID="74dfeb763e6886a59407a60e645fbd45baddd281cbe2f7f8ee80d31cf7b1d8b3" Nov 25 12:00:55 crc kubenswrapper[4706]: I1125 12:00:55.138355 4706 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Nov 25 12:00:55 crc kubenswrapper[4706]: I1125 12:00:55.153968 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Nov 25 12:00:55 crc kubenswrapper[4706]: I1125 12:00:55.170556 4706 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack-operators/mariadb-operator-controller-manager-cb6c4fdb7-bpcjw" Nov 25 12:00:55 crc kubenswrapper[4706]: I1125 12:00:55.171260 4706 scope.go:117] "RemoveContainer" containerID="bd4b32407fc1b555b8978b1e64da816941a645cd5f67a6ce935b7e5ca0e50e13" Nov 25 12:00:55 crc kubenswrapper[4706]: I1125 12:00:55.210005 4706 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack-operators/neutron-operator-controller-manager-7c57c8bbc4-tfn29" Nov 25 12:00:55 crc kubenswrapper[4706]: I1125 12:00:55.210703 4706 scope.go:117] "RemoveContainer" containerID="e0db74fe9e90de1fff19ec89cdc16a0e70b6747ee06ce85ceffd06d4ea07161f" Nov 25 12:00:55 
crc kubenswrapper[4706]: I1125 12:00:55.214728 4706 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Nov 25 12:00:55 crc kubenswrapper[4706]: I1125 12:00:55.215849 4706 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack-operators/placement-operator-controller-manager-5db546f9d9-k7crl" Nov 25 12:00:55 crc kubenswrapper[4706]: I1125 12:00:55.216226 4706 scope.go:117] "RemoveContainer" containerID="5aa2d062bee571f40f50fb1d425672051c914cfcd57df5100254b6b32c8ee09c" Nov 25 12:00:55 crc kubenswrapper[4706]: I1125 12:00:55.231003 4706 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Nov 25 12:00:55 crc kubenswrapper[4706]: I1125 12:00:55.290164 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Nov 25 12:00:55 crc kubenswrapper[4706]: I1125 12:00:55.317854 4706 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Nov 25 12:00:55 crc kubenswrapper[4706]: I1125 12:00:55.338259 4706 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack-operators/nova-operator-controller-manager-79556f57fc-f47gl" Nov 25 12:00:55 crc kubenswrapper[4706]: I1125 12:00:55.338982 4706 scope.go:117] "RemoveContainer" containerID="bbcbd5e92b3c8020116399644b123c6a0ecf44834665b167b35151fb974c3f10" Nov 25 12:00:55 crc kubenswrapper[4706]: I1125 12:00:55.368061 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Nov 25 12:00:55 crc kubenswrapper[4706]: I1125 12:00:55.375763 4706 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack-operators/octavia-operator-controller-manager-fd75fd47d-2tmzq" Nov 25 12:00:55 crc 
kubenswrapper[4706]: I1125 12:00:55.378198 4706 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Nov 25 12:00:55 crc kubenswrapper[4706]: I1125 12:00:55.379016 4706 scope.go:117] "RemoveContainer" containerID="382e456c6fbd763bd7078807a9f97276eee0e98a5f9e81429cf721d7d43cbf64" Nov 25 12:00:55 crc kubenswrapper[4706]: I1125 12:00:55.390898 4706 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack-operators/ovn-operator-controller-manager-66cf5c67ff-nc6f7" Nov 25 12:00:55 crc kubenswrapper[4706]: I1125 12:00:55.391666 4706 scope.go:117] "RemoveContainer" containerID="45575b580dd3604071bbfe6d7478f5cce4c5b94c9bd593825660f95dafda6d8f" Nov 25 12:00:55 crc kubenswrapper[4706]: I1125 12:00:55.394940 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Nov 25 12:00:55 crc kubenswrapper[4706]: I1125 12:00:55.413009 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Nov 25 12:00:55 crc kubenswrapper[4706]: I1125 12:00:55.442824 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-public-svc" Nov 25 12:00:55 crc kubenswrapper[4706]: I1125 12:00:55.500360 4706 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Nov 25 12:00:55 crc kubenswrapper[4706]: I1125 12:00:55.541761 4706 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack-operators/swift-operator-controller-manager-6fdc4fcf86-rwbvj" Nov 25 12:00:55 crc kubenswrapper[4706]: I1125 12:00:55.543000 4706 scope.go:117] "RemoveContainer" containerID="a6a63ee316ee8f6c1c0dc3e603be4df7625b7a40b6eb74aa3998c132daaae571" Nov 25 12:00:55 crc kubenswrapper[4706]: I1125 12:00:55.583287 4706 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openstack"/"horizon-scripts" Nov 25 12:00:55 crc kubenswrapper[4706]: I1125 12:00:55.589594 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-config-secret" Nov 25 12:00:55 crc kubenswrapper[4706]: I1125 12:00:55.592342 4706 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack-operators/telemetry-operator-controller-manager-567f98c9d-8p5t2" Nov 25 12:00:55 crc kubenswrapper[4706]: I1125 12:00:55.593135 4706 scope.go:117] "RemoveContainer" containerID="2f9e63b9b2b55d5cbd2f8d076fdd74fe65c68c1401d54183033d597c9e0ca237" Nov 25 12:00:55 crc kubenswrapper[4706]: I1125 12:00:55.618028 4706 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Nov 25 12:00:55 crc kubenswrapper[4706]: I1125 12:00:55.629156 4706 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Nov 25 12:00:55 crc kubenswrapper[4706]: I1125 12:00:55.669929 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"horizon-operator-controller-manager-dockercfg-qvhvr" Nov 25 12:00:55 crc kubenswrapper[4706]: I1125 12:00:55.688840 4706 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Nov 25 12:00:55 crc kubenswrapper[4706]: I1125 12:00:55.698991 4706 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"nginx-conf" Nov 25 12:00:55 crc kubenswrapper[4706]: I1125 12:00:55.743827 4706 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"kube-root-ca.crt" Nov 25 12:00:55 crc kubenswrapper[4706]: I1125 12:00:55.756054 4706 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack-operators/watcher-operator-controller-manager-864885998-9s7hm" Nov 25 12:00:55 crc kubenswrapper[4706]: I1125 12:00:55.756860 4706 scope.go:117] 
"RemoveContainer" containerID="98356e4566939db6aa79c8b5c2952865d0a73175246366956905475dff958f76" Nov 25 12:00:55 crc kubenswrapper[4706]: I1125 12:00:55.782463 4706 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Nov 25 12:00:55 crc kubenswrapper[4706]: I1125 12:00:55.858497 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-ovndbs" Nov 25 12:00:55 crc kubenswrapper[4706]: I1125 12:00:55.871004 4706 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Nov 25 12:00:55 crc kubenswrapper[4706]: I1125 12:00:55.928737 4706 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Nov 25 12:00:55 crc kubenswrapper[4706]: I1125 12:00:55.977705 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Nov 25 12:00:55 crc kubenswrapper[4706]: I1125 12:00:55.983799 4706 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-metrics-config" Nov 25 12:00:56 crc kubenswrapper[4706]: I1125 12:00:56.049015 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-6fdc4fcf86-rwbvj" event={"ID":"a0668604-b184-4265-b9af-fc6f526d8351","Type":"ContainerStarted","Data":"a226fc73c7da52f0ca0a370709984715942c1d8a152b800ea90ead3bb019494f"} Nov 25 12:00:56 crc kubenswrapper[4706]: I1125 12:00:56.049428 4706 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/swift-operator-controller-manager-6fdc4fcf86-rwbvj" Nov 25 12:00:56 crc kubenswrapper[4706]: I1125 12:00:56.061089 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-d5cc86f4b-rfz7f" 
event={"ID":"e204aa88-c108-491e-9a73-2fca5c2ef15c","Type":"ContainerStarted","Data":"33c20da29fae54c5ffde47af014dcfe1f08c502ad3aea3eba6f326e44d2166ec"} Nov 25 12:00:56 crc kubenswrapper[4706]: I1125 12:00:56.061493 4706 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/infra-operator-controller-manager-d5cc86f4b-rfz7f" Nov 25 12:00:56 crc kubenswrapper[4706]: I1125 12:00:56.063825 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-58bb8d67cc-fslzs" event={"ID":"70fa0d16-065a-463f-8198-06a03414a128","Type":"ContainerStarted","Data":"877c174eac586425950124ac571dc6763b1febe9b3ccaa2564cee40ed00515a4"} Nov 25 12:00:56 crc kubenswrapper[4706]: I1125 12:00:56.064070 4706 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/manila-operator-controller-manager-58bb8d67cc-fslzs" Nov 25 12:00:56 crc kubenswrapper[4706]: I1125 12:00:56.066135 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-79556f57fc-f47gl" event={"ID":"1c035858-a349-4415-8a5d-f3f2edb7c84e","Type":"ContainerStarted","Data":"39276ee2accf20a8f1c770b2a9755d340d706cfb23728cf321e3c9ee28058ac1"} Nov 25 12:00:56 crc kubenswrapper[4706]: I1125 12:00:56.067359 4706 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/nova-operator-controller-manager-79556f57fc-f47gl" Nov 25 12:00:56 crc kubenswrapper[4706]: I1125 12:00:56.073811 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-7d695c9b56-hqsp5" event={"ID":"9fa65252-7bf5-4e83-beb7-dfcfa63db10d","Type":"ContainerStarted","Data":"65c1bde39926b579d0413b28c0efc62b14314a0e96620f156dc467b1c5574de7"} Nov 25 12:00:56 crc kubenswrapper[4706]: I1125 12:00:56.074040 4706 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack-operators/designate-operator-controller-manager-7d695c9b56-hqsp5" Nov 25 12:00:56 crc kubenswrapper[4706]: I1125 12:00:56.079545 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-fd75fd47d-2tmzq" event={"ID":"063b2f44-faa1-4a58-b77b-f2140f569b01","Type":"ContainerStarted","Data":"9f36e82faac665d7f0f5abba5f5340e8cfbd980d3b28d1469df085d7cd9791d0"} Nov 25 12:00:56 crc kubenswrapper[4706]: I1125 12:00:56.080281 4706 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/octavia-operator-controller-manager-fd75fd47d-2tmzq" Nov 25 12:00:56 crc kubenswrapper[4706]: I1125 12:00:56.085593 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-7c57c8bbc4-tfn29" event={"ID":"3c582966-ab32-499d-8f1c-95c942dd6bb4","Type":"ContainerStarted","Data":"4e71f8635b34f697698ad0a571b7bd721be6ef75c9c23f79f4495685713526ca"} Nov 25 12:00:56 crc kubenswrapper[4706]: I1125 12:00:56.085820 4706 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/neutron-operator-controller-manager-7c57c8bbc4-tfn29" Nov 25 12:00:56 crc kubenswrapper[4706]: I1125 12:00:56.088806 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-774b86978c-9bz4f" event={"ID":"c6de3b19-c207-4c00-8350-de810fb1f555","Type":"ContainerStarted","Data":"24b8bf2071fb960dc01021f45dc6a6187e726140f1177545c5682ed36509162c"} Nov 25 12:00:56 crc kubenswrapper[4706]: I1125 12:00:56.089509 4706 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/heat-operator-controller-manager-774b86978c-9bz4f" Nov 25 12:00:56 crc kubenswrapper[4706]: I1125 12:00:56.093134 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-cb6c4fdb7-bpcjw" 
event={"ID":"62e72e86-38e3-4acc-8aa1-664684f27760","Type":"ContainerStarted","Data":"1c5fe194a4f017ac08d64d8298e3308d6540de905a415cb32a50628361ebd594"} Nov 25 12:00:56 crc kubenswrapper[4706]: I1125 12:00:56.093650 4706 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/mariadb-operator-controller-manager-cb6c4fdb7-bpcjw" Nov 25 12:00:56 crc kubenswrapper[4706]: I1125 12:00:56.095887 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-748dc6576f-nf6gr" event={"ID":"6c41fff9-feeb-4311-a7ce-7da3a71b3e9c","Type":"ContainerStarted","Data":"c2386a665538a9dcdf4f520f14445a16ddecf128d946689befe9feb275de4b51"} Nov 25 12:00:56 crc kubenswrapper[4706]: I1125 12:00:56.096519 4706 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/keystone-operator-controller-manager-748dc6576f-nf6gr" Nov 25 12:00:56 crc kubenswrapper[4706]: I1125 12:00:56.099848 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-86dc4d89c8-jh5hc" event={"ID":"23155e14-a775-48c5-adf9-55dcfd008040","Type":"ContainerStarted","Data":"34df93559d5d5b5e9145d6addc5655f009d436de4ca8a6d9fa7d3bba5c55e6a6"} Nov 25 12:00:56 crc kubenswrapper[4706]: I1125 12:00:56.100097 4706 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/barbican-operator-controller-manager-86dc4d89c8-jh5hc" Nov 25 12:00:56 crc kubenswrapper[4706]: I1125 12:00:56.103921 4706 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Nov 25 12:00:56 crc kubenswrapper[4706]: I1125 12:00:56.105831 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-68c9694994-zx4v6" 
event={"ID":"72bbe536-121d-47c0-b473-2974b238f271","Type":"ContainerStarted","Data":"61e5f7283ed78e29da297f9e3f78ec73f5f78d804ca78e69d73dcc1fd8e84ef4"} Nov 25 12:00:56 crc kubenswrapper[4706]: I1125 12:00:56.106270 4706 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/horizon-operator-controller-manager-68c9694994-zx4v6" Nov 25 12:00:56 crc kubenswrapper[4706]: I1125 12:00:56.110656 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-79856dc55c-4bsmv" event={"ID":"ee655c82-6748-4bba-9da4-dcf73e0cff37","Type":"ContainerStarted","Data":"f2a01c922fa43ca9f13627d386fd67e7164831796fb2ff49fe266f6fa7334f54"} Nov 25 12:00:56 crc kubenswrapper[4706]: I1125 12:00:56.110857 4706 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/cinder-operator-controller-manager-79856dc55c-4bsmv" Nov 25 12:00:56 crc kubenswrapper[4706]: I1125 12:00:56.117516 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-68b95954c9-t6c78" event={"ID":"4857e509-acac-422c-87e8-2662708da599","Type":"ContainerStarted","Data":"a631b9905d2b997e9ed15a3565bfc1619f1180b90460e26ea319d739c4a6df41"} Nov 25 12:00:56 crc kubenswrapper[4706]: I1125 12:00:56.117782 4706 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/glance-operator-controller-manager-68b95954c9-t6c78" Nov 25 12:00:56 crc kubenswrapper[4706]: I1125 12:00:56.128064 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-567f98c9d-8p5t2" event={"ID":"a7a52f28-6bc4-481d-8513-16dbb7b37ae1","Type":"ContainerStarted","Data":"7a0d7c5bf3616b3baa5702378b5056d3a1c4d41fe340edfb9dff17c55c3d147c"} Nov 25 12:00:56 crc kubenswrapper[4706]: I1125 12:00:56.128600 4706 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack-operators/telemetry-operator-controller-manager-567f98c9d-8p5t2" Nov 25 12:00:56 crc kubenswrapper[4706]: I1125 12:00:56.132739 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-66cf5c67ff-nc6f7" event={"ID":"61b1ec50-3228-43bc-bb09-d74a7f02be52","Type":"ContainerStarted","Data":"2ebb348719237e4ab26568eec03fabe79cc2155f5818e16eb0d6a3c7a7c38f82"} Nov 25 12:00:56 crc kubenswrapper[4706]: I1125 12:00:56.133043 4706 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ovn-operator-controller-manager-66cf5c67ff-nc6f7" Nov 25 12:00:56 crc kubenswrapper[4706]: I1125 12:00:56.140282 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-5db546f9d9-k7crl" event={"ID":"eab1279c-c99a-450e-887b-d246a2ff01aa","Type":"ContainerStarted","Data":"acc90c2da7a91c79ff66cbf8a1633852526b16d47d1a4b0a09239c798f040d9e"} Nov 25 12:00:56 crc kubenswrapper[4706]: I1125 12:00:56.140659 4706 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/placement-operator-controller-manager-5db546f9d9-k7crl" Nov 25 12:00:56 crc kubenswrapper[4706]: I1125 12:00:56.169317 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-svc" Nov 25 12:00:56 crc kubenswrapper[4706]: I1125 12:00:56.175829 4706 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Nov 25 12:00:56 crc kubenswrapper[4706]: I1125 12:00:56.189629 4706 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"frr-startup" Nov 25 12:00:56 crc kubenswrapper[4706]: I1125 12:00:56.195208 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Nov 25 12:00:56 crc kubenswrapper[4706]: I1125 12:00:56.218964 4706 reflector.go:368] Caches 
populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Nov 25 12:00:56 crc kubenswrapper[4706]: I1125 12:00:56.257775 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Nov 25 12:00:56 crc kubenswrapper[4706]: I1125 12:00:56.342457 4706 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Nov 25 12:00:56 crc kubenswrapper[4706]: I1125 12:00:56.345292 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Nov 25 12:00:56 crc kubenswrapper[4706]: I1125 12:00:56.364627 4706 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Nov 25 12:00:56 crc kubenswrapper[4706]: I1125 12:00:56.366000 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"horizon-horizon-dockercfg-hcfgv" Nov 25 12:00:56 crc kubenswrapper[4706]: I1125 12:00:56.375435 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"placement-operator-controller-manager-dockercfg-sswwc" Nov 25 12:00:56 crc kubenswrapper[4706]: I1125 12:00:56.392153 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Nov 25 12:00:56 crc kubenswrapper[4706]: I1125 12:00:56.405444 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-p74gc" Nov 25 12:00:56 crc kubenswrapper[4706]: I1125 12:00:56.450982 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-controller-manager-dockercfg-wf72p" Nov 25 12:00:56 crc kubenswrapper[4706]: I1125 12:00:56.452024 4706 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Nov 25 12:00:56 crc kubenswrapper[4706]: I1125 12:00:56.460597 4706 reflector.go:368] Caches populated for 
*v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Nov 25 12:00:56 crc kubenswrapper[4706]: I1125 12:00:56.473590 4706 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-config" Nov 25 12:00:56 crc kubenswrapper[4706]: I1125 12:00:56.508043 4706 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Nov 25 12:00:56 crc kubenswrapper[4706]: I1125 12:00:56.543828 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"combined-ca-bundle" Nov 25 12:00:56 crc kubenswrapper[4706]: I1125 12:00:56.558047 4706 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Nov 25 12:00:56 crc kubenswrapper[4706]: I1125 12:00:56.558980 4706 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Nov 25 12:00:56 crc kubenswrapper[4706]: I1125 12:00:56.572453 4706 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Nov 25 12:00:56 crc kubenswrapper[4706]: I1125 12:00:56.573553 4706 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Nov 25 12:00:56 crc kubenswrapper[4706]: I1125 12:00:56.573747 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Nov 25 12:00:56 crc kubenswrapper[4706]: I1125 12:00:56.576572 4706 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Nov 25 12:00:56 crc kubenswrapper[4706]: I1125 12:00:56.703989 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-dockercfg-5qcxg" Nov 25 12:00:56 crc kubenswrapper[4706]: I1125 12:00:56.707591 4706 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Nov 25 12:00:56 crc kubenswrapper[4706]: I1125 12:00:56.776176 4706 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Nov 25 12:00:56 crc kubenswrapper[4706]: I1125 12:00:56.875321 4706 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Nov 25 12:00:56 crc kubenswrapper[4706]: I1125 12:00:56.961619 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Nov 25 12:00:56 crc kubenswrapper[4706]: I1125 12:00:56.968636 4706 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Nov 25 12:00:56 crc kubenswrapper[4706]: I1125 12:00:56.976954 4706 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Nov 25 12:00:57 crc kubenswrapper[4706]: I1125 12:00:57.004499 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Nov 25 12:00:57 crc kubenswrapper[4706]: I1125 12:00:57.028194 4706 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Nov 25 12:00:57 crc kubenswrapper[4706]: I1125 12:00:57.059904 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Nov 25 12:00:57 crc kubenswrapper[4706]: I1125 12:00:57.072064 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Nov 25 12:00:57 crc kubenswrapper[4706]: I1125 12:00:57.106667 4706 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Nov 25 12:00:57 crc kubenswrapper[4706]: I1125 
12:00:57.135621 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Nov 25 12:00:57 crc kubenswrapper[4706]: I1125 12:00:57.140604 4706 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-controller-manager-service-cert" Nov 25 12:00:57 crc kubenswrapper[4706]: I1125 12:00:57.153622 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-864885998-9s7hm" event={"ID":"6b8e15c0-a70f-4b4c-8836-a2c4e7b23f60","Type":"ContainerStarted","Data":"7d54e908c08a08c5bddf58c73a439487021dd6ec090841dbbe01ba9c76a4e1d9"} Nov 25 12:00:57 crc kubenswrapper[4706]: I1125 12:00:57.161792 4706 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-certs-secret" Nov 25 12:00:57 crc kubenswrapper[4706]: I1125 12:00:57.165560 4706 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Nov 25 12:00:57 crc kubenswrapper[4706]: I1125 12:00:57.230396 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"test-operator-controller-manager-dockercfg-cjb6d" Nov 25 12:00:57 crc kubenswrapper[4706]: I1125 12:00:57.311458 4706 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Nov 25 12:00:57 crc kubenswrapper[4706]: I1125 12:00:57.315043 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-public-svc" Nov 25 12:00:57 crc kubenswrapper[4706]: I1125 12:00:57.363147 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-internal-svc" Nov 25 12:00:57 crc kubenswrapper[4706]: I1125 12:00:57.384285 4706 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Nov 25 12:00:57 
crc kubenswrapper[4706]: I1125 12:00:57.389884 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-public-svc" Nov 25 12:00:57 crc kubenswrapper[4706]: I1125 12:00:57.428497 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Nov 25 12:00:57 crc kubenswrapper[4706]: I1125 12:00:57.447704 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Nov 25 12:00:57 crc kubenswrapper[4706]: I1125 12:00:57.455871 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"glance-operator-controller-manager-dockercfg-lmg22" Nov 25 12:00:57 crc kubenswrapper[4706]: I1125 12:00:57.468370 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-public-svc" Nov 25 12:00:57 crc kubenswrapper[4706]: I1125 12:00:57.473276 4706 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-daemon-dockercfg-vdkzz" Nov 25 12:00:57 crc kubenswrapper[4706]: I1125 12:00:57.492498 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-cell1-svc" Nov 25 12:00:57 crc kubenswrapper[4706]: I1125 12:00:57.504878 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Nov 25 12:00:57 crc kubenswrapper[4706]: I1125 12:00:57.522171 4706 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Nov 25 12:00:57 crc kubenswrapper[4706]: I1125 12:00:57.567119 4706 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Nov 25 12:00:57 crc kubenswrapper[4706]: I1125 12:00:57.570869 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Nov 25 12:00:57 
crc kubenswrapper[4706]: I1125 12:00:57.632361 4706 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Nov 25 12:00:57 crc kubenswrapper[4706]: I1125 12:00:57.651908 4706 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Nov 25 12:00:57 crc kubenswrapper[4706]: I1125 12:00:57.656578 4706 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Nov 25 12:00:57 crc kubenswrapper[4706]: I1125 12:00:57.665173 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Nov 25 12:00:57 crc kubenswrapper[4706]: I1125 12:00:57.666522 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Nov 25 12:00:57 crc kubenswrapper[4706]: I1125 12:00:57.670767 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Nov 25 12:00:57 crc kubenswrapper[4706]: I1125 12:00:57.684326 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Nov 25 12:00:57 crc kubenswrapper[4706]: I1125 12:00:57.690733 4706 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Nov 25 12:00:57 crc kubenswrapper[4706]: I1125 12:00:57.721567 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Nov 25 12:00:57 crc kubenswrapper[4706]: I1125 12:00:57.750452 4706 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"memcached-config-data" Nov 25 12:00:57 crc kubenswrapper[4706]: I1125 12:00:57.763526 4706 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-dns"/"dns-default-metrics-tls" Nov 25 12:00:57 crc kubenswrapper[4706]: I1125 12:00:57.832928 4706 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-config" Nov 25 12:00:57 crc kubenswrapper[4706]: I1125 12:00:57.838401 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Nov 25 12:00:57 crc kubenswrapper[4706]: I1125 12:00:57.847555 4706 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Nov 25 12:00:57 crc kubenswrapper[4706]: I1125 12:00:57.900763 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Nov 25 12:00:57 crc kubenswrapper[4706]: I1125 12:00:57.940118 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Nov 25 12:00:57 crc kubenswrapper[4706]: I1125 12:00:57.940257 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Nov 25 12:00:58 crc kubenswrapper[4706]: I1125 12:00:58.033730 4706 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Nov 25 12:00:58 crc kubenswrapper[4706]: I1125 12:00:58.075782 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Nov 25 12:00:58 crc kubenswrapper[4706]: I1125 12:00:58.145284 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-swift-dockercfg-gts76" Nov 25 12:00:58 crc kubenswrapper[4706]: I1125 12:00:58.162117 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-internal-svc" Nov 25 12:00:58 crc kubenswrapper[4706]: I1125 12:00:58.177146 4706 reflector.go:368] Caches 
populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Nov 25 12:00:58 crc kubenswrapper[4706]: I1125 12:00:58.206862 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Nov 25 12:00:58 crc kubenswrapper[4706]: I1125 12:00:58.210162 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"plugin-serving-cert" Nov 25 12:00:58 crc kubenswrapper[4706]: I1125 12:00:58.232841 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-cell1-dockercfg-fcl7v" Nov 25 12:00:58 crc kubenswrapper[4706]: I1125 12:00:58.286930 4706 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-nb" Nov 25 12:00:58 crc kubenswrapper[4706]: I1125 12:00:58.317398 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Nov 25 12:00:58 crc kubenswrapper[4706]: I1125 12:00:58.321497 4706 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Nov 25 12:00:58 crc kubenswrapper[4706]: I1125 12:00:58.345684 4706 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Nov 25 12:00:58 crc kubenswrapper[4706]: I1125 12:00:58.390687 4706 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"openshift-service-ca.crt" Nov 25 12:00:58 crc kubenswrapper[4706]: I1125 12:00:58.421177 4706 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Nov 25 12:00:58 crc kubenswrapper[4706]: I1125 12:00:58.427076 4706 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-cainjector-dockercfg-8v9gh" Nov 25 12:00:58 crc kubenswrapper[4706]: I1125 12:00:58.435440 4706 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Nov 25 12:00:58 crc kubenswrapper[4706]: I1125 12:00:58.472013 4706 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-sb" Nov 25 12:00:58 crc kubenswrapper[4706]: I1125 12:00:58.477421 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-internal-svc" Nov 25 12:00:58 crc kubenswrapper[4706]: I1125 12:00:58.501762 4706 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Nov 25 12:00:58 crc kubenswrapper[4706]: I1125 12:00:58.510760 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Nov 25 12:00:58 crc kubenswrapper[4706]: I1125 12:00:58.559761 4706 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Nov 25 12:00:58 crc kubenswrapper[4706]: I1125 12:00:58.614493 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-operator-dockercfg-v79vn" Nov 25 12:00:58 crc kubenswrapper[4706]: I1125 12:00:58.643194 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Nov 25 12:00:58 crc kubenswrapper[4706]: I1125 12:00:58.670192 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Nov 25 12:00:58 crc kubenswrapper[4706]: I1125 12:00:58.800620 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-internal-svc" Nov 25 12:00:58 crc kubenswrapper[4706]: I1125 12:00:58.800830 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Nov 25 12:00:58 crc kubenswrapper[4706]: I1125 12:00:58.818822 4706 reflector.go:368] Caches populated for 
*v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-scripts" Nov 25 12:00:58 crc kubenswrapper[4706]: I1125 12:00:58.845406 4706 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Nov 25 12:00:58 crc kubenswrapper[4706]: I1125 12:00:58.846797 4706 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Nov 25 12:00:58 crc kubenswrapper[4706]: I1125 12:00:58.854480 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-public-svc" Nov 25 12:00:58 crc kubenswrapper[4706]: I1125 12:00:58.936589 4706 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-files" Nov 25 12:00:58 crc kubenswrapper[4706]: I1125 12:00:58.989880 4706 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Nov 25 12:00:58 crc kubenswrapper[4706]: I1125 12:00:58.992395 4706 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Nov 25 12:00:59 crc kubenswrapper[4706]: I1125 12:00:59.038896 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"webhook-server-cert" Nov 25 12:00:59 crc kubenswrapper[4706]: I1125 12:00:59.096786 4706 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Nov 25 12:00:59 crc kubenswrapper[4706]: I1125 12:00:59.098817 4706 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=65.098799536 podStartE2EDuration="1m5.098799536s" podCreationTimestamp="2025-11-25 11:59:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 12:00:37.453355208 +0000 UTC m=+1446.367912589" watchObservedRunningTime="2025-11-25 12:00:59.098799536 +0000 UTC 
m=+1468.013356927" Nov 25 12:00:59 crc kubenswrapper[4706]: I1125 12:00:59.100425 4706 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=64.100419917 podStartE2EDuration="1m4.100419917s" podCreationTimestamp="2025-11-25 11:59:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 12:00:37.548717332 +0000 UTC m=+1446.463274713" watchObservedRunningTime="2025-11-25 12:00:59.100419917 +0000 UTC m=+1468.014977298" Nov 25 12:00:59 crc kubenswrapper[4706]: I1125 12:00:59.104596 4706 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podStartSLOduration=44.104588342 podStartE2EDuration="44.104588342s" podCreationTimestamp="2025-11-25 12:00:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 12:00:37.218158182 +0000 UTC m=+1446.132715573" watchObservedRunningTime="2025-11-25 12:00:59.104588342 +0000 UTC m=+1468.019145723" Nov 25 12:00:59 crc kubenswrapper[4706]: I1125 12:00:59.108467 4706 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Nov 25 12:00:59 crc kubenswrapper[4706]: I1125 12:00:59.108519 4706 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Nov 25 12:00:59 crc kubenswrapper[4706]: I1125 12:00:59.109785 4706 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Nov 25 12:00:59 crc kubenswrapper[4706]: I1125 12:00:59.114345 4706 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 25 12:00:59 crc kubenswrapper[4706]: I1125 12:00:59.116769 4706 patch_prober.go:28] interesting 
pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" start-of-body= Nov 25 12:00:59 crc kubenswrapper[4706]: I1125 12:00:59.116823 4706 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" Nov 25 12:00:59 crc kubenswrapper[4706]: I1125 12:00:59.116867 4706 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 25 12:00:59 crc kubenswrapper[4706]: I1125 12:00:59.117596 4706 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="kube-controller-manager" containerStatusID={"Type":"cri-o","ID":"974036435db73d96e085515bc74bf3f1f8548952748a0b190afc75921a7da26d"} pod="openshift-kube-controller-manager/kube-controller-manager-crc" containerMessage="Container kube-controller-manager failed startup probe, will be restarted" Nov 25 12:00:59 crc kubenswrapper[4706]: I1125 12:00:59.117692 4706 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" containerID="cri-o://974036435db73d96e085515bc74bf3f1f8548952748a0b190afc75921a7da26d" gracePeriod=30 Nov 25 12:00:59 crc kubenswrapper[4706]: I1125 12:00:59.135711 4706 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=22.135693806 podStartE2EDuration="22.135693806s" podCreationTimestamp="2025-11-25 
12:00:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 12:00:59.128684299 +0000 UTC m=+1468.043241680" watchObservedRunningTime="2025-11-25 12:00:59.135693806 +0000 UTC m=+1468.050251187" Nov 25 12:00:59 crc kubenswrapper[4706]: I1125 12:00:59.145083 4706 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-storage-config-data" Nov 25 12:00:59 crc kubenswrapper[4706]: I1125 12:00:59.191436 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-internal-svc" Nov 25 12:00:59 crc kubenswrapper[4706]: I1125 12:00:59.221801 4706 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Nov 25 12:00:59 crc kubenswrapper[4706]: I1125 12:00:59.229157 4706 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Nov 25 12:00:59 crc kubenswrapper[4706]: I1125 12:00:59.229971 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Nov 25 12:00:59 crc kubenswrapper[4706]: I1125 12:00:59.236699 4706 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Nov 25 12:00:59 crc kubenswrapper[4706]: I1125 12:00:59.255319 4706 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Nov 25 12:00:59 crc kubenswrapper[4706]: I1125 12:00:59.264810 4706 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Nov 25 12:00:59 crc kubenswrapper[4706]: I1125 12:00:59.279624 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Nov 25 12:00:59 crc kubenswrapper[4706]: I1125 12:00:59.347988 4706 
reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Nov 25 12:00:59 crc kubenswrapper[4706]: I1125 12:00:59.358901 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-api-config-data" Nov 25 12:00:59 crc kubenswrapper[4706]: I1125 12:00:59.391209 4706 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack-operators/openstack-operator-controller-manager-9cb9fb586-5854z" Nov 25 12:00:59 crc kubenswrapper[4706]: I1125 12:00:59.391969 4706 scope.go:117] "RemoveContainer" containerID="61f3af2c32e758c04c0727c9990134586c7e8ecac7c2bc6b783202602f918a79" Nov 25 12:00:59 crc kubenswrapper[4706]: I1125 12:00:59.400077 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-webhook-server-cert" Nov 25 12:00:59 crc kubenswrapper[4706]: I1125 12:00:59.449853 4706 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-service-cert" Nov 25 12:00:59 crc kubenswrapper[4706]: I1125 12:00:59.495575 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Nov 25 12:00:59 crc kubenswrapper[4706]: I1125 12:00:59.534726 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-wfhgp" Nov 25 12:00:59 crc kubenswrapper[4706]: I1125 12:00:59.549279 4706 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Nov 25 12:00:59 crc kubenswrapper[4706]: I1125 12:00:59.570283 4706 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Nov 25 12:00:59 crc kubenswrapper[4706]: I1125 12:00:59.594589 4706 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-operator-lifecycle-manager"/"pprof-cert" Nov 25 12:00:59 crc kubenswrapper[4706]: I1125 12:00:59.686327 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Nov 25 12:00:59 crc kubenswrapper[4706]: I1125 12:00:59.689136 4706 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Nov 25 12:00:59 crc kubenswrapper[4706]: I1125 12:00:59.706945 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"metrics-server-cert" Nov 25 12:00:59 crc kubenswrapper[4706]: I1125 12:00:59.761080 4706 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Nov 25 12:00:59 crc kubenswrapper[4706]: I1125 12:00:59.764997 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Nov 25 12:00:59 crc kubenswrapper[4706]: I1125 12:00:59.766643 4706 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-webhook-dockercfg-79wv8" Nov 25 12:00:59 crc kubenswrapper[4706]: I1125 12:00:59.771145 4706 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-scripts" Nov 25 12:00:59 crc kubenswrapper[4706]: I1125 12:00:59.814006 4706 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Nov 25 12:00:59 crc kubenswrapper[4706]: I1125 12:00:59.815319 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-kube-state-metrics-svc" Nov 25 12:00:59 crc kubenswrapper[4706]: I1125 12:00:59.847151 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Nov 25 12:00:59 crc kubenswrapper[4706]: I1125 12:00:59.871339 4706 reflector.go:368] Caches populated for *v1.Secret from 
object-"openstack-operators"/"watcher-operator-controller-manager-dockercfg-sh56x" Nov 25 12:00:59 crc kubenswrapper[4706]: I1125 12:00:59.922879 4706 scope.go:117] "RemoveContainer" containerID="0c8124275bdfdf469c0e067b64968e66e892c3e8a689b45338d017de75edaab8" Nov 25 12:00:59 crc kubenswrapper[4706]: I1125 12:00:59.922947 4706 scope.go:117] "RemoveContainer" containerID="b2110b017c561be6a8594dfbd82ff8886504d9605fbdb38f1ae9c06d61eaa857" Nov 25 12:00:59 crc kubenswrapper[4706]: E1125 12:00:59.923173 4706 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 20s restarting failed container=manager pod=metallb-operator-controller-manager-7d76b4f6c7-xxkgj_metallb-system(cdb2d830-fbc9-4336-83b7-0392051670cb)\"" pod="metallb-system/metallb-operator-controller-manager-7d76b4f6c7-xxkgj" podUID="cdb2d830-fbc9-4336-83b7-0392051670cb" Nov 25 12:00:59 crc kubenswrapper[4706]: I1125 12:00:59.954153 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-webhook-server-cert" Nov 25 12:00:59 crc kubenswrapper[4706]: I1125 12:00:59.958384 4706 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Nov 25 12:00:59 crc kubenswrapper[4706]: I1125 12:00:59.994313 4706 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Nov 25 12:00:59 crc kubenswrapper[4706]: I1125 12:00:59.994512 4706 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" containerID="cri-o://b881318ecf37c6c0877dc5bf960a14691cdc03852068e3d3e7e470ddb4562aa3" gracePeriod=5 Nov 25 12:01:00 crc kubenswrapper[4706]: I1125 12:01:00.004703 4706 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-authentication-operator"/"openshift-service-ca.crt" Nov 25 12:01:00 crc kubenswrapper[4706]: I1125 12:01:00.080814 4706 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Nov 25 12:01:00 crc kubenswrapper[4706]: I1125 12:01:00.124058 4706 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Nov 25 12:01:00 crc kubenswrapper[4706]: I1125 12:01:00.132590 4706 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Nov 25 12:01:00 crc kubenswrapper[4706]: I1125 12:01:00.154460 4706 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Nov 25 12:01:00 crc kubenswrapper[4706]: I1125 12:01:00.181770 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-9cb9fb586-5854z" event={"ID":"2a90e9e4-814b-4c09-a6d3-f7ad3792f6b1","Type":"ContainerStarted","Data":"a73fdbde0791501778e6323a3cd41de1abd1045703729b7c078574323ad0a2b7"} Nov 25 12:01:00 crc kubenswrapper[4706]: I1125 12:01:00.183210 4706 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-manager-9cb9fb586-5854z" Nov 25 12:01:00 crc kubenswrapper[4706]: I1125 12:01:00.196419 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-vencrypt" Nov 25 12:01:00 crc kubenswrapper[4706]: I1125 12:01:00.203888 4706 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Nov 25 12:01:00 crc kubenswrapper[4706]: I1125 12:01:00.204712 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Nov 25 12:01:00 crc kubenswrapper[4706]: I1125 
12:01:00.220762 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"openshift-nmstate-webhook" Nov 25 12:01:00 crc kubenswrapper[4706]: I1125 12:01:00.274941 4706 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-dockercfg-nhh4t" Nov 25 12:01:00 crc kubenswrapper[4706]: I1125 12:01:00.282530 4706 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-config-data" Nov 25 12:01:00 crc kubenswrapper[4706]: I1125 12:01:00.291752 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Nov 25 12:01:00 crc kubenswrapper[4706]: I1125 12:01:00.362319 4706 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Nov 25 12:01:00 crc kubenswrapper[4706]: I1125 12:01:00.415333 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"rabbitmq-cluster-operator-controller-manager-dockercfg-ncwsm" Nov 25 12:01:00 crc kubenswrapper[4706]: I1125 12:01:00.447797 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"octavia-operator-controller-manager-dockercfg-kpx5g" Nov 25 12:01:00 crc kubenswrapper[4706]: I1125 12:01:00.501692 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Nov 25 12:01:00 crc kubenswrapper[4706]: I1125 12:01:00.502882 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-sb-ovndbs" Nov 25 12:01:00 crc kubenswrapper[4706]: I1125 12:01:00.516107 4706 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Nov 25 12:01:00 crc kubenswrapper[4706]: I1125 12:01:00.538590 4706 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-image-registry"/"image-registry-operator-tls" Nov 25 12:01:00 crc kubenswrapper[4706]: I1125 12:01:00.561825 4706 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Nov 25 12:01:00 crc kubenswrapper[4706]: I1125 12:01:00.575897 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Nov 25 12:01:00 crc kubenswrapper[4706]: I1125 12:01:00.622732 4706 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Nov 25 12:01:00 crc kubenswrapper[4706]: I1125 12:01:00.640530 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-dxkkg" Nov 25 12:01:00 crc kubenswrapper[4706]: I1125 12:01:00.760068 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovncontroller-ovndbs" Nov 25 12:01:00 crc kubenswrapper[4706]: I1125 12:01:00.800245 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Nov 25 12:01:00 crc kubenswrapper[4706]: I1125 12:01:00.817090 4706 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Nov 25 12:01:00 crc kubenswrapper[4706]: I1125 12:01:00.820842 4706 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Nov 25 12:01:00 crc kubenswrapper[4706]: I1125 12:01:00.830829 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Nov 25 12:01:00 crc kubenswrapper[4706]: I1125 12:01:00.835246 4706 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Nov 25 12:01:00 crc kubenswrapper[4706]: I1125 12:01:00.878102 4706 reflector.go:368] Caches populated for 
*v1.Secret from object-"openstack"/"placement-scripts" Nov 25 12:01:00 crc kubenswrapper[4706]: I1125 12:01:00.908763 4706 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Nov 25 12:01:00 crc kubenswrapper[4706]: I1125 12:01:00.912286 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Nov 25 12:01:00 crc kubenswrapper[4706]: I1125 12:01:00.926377 4706 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Nov 25 12:01:00 crc kubenswrapper[4706]: I1125 12:01:00.936987 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Nov 25 12:01:00 crc kubenswrapper[4706]: I1125 12:01:00.937234 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-q944t" Nov 25 12:01:00 crc kubenswrapper[4706]: I1125 12:01:00.970890 4706 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Nov 25 12:01:01 crc kubenswrapper[4706]: I1125 12:01:01.108740 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"telemetry-operator-controller-manager-dockercfg-sg9ch" Nov 25 12:01:01 crc kubenswrapper[4706]: I1125 12:01:01.166064 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovnnorthd-ovnnorthd-dockercfg-xmlmw" Nov 25 12:01:01 crc kubenswrapper[4706]: I1125 12:01:01.174098 4706 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Nov 25 12:01:01 crc kubenswrapper[4706]: I1125 12:01:01.189820 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Nov 25 12:01:01 crc kubenswrapper[4706]: I1125 12:01:01.196080 4706 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"04e7a5d0-b5fe-4a58-b015-339cc1218c6e","Type":"ContainerStarted","Data":"36810a371a3219e171c7278892b8f60837cb5da07a11769b24989283b23c6c3b"} Nov 25 12:01:01 crc kubenswrapper[4706]: I1125 12:01:01.196991 4706 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Nov 25 12:01:01 crc kubenswrapper[4706]: I1125 12:01:01.253664 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-conf" Nov 25 12:01:01 crc kubenswrapper[4706]: I1125 12:01:01.274998 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Nov 25 12:01:01 crc kubenswrapper[4706]: I1125 12:01:01.404474 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Nov 25 12:01:01 crc kubenswrapper[4706]: I1125 12:01:01.453600 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Nov 25 12:01:01 crc kubenswrapper[4706]: I1125 12:01:01.468410 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-controller-manager-dockercfg-zwggv" Nov 25 12:01:01 crc kubenswrapper[4706]: I1125 12:01:01.548259 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"nova-operator-controller-manager-dockercfg-hzz89" Nov 25 12:01:01 crc kubenswrapper[4706]: I1125 12:01:01.551064 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-public-svc" Nov 25 12:01:01 crc kubenswrapper[4706]: I1125 12:01:01.687727 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Nov 25 12:01:01 crc kubenswrapper[4706]: I1125 12:01:01.688122 4706 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-cluster-version"/"kube-root-ca.crt" Nov 25 12:01:01 crc kubenswrapper[4706]: I1125 12:01:01.780271 4706 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-edpm-ipam" Nov 25 12:01:02 crc kubenswrapper[4706]: I1125 12:01:02.020520 4706 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Nov 25 12:01:02 crc kubenswrapper[4706]: I1125 12:01:02.181786 4706 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Nov 25 12:01:02 crc kubenswrapper[4706]: I1125 12:01:02.226502 4706 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Nov 25 12:01:02 crc kubenswrapper[4706]: I1125 12:01:02.241434 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-n4npr" Nov 25 12:01:02 crc kubenswrapper[4706]: I1125 12:01:02.273932 4706 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Nov 25 12:01:02 crc kubenswrapper[4706]: I1125 12:01:02.344807 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Nov 25 12:01:02 crc kubenswrapper[4706]: I1125 12:01:02.378198 4706 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Nov 25 12:01:02 crc kubenswrapper[4706]: I1125 12:01:02.426779 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-ceilometer-dockercfg-ktrdc" Nov 25 12:01:02 crc kubenswrapper[4706]: I1125 12:01:02.493095 4706 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Nov 25 12:01:02 crc kubenswrapper[4706]: I1125 12:01:02.599411 4706 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Nov 25 12:01:02 crc 
kubenswrapper[4706]: I1125 12:01:02.606127 4706 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"manager-account-dockercfg-4whb8" Nov 25 12:01:02 crc kubenswrapper[4706]: I1125 12:01:02.692606 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts" Nov 25 12:01:02 crc kubenswrapper[4706]: I1125 12:01:02.705234 4706 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Nov 25 12:01:02 crc kubenswrapper[4706]: I1125 12:01:02.882043 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Nov 25 12:01:02 crc kubenswrapper[4706]: I1125 12:01:02.886997 4706 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Nov 25 12:01:03 crc kubenswrapper[4706]: I1125 12:01:03.061148 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Nov 25 12:01:03 crc kubenswrapper[4706]: I1125 12:01:03.132848 4706 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Nov 25 12:01:03 crc kubenswrapper[4706]: I1125 12:01:03.257770 4706 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"openshift-service-ca.crt" Nov 25 12:01:03 crc kubenswrapper[4706]: I1125 12:01:03.270202 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc" Nov 25 12:01:03 crc kubenswrapper[4706]: I1125 12:01:03.316612 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-lblxg" Nov 25 12:01:03 crc kubenswrapper[4706]: I1125 12:01:03.589045 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ironic-operator-controller-manager-dockercfg-7bdcv" Nov 25 12:01:03 crc 
kubenswrapper[4706]: I1125 12:01:03.623536 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Nov 25 12:01:03 crc kubenswrapper[4706]: I1125 12:01:03.668978 4706 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"kube-root-ca.crt" Nov 25 12:01:03 crc kubenswrapper[4706]: I1125 12:01:03.688752 4706 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Nov 25 12:01:03 crc kubenswrapper[4706]: I1125 12:01:03.737817 4706 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-config" Nov 25 12:01:04 crc kubenswrapper[4706]: I1125 12:01:04.662123 4706 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/barbican-operator-controller-manager-86dc4d89c8-jh5hc" Nov 25 12:01:04 crc kubenswrapper[4706]: I1125 12:01:04.684371 4706 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/cinder-operator-controller-manager-79856dc55c-4bsmv" Nov 25 12:01:04 crc kubenswrapper[4706]: I1125 12:01:04.692458 4706 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/designate-operator-controller-manager-7d695c9b56-hqsp5" Nov 25 12:01:04 crc kubenswrapper[4706]: I1125 12:01:04.716478 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dnsmasq-dns-dockercfg-5qhcc" Nov 25 12:01:04 crc kubenswrapper[4706]: I1125 12:01:04.770638 4706 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/heat-operator-controller-manager-774b86978c-9bz4f" Nov 25 12:01:04 crc kubenswrapper[4706]: I1125 12:01:04.865051 4706 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/horizon-operator-controller-manager-68c9694994-zx4v6" Nov 25 12:01:04 crc kubenswrapper[4706]: I1125 12:01:04.972803 
4706 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/infra-operator-controller-manager-d5cc86f4b-rfz7f" Nov 25 12:01:04 crc kubenswrapper[4706]: I1125 12:01:04.986748 4706 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/keystone-operator-controller-manager-748dc6576f-nf6gr" Nov 25 12:01:05 crc kubenswrapper[4706]: I1125 12:01:05.048499 4706 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/glance-operator-controller-manager-68b95954c9-t6c78" Nov 25 12:01:05 crc kubenswrapper[4706]: I1125 12:01:05.055448 4706 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ironic-operator-controller-manager-5bfcdc958c-l4m6r" Nov 25 12:01:05 crc kubenswrapper[4706]: I1125 12:01:05.140706 4706 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/manila-operator-controller-manager-58bb8d67cc-fslzs" Nov 25 12:01:05 crc kubenswrapper[4706]: I1125 12:01:05.171205 4706 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/mariadb-operator-controller-manager-cb6c4fdb7-bpcjw" Nov 25 12:01:05 crc kubenswrapper[4706]: I1125 12:01:05.211847 4706 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/neutron-operator-controller-manager-7c57c8bbc4-tfn29" Nov 25 12:01:05 crc kubenswrapper[4706]: I1125 12:01:05.217213 4706 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/placement-operator-controller-manager-5db546f9d9-k7crl" Nov 25 12:01:05 crc kubenswrapper[4706]: I1125 12:01:05.235894 4706 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Nov 25 12:01:05 crc kubenswrapper[4706]: I1125 12:01:05.235947 4706 generic.go:334] "Generic 
(PLEG): container finished" podID="f85e55b1a89d02b0cb034b1ea31ed45a" containerID="b881318ecf37c6c0877dc5bf960a14691cdc03852068e3d3e7e470ddb4562aa3" exitCode=137 Nov 25 12:01:05 crc kubenswrapper[4706]: I1125 12:01:05.341263 4706 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/nova-operator-controller-manager-79556f57fc-f47gl" Nov 25 12:01:05 crc kubenswrapper[4706]: I1125 12:01:05.379242 4706 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/octavia-operator-controller-manager-fd75fd47d-2tmzq" Nov 25 12:01:05 crc kubenswrapper[4706]: I1125 12:01:05.393625 4706 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ovn-operator-controller-manager-66cf5c67ff-nc6f7" Nov 25 12:01:05 crc kubenswrapper[4706]: I1125 12:01:05.544585 4706 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/swift-operator-controller-manager-6fdc4fcf86-rwbvj" Nov 25 12:01:05 crc kubenswrapper[4706]: I1125 12:01:05.598661 4706 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/telemetry-operator-controller-manager-567f98c9d-8p5t2" Nov 25 12:01:05 crc kubenswrapper[4706]: I1125 12:01:05.639258 4706 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Nov 25 12:01:05 crc kubenswrapper[4706]: I1125 12:01:05.639345 4706 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Nov 25 12:01:05 crc kubenswrapper[4706]: I1125 12:01:05.756754 4706 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/watcher-operator-controller-manager-864885998-9s7hm" Nov 25 12:01:05 crc kubenswrapper[4706]: I1125 12:01:05.759148 4706 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/watcher-operator-controller-manager-864885998-9s7hm" Nov 25 12:01:05 crc kubenswrapper[4706]: I1125 12:01:05.766574 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Nov 25 12:01:05 crc kubenswrapper[4706]: I1125 12:01:05.766680 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Nov 25 12:01:05 crc kubenswrapper[4706]: I1125 12:01:05.766727 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Nov 25 12:01:05 crc kubenswrapper[4706]: I1125 12:01:05.766836 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Nov 25 12:01:05 crc kubenswrapper[4706]: I1125 12:01:05.766888 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Nov 25 12:01:05 crc kubenswrapper[4706]: I1125 12:01:05.767286 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 25 12:01:05 crc kubenswrapper[4706]: I1125 12:01:05.767316 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log" (OuterVolumeSpecName: "var-log") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 25 12:01:05 crc kubenswrapper[4706]: I1125 12:01:05.767357 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock" (OuterVolumeSpecName: "var-lock") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 25 12:01:05 crc kubenswrapper[4706]: I1125 12:01:05.767376 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests" (OuterVolumeSpecName: "manifests") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "manifests". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 25 12:01:05 crc kubenswrapper[4706]: I1125 12:01:05.767774 4706 reconciler_common.go:293] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") on node \"crc\" DevicePath \"\"" Nov 25 12:01:05 crc kubenswrapper[4706]: I1125 12:01:05.767793 4706 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") on node \"crc\" DevicePath \"\"" Nov 25 12:01:05 crc kubenswrapper[4706]: I1125 12:01:05.767807 4706 reconciler_common.go:293] "Volume detached for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") on node \"crc\" DevicePath \"\"" Nov 25 12:01:05 crc kubenswrapper[4706]: I1125 12:01:05.767819 4706 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") on node \"crc\" DevicePath \"\"" Nov 25 12:01:05 crc kubenswrapper[4706]: I1125 12:01:05.777729 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir" (OuterVolumeSpecName: "pod-resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "pod-resource-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 25 12:01:05 crc kubenswrapper[4706]: I1125 12:01:05.869975 4706 reconciler_common.go:293] "Volume detached for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") on node \"crc\" DevicePath \"\"" Nov 25 12:01:05 crc kubenswrapper[4706]: I1125 12:01:05.932838 4706 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" path="/var/lib/kubelet/pods/f85e55b1a89d02b0cb034b1ea31ed45a/volumes" Nov 25 12:01:05 crc kubenswrapper[4706]: I1125 12:01:05.933168 4706 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="" Nov 25 12:01:05 crc kubenswrapper[4706]: I1125 12:01:05.948508 4706 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Nov 25 12:01:05 crc kubenswrapper[4706]: I1125 12:01:05.948583 4706 kubelet.go:2649] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" mirrorPodUID="6fc9a046-1588-46d3-ab6d-a1786c3cd9ef" Nov 25 12:01:05 crc kubenswrapper[4706]: I1125 12:01:05.958129 4706 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Nov 25 12:01:05 crc kubenswrapper[4706]: I1125 12:01:05.958172 4706 kubelet.go:2673] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" mirrorPodUID="6fc9a046-1588-46d3-ab6d-a1786c3cd9ef" Nov 25 12:01:06 crc kubenswrapper[4706]: I1125 12:01:06.246162 4706 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Nov 25 12:01:06 crc kubenswrapper[4706]: I1125 12:01:06.246285 4706 scope.go:117] "RemoveContainer" 
containerID="b881318ecf37c6c0877dc5bf960a14691cdc03852068e3d3e7e470ddb4562aa3" Nov 25 12:01:06 crc kubenswrapper[4706]: I1125 12:01:06.246329 4706 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Nov 25 12:01:07 crc kubenswrapper[4706]: I1125 12:01:07.852903 4706 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Nov 25 12:01:09 crc kubenswrapper[4706]: I1125 12:01:09.396600 4706 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-manager-9cb9fb586-5854z" Nov 25 12:01:13 crc kubenswrapper[4706]: I1125 12:01:13.922473 4706 scope.go:117] "RemoveContainer" containerID="0c8124275bdfdf469c0e067b64968e66e892c3e8a689b45338d017de75edaab8" Nov 25 12:01:14 crc kubenswrapper[4706]: I1125 12:01:14.327009 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-7d76b4f6c7-xxkgj" event={"ID":"cdb2d830-fbc9-4336-83b7-0392051670cb","Type":"ContainerStarted","Data":"64ff3f671637fe9857cee059b199afd3b2792a14580304e0826616414ca7f9f5"} Nov 25 12:01:14 crc kubenswrapper[4706]: I1125 12:01:14.327529 4706 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-controller-manager-7d76b4f6c7-xxkgj" Nov 25 12:01:17 crc kubenswrapper[4706]: I1125 12:01:17.493842 4706 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-n2sps"] Nov 25 12:01:17 crc kubenswrapper[4706]: E1125 12:01:17.494579 4706 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Nov 25 12:01:17 crc kubenswrapper[4706]: I1125 12:01:17.494596 4706 state_mem.go:107] "Deleted CPUSet assignment" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Nov 25 12:01:17 crc 
kubenswrapper[4706]: E1125 12:01:17.494644 4706 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c2b01a11-ff6e-4718-9622-3cba2728d492" containerName="installer" Nov 25 12:01:17 crc kubenswrapper[4706]: I1125 12:01:17.494656 4706 state_mem.go:107] "Deleted CPUSet assignment" podUID="c2b01a11-ff6e-4718-9622-3cba2728d492" containerName="installer" Nov 25 12:01:17 crc kubenswrapper[4706]: I1125 12:01:17.494857 4706 memory_manager.go:354] "RemoveStaleState removing state" podUID="c2b01a11-ff6e-4718-9622-3cba2728d492" containerName="installer" Nov 25 12:01:17 crc kubenswrapper[4706]: I1125 12:01:17.494882 4706 memory_manager.go:354] "RemoveStaleState removing state" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Nov 25 12:01:17 crc kubenswrapper[4706]: I1125 12:01:17.496246 4706 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-n2sps" Nov 25 12:01:17 crc kubenswrapper[4706]: I1125 12:01:17.510799 4706 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-n2sps"] Nov 25 12:01:17 crc kubenswrapper[4706]: I1125 12:01:17.625411 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rhwf5\" (UniqueName: \"kubernetes.io/projected/ae9cc76a-5456-4d78-a95d-938272a5e895-kube-api-access-rhwf5\") pod \"redhat-marketplace-n2sps\" (UID: \"ae9cc76a-5456-4d78-a95d-938272a5e895\") " pod="openshift-marketplace/redhat-marketplace-n2sps" Nov 25 12:01:17 crc kubenswrapper[4706]: I1125 12:01:17.625455 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ae9cc76a-5456-4d78-a95d-938272a5e895-catalog-content\") pod \"redhat-marketplace-n2sps\" (UID: \"ae9cc76a-5456-4d78-a95d-938272a5e895\") " pod="openshift-marketplace/redhat-marketplace-n2sps" Nov 25 12:01:17 crc 
kubenswrapper[4706]: I1125 12:01:17.625519 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ae9cc76a-5456-4d78-a95d-938272a5e895-utilities\") pod \"redhat-marketplace-n2sps\" (UID: \"ae9cc76a-5456-4d78-a95d-938272a5e895\") " pod="openshift-marketplace/redhat-marketplace-n2sps" Nov 25 12:01:17 crc kubenswrapper[4706]: I1125 12:01:17.726820 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rhwf5\" (UniqueName: \"kubernetes.io/projected/ae9cc76a-5456-4d78-a95d-938272a5e895-kube-api-access-rhwf5\") pod \"redhat-marketplace-n2sps\" (UID: \"ae9cc76a-5456-4d78-a95d-938272a5e895\") " pod="openshift-marketplace/redhat-marketplace-n2sps" Nov 25 12:01:17 crc kubenswrapper[4706]: I1125 12:01:17.726874 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ae9cc76a-5456-4d78-a95d-938272a5e895-catalog-content\") pod \"redhat-marketplace-n2sps\" (UID: \"ae9cc76a-5456-4d78-a95d-938272a5e895\") " pod="openshift-marketplace/redhat-marketplace-n2sps" Nov 25 12:01:17 crc kubenswrapper[4706]: I1125 12:01:17.726939 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ae9cc76a-5456-4d78-a95d-938272a5e895-utilities\") pod \"redhat-marketplace-n2sps\" (UID: \"ae9cc76a-5456-4d78-a95d-938272a5e895\") " pod="openshift-marketplace/redhat-marketplace-n2sps" Nov 25 12:01:17 crc kubenswrapper[4706]: I1125 12:01:17.727433 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ae9cc76a-5456-4d78-a95d-938272a5e895-utilities\") pod \"redhat-marketplace-n2sps\" (UID: \"ae9cc76a-5456-4d78-a95d-938272a5e895\") " pod="openshift-marketplace/redhat-marketplace-n2sps" Nov 25 12:01:17 crc kubenswrapper[4706]: I1125 
12:01:17.727667 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ae9cc76a-5456-4d78-a95d-938272a5e895-catalog-content\") pod \"redhat-marketplace-n2sps\" (UID: \"ae9cc76a-5456-4d78-a95d-938272a5e895\") " pod="openshift-marketplace/redhat-marketplace-n2sps" Nov 25 12:01:17 crc kubenswrapper[4706]: I1125 12:01:17.758415 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rhwf5\" (UniqueName: \"kubernetes.io/projected/ae9cc76a-5456-4d78-a95d-938272a5e895-kube-api-access-rhwf5\") pod \"redhat-marketplace-n2sps\" (UID: \"ae9cc76a-5456-4d78-a95d-938272a5e895\") " pod="openshift-marketplace/redhat-marketplace-n2sps" Nov 25 12:01:17 crc kubenswrapper[4706]: I1125 12:01:17.818234 4706 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-n2sps" Nov 25 12:01:18 crc kubenswrapper[4706]: I1125 12:01:18.247912 4706 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-n2sps"] Nov 25 12:01:18 crc kubenswrapper[4706]: W1125 12:01:18.251780 4706 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podae9cc76a_5456_4d78_a95d_938272a5e895.slice/crio-8c2a4826275d34495aa86d3fc638f95ec8417373fc1ee9c0ed4f71ba8f62b87c WatchSource:0}: Error finding container 8c2a4826275d34495aa86d3fc638f95ec8417373fc1ee9c0ed4f71ba8f62b87c: Status 404 returned error can't find the container with id 8c2a4826275d34495aa86d3fc638f95ec8417373fc1ee9c0ed4f71ba8f62b87c Nov 25 12:01:18 crc kubenswrapper[4706]: I1125 12:01:18.380092 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-n2sps" event={"ID":"ae9cc76a-5456-4d78-a95d-938272a5e895","Type":"ContainerStarted","Data":"8c2a4826275d34495aa86d3fc638f95ec8417373fc1ee9c0ed4f71ba8f62b87c"} Nov 25 12:01:19 crc 
kubenswrapper[4706]: I1125 12:01:19.393788 4706 generic.go:334] "Generic (PLEG): container finished" podID="ae9cc76a-5456-4d78-a95d-938272a5e895" containerID="f7fd9ade3c08a185e79942da706c746f96a37a2332f0d9d19c80578b3bef3cc9" exitCode=0 Nov 25 12:01:19 crc kubenswrapper[4706]: I1125 12:01:19.393839 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-n2sps" event={"ID":"ae9cc76a-5456-4d78-a95d-938272a5e895","Type":"ContainerDied","Data":"f7fd9ade3c08a185e79942da706c746f96a37a2332f0d9d19c80578b3bef3cc9"} Nov 25 12:01:19 crc kubenswrapper[4706]: I1125 12:01:19.958370 4706 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-xlpt2"] Nov 25 12:01:19 crc kubenswrapper[4706]: I1125 12:01:19.963166 4706 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-xlpt2" Nov 25 12:01:19 crc kubenswrapper[4706]: I1125 12:01:19.973224 4706 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-xlpt2"] Nov 25 12:01:20 crc kubenswrapper[4706]: I1125 12:01:20.072541 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qs2k2\" (UniqueName: \"kubernetes.io/projected/e32f7255-77b8-4ef8-b0b1-f83e70d9f3f6-kube-api-access-qs2k2\") pod \"certified-operators-xlpt2\" (UID: \"e32f7255-77b8-4ef8-b0b1-f83e70d9f3f6\") " pod="openshift-marketplace/certified-operators-xlpt2" Nov 25 12:01:20 crc kubenswrapper[4706]: I1125 12:01:20.072695 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e32f7255-77b8-4ef8-b0b1-f83e70d9f3f6-catalog-content\") pod \"certified-operators-xlpt2\" (UID: \"e32f7255-77b8-4ef8-b0b1-f83e70d9f3f6\") " pod="openshift-marketplace/certified-operators-xlpt2" Nov 25 12:01:20 crc kubenswrapper[4706]: I1125 12:01:20.072807 
4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e32f7255-77b8-4ef8-b0b1-f83e70d9f3f6-utilities\") pod \"certified-operators-xlpt2\" (UID: \"e32f7255-77b8-4ef8-b0b1-f83e70d9f3f6\") " pod="openshift-marketplace/certified-operators-xlpt2" Nov 25 12:01:20 crc kubenswrapper[4706]: I1125 12:01:20.174704 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e32f7255-77b8-4ef8-b0b1-f83e70d9f3f6-utilities\") pod \"certified-operators-xlpt2\" (UID: \"e32f7255-77b8-4ef8-b0b1-f83e70d9f3f6\") " pod="openshift-marketplace/certified-operators-xlpt2" Nov 25 12:01:20 crc kubenswrapper[4706]: I1125 12:01:20.174758 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qs2k2\" (UniqueName: \"kubernetes.io/projected/e32f7255-77b8-4ef8-b0b1-f83e70d9f3f6-kube-api-access-qs2k2\") pod \"certified-operators-xlpt2\" (UID: \"e32f7255-77b8-4ef8-b0b1-f83e70d9f3f6\") " pod="openshift-marketplace/certified-operators-xlpt2" Nov 25 12:01:20 crc kubenswrapper[4706]: I1125 12:01:20.174837 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e32f7255-77b8-4ef8-b0b1-f83e70d9f3f6-catalog-content\") pod \"certified-operators-xlpt2\" (UID: \"e32f7255-77b8-4ef8-b0b1-f83e70d9f3f6\") " pod="openshift-marketplace/certified-operators-xlpt2" Nov 25 12:01:20 crc kubenswrapper[4706]: I1125 12:01:20.175227 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e32f7255-77b8-4ef8-b0b1-f83e70d9f3f6-utilities\") pod \"certified-operators-xlpt2\" (UID: \"e32f7255-77b8-4ef8-b0b1-f83e70d9f3f6\") " pod="openshift-marketplace/certified-operators-xlpt2" Nov 25 12:01:20 crc kubenswrapper[4706]: I1125 12:01:20.175328 4706 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e32f7255-77b8-4ef8-b0b1-f83e70d9f3f6-catalog-content\") pod \"certified-operators-xlpt2\" (UID: \"e32f7255-77b8-4ef8-b0b1-f83e70d9f3f6\") " pod="openshift-marketplace/certified-operators-xlpt2" Nov 25 12:01:20 crc kubenswrapper[4706]: I1125 12:01:20.193566 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qs2k2\" (UniqueName: \"kubernetes.io/projected/e32f7255-77b8-4ef8-b0b1-f83e70d9f3f6-kube-api-access-qs2k2\") pod \"certified-operators-xlpt2\" (UID: \"e32f7255-77b8-4ef8-b0b1-f83e70d9f3f6\") " pod="openshift-marketplace/certified-operators-xlpt2" Nov 25 12:01:20 crc kubenswrapper[4706]: I1125 12:01:20.290717 4706 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-xlpt2" Nov 25 12:01:20 crc kubenswrapper[4706]: I1125 12:01:20.448173 4706 generic.go:334] "Generic (PLEG): container finished" podID="ae9cc76a-5456-4d78-a95d-938272a5e895" containerID="3440bad00369ce9656f8065f327e6c9e101cdc5cc1fa945df53204627c2baf15" exitCode=0 Nov 25 12:01:20 crc kubenswrapper[4706]: I1125 12:01:20.448220 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-n2sps" event={"ID":"ae9cc76a-5456-4d78-a95d-938272a5e895","Type":"ContainerDied","Data":"3440bad00369ce9656f8065f327e6c9e101cdc5cc1fa945df53204627c2baf15"} Nov 25 12:01:20 crc kubenswrapper[4706]: I1125 12:01:20.765109 4706 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-xlpt2"] Nov 25 12:01:20 crc kubenswrapper[4706]: W1125 12:01:20.767223 4706 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode32f7255_77b8_4ef8_b0b1_f83e70d9f3f6.slice/crio-36a7d006bde5480b1512a94e5e9e3d705bdc3d6af0a28d1b4b818b8aadda8d1b WatchSource:0}: Error 
finding container 36a7d006bde5480b1512a94e5e9e3d705bdc3d6af0a28d1b4b818b8aadda8d1b: Status 404 returned error can't find the container with id 36a7d006bde5480b1512a94e5e9e3d705bdc3d6af0a28d1b4b818b8aadda8d1b Nov 25 12:01:21 crc kubenswrapper[4706]: I1125 12:01:21.465933 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-n2sps" event={"ID":"ae9cc76a-5456-4d78-a95d-938272a5e895","Type":"ContainerStarted","Data":"23a58faf46254cb1c8797ce5d9a8fe720331427282e830fcc9da4c1ba0ff6759"} Nov 25 12:01:21 crc kubenswrapper[4706]: I1125 12:01:21.472478 4706 generic.go:334] "Generic (PLEG): container finished" podID="e32f7255-77b8-4ef8-b0b1-f83e70d9f3f6" containerID="55a1aa231467684f0db44c3fb6a2012229970fd7f2a0aa9fef27da80e7e034b8" exitCode=0 Nov 25 12:01:21 crc kubenswrapper[4706]: I1125 12:01:21.472522 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xlpt2" event={"ID":"e32f7255-77b8-4ef8-b0b1-f83e70d9f3f6","Type":"ContainerDied","Data":"55a1aa231467684f0db44c3fb6a2012229970fd7f2a0aa9fef27da80e7e034b8"} Nov 25 12:01:21 crc kubenswrapper[4706]: I1125 12:01:21.472550 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xlpt2" event={"ID":"e32f7255-77b8-4ef8-b0b1-f83e70d9f3f6","Type":"ContainerStarted","Data":"36a7d006bde5480b1512a94e5e9e3d705bdc3d6af0a28d1b4b818b8aadda8d1b"} Nov 25 12:01:21 crc kubenswrapper[4706]: I1125 12:01:21.495558 4706 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-n2sps" podStartSLOduration=3.00473396 podStartE2EDuration="4.495520527s" podCreationTimestamp="2025-11-25 12:01:17 +0000 UTC" firstStartedPulling="2025-11-25 12:01:19.395774345 +0000 UTC m=+1488.310331736" lastFinishedPulling="2025-11-25 12:01:20.886560932 +0000 UTC m=+1489.801118303" observedRunningTime="2025-11-25 12:01:21.49525864 +0000 UTC m=+1490.409816021" 
watchObservedRunningTime="2025-11-25 12:01:21.495520527 +0000 UTC m=+1490.410077908" Nov 25 12:01:22 crc kubenswrapper[4706]: I1125 12:01:22.484825 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xlpt2" event={"ID":"e32f7255-77b8-4ef8-b0b1-f83e70d9f3f6","Type":"ContainerStarted","Data":"447b0d42ca10aada0b8d99755d758bebc001e8b324bf6b11db807053e82db634"} Nov 25 12:01:23 crc kubenswrapper[4706]: I1125 12:01:23.282549 4706 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-jmbgx"] Nov 25 12:01:23 crc kubenswrapper[4706]: I1125 12:01:23.286283 4706 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-jmbgx" Nov 25 12:01:23 crc kubenswrapper[4706]: I1125 12:01:23.305368 4706 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-jmbgx"] Nov 25 12:01:23 crc kubenswrapper[4706]: I1125 12:01:23.441794 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ae8172ec-5a1c-40ce-a6c3-49614eebf1ef-catalog-content\") pod \"community-operators-jmbgx\" (UID: \"ae8172ec-5a1c-40ce-a6c3-49614eebf1ef\") " pod="openshift-marketplace/community-operators-jmbgx" Nov 25 12:01:23 crc kubenswrapper[4706]: I1125 12:01:23.441851 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ae8172ec-5a1c-40ce-a6c3-49614eebf1ef-utilities\") pod \"community-operators-jmbgx\" (UID: \"ae8172ec-5a1c-40ce-a6c3-49614eebf1ef\") " pod="openshift-marketplace/community-operators-jmbgx" Nov 25 12:01:23 crc kubenswrapper[4706]: I1125 12:01:23.441932 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-82fp9\" (UniqueName: 
\"kubernetes.io/projected/ae8172ec-5a1c-40ce-a6c3-49614eebf1ef-kube-api-access-82fp9\") pod \"community-operators-jmbgx\" (UID: \"ae8172ec-5a1c-40ce-a6c3-49614eebf1ef\") " pod="openshift-marketplace/community-operators-jmbgx" Nov 25 12:01:23 crc kubenswrapper[4706]: I1125 12:01:23.494057 4706 generic.go:334] "Generic (PLEG): container finished" podID="e32f7255-77b8-4ef8-b0b1-f83e70d9f3f6" containerID="447b0d42ca10aada0b8d99755d758bebc001e8b324bf6b11db807053e82db634" exitCode=0 Nov 25 12:01:23 crc kubenswrapper[4706]: I1125 12:01:23.494097 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xlpt2" event={"ID":"e32f7255-77b8-4ef8-b0b1-f83e70d9f3f6","Type":"ContainerDied","Data":"447b0d42ca10aada0b8d99755d758bebc001e8b324bf6b11db807053e82db634"} Nov 25 12:01:23 crc kubenswrapper[4706]: I1125 12:01:23.543008 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-82fp9\" (UniqueName: \"kubernetes.io/projected/ae8172ec-5a1c-40ce-a6c3-49614eebf1ef-kube-api-access-82fp9\") pod \"community-operators-jmbgx\" (UID: \"ae8172ec-5a1c-40ce-a6c3-49614eebf1ef\") " pod="openshift-marketplace/community-operators-jmbgx" Nov 25 12:01:23 crc kubenswrapper[4706]: I1125 12:01:23.543129 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ae8172ec-5a1c-40ce-a6c3-49614eebf1ef-catalog-content\") pod \"community-operators-jmbgx\" (UID: \"ae8172ec-5a1c-40ce-a6c3-49614eebf1ef\") " pod="openshift-marketplace/community-operators-jmbgx" Nov 25 12:01:23 crc kubenswrapper[4706]: I1125 12:01:23.543157 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ae8172ec-5a1c-40ce-a6c3-49614eebf1ef-utilities\") pod \"community-operators-jmbgx\" (UID: \"ae8172ec-5a1c-40ce-a6c3-49614eebf1ef\") " pod="openshift-marketplace/community-operators-jmbgx" Nov 
25 12:01:23 crc kubenswrapper[4706]: I1125 12:01:23.543743 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ae8172ec-5a1c-40ce-a6c3-49614eebf1ef-catalog-content\") pod \"community-operators-jmbgx\" (UID: \"ae8172ec-5a1c-40ce-a6c3-49614eebf1ef\") " pod="openshift-marketplace/community-operators-jmbgx" Nov 25 12:01:23 crc kubenswrapper[4706]: I1125 12:01:23.543749 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ae8172ec-5a1c-40ce-a6c3-49614eebf1ef-utilities\") pod \"community-operators-jmbgx\" (UID: \"ae8172ec-5a1c-40ce-a6c3-49614eebf1ef\") " pod="openshift-marketplace/community-operators-jmbgx" Nov 25 12:01:23 crc kubenswrapper[4706]: I1125 12:01:23.563371 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-82fp9\" (UniqueName: \"kubernetes.io/projected/ae8172ec-5a1c-40ce-a6c3-49614eebf1ef-kube-api-access-82fp9\") pod \"community-operators-jmbgx\" (UID: \"ae8172ec-5a1c-40ce-a6c3-49614eebf1ef\") " pod="openshift-marketplace/community-operators-jmbgx" Nov 25 12:01:23 crc kubenswrapper[4706]: I1125 12:01:23.618641 4706 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-jmbgx" Nov 25 12:01:24 crc kubenswrapper[4706]: W1125 12:01:24.167267 4706 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podae8172ec_5a1c_40ce_a6c3_49614eebf1ef.slice/crio-4f6636724a11684ffe2d5cec9ef94b75bba20a2719734d16a7e6d84ec19b7002 WatchSource:0}: Error finding container 4f6636724a11684ffe2d5cec9ef94b75bba20a2719734d16a7e6d84ec19b7002: Status 404 returned error can't find the container with id 4f6636724a11684ffe2d5cec9ef94b75bba20a2719734d16a7e6d84ec19b7002 Nov 25 12:01:24 crc kubenswrapper[4706]: I1125 12:01:24.174733 4706 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-jmbgx"] Nov 25 12:01:24 crc kubenswrapper[4706]: I1125 12:01:24.509121 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xlpt2" event={"ID":"e32f7255-77b8-4ef8-b0b1-f83e70d9f3f6","Type":"ContainerStarted","Data":"da8d4d0b0a1a7896a32ccb9d96839ed7ae4497f43335dcef607d0390f5f20b93"} Nov 25 12:01:24 crc kubenswrapper[4706]: I1125 12:01:24.511518 4706 generic.go:334] "Generic (PLEG): container finished" podID="ae8172ec-5a1c-40ce-a6c3-49614eebf1ef" containerID="a9b402576daf2b6e7bba96069f94edfcfa6b73c2e71539e3f2457baf05fe8775" exitCode=0 Nov 25 12:01:24 crc kubenswrapper[4706]: I1125 12:01:24.511567 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jmbgx" event={"ID":"ae8172ec-5a1c-40ce-a6c3-49614eebf1ef","Type":"ContainerDied","Data":"a9b402576daf2b6e7bba96069f94edfcfa6b73c2e71539e3f2457baf05fe8775"} Nov 25 12:01:24 crc kubenswrapper[4706]: I1125 12:01:24.511594 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jmbgx" 
event={"ID":"ae8172ec-5a1c-40ce-a6c3-49614eebf1ef","Type":"ContainerStarted","Data":"4f6636724a11684ffe2d5cec9ef94b75bba20a2719734d16a7e6d84ec19b7002"} Nov 25 12:01:24 crc kubenswrapper[4706]: I1125 12:01:24.529106 4706 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-xlpt2" podStartSLOduration=3.130765795 podStartE2EDuration="5.52908705s" podCreationTimestamp="2025-11-25 12:01:19 +0000 UTC" firstStartedPulling="2025-11-25 12:01:21.474274962 +0000 UTC m=+1490.388832343" lastFinishedPulling="2025-11-25 12:01:23.872596217 +0000 UTC m=+1492.787153598" observedRunningTime="2025-11-25 12:01:24.527166542 +0000 UTC m=+1493.441723933" watchObservedRunningTime="2025-11-25 12:01:24.52908705 +0000 UTC m=+1493.443644431" Nov 25 12:01:25 crc kubenswrapper[4706]: I1125 12:01:25.674927 4706 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-xqn5d"] Nov 25 12:01:25 crc kubenswrapper[4706]: I1125 12:01:25.678136 4706 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-xqn5d" Nov 25 12:01:25 crc kubenswrapper[4706]: I1125 12:01:25.686573 4706 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-xqn5d"] Nov 25 12:01:25 crc kubenswrapper[4706]: I1125 12:01:25.813807 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/aee9d90c-4042-4e66-9535-cbc14bc710ec-utilities\") pod \"redhat-operators-xqn5d\" (UID: \"aee9d90c-4042-4e66-9535-cbc14bc710ec\") " pod="openshift-marketplace/redhat-operators-xqn5d" Nov 25 12:01:25 crc kubenswrapper[4706]: I1125 12:01:25.814095 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j98t6\" (UniqueName: \"kubernetes.io/projected/aee9d90c-4042-4e66-9535-cbc14bc710ec-kube-api-access-j98t6\") pod \"redhat-operators-xqn5d\" (UID: \"aee9d90c-4042-4e66-9535-cbc14bc710ec\") " pod="openshift-marketplace/redhat-operators-xqn5d" Nov 25 12:01:25 crc kubenswrapper[4706]: I1125 12:01:25.814201 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/aee9d90c-4042-4e66-9535-cbc14bc710ec-catalog-content\") pod \"redhat-operators-xqn5d\" (UID: \"aee9d90c-4042-4e66-9535-cbc14bc710ec\") " pod="openshift-marketplace/redhat-operators-xqn5d" Nov 25 12:01:25 crc kubenswrapper[4706]: I1125 12:01:25.916190 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/aee9d90c-4042-4e66-9535-cbc14bc710ec-utilities\") pod \"redhat-operators-xqn5d\" (UID: \"aee9d90c-4042-4e66-9535-cbc14bc710ec\") " pod="openshift-marketplace/redhat-operators-xqn5d" Nov 25 12:01:25 crc kubenswrapper[4706]: I1125 12:01:25.916285 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-j98t6\" (UniqueName: \"kubernetes.io/projected/aee9d90c-4042-4e66-9535-cbc14bc710ec-kube-api-access-j98t6\") pod \"redhat-operators-xqn5d\" (UID: \"aee9d90c-4042-4e66-9535-cbc14bc710ec\") " pod="openshift-marketplace/redhat-operators-xqn5d" Nov 25 12:01:25 crc kubenswrapper[4706]: I1125 12:01:25.916352 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/aee9d90c-4042-4e66-9535-cbc14bc710ec-catalog-content\") pod \"redhat-operators-xqn5d\" (UID: \"aee9d90c-4042-4e66-9535-cbc14bc710ec\") " pod="openshift-marketplace/redhat-operators-xqn5d" Nov 25 12:01:25 crc kubenswrapper[4706]: I1125 12:01:25.916815 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/aee9d90c-4042-4e66-9535-cbc14bc710ec-utilities\") pod \"redhat-operators-xqn5d\" (UID: \"aee9d90c-4042-4e66-9535-cbc14bc710ec\") " pod="openshift-marketplace/redhat-operators-xqn5d" Nov 25 12:01:25 crc kubenswrapper[4706]: I1125 12:01:25.917194 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/aee9d90c-4042-4e66-9535-cbc14bc710ec-catalog-content\") pod \"redhat-operators-xqn5d\" (UID: \"aee9d90c-4042-4e66-9535-cbc14bc710ec\") " pod="openshift-marketplace/redhat-operators-xqn5d" Nov 25 12:01:25 crc kubenswrapper[4706]: I1125 12:01:25.962000 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j98t6\" (UniqueName: \"kubernetes.io/projected/aee9d90c-4042-4e66-9535-cbc14bc710ec-kube-api-access-j98t6\") pod \"redhat-operators-xqn5d\" (UID: \"aee9d90c-4042-4e66-9535-cbc14bc710ec\") " pod="openshift-marketplace/redhat-operators-xqn5d" Nov 25 12:01:25 crc kubenswrapper[4706]: I1125 12:01:25.992909 4706 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-xqn5d" Nov 25 12:01:26 crc kubenswrapper[4706]: I1125 12:01:26.548203 4706 generic.go:334] "Generic (PLEG): container finished" podID="ae8172ec-5a1c-40ce-a6c3-49614eebf1ef" containerID="367788d0a7debc0b3b390b81690a0c1599cd39021377866d54eb1088bc522715" exitCode=0 Nov 25 12:01:26 crc kubenswrapper[4706]: I1125 12:01:26.548284 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jmbgx" event={"ID":"ae8172ec-5a1c-40ce-a6c3-49614eebf1ef","Type":"ContainerDied","Data":"367788d0a7debc0b3b390b81690a0c1599cd39021377866d54eb1088bc522715"} Nov 25 12:01:26 crc kubenswrapper[4706]: I1125 12:01:26.596202 4706 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-xqn5d"] Nov 25 12:01:27 crc kubenswrapper[4706]: I1125 12:01:27.565557 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jmbgx" event={"ID":"ae8172ec-5a1c-40ce-a6c3-49614eebf1ef","Type":"ContainerStarted","Data":"244f96ba61607b9fb5d5395658555520b6de7dfd30794c37618ae5f5e892c840"} Nov 25 12:01:27 crc kubenswrapper[4706]: I1125 12:01:27.568937 4706 generic.go:334] "Generic (PLEG): container finished" podID="aee9d90c-4042-4e66-9535-cbc14bc710ec" containerID="0be681322f62239c2dd15bba4134fb598ff81dc6202ca37487612e18681a10cf" exitCode=0 Nov 25 12:01:27 crc kubenswrapper[4706]: I1125 12:01:27.568991 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xqn5d" event={"ID":"aee9d90c-4042-4e66-9535-cbc14bc710ec","Type":"ContainerDied","Data":"0be681322f62239c2dd15bba4134fb598ff81dc6202ca37487612e18681a10cf"} Nov 25 12:01:27 crc kubenswrapper[4706]: I1125 12:01:27.569021 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xqn5d" 
event={"ID":"aee9d90c-4042-4e66-9535-cbc14bc710ec","Type":"ContainerStarted","Data":"44d3f30b5e9ae492b6b1495383c6bf93d074ac20050c8023f906f05856ab0a9e"} Nov 25 12:01:27 crc kubenswrapper[4706]: I1125 12:01:27.592321 4706 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-jmbgx" podStartSLOduration=2.169628502 podStartE2EDuration="4.59228809s" podCreationTimestamp="2025-11-25 12:01:23 +0000 UTC" firstStartedPulling="2025-11-25 12:01:24.512914993 +0000 UTC m=+1493.427472384" lastFinishedPulling="2025-11-25 12:01:26.935574591 +0000 UTC m=+1495.850131972" observedRunningTime="2025-11-25 12:01:27.589575262 +0000 UTC m=+1496.504132643" watchObservedRunningTime="2025-11-25 12:01:27.59228809 +0000 UTC m=+1496.506845471" Nov 25 12:01:27 crc kubenswrapper[4706]: I1125 12:01:27.819054 4706 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-n2sps" Nov 25 12:01:27 crc kubenswrapper[4706]: I1125 12:01:27.819118 4706 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-n2sps" Nov 25 12:01:27 crc kubenswrapper[4706]: I1125 12:01:27.886380 4706 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-n2sps" Nov 25 12:01:28 crc kubenswrapper[4706]: I1125 12:01:28.581089 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xqn5d" event={"ID":"aee9d90c-4042-4e66-9535-cbc14bc710ec","Type":"ContainerStarted","Data":"3a455349329ed70a60400c2655366076c791cc3d7d9672909906b441039b7c0c"} Nov 25 12:01:28 crc kubenswrapper[4706]: I1125 12:01:28.640521 4706 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-n2sps" Nov 25 12:01:29 crc kubenswrapper[4706]: I1125 12:01:29.595670 4706 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/1.log" Nov 25 12:01:29 crc kubenswrapper[4706]: I1125 12:01:29.598763 4706 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Nov 25 12:01:29 crc kubenswrapper[4706]: I1125 12:01:29.598806 4706 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" containerID="974036435db73d96e085515bc74bf3f1f8548952748a0b190afc75921a7da26d" exitCode=137 Nov 25 12:01:29 crc kubenswrapper[4706]: I1125 12:01:29.598900 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"974036435db73d96e085515bc74bf3f1f8548952748a0b190afc75921a7da26d"} Nov 25 12:01:29 crc kubenswrapper[4706]: I1125 12:01:29.598968 4706 scope.go:117] "RemoveContainer" containerID="83b1d9c60793e3e0b5943d7cccd50656df78c4655b84e12c8dd1ba7d99a7990d" Nov 25 12:01:30 crc kubenswrapper[4706]: I1125 12:01:30.291684 4706 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-xlpt2" Nov 25 12:01:30 crc kubenswrapper[4706]: I1125 12:01:30.292017 4706 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-xlpt2" Nov 25 12:01:30 crc kubenswrapper[4706]: I1125 12:01:30.355521 4706 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-xlpt2" Nov 25 12:01:30 crc kubenswrapper[4706]: I1125 12:01:30.613124 4706 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/1.log" Nov 25 
12:01:30 crc kubenswrapper[4706]: I1125 12:01:30.614111 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"6c5455edd7669a130d052855dde88e2fbfa70c723e25a4f3cfc5523d7e514e09"} Nov 25 12:01:30 crc kubenswrapper[4706]: I1125 12:01:30.672283 4706 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-xlpt2" Nov 25 12:01:33 crc kubenswrapper[4706]: I1125 12:01:33.619479 4706 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-jmbgx" Nov 25 12:01:33 crc kubenswrapper[4706]: I1125 12:01:33.620037 4706 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-jmbgx" Nov 25 12:01:33 crc kubenswrapper[4706]: I1125 12:01:33.696869 4706 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-jmbgx" Nov 25 12:01:33 crc kubenswrapper[4706]: I1125 12:01:33.762966 4706 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-jmbgx" Nov 25 12:01:33 crc kubenswrapper[4706]: I1125 12:01:33.897620 4706 scope.go:117] "RemoveContainer" containerID="31e31f09eca2ee808d40a58976f9568e28a0956920ef055ff3a9b21a43ef06a5" Nov 25 12:01:33 crc kubenswrapper[4706]: I1125 12:01:33.943902 4706 scope.go:117] "RemoveContainer" containerID="6919539afd65d7c98d0e26d0af5427f4ff6e292aa53c8a23caeadcb070322f0d" Nov 25 12:01:34 crc kubenswrapper[4706]: I1125 12:01:34.014692 4706 scope.go:117] "RemoveContainer" containerID="a0ce08dbe233b30e509c7b81643703135a7c2e986bc72e2ff04292a28c7dbbaf" Nov 25 12:01:35 crc kubenswrapper[4706]: I1125 12:01:35.277443 4706 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-n2sps"] Nov 25 
12:01:35 crc kubenswrapper[4706]: I1125 12:01:35.277915 4706 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-n2sps" podUID="ae9cc76a-5456-4d78-a95d-938272a5e895" containerName="registry-server" containerID="cri-o://23a58faf46254cb1c8797ce5d9a8fe720331427282e830fcc9da4c1ba0ff6759" gracePeriod=2 Nov 25 12:01:35 crc kubenswrapper[4706]: I1125 12:01:35.475181 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Nov 25 12:01:36 crc kubenswrapper[4706]: I1125 12:01:36.697879 4706 generic.go:334] "Generic (PLEG): container finished" podID="aee9d90c-4042-4e66-9535-cbc14bc710ec" containerID="3a455349329ed70a60400c2655366076c791cc3d7d9672909906b441039b7c0c" exitCode=0 Nov 25 12:01:36 crc kubenswrapper[4706]: I1125 12:01:36.698488 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xqn5d" event={"ID":"aee9d90c-4042-4e66-9535-cbc14bc710ec","Type":"ContainerDied","Data":"3a455349329ed70a60400c2655366076c791cc3d7d9672909906b441039b7c0c"} Nov 25 12:01:37 crc kubenswrapper[4706]: I1125 12:01:37.713360 4706 generic.go:334] "Generic (PLEG): container finished" podID="ae9cc76a-5456-4d78-a95d-938272a5e895" containerID="23a58faf46254cb1c8797ce5d9a8fe720331427282e830fcc9da4c1ba0ff6759" exitCode=0 Nov 25 12:01:37 crc kubenswrapper[4706]: I1125 12:01:37.713453 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-n2sps" event={"ID":"ae9cc76a-5456-4d78-a95d-938272a5e895","Type":"ContainerDied","Data":"23a58faf46254cb1c8797ce5d9a8fe720331427282e830fcc9da4c1ba0ff6759"} Nov 25 12:01:37 crc kubenswrapper[4706]: E1125 12:01:37.819689 4706 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 23a58faf46254cb1c8797ce5d9a8fe720331427282e830fcc9da4c1ba0ff6759 is running failed: container 
process not found" containerID="23a58faf46254cb1c8797ce5d9a8fe720331427282e830fcc9da4c1ba0ff6759" cmd=["grpc_health_probe","-addr=:50051"] Nov 25 12:01:37 crc kubenswrapper[4706]: E1125 12:01:37.820343 4706 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 23a58faf46254cb1c8797ce5d9a8fe720331427282e830fcc9da4c1ba0ff6759 is running failed: container process not found" containerID="23a58faf46254cb1c8797ce5d9a8fe720331427282e830fcc9da4c1ba0ff6759" cmd=["grpc_health_probe","-addr=:50051"] Nov 25 12:01:37 crc kubenswrapper[4706]: E1125 12:01:37.820821 4706 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 23a58faf46254cb1c8797ce5d9a8fe720331427282e830fcc9da4c1ba0ff6759 is running failed: container process not found" containerID="23a58faf46254cb1c8797ce5d9a8fe720331427282e830fcc9da4c1ba0ff6759" cmd=["grpc_health_probe","-addr=:50051"] Nov 25 12:01:37 crc kubenswrapper[4706]: E1125 12:01:37.820901 4706 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 23a58faf46254cb1c8797ce5d9a8fe720331427282e830fcc9da4c1ba0ff6759 is running failed: container process not found" probeType="Readiness" pod="openshift-marketplace/redhat-marketplace-n2sps" podUID="ae9cc76a-5456-4d78-a95d-938272a5e895" containerName="registry-server" Nov 25 12:01:38 crc kubenswrapper[4706]: I1125 12:01:38.550770 4706 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-n2sps" Nov 25 12:01:38 crc kubenswrapper[4706]: I1125 12:01:38.670898 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rhwf5\" (UniqueName: \"kubernetes.io/projected/ae9cc76a-5456-4d78-a95d-938272a5e895-kube-api-access-rhwf5\") pod \"ae9cc76a-5456-4d78-a95d-938272a5e895\" (UID: \"ae9cc76a-5456-4d78-a95d-938272a5e895\") " Nov 25 12:01:38 crc kubenswrapper[4706]: I1125 12:01:38.670994 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ae9cc76a-5456-4d78-a95d-938272a5e895-utilities\") pod \"ae9cc76a-5456-4d78-a95d-938272a5e895\" (UID: \"ae9cc76a-5456-4d78-a95d-938272a5e895\") " Nov 25 12:01:38 crc kubenswrapper[4706]: I1125 12:01:38.671061 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ae9cc76a-5456-4d78-a95d-938272a5e895-catalog-content\") pod \"ae9cc76a-5456-4d78-a95d-938272a5e895\" (UID: \"ae9cc76a-5456-4d78-a95d-938272a5e895\") " Nov 25 12:01:38 crc kubenswrapper[4706]: I1125 12:01:38.671645 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ae9cc76a-5456-4d78-a95d-938272a5e895-utilities" (OuterVolumeSpecName: "utilities") pod "ae9cc76a-5456-4d78-a95d-938272a5e895" (UID: "ae9cc76a-5456-4d78-a95d-938272a5e895"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 12:01:38 crc kubenswrapper[4706]: I1125 12:01:38.676354 4706 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-xlpt2"] Nov 25 12:01:38 crc kubenswrapper[4706]: I1125 12:01:38.676803 4706 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-xlpt2" podUID="e32f7255-77b8-4ef8-b0b1-f83e70d9f3f6" containerName="registry-server" containerID="cri-o://da8d4d0b0a1a7896a32ccb9d96839ed7ae4497f43335dcef607d0390f5f20b93" gracePeriod=2 Nov 25 12:01:38 crc kubenswrapper[4706]: I1125 12:01:38.679289 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ae9cc76a-5456-4d78-a95d-938272a5e895-kube-api-access-rhwf5" (OuterVolumeSpecName: "kube-api-access-rhwf5") pod "ae9cc76a-5456-4d78-a95d-938272a5e895" (UID: "ae9cc76a-5456-4d78-a95d-938272a5e895"). InnerVolumeSpecName "kube-api-access-rhwf5". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 12:01:38 crc kubenswrapper[4706]: I1125 12:01:38.688606 4706 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rhwf5\" (UniqueName: \"kubernetes.io/projected/ae9cc76a-5456-4d78-a95d-938272a5e895-kube-api-access-rhwf5\") on node \"crc\" DevicePath \"\"" Nov 25 12:01:38 crc kubenswrapper[4706]: I1125 12:01:38.688642 4706 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ae9cc76a-5456-4d78-a95d-938272a5e895-utilities\") on node \"crc\" DevicePath \"\"" Nov 25 12:01:38 crc kubenswrapper[4706]: I1125 12:01:38.690972 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ae9cc76a-5456-4d78-a95d-938272a5e895-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ae9cc76a-5456-4d78-a95d-938272a5e895" (UID: "ae9cc76a-5456-4d78-a95d-938272a5e895"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 12:01:38 crc kubenswrapper[4706]: I1125 12:01:38.731865 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-n2sps" event={"ID":"ae9cc76a-5456-4d78-a95d-938272a5e895","Type":"ContainerDied","Data":"8c2a4826275d34495aa86d3fc638f95ec8417373fc1ee9c0ed4f71ba8f62b87c"} Nov 25 12:01:38 crc kubenswrapper[4706]: I1125 12:01:38.731921 4706 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-n2sps" Nov 25 12:01:38 crc kubenswrapper[4706]: I1125 12:01:38.731931 4706 scope.go:117] "RemoveContainer" containerID="23a58faf46254cb1c8797ce5d9a8fe720331427282e830fcc9da4c1ba0ff6759" Nov 25 12:01:38 crc kubenswrapper[4706]: I1125 12:01:38.734867 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xqn5d" event={"ID":"aee9d90c-4042-4e66-9535-cbc14bc710ec","Type":"ContainerStarted","Data":"599bb0331f0cc5cb3b3dc7964cf325fbdd2532c3ca3514db1208c0b3fb01ff9f"} Nov 25 12:01:38 crc kubenswrapper[4706]: I1125 12:01:38.762319 4706 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-xqn5d" podStartSLOduration=3.083795866 podStartE2EDuration="13.762279665s" podCreationTimestamp="2025-11-25 12:01:25 +0000 UTC" firstStartedPulling="2025-11-25 12:01:27.571463025 +0000 UTC m=+1496.486020406" lastFinishedPulling="2025-11-25 12:01:38.249946834 +0000 UTC m=+1507.164504205" observedRunningTime="2025-11-25 12:01:38.753918674 +0000 UTC m=+1507.668476055" watchObservedRunningTime="2025-11-25 12:01:38.762279665 +0000 UTC m=+1507.676837046" Nov 25 12:01:38 crc kubenswrapper[4706]: I1125 12:01:38.790285 4706 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ae9cc76a-5456-4d78-a95d-938272a5e895-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 25 12:01:38 crc 
kubenswrapper[4706]: I1125 12:01:38.877374 4706 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-jmbgx"] Nov 25 12:01:38 crc kubenswrapper[4706]: I1125 12:01:38.877657 4706 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-jmbgx" podUID="ae8172ec-5a1c-40ce-a6c3-49614eebf1ef" containerName="registry-server" containerID="cri-o://244f96ba61607b9fb5d5395658555520b6de7dfd30794c37618ae5f5e892c840" gracePeriod=2 Nov 25 12:01:38 crc kubenswrapper[4706]: I1125 12:01:38.884676 4706 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-n2sps"] Nov 25 12:01:38 crc kubenswrapper[4706]: I1125 12:01:38.894514 4706 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-n2sps"] Nov 25 12:01:38 crc kubenswrapper[4706]: I1125 12:01:38.895967 4706 scope.go:117] "RemoveContainer" containerID="3440bad00369ce9656f8065f327e6c9e101cdc5cc1fa945df53204627c2baf15" Nov 25 12:01:38 crc kubenswrapper[4706]: I1125 12:01:38.937918 4706 scope.go:117] "RemoveContainer" containerID="f7fd9ade3c08a185e79942da706c746f96a37a2332f0d9d19c80578b3bef3cc9" Nov 25 12:01:39 crc kubenswrapper[4706]: I1125 12:01:39.116693 4706 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 25 12:01:39 crc kubenswrapper[4706]: I1125 12:01:39.121535 4706 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 25 12:01:39 crc kubenswrapper[4706]: I1125 12:01:39.178021 4706 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-xlpt2" Nov 25 12:01:39 crc kubenswrapper[4706]: I1125 12:01:39.343739 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e32f7255-77b8-4ef8-b0b1-f83e70d9f3f6-utilities\") pod \"e32f7255-77b8-4ef8-b0b1-f83e70d9f3f6\" (UID: \"e32f7255-77b8-4ef8-b0b1-f83e70d9f3f6\") " Nov 25 12:01:39 crc kubenswrapper[4706]: I1125 12:01:39.343777 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qs2k2\" (UniqueName: \"kubernetes.io/projected/e32f7255-77b8-4ef8-b0b1-f83e70d9f3f6-kube-api-access-qs2k2\") pod \"e32f7255-77b8-4ef8-b0b1-f83e70d9f3f6\" (UID: \"e32f7255-77b8-4ef8-b0b1-f83e70d9f3f6\") " Nov 25 12:01:39 crc kubenswrapper[4706]: I1125 12:01:39.343833 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e32f7255-77b8-4ef8-b0b1-f83e70d9f3f6-catalog-content\") pod \"e32f7255-77b8-4ef8-b0b1-f83e70d9f3f6\" (UID: \"e32f7255-77b8-4ef8-b0b1-f83e70d9f3f6\") " Nov 25 12:01:39 crc kubenswrapper[4706]: I1125 12:01:39.350280 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e32f7255-77b8-4ef8-b0b1-f83e70d9f3f6-utilities" (OuterVolumeSpecName: "utilities") pod "e32f7255-77b8-4ef8-b0b1-f83e70d9f3f6" (UID: "e32f7255-77b8-4ef8-b0b1-f83e70d9f3f6"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 12:01:39 crc kubenswrapper[4706]: I1125 12:01:39.358744 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e32f7255-77b8-4ef8-b0b1-f83e70d9f3f6-kube-api-access-qs2k2" (OuterVolumeSpecName: "kube-api-access-qs2k2") pod "e32f7255-77b8-4ef8-b0b1-f83e70d9f3f6" (UID: "e32f7255-77b8-4ef8-b0b1-f83e70d9f3f6"). InnerVolumeSpecName "kube-api-access-qs2k2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 12:01:39 crc kubenswrapper[4706]: I1125 12:01:39.450695 4706 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e32f7255-77b8-4ef8-b0b1-f83e70d9f3f6-utilities\") on node \"crc\" DevicePath \"\"" Nov 25 12:01:39 crc kubenswrapper[4706]: I1125 12:01:39.450825 4706 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qs2k2\" (UniqueName: \"kubernetes.io/projected/e32f7255-77b8-4ef8-b0b1-f83e70d9f3f6-kube-api-access-qs2k2\") on node \"crc\" DevicePath \"\"" Nov 25 12:01:39 crc kubenswrapper[4706]: I1125 12:01:39.485259 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e32f7255-77b8-4ef8-b0b1-f83e70d9f3f6-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e32f7255-77b8-4ef8-b0b1-f83e70d9f3f6" (UID: "e32f7255-77b8-4ef8-b0b1-f83e70d9f3f6"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 12:01:39 crc kubenswrapper[4706]: I1125 12:01:39.553152 4706 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e32f7255-77b8-4ef8-b0b1-f83e70d9f3f6-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 25 12:01:39 crc kubenswrapper[4706]: I1125 12:01:39.576425 4706 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-jmbgx" Nov 25 12:01:39 crc kubenswrapper[4706]: I1125 12:01:39.654643 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ae8172ec-5a1c-40ce-a6c3-49614eebf1ef-utilities\") pod \"ae8172ec-5a1c-40ce-a6c3-49614eebf1ef\" (UID: \"ae8172ec-5a1c-40ce-a6c3-49614eebf1ef\") " Nov 25 12:01:39 crc kubenswrapper[4706]: I1125 12:01:39.654728 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-82fp9\" (UniqueName: \"kubernetes.io/projected/ae8172ec-5a1c-40ce-a6c3-49614eebf1ef-kube-api-access-82fp9\") pod \"ae8172ec-5a1c-40ce-a6c3-49614eebf1ef\" (UID: \"ae8172ec-5a1c-40ce-a6c3-49614eebf1ef\") " Nov 25 12:01:39 crc kubenswrapper[4706]: I1125 12:01:39.654874 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ae8172ec-5a1c-40ce-a6c3-49614eebf1ef-catalog-content\") pod \"ae8172ec-5a1c-40ce-a6c3-49614eebf1ef\" (UID: \"ae8172ec-5a1c-40ce-a6c3-49614eebf1ef\") " Nov 25 12:01:39 crc kubenswrapper[4706]: I1125 12:01:39.657543 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ae8172ec-5a1c-40ce-a6c3-49614eebf1ef-utilities" (OuterVolumeSpecName: "utilities") pod "ae8172ec-5a1c-40ce-a6c3-49614eebf1ef" (UID: "ae8172ec-5a1c-40ce-a6c3-49614eebf1ef"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 12:01:39 crc kubenswrapper[4706]: I1125 12:01:39.662149 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ae8172ec-5a1c-40ce-a6c3-49614eebf1ef-kube-api-access-82fp9" (OuterVolumeSpecName: "kube-api-access-82fp9") pod "ae8172ec-5a1c-40ce-a6c3-49614eebf1ef" (UID: "ae8172ec-5a1c-40ce-a6c3-49614eebf1ef"). InnerVolumeSpecName "kube-api-access-82fp9". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 12:01:39 crc kubenswrapper[4706]: I1125 12:01:39.712159 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ae8172ec-5a1c-40ce-a6c3-49614eebf1ef-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ae8172ec-5a1c-40ce-a6c3-49614eebf1ef" (UID: "ae8172ec-5a1c-40ce-a6c3-49614eebf1ef"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 12:01:39 crc kubenswrapper[4706]: I1125 12:01:39.749166 4706 generic.go:334] "Generic (PLEG): container finished" podID="e32f7255-77b8-4ef8-b0b1-f83e70d9f3f6" containerID="da8d4d0b0a1a7896a32ccb9d96839ed7ae4497f43335dcef607d0390f5f20b93" exitCode=0 Nov 25 12:01:39 crc kubenswrapper[4706]: I1125 12:01:39.749242 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xlpt2" event={"ID":"e32f7255-77b8-4ef8-b0b1-f83e70d9f3f6","Type":"ContainerDied","Data":"da8d4d0b0a1a7896a32ccb9d96839ed7ae4497f43335dcef607d0390f5f20b93"} Nov 25 12:01:39 crc kubenswrapper[4706]: I1125 12:01:39.749275 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xlpt2" event={"ID":"e32f7255-77b8-4ef8-b0b1-f83e70d9f3f6","Type":"ContainerDied","Data":"36a7d006bde5480b1512a94e5e9e3d705bdc3d6af0a28d1b4b818b8aadda8d1b"} Nov 25 12:01:39 crc kubenswrapper[4706]: I1125 12:01:39.749420 4706 scope.go:117] "RemoveContainer" containerID="da8d4d0b0a1a7896a32ccb9d96839ed7ae4497f43335dcef607d0390f5f20b93" Nov 25 12:01:39 crc kubenswrapper[4706]: I1125 12:01:39.749612 4706 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-xlpt2" Nov 25 12:01:39 crc kubenswrapper[4706]: I1125 12:01:39.756627 4706 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ae8172ec-5a1c-40ce-a6c3-49614eebf1ef-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 25 12:01:39 crc kubenswrapper[4706]: I1125 12:01:39.756663 4706 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ae8172ec-5a1c-40ce-a6c3-49614eebf1ef-utilities\") on node \"crc\" DevicePath \"\"" Nov 25 12:01:39 crc kubenswrapper[4706]: I1125 12:01:39.756674 4706 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-82fp9\" (UniqueName: \"kubernetes.io/projected/ae8172ec-5a1c-40ce-a6c3-49614eebf1ef-kube-api-access-82fp9\") on node \"crc\" DevicePath \"\"" Nov 25 12:01:39 crc kubenswrapper[4706]: I1125 12:01:39.758617 4706 generic.go:334] "Generic (PLEG): container finished" podID="ae8172ec-5a1c-40ce-a6c3-49614eebf1ef" containerID="244f96ba61607b9fb5d5395658555520b6de7dfd30794c37618ae5f5e892c840" exitCode=0 Nov 25 12:01:39 crc kubenswrapper[4706]: I1125 12:01:39.758811 4706 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-jmbgx" Nov 25 12:01:39 crc kubenswrapper[4706]: I1125 12:01:39.758853 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jmbgx" event={"ID":"ae8172ec-5a1c-40ce-a6c3-49614eebf1ef","Type":"ContainerDied","Data":"244f96ba61607b9fb5d5395658555520b6de7dfd30794c37618ae5f5e892c840"} Nov 25 12:01:39 crc kubenswrapper[4706]: I1125 12:01:39.759334 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jmbgx" event={"ID":"ae8172ec-5a1c-40ce-a6c3-49614eebf1ef","Type":"ContainerDied","Data":"4f6636724a11684ffe2d5cec9ef94b75bba20a2719734d16a7e6d84ec19b7002"} Nov 25 12:01:39 crc kubenswrapper[4706]: I1125 12:01:39.776040 4706 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 25 12:01:39 crc kubenswrapper[4706]: I1125 12:01:39.788618 4706 scope.go:117] "RemoveContainer" containerID="447b0d42ca10aada0b8d99755d758bebc001e8b324bf6b11db807053e82db634" Nov 25 12:01:39 crc kubenswrapper[4706]: I1125 12:01:39.821502 4706 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-xlpt2"] Nov 25 12:01:39 crc kubenswrapper[4706]: I1125 12:01:39.834464 4706 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-xlpt2"] Nov 25 12:01:39 crc kubenswrapper[4706]: I1125 12:01:39.841913 4706 scope.go:117] "RemoveContainer" containerID="55a1aa231467684f0db44c3fb6a2012229970fd7f2a0aa9fef27da80e7e034b8" Nov 25 12:01:39 crc kubenswrapper[4706]: I1125 12:01:39.848427 4706 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-jmbgx"] Nov 25 12:01:39 crc kubenswrapper[4706]: I1125 12:01:39.861607 4706 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-jmbgx"] Nov 25 12:01:39 crc 
kubenswrapper[4706]: I1125 12:01:39.870116 4706 scope.go:117] "RemoveContainer" containerID="da8d4d0b0a1a7896a32ccb9d96839ed7ae4497f43335dcef607d0390f5f20b93" Nov 25 12:01:39 crc kubenswrapper[4706]: E1125 12:01:39.870685 4706 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"da8d4d0b0a1a7896a32ccb9d96839ed7ae4497f43335dcef607d0390f5f20b93\": container with ID starting with da8d4d0b0a1a7896a32ccb9d96839ed7ae4497f43335dcef607d0390f5f20b93 not found: ID does not exist" containerID="da8d4d0b0a1a7896a32ccb9d96839ed7ae4497f43335dcef607d0390f5f20b93" Nov 25 12:01:39 crc kubenswrapper[4706]: I1125 12:01:39.870752 4706 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"da8d4d0b0a1a7896a32ccb9d96839ed7ae4497f43335dcef607d0390f5f20b93"} err="failed to get container status \"da8d4d0b0a1a7896a32ccb9d96839ed7ae4497f43335dcef607d0390f5f20b93\": rpc error: code = NotFound desc = could not find container \"da8d4d0b0a1a7896a32ccb9d96839ed7ae4497f43335dcef607d0390f5f20b93\": container with ID starting with da8d4d0b0a1a7896a32ccb9d96839ed7ae4497f43335dcef607d0390f5f20b93 not found: ID does not exist" Nov 25 12:01:39 crc kubenswrapper[4706]: I1125 12:01:39.870784 4706 scope.go:117] "RemoveContainer" containerID="447b0d42ca10aada0b8d99755d758bebc001e8b324bf6b11db807053e82db634" Nov 25 12:01:39 crc kubenswrapper[4706]: E1125 12:01:39.871074 4706 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"447b0d42ca10aada0b8d99755d758bebc001e8b324bf6b11db807053e82db634\": container with ID starting with 447b0d42ca10aada0b8d99755d758bebc001e8b324bf6b11db807053e82db634 not found: ID does not exist" containerID="447b0d42ca10aada0b8d99755d758bebc001e8b324bf6b11db807053e82db634" Nov 25 12:01:39 crc kubenswrapper[4706]: I1125 12:01:39.871115 4706 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"447b0d42ca10aada0b8d99755d758bebc001e8b324bf6b11db807053e82db634"} err="failed to get container status \"447b0d42ca10aada0b8d99755d758bebc001e8b324bf6b11db807053e82db634\": rpc error: code = NotFound desc = could not find container \"447b0d42ca10aada0b8d99755d758bebc001e8b324bf6b11db807053e82db634\": container with ID starting with 447b0d42ca10aada0b8d99755d758bebc001e8b324bf6b11db807053e82db634 not found: ID does not exist" Nov 25 12:01:39 crc kubenswrapper[4706]: I1125 12:01:39.871145 4706 scope.go:117] "RemoveContainer" containerID="55a1aa231467684f0db44c3fb6a2012229970fd7f2a0aa9fef27da80e7e034b8" Nov 25 12:01:39 crc kubenswrapper[4706]: E1125 12:01:39.871572 4706 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"55a1aa231467684f0db44c3fb6a2012229970fd7f2a0aa9fef27da80e7e034b8\": container with ID starting with 55a1aa231467684f0db44c3fb6a2012229970fd7f2a0aa9fef27da80e7e034b8 not found: ID does not exist" containerID="55a1aa231467684f0db44c3fb6a2012229970fd7f2a0aa9fef27da80e7e034b8" Nov 25 12:01:39 crc kubenswrapper[4706]: I1125 12:01:39.871601 4706 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"55a1aa231467684f0db44c3fb6a2012229970fd7f2a0aa9fef27da80e7e034b8"} err="failed to get container status \"55a1aa231467684f0db44c3fb6a2012229970fd7f2a0aa9fef27da80e7e034b8\": rpc error: code = NotFound desc = could not find container \"55a1aa231467684f0db44c3fb6a2012229970fd7f2a0aa9fef27da80e7e034b8\": container with ID starting with 55a1aa231467684f0db44c3fb6a2012229970fd7f2a0aa9fef27da80e7e034b8 not found: ID does not exist" Nov 25 12:01:39 crc kubenswrapper[4706]: I1125 12:01:39.871621 4706 scope.go:117] "RemoveContainer" containerID="244f96ba61607b9fb5d5395658555520b6de7dfd30794c37618ae5f5e892c840" Nov 25 12:01:39 crc kubenswrapper[4706]: I1125 12:01:39.901988 4706 scope.go:117] "RemoveContainer" 
containerID="367788d0a7debc0b3b390b81690a0c1599cd39021377866d54eb1088bc522715" Nov 25 12:01:39 crc kubenswrapper[4706]: I1125 12:01:39.928145 4706 scope.go:117] "RemoveContainer" containerID="a9b402576daf2b6e7bba96069f94edfcfa6b73c2e71539e3f2457baf05fe8775" Nov 25 12:01:39 crc kubenswrapper[4706]: I1125 12:01:39.948679 4706 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ae8172ec-5a1c-40ce-a6c3-49614eebf1ef" path="/var/lib/kubelet/pods/ae8172ec-5a1c-40ce-a6c3-49614eebf1ef/volumes" Nov 25 12:01:39 crc kubenswrapper[4706]: I1125 12:01:39.949820 4706 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ae9cc76a-5456-4d78-a95d-938272a5e895" path="/var/lib/kubelet/pods/ae9cc76a-5456-4d78-a95d-938272a5e895/volumes" Nov 25 12:01:39 crc kubenswrapper[4706]: I1125 12:01:39.950755 4706 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e32f7255-77b8-4ef8-b0b1-f83e70d9f3f6" path="/var/lib/kubelet/pods/e32f7255-77b8-4ef8-b0b1-f83e70d9f3f6/volumes" Nov 25 12:01:39 crc kubenswrapper[4706]: I1125 12:01:39.954676 4706 scope.go:117] "RemoveContainer" containerID="244f96ba61607b9fb5d5395658555520b6de7dfd30794c37618ae5f5e892c840" Nov 25 12:01:39 crc kubenswrapper[4706]: E1125 12:01:39.955648 4706 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"244f96ba61607b9fb5d5395658555520b6de7dfd30794c37618ae5f5e892c840\": container with ID starting with 244f96ba61607b9fb5d5395658555520b6de7dfd30794c37618ae5f5e892c840 not found: ID does not exist" containerID="244f96ba61607b9fb5d5395658555520b6de7dfd30794c37618ae5f5e892c840" Nov 25 12:01:39 crc kubenswrapper[4706]: I1125 12:01:39.955748 4706 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"244f96ba61607b9fb5d5395658555520b6de7dfd30794c37618ae5f5e892c840"} err="failed to get container status \"244f96ba61607b9fb5d5395658555520b6de7dfd30794c37618ae5f5e892c840\": rpc error: 
code = NotFound desc = could not find container \"244f96ba61607b9fb5d5395658555520b6de7dfd30794c37618ae5f5e892c840\": container with ID starting with 244f96ba61607b9fb5d5395658555520b6de7dfd30794c37618ae5f5e892c840 not found: ID does not exist" Nov 25 12:01:39 crc kubenswrapper[4706]: I1125 12:01:39.955869 4706 scope.go:117] "RemoveContainer" containerID="367788d0a7debc0b3b390b81690a0c1599cd39021377866d54eb1088bc522715" Nov 25 12:01:39 crc kubenswrapper[4706]: E1125 12:01:39.956319 4706 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"367788d0a7debc0b3b390b81690a0c1599cd39021377866d54eb1088bc522715\": container with ID starting with 367788d0a7debc0b3b390b81690a0c1599cd39021377866d54eb1088bc522715 not found: ID does not exist" containerID="367788d0a7debc0b3b390b81690a0c1599cd39021377866d54eb1088bc522715" Nov 25 12:01:39 crc kubenswrapper[4706]: I1125 12:01:39.956429 4706 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"367788d0a7debc0b3b390b81690a0c1599cd39021377866d54eb1088bc522715"} err="failed to get container status \"367788d0a7debc0b3b390b81690a0c1599cd39021377866d54eb1088bc522715\": rpc error: code = NotFound desc = could not find container \"367788d0a7debc0b3b390b81690a0c1599cd39021377866d54eb1088bc522715\": container with ID starting with 367788d0a7debc0b3b390b81690a0c1599cd39021377866d54eb1088bc522715 not found: ID does not exist" Nov 25 12:01:39 crc kubenswrapper[4706]: I1125 12:01:39.956518 4706 scope.go:117] "RemoveContainer" containerID="a9b402576daf2b6e7bba96069f94edfcfa6b73c2e71539e3f2457baf05fe8775" Nov 25 12:01:39 crc kubenswrapper[4706]: E1125 12:01:39.957060 4706 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a9b402576daf2b6e7bba96069f94edfcfa6b73c2e71539e3f2457baf05fe8775\": container with ID starting with 
a9b402576daf2b6e7bba96069f94edfcfa6b73c2e71539e3f2457baf05fe8775 not found: ID does not exist" containerID="a9b402576daf2b6e7bba96069f94edfcfa6b73c2e71539e3f2457baf05fe8775" Nov 25 12:01:39 crc kubenswrapper[4706]: I1125 12:01:39.957177 4706 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a9b402576daf2b6e7bba96069f94edfcfa6b73c2e71539e3f2457baf05fe8775"} err="failed to get container status \"a9b402576daf2b6e7bba96069f94edfcfa6b73c2e71539e3f2457baf05fe8775\": rpc error: code = NotFound desc = could not find container \"a9b402576daf2b6e7bba96069f94edfcfa6b73c2e71539e3f2457baf05fe8775\": container with ID starting with a9b402576daf2b6e7bba96069f94edfcfa6b73c2e71539e3f2457baf05fe8775 not found: ID does not exist" Nov 25 12:01:40 crc kubenswrapper[4706]: I1125 12:01:40.021071 4706 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 25 12:01:45 crc kubenswrapper[4706]: I1125 12:01:45.284757 4706 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-z6ffp"] Nov 25 12:01:45 crc kubenswrapper[4706]: E1125 12:01:45.285875 4706 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ae9cc76a-5456-4d78-a95d-938272a5e895" containerName="extract-utilities" Nov 25 12:01:45 crc kubenswrapper[4706]: I1125 12:01:45.285890 4706 state_mem.go:107] "Deleted CPUSet assignment" podUID="ae9cc76a-5456-4d78-a95d-938272a5e895" containerName="extract-utilities" Nov 25 12:01:45 crc kubenswrapper[4706]: E1125 12:01:45.285910 4706 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ae9cc76a-5456-4d78-a95d-938272a5e895" containerName="extract-content" Nov 25 12:01:45 crc kubenswrapper[4706]: I1125 12:01:45.285916 4706 state_mem.go:107] "Deleted CPUSet assignment" podUID="ae9cc76a-5456-4d78-a95d-938272a5e895" containerName="extract-content" Nov 25 12:01:45 crc kubenswrapper[4706]: E1125 
12:01:45.285926 4706 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e32f7255-77b8-4ef8-b0b1-f83e70d9f3f6" containerName="extract-content" Nov 25 12:01:45 crc kubenswrapper[4706]: I1125 12:01:45.285932 4706 state_mem.go:107] "Deleted CPUSet assignment" podUID="e32f7255-77b8-4ef8-b0b1-f83e70d9f3f6" containerName="extract-content" Nov 25 12:01:45 crc kubenswrapper[4706]: E1125 12:01:45.285946 4706 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e32f7255-77b8-4ef8-b0b1-f83e70d9f3f6" containerName="registry-server" Nov 25 12:01:45 crc kubenswrapper[4706]: I1125 12:01:45.285953 4706 state_mem.go:107] "Deleted CPUSet assignment" podUID="e32f7255-77b8-4ef8-b0b1-f83e70d9f3f6" containerName="registry-server" Nov 25 12:01:45 crc kubenswrapper[4706]: E1125 12:01:45.285962 4706 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ae9cc76a-5456-4d78-a95d-938272a5e895" containerName="registry-server" Nov 25 12:01:45 crc kubenswrapper[4706]: I1125 12:01:45.285968 4706 state_mem.go:107] "Deleted CPUSet assignment" podUID="ae9cc76a-5456-4d78-a95d-938272a5e895" containerName="registry-server" Nov 25 12:01:45 crc kubenswrapper[4706]: E1125 12:01:45.285985 4706 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e32f7255-77b8-4ef8-b0b1-f83e70d9f3f6" containerName="extract-utilities" Nov 25 12:01:45 crc kubenswrapper[4706]: I1125 12:01:45.285991 4706 state_mem.go:107] "Deleted CPUSet assignment" podUID="e32f7255-77b8-4ef8-b0b1-f83e70d9f3f6" containerName="extract-utilities" Nov 25 12:01:45 crc kubenswrapper[4706]: E1125 12:01:45.286003 4706 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ae8172ec-5a1c-40ce-a6c3-49614eebf1ef" containerName="extract-content" Nov 25 12:01:45 crc kubenswrapper[4706]: I1125 12:01:45.286008 4706 state_mem.go:107] "Deleted CPUSet assignment" podUID="ae8172ec-5a1c-40ce-a6c3-49614eebf1ef" containerName="extract-content" Nov 25 12:01:45 crc kubenswrapper[4706]: E1125 
12:01:45.286018 4706 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ae8172ec-5a1c-40ce-a6c3-49614eebf1ef" containerName="registry-server" Nov 25 12:01:45 crc kubenswrapper[4706]: I1125 12:01:45.286023 4706 state_mem.go:107] "Deleted CPUSet assignment" podUID="ae8172ec-5a1c-40ce-a6c3-49614eebf1ef" containerName="registry-server" Nov 25 12:01:45 crc kubenswrapper[4706]: E1125 12:01:45.286033 4706 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ae8172ec-5a1c-40ce-a6c3-49614eebf1ef" containerName="extract-utilities" Nov 25 12:01:45 crc kubenswrapper[4706]: I1125 12:01:45.286039 4706 state_mem.go:107] "Deleted CPUSet assignment" podUID="ae8172ec-5a1c-40ce-a6c3-49614eebf1ef" containerName="extract-utilities" Nov 25 12:01:45 crc kubenswrapper[4706]: I1125 12:01:45.286238 4706 memory_manager.go:354] "RemoveStaleState removing state" podUID="e32f7255-77b8-4ef8-b0b1-f83e70d9f3f6" containerName="registry-server" Nov 25 12:01:45 crc kubenswrapper[4706]: I1125 12:01:45.286262 4706 memory_manager.go:354] "RemoveStaleState removing state" podUID="ae8172ec-5a1c-40ce-a6c3-49614eebf1ef" containerName="registry-server" Nov 25 12:01:45 crc kubenswrapper[4706]: I1125 12:01:45.286272 4706 memory_manager.go:354] "RemoveStaleState removing state" podUID="ae9cc76a-5456-4d78-a95d-938272a5e895" containerName="registry-server" Nov 25 12:01:45 crc kubenswrapper[4706]: I1125 12:01:45.288198 4706 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-z6ffp" Nov 25 12:01:45 crc kubenswrapper[4706]: I1125 12:01:45.322191 4706 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-z6ffp"] Nov 25 12:01:45 crc kubenswrapper[4706]: I1125 12:01:45.377798 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/47918fcd-d027-4db2-8964-4dbe4fb179f8-catalog-content\") pod \"redhat-marketplace-z6ffp\" (UID: \"47918fcd-d027-4db2-8964-4dbe4fb179f8\") " pod="openshift-marketplace/redhat-marketplace-z6ffp" Nov 25 12:01:45 crc kubenswrapper[4706]: I1125 12:01:45.377879 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/47918fcd-d027-4db2-8964-4dbe4fb179f8-utilities\") pod \"redhat-marketplace-z6ffp\" (UID: \"47918fcd-d027-4db2-8964-4dbe4fb179f8\") " pod="openshift-marketplace/redhat-marketplace-z6ffp" Nov 25 12:01:45 crc kubenswrapper[4706]: I1125 12:01:45.377935 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hvj97\" (UniqueName: \"kubernetes.io/projected/47918fcd-d027-4db2-8964-4dbe4fb179f8-kube-api-access-hvj97\") pod \"redhat-marketplace-z6ffp\" (UID: \"47918fcd-d027-4db2-8964-4dbe4fb179f8\") " pod="openshift-marketplace/redhat-marketplace-z6ffp" Nov 25 12:01:45 crc kubenswrapper[4706]: I1125 12:01:45.486602 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/47918fcd-d027-4db2-8964-4dbe4fb179f8-catalog-content\") pod \"redhat-marketplace-z6ffp\" (UID: \"47918fcd-d027-4db2-8964-4dbe4fb179f8\") " pod="openshift-marketplace/redhat-marketplace-z6ffp" Nov 25 12:01:45 crc kubenswrapper[4706]: I1125 12:01:45.486724 4706 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/47918fcd-d027-4db2-8964-4dbe4fb179f8-utilities\") pod \"redhat-marketplace-z6ffp\" (UID: \"47918fcd-d027-4db2-8964-4dbe4fb179f8\") " pod="openshift-marketplace/redhat-marketplace-z6ffp" Nov 25 12:01:45 crc kubenswrapper[4706]: I1125 12:01:45.486789 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hvj97\" (UniqueName: \"kubernetes.io/projected/47918fcd-d027-4db2-8964-4dbe4fb179f8-kube-api-access-hvj97\") pod \"redhat-marketplace-z6ffp\" (UID: \"47918fcd-d027-4db2-8964-4dbe4fb179f8\") " pod="openshift-marketplace/redhat-marketplace-z6ffp" Nov 25 12:01:45 crc kubenswrapper[4706]: I1125 12:01:45.487426 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/47918fcd-d027-4db2-8964-4dbe4fb179f8-utilities\") pod \"redhat-marketplace-z6ffp\" (UID: \"47918fcd-d027-4db2-8964-4dbe4fb179f8\") " pod="openshift-marketplace/redhat-marketplace-z6ffp" Nov 25 12:01:45 crc kubenswrapper[4706]: I1125 12:01:45.488693 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/47918fcd-d027-4db2-8964-4dbe4fb179f8-catalog-content\") pod \"redhat-marketplace-z6ffp\" (UID: \"47918fcd-d027-4db2-8964-4dbe4fb179f8\") " pod="openshift-marketplace/redhat-marketplace-z6ffp" Nov 25 12:01:45 crc kubenswrapper[4706]: I1125 12:01:45.510467 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hvj97\" (UniqueName: \"kubernetes.io/projected/47918fcd-d027-4db2-8964-4dbe4fb179f8-kube-api-access-hvj97\") pod \"redhat-marketplace-z6ffp\" (UID: \"47918fcd-d027-4db2-8964-4dbe4fb179f8\") " pod="openshift-marketplace/redhat-marketplace-z6ffp" Nov 25 12:01:45 crc kubenswrapper[4706]: I1125 12:01:45.627992 4706 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-z6ffp" Nov 25 12:01:45 crc kubenswrapper[4706]: I1125 12:01:45.993995 4706 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-xqn5d" Nov 25 12:01:45 crc kubenswrapper[4706]: I1125 12:01:45.994363 4706 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-xqn5d" Nov 25 12:01:46 crc kubenswrapper[4706]: I1125 12:01:46.046071 4706 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-xqn5d" Nov 25 12:01:46 crc kubenswrapper[4706]: I1125 12:01:46.143812 4706 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-z6ffp"] Nov 25 12:01:46 crc kubenswrapper[4706]: I1125 12:01:46.861104 4706 generic.go:334] "Generic (PLEG): container finished" podID="47918fcd-d027-4db2-8964-4dbe4fb179f8" containerID="0bbf10261049d2aa448733f4b32a4392f826fd43959440e30ade19fb45eaa927" exitCode=0 Nov 25 12:01:46 crc kubenswrapper[4706]: I1125 12:01:46.863170 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-z6ffp" event={"ID":"47918fcd-d027-4db2-8964-4dbe4fb179f8","Type":"ContainerDied","Data":"0bbf10261049d2aa448733f4b32a4392f826fd43959440e30ade19fb45eaa927"} Nov 25 12:01:46 crc kubenswrapper[4706]: I1125 12:01:46.863201 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-z6ffp" event={"ID":"47918fcd-d027-4db2-8964-4dbe4fb179f8","Type":"ContainerStarted","Data":"d7cebda8839b392b872b21d2ceeb9137bb2a2b11379c0928a472c3558e3c669e"} Nov 25 12:01:46 crc kubenswrapper[4706]: I1125 12:01:46.964333 4706 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-xqn5d" Nov 25 12:01:47 crc kubenswrapper[4706]: I1125 12:01:47.083655 4706 kubelet.go:2421] "SyncLoop ADD" 
source="api" pods=["openshift-marketplace/certified-operators-jq8c8"] Nov 25 12:01:47 crc kubenswrapper[4706]: I1125 12:01:47.086400 4706 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-jq8c8" Nov 25 12:01:47 crc kubenswrapper[4706]: I1125 12:01:47.098819 4706 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-jq8c8"] Nov 25 12:01:47 crc kubenswrapper[4706]: I1125 12:01:47.219028 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/272a2de0-ac52-46e5-aa78-569b642ad4bb-catalog-content\") pod \"certified-operators-jq8c8\" (UID: \"272a2de0-ac52-46e5-aa78-569b642ad4bb\") " pod="openshift-marketplace/certified-operators-jq8c8" Nov 25 12:01:47 crc kubenswrapper[4706]: I1125 12:01:47.219147 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/272a2de0-ac52-46e5-aa78-569b642ad4bb-utilities\") pod \"certified-operators-jq8c8\" (UID: \"272a2de0-ac52-46e5-aa78-569b642ad4bb\") " pod="openshift-marketplace/certified-operators-jq8c8" Nov 25 12:01:47 crc kubenswrapper[4706]: I1125 12:01:47.219289 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nthjc\" (UniqueName: \"kubernetes.io/projected/272a2de0-ac52-46e5-aa78-569b642ad4bb-kube-api-access-nthjc\") pod \"certified-operators-jq8c8\" (UID: \"272a2de0-ac52-46e5-aa78-569b642ad4bb\") " pod="openshift-marketplace/certified-operators-jq8c8" Nov 25 12:01:47 crc kubenswrapper[4706]: I1125 12:01:47.296812 4706 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-controller-manager-7d76b4f6c7-xxkgj" Nov 25 12:01:47 crc kubenswrapper[4706]: I1125 12:01:47.320521 4706 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-nthjc\" (UniqueName: \"kubernetes.io/projected/272a2de0-ac52-46e5-aa78-569b642ad4bb-kube-api-access-nthjc\") pod \"certified-operators-jq8c8\" (UID: \"272a2de0-ac52-46e5-aa78-569b642ad4bb\") " pod="openshift-marketplace/certified-operators-jq8c8" Nov 25 12:01:47 crc kubenswrapper[4706]: I1125 12:01:47.320626 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/272a2de0-ac52-46e5-aa78-569b642ad4bb-catalog-content\") pod \"certified-operators-jq8c8\" (UID: \"272a2de0-ac52-46e5-aa78-569b642ad4bb\") " pod="openshift-marketplace/certified-operators-jq8c8" Nov 25 12:01:47 crc kubenswrapper[4706]: I1125 12:01:47.320676 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/272a2de0-ac52-46e5-aa78-569b642ad4bb-utilities\") pod \"certified-operators-jq8c8\" (UID: \"272a2de0-ac52-46e5-aa78-569b642ad4bb\") " pod="openshift-marketplace/certified-operators-jq8c8" Nov 25 12:01:47 crc kubenswrapper[4706]: I1125 12:01:47.321273 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/272a2de0-ac52-46e5-aa78-569b642ad4bb-utilities\") pod \"certified-operators-jq8c8\" (UID: \"272a2de0-ac52-46e5-aa78-569b642ad4bb\") " pod="openshift-marketplace/certified-operators-jq8c8" Nov 25 12:01:47 crc kubenswrapper[4706]: I1125 12:01:47.321294 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/272a2de0-ac52-46e5-aa78-569b642ad4bb-catalog-content\") pod \"certified-operators-jq8c8\" (UID: \"272a2de0-ac52-46e5-aa78-569b642ad4bb\") " pod="openshift-marketplace/certified-operators-jq8c8" Nov 25 12:01:47 crc kubenswrapper[4706]: I1125 12:01:47.340271 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-nthjc\" (UniqueName: \"kubernetes.io/projected/272a2de0-ac52-46e5-aa78-569b642ad4bb-kube-api-access-nthjc\") pod \"certified-operators-jq8c8\" (UID: \"272a2de0-ac52-46e5-aa78-569b642ad4bb\") " pod="openshift-marketplace/certified-operators-jq8c8" Nov 25 12:01:47 crc kubenswrapper[4706]: I1125 12:01:47.419313 4706 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-jq8c8" Nov 25 12:01:47 crc kubenswrapper[4706]: I1125 12:01:47.873063 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-z6ffp" event={"ID":"47918fcd-d027-4db2-8964-4dbe4fb179f8","Type":"ContainerStarted","Data":"2460055270aa58f2ad90494b8f29c6ec2edce8b1a14f079acd28d448cdcf889c"} Nov 25 12:01:47 crc kubenswrapper[4706]: I1125 12:01:47.974411 4706 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-jq8c8"] Nov 25 12:01:47 crc kubenswrapper[4706]: W1125 12:01:47.979931 4706 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod272a2de0_ac52_46e5_aa78_569b642ad4bb.slice/crio-bf67c7671ead3e0ba624e4cde60d4c809d7a302c846cda4b7b2a16f8327d79e1 WatchSource:0}: Error finding container bf67c7671ead3e0ba624e4cde60d4c809d7a302c846cda4b7b2a16f8327d79e1: Status 404 returned error can't find the container with id bf67c7671ead3e0ba624e4cde60d4c809d7a302c846cda4b7b2a16f8327d79e1 Nov 25 12:01:48 crc kubenswrapper[4706]: I1125 12:01:48.882351 4706 generic.go:334] "Generic (PLEG): container finished" podID="272a2de0-ac52-46e5-aa78-569b642ad4bb" containerID="bec8ba699cab9b008cb3f499e9dfe7d52b12842dc86102ca750a0c6d41c5869e" exitCode=0 Nov 25 12:01:48 crc kubenswrapper[4706]: I1125 12:01:48.882396 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jq8c8" 
event={"ID":"272a2de0-ac52-46e5-aa78-569b642ad4bb","Type":"ContainerDied","Data":"bec8ba699cab9b008cb3f499e9dfe7d52b12842dc86102ca750a0c6d41c5869e"} Nov 25 12:01:48 crc kubenswrapper[4706]: I1125 12:01:48.882709 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jq8c8" event={"ID":"272a2de0-ac52-46e5-aa78-569b642ad4bb","Type":"ContainerStarted","Data":"bf67c7671ead3e0ba624e4cde60d4c809d7a302c846cda4b7b2a16f8327d79e1"} Nov 25 12:01:48 crc kubenswrapper[4706]: I1125 12:01:48.886291 4706 generic.go:334] "Generic (PLEG): container finished" podID="47918fcd-d027-4db2-8964-4dbe4fb179f8" containerID="2460055270aa58f2ad90494b8f29c6ec2edce8b1a14f079acd28d448cdcf889c" exitCode=0 Nov 25 12:01:48 crc kubenswrapper[4706]: I1125 12:01:48.886355 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-z6ffp" event={"ID":"47918fcd-d027-4db2-8964-4dbe4fb179f8","Type":"ContainerDied","Data":"2460055270aa58f2ad90494b8f29c6ec2edce8b1a14f079acd28d448cdcf889c"} Nov 25 12:01:49 crc kubenswrapper[4706]: I1125 12:01:49.083944 4706 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-v4w8c"] Nov 25 12:01:49 crc kubenswrapper[4706]: I1125 12:01:49.086420 4706 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-v4w8c" Nov 25 12:01:49 crc kubenswrapper[4706]: I1125 12:01:49.100723 4706 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-v4w8c"] Nov 25 12:01:49 crc kubenswrapper[4706]: I1125 12:01:49.179552 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2ee6af69-6304-4a7f-bfae-9e73272ce951-catalog-content\") pod \"community-operators-v4w8c\" (UID: \"2ee6af69-6304-4a7f-bfae-9e73272ce951\") " pod="openshift-marketplace/community-operators-v4w8c" Nov 25 12:01:49 crc kubenswrapper[4706]: I1125 12:01:49.179679 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2ee6af69-6304-4a7f-bfae-9e73272ce951-utilities\") pod \"community-operators-v4w8c\" (UID: \"2ee6af69-6304-4a7f-bfae-9e73272ce951\") " pod="openshift-marketplace/community-operators-v4w8c" Nov 25 12:01:49 crc kubenswrapper[4706]: I1125 12:01:49.180058 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5wjxf\" (UniqueName: \"kubernetes.io/projected/2ee6af69-6304-4a7f-bfae-9e73272ce951-kube-api-access-5wjxf\") pod \"community-operators-v4w8c\" (UID: \"2ee6af69-6304-4a7f-bfae-9e73272ce951\") " pod="openshift-marketplace/community-operators-v4w8c" Nov 25 12:01:49 crc kubenswrapper[4706]: I1125 12:01:49.281491 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5wjxf\" (UniqueName: \"kubernetes.io/projected/2ee6af69-6304-4a7f-bfae-9e73272ce951-kube-api-access-5wjxf\") pod \"community-operators-v4w8c\" (UID: \"2ee6af69-6304-4a7f-bfae-9e73272ce951\") " pod="openshift-marketplace/community-operators-v4w8c" Nov 25 12:01:49 crc kubenswrapper[4706]: I1125 12:01:49.282271 4706 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2ee6af69-6304-4a7f-bfae-9e73272ce951-catalog-content\") pod \"community-operators-v4w8c\" (UID: \"2ee6af69-6304-4a7f-bfae-9e73272ce951\") " pod="openshift-marketplace/community-operators-v4w8c" Nov 25 12:01:49 crc kubenswrapper[4706]: I1125 12:01:49.282889 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2ee6af69-6304-4a7f-bfae-9e73272ce951-utilities\") pod \"community-operators-v4w8c\" (UID: \"2ee6af69-6304-4a7f-bfae-9e73272ce951\") " pod="openshift-marketplace/community-operators-v4w8c" Nov 25 12:01:49 crc kubenswrapper[4706]: I1125 12:01:49.282802 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2ee6af69-6304-4a7f-bfae-9e73272ce951-catalog-content\") pod \"community-operators-v4w8c\" (UID: \"2ee6af69-6304-4a7f-bfae-9e73272ce951\") " pod="openshift-marketplace/community-operators-v4w8c" Nov 25 12:01:49 crc kubenswrapper[4706]: I1125 12:01:49.283161 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2ee6af69-6304-4a7f-bfae-9e73272ce951-utilities\") pod \"community-operators-v4w8c\" (UID: \"2ee6af69-6304-4a7f-bfae-9e73272ce951\") " pod="openshift-marketplace/community-operators-v4w8c" Nov 25 12:01:49 crc kubenswrapper[4706]: I1125 12:01:49.301582 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5wjxf\" (UniqueName: \"kubernetes.io/projected/2ee6af69-6304-4a7f-bfae-9e73272ce951-kube-api-access-5wjxf\") pod \"community-operators-v4w8c\" (UID: \"2ee6af69-6304-4a7f-bfae-9e73272ce951\") " pod="openshift-marketplace/community-operators-v4w8c" Nov 25 12:01:49 crc kubenswrapper[4706]: I1125 12:01:49.437728 4706 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-v4w8c" Nov 25 12:01:49 crc kubenswrapper[4706]: W1125 12:01:49.940231 4706 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2ee6af69_6304_4a7f_bfae_9e73272ce951.slice/crio-4be139b9e74965e46620c4beda3f6bfd97b5c48f1ec0f114776c2209df68e55a WatchSource:0}: Error finding container 4be139b9e74965e46620c4beda3f6bfd97b5c48f1ec0f114776c2209df68e55a: Status 404 returned error can't find the container with id 4be139b9e74965e46620c4beda3f6bfd97b5c48f1ec0f114776c2209df68e55a Nov 25 12:01:49 crc kubenswrapper[4706]: I1125 12:01:49.945173 4706 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-v4w8c"] Nov 25 12:01:50 crc kubenswrapper[4706]: I1125 12:01:50.515327 4706 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-cron-29401201-6qr5x"] Nov 25 12:01:50 crc kubenswrapper[4706]: I1125 12:01:50.517538 4706 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29401201-6qr5x" Nov 25 12:01:50 crc kubenswrapper[4706]: I1125 12:01:50.527550 4706 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29401201-6qr5x"] Nov 25 12:01:50 crc kubenswrapper[4706]: I1125 12:01:50.553342 4706 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-79bd4cc8c9-9s22r"] Nov 25 12:01:50 crc kubenswrapper[4706]: I1125 12:01:50.553576 4706 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-79bd4cc8c9-9s22r" podUID="ccc8ee8d-5b46-49fa-b797-f4ae80cfe5da" containerName="dnsmasq-dns" containerID="cri-o://7e431cb9c6bac547fc698b4496940ce1908f0b85b3d947d5dbee648b33a819c9" gracePeriod=10 Nov 25 12:01:50 crc kubenswrapper[4706]: I1125 12:01:50.616984 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mz9jg\" (UniqueName: \"kubernetes.io/projected/6e578ce4-062a-47d6-ad7e-c1e36d257077-kube-api-access-mz9jg\") pod \"keystone-cron-29401201-6qr5x\" (UID: \"6e578ce4-062a-47d6-ad7e-c1e36d257077\") " pod="openstack/keystone-cron-29401201-6qr5x" Nov 25 12:01:50 crc kubenswrapper[4706]: I1125 12:01:50.617115 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6e578ce4-062a-47d6-ad7e-c1e36d257077-config-data\") pod \"keystone-cron-29401201-6qr5x\" (UID: \"6e578ce4-062a-47d6-ad7e-c1e36d257077\") " pod="openstack/keystone-cron-29401201-6qr5x" Nov 25 12:01:50 crc kubenswrapper[4706]: I1125 12:01:50.617134 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6e578ce4-062a-47d6-ad7e-c1e36d257077-combined-ca-bundle\") pod \"keystone-cron-29401201-6qr5x\" (UID: \"6e578ce4-062a-47d6-ad7e-c1e36d257077\") " pod="openstack/keystone-cron-29401201-6qr5x" Nov 25 
12:01:50 crc kubenswrapper[4706]: I1125 12:01:50.617174 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/6e578ce4-062a-47d6-ad7e-c1e36d257077-fernet-keys\") pod \"keystone-cron-29401201-6qr5x\" (UID: \"6e578ce4-062a-47d6-ad7e-c1e36d257077\") " pod="openstack/keystone-cron-29401201-6qr5x" Nov 25 12:01:50 crc kubenswrapper[4706]: I1125 12:01:50.719154 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6e578ce4-062a-47d6-ad7e-c1e36d257077-config-data\") pod \"keystone-cron-29401201-6qr5x\" (UID: \"6e578ce4-062a-47d6-ad7e-c1e36d257077\") " pod="openstack/keystone-cron-29401201-6qr5x" Nov 25 12:01:50 crc kubenswrapper[4706]: I1125 12:01:50.719220 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6e578ce4-062a-47d6-ad7e-c1e36d257077-combined-ca-bundle\") pod \"keystone-cron-29401201-6qr5x\" (UID: \"6e578ce4-062a-47d6-ad7e-c1e36d257077\") " pod="openstack/keystone-cron-29401201-6qr5x" Nov 25 12:01:50 crc kubenswrapper[4706]: I1125 12:01:50.719287 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/6e578ce4-062a-47d6-ad7e-c1e36d257077-fernet-keys\") pod \"keystone-cron-29401201-6qr5x\" (UID: \"6e578ce4-062a-47d6-ad7e-c1e36d257077\") " pod="openstack/keystone-cron-29401201-6qr5x" Nov 25 12:01:50 crc kubenswrapper[4706]: I1125 12:01:50.719539 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mz9jg\" (UniqueName: \"kubernetes.io/projected/6e578ce4-062a-47d6-ad7e-c1e36d257077-kube-api-access-mz9jg\") pod \"keystone-cron-29401201-6qr5x\" (UID: \"6e578ce4-062a-47d6-ad7e-c1e36d257077\") " pod="openstack/keystone-cron-29401201-6qr5x" Nov 25 12:01:50 crc kubenswrapper[4706]: I1125 
12:01:50.730637 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/6e578ce4-062a-47d6-ad7e-c1e36d257077-fernet-keys\") pod \"keystone-cron-29401201-6qr5x\" (UID: \"6e578ce4-062a-47d6-ad7e-c1e36d257077\") " pod="openstack/keystone-cron-29401201-6qr5x" Nov 25 12:01:50 crc kubenswrapper[4706]: I1125 12:01:50.730652 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6e578ce4-062a-47d6-ad7e-c1e36d257077-config-data\") pod \"keystone-cron-29401201-6qr5x\" (UID: \"6e578ce4-062a-47d6-ad7e-c1e36d257077\") " pod="openstack/keystone-cron-29401201-6qr5x" Nov 25 12:01:50 crc kubenswrapper[4706]: I1125 12:01:50.732235 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6e578ce4-062a-47d6-ad7e-c1e36d257077-combined-ca-bundle\") pod \"keystone-cron-29401201-6qr5x\" (UID: \"6e578ce4-062a-47d6-ad7e-c1e36d257077\") " pod="openstack/keystone-cron-29401201-6qr5x" Nov 25 12:01:50 crc kubenswrapper[4706]: I1125 12:01:50.766524 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mz9jg\" (UniqueName: \"kubernetes.io/projected/6e578ce4-062a-47d6-ad7e-c1e36d257077-kube-api-access-mz9jg\") pod \"keystone-cron-29401201-6qr5x\" (UID: \"6e578ce4-062a-47d6-ad7e-c1e36d257077\") " pod="openstack/keystone-cron-29401201-6qr5x" Nov 25 12:01:50 crc kubenswrapper[4706]: I1125 12:01:50.814182 4706 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-79bd4cc8c9-9s22r" podUID="ccc8ee8d-5b46-49fa-b797-f4ae80cfe5da" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.212:5353: connect: connection refused" Nov 25 12:01:50 crc kubenswrapper[4706]: I1125 12:01:50.856490 4706 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29401201-6qr5x" Nov 25 12:01:50 crc kubenswrapper[4706]: I1125 12:01:50.922673 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-v4w8c" event={"ID":"2ee6af69-6304-4a7f-bfae-9e73272ce951","Type":"ContainerStarted","Data":"4be139b9e74965e46620c4beda3f6bfd97b5c48f1ec0f114776c2209df68e55a"} Nov 25 12:01:50 crc kubenswrapper[4706]: I1125 12:01:50.943155 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-z6ffp" event={"ID":"47918fcd-d027-4db2-8964-4dbe4fb179f8","Type":"ContainerStarted","Data":"1ddb1f2935269e7da687b66a142394fbbfe8cb26a2ce2fcd3bee191734165951"} Nov 25 12:01:51 crc kubenswrapper[4706]: I1125 12:01:51.535655 4706 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29401201-6qr5x"] Nov 25 12:01:51 crc kubenswrapper[4706]: W1125 12:01:51.562251 4706 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6e578ce4_062a_47d6_ad7e_c1e36d257077.slice/crio-56e62bfda8ce3d477efba908fd4b71dce148510b11374c14eed3d97c8f2a3c44 WatchSource:0}: Error finding container 56e62bfda8ce3d477efba908fd4b71dce148510b11374c14eed3d97c8f2a3c44: Status 404 returned error can't find the container with id 56e62bfda8ce3d477efba908fd4b71dce148510b11374c14eed3d97c8f2a3c44 Nov 25 12:01:51 crc kubenswrapper[4706]: I1125 12:01:51.672038 4706 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-xqn5d"] Nov 25 12:01:51 crc kubenswrapper[4706]: I1125 12:01:51.672605 4706 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-xqn5d" podUID="aee9d90c-4042-4e66-9535-cbc14bc710ec" containerName="registry-server" containerID="cri-o://599bb0331f0cc5cb3b3dc7964cf325fbdd2532c3ca3514db1208c0b3fb01ff9f" gracePeriod=2 Nov 25 12:01:51 crc 
kubenswrapper[4706]: I1125 12:01:51.859895 4706 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-79bd4cc8c9-9s22r" Nov 25 12:01:51 crc kubenswrapper[4706]: I1125 12:01:51.959227 4706 generic.go:334] "Generic (PLEG): container finished" podID="272a2de0-ac52-46e5-aa78-569b642ad4bb" containerID="b8bb8e5d05e0fab7c54f16cb26abc00fc80304ccb9025b748cffa1f851061ee2" exitCode=0 Nov 25 12:01:51 crc kubenswrapper[4706]: I1125 12:01:51.959277 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jq8c8" event={"ID":"272a2de0-ac52-46e5-aa78-569b642ad4bb","Type":"ContainerDied","Data":"b8bb8e5d05e0fab7c54f16cb26abc00fc80304ccb9025b748cffa1f851061ee2"} Nov 25 12:01:51 crc kubenswrapper[4706]: I1125 12:01:51.961951 4706 generic.go:334] "Generic (PLEG): container finished" podID="ccc8ee8d-5b46-49fa-b797-f4ae80cfe5da" containerID="7e431cb9c6bac547fc698b4496940ce1908f0b85b3d947d5dbee648b33a819c9" exitCode=0 Nov 25 12:01:51 crc kubenswrapper[4706]: I1125 12:01:51.962099 4706 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-79bd4cc8c9-9s22r" Nov 25 12:01:51 crc kubenswrapper[4706]: I1125 12:01:51.962158 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-79bd4cc8c9-9s22r" event={"ID":"ccc8ee8d-5b46-49fa-b797-f4ae80cfe5da","Type":"ContainerDied","Data":"7e431cb9c6bac547fc698b4496940ce1908f0b85b3d947d5dbee648b33a819c9"} Nov 25 12:01:51 crc kubenswrapper[4706]: I1125 12:01:51.962202 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-79bd4cc8c9-9s22r" event={"ID":"ccc8ee8d-5b46-49fa-b797-f4ae80cfe5da","Type":"ContainerDied","Data":"c1dc45c31f6e3f03505f688e33a044cb96a1badb7fbd00c060e330402377d5e8"} Nov 25 12:01:51 crc kubenswrapper[4706]: I1125 12:01:51.962223 4706 scope.go:117] "RemoveContainer" containerID="7e431cb9c6bac547fc698b4496940ce1908f0b85b3d947d5dbee648b33a819c9" Nov 25 12:01:51 crc kubenswrapper[4706]: I1125 12:01:51.965708 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ccc8ee8d-5b46-49fa-b797-f4ae80cfe5da-dns-svc\") pod \"ccc8ee8d-5b46-49fa-b797-f4ae80cfe5da\" (UID: \"ccc8ee8d-5b46-49fa-b797-f4ae80cfe5da\") " Nov 25 12:01:51 crc kubenswrapper[4706]: I1125 12:01:51.965790 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ccc8ee8d-5b46-49fa-b797-f4ae80cfe5da-ovsdbserver-sb\") pod \"ccc8ee8d-5b46-49fa-b797-f4ae80cfe5da\" (UID: \"ccc8ee8d-5b46-49fa-b797-f4ae80cfe5da\") " Nov 25 12:01:51 crc kubenswrapper[4706]: I1125 12:01:51.965873 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ccc8ee8d-5b46-49fa-b797-f4ae80cfe5da-config\") pod \"ccc8ee8d-5b46-49fa-b797-f4ae80cfe5da\" (UID: \"ccc8ee8d-5b46-49fa-b797-f4ae80cfe5da\") " Nov 25 12:01:51 crc kubenswrapper[4706]: I1125 12:01:51.965958 4706 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ccc8ee8d-5b46-49fa-b797-f4ae80cfe5da-dns-swift-storage-0\") pod \"ccc8ee8d-5b46-49fa-b797-f4ae80cfe5da\" (UID: \"ccc8ee8d-5b46-49fa-b797-f4ae80cfe5da\") " Nov 25 12:01:51 crc kubenswrapper[4706]: I1125 12:01:51.966034 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ccc8ee8d-5b46-49fa-b797-f4ae80cfe5da-ovsdbserver-nb\") pod \"ccc8ee8d-5b46-49fa-b797-f4ae80cfe5da\" (UID: \"ccc8ee8d-5b46-49fa-b797-f4ae80cfe5da\") " Nov 25 12:01:51 crc kubenswrapper[4706]: I1125 12:01:51.966145 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cvxln\" (UniqueName: \"kubernetes.io/projected/ccc8ee8d-5b46-49fa-b797-f4ae80cfe5da-kube-api-access-cvxln\") pod \"ccc8ee8d-5b46-49fa-b797-f4ae80cfe5da\" (UID: \"ccc8ee8d-5b46-49fa-b797-f4ae80cfe5da\") " Nov 25 12:01:51 crc kubenswrapper[4706]: I1125 12:01:51.966252 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/ccc8ee8d-5b46-49fa-b797-f4ae80cfe5da-openstack-edpm-ipam\") pod \"ccc8ee8d-5b46-49fa-b797-f4ae80cfe5da\" (UID: \"ccc8ee8d-5b46-49fa-b797-f4ae80cfe5da\") " Nov 25 12:01:51 crc kubenswrapper[4706]: I1125 12:01:51.968490 4706 generic.go:334] "Generic (PLEG): container finished" podID="2ee6af69-6304-4a7f-bfae-9e73272ce951" containerID="e978fc4eb599d23fb4665edd61038317df66488740751866f98e330b61768338" exitCode=0 Nov 25 12:01:51 crc kubenswrapper[4706]: I1125 12:01:51.968565 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-v4w8c" event={"ID":"2ee6af69-6304-4a7f-bfae-9e73272ce951","Type":"ContainerDied","Data":"e978fc4eb599d23fb4665edd61038317df66488740751866f98e330b61768338"} Nov 25 12:01:51 crc kubenswrapper[4706]: I1125 12:01:51.975572 
4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29401201-6qr5x" event={"ID":"6e578ce4-062a-47d6-ad7e-c1e36d257077","Type":"ContainerStarted","Data":"56e62bfda8ce3d477efba908fd4b71dce148510b11374c14eed3d97c8f2a3c44"} Nov 25 12:01:51 crc kubenswrapper[4706]: I1125 12:01:51.982220 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ccc8ee8d-5b46-49fa-b797-f4ae80cfe5da-kube-api-access-cvxln" (OuterVolumeSpecName: "kube-api-access-cvxln") pod "ccc8ee8d-5b46-49fa-b797-f4ae80cfe5da" (UID: "ccc8ee8d-5b46-49fa-b797-f4ae80cfe5da"). InnerVolumeSpecName "kube-api-access-cvxln". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 12:01:51 crc kubenswrapper[4706]: I1125 12:01:51.988219 4706 generic.go:334] "Generic (PLEG): container finished" podID="aee9d90c-4042-4e66-9535-cbc14bc710ec" containerID="599bb0331f0cc5cb3b3dc7964cf325fbdd2532c3ca3514db1208c0b3fb01ff9f" exitCode=0 Nov 25 12:01:51 crc kubenswrapper[4706]: I1125 12:01:51.988464 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xqn5d" event={"ID":"aee9d90c-4042-4e66-9535-cbc14bc710ec","Type":"ContainerDied","Data":"599bb0331f0cc5cb3b3dc7964cf325fbdd2532c3ca3514db1208c0b3fb01ff9f"} Nov 25 12:01:52 crc kubenswrapper[4706]: I1125 12:01:52.064588 4706 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-z6ffp" podStartSLOduration=4.082050974 podStartE2EDuration="7.064567943s" podCreationTimestamp="2025-11-25 12:01:45 +0000 UTC" firstStartedPulling="2025-11-25 12:01:46.866362171 +0000 UTC m=+1515.780919552" lastFinishedPulling="2025-11-25 12:01:49.84887914 +0000 UTC m=+1518.763436521" observedRunningTime="2025-11-25 12:01:52.063217438 +0000 UTC m=+1520.977774829" watchObservedRunningTime="2025-11-25 12:01:52.064567943 +0000 UTC m=+1520.979125324" Nov 25 12:01:52 crc kubenswrapper[4706]: I1125 12:01:52.069589 4706 
reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cvxln\" (UniqueName: \"kubernetes.io/projected/ccc8ee8d-5b46-49fa-b797-f4ae80cfe5da-kube-api-access-cvxln\") on node \"crc\" DevicePath \"\"" Nov 25 12:01:52 crc kubenswrapper[4706]: I1125 12:01:52.069664 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ccc8ee8d-5b46-49fa-b797-f4ae80cfe5da-config" (OuterVolumeSpecName: "config") pod "ccc8ee8d-5b46-49fa-b797-f4ae80cfe5da" (UID: "ccc8ee8d-5b46-49fa-b797-f4ae80cfe5da"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 12:01:52 crc kubenswrapper[4706]: I1125 12:01:52.079609 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ccc8ee8d-5b46-49fa-b797-f4ae80cfe5da-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "ccc8ee8d-5b46-49fa-b797-f4ae80cfe5da" (UID: "ccc8ee8d-5b46-49fa-b797-f4ae80cfe5da"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 12:01:52 crc kubenswrapper[4706]: I1125 12:01:52.101864 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ccc8ee8d-5b46-49fa-b797-f4ae80cfe5da-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "ccc8ee8d-5b46-49fa-b797-f4ae80cfe5da" (UID: "ccc8ee8d-5b46-49fa-b797-f4ae80cfe5da"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 12:01:52 crc kubenswrapper[4706]: I1125 12:01:52.101916 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ccc8ee8d-5b46-49fa-b797-f4ae80cfe5da-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "ccc8ee8d-5b46-49fa-b797-f4ae80cfe5da" (UID: "ccc8ee8d-5b46-49fa-b797-f4ae80cfe5da"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 12:01:52 crc kubenswrapper[4706]: I1125 12:01:52.103712 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ccc8ee8d-5b46-49fa-b797-f4ae80cfe5da-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "ccc8ee8d-5b46-49fa-b797-f4ae80cfe5da" (UID: "ccc8ee8d-5b46-49fa-b797-f4ae80cfe5da"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 12:01:52 crc kubenswrapper[4706]: I1125 12:01:52.135774 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ccc8ee8d-5b46-49fa-b797-f4ae80cfe5da-openstack-edpm-ipam" (OuterVolumeSpecName: "openstack-edpm-ipam") pod "ccc8ee8d-5b46-49fa-b797-f4ae80cfe5da" (UID: "ccc8ee8d-5b46-49fa-b797-f4ae80cfe5da"). InnerVolumeSpecName "openstack-edpm-ipam". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 12:01:52 crc kubenswrapper[4706]: I1125 12:01:52.137585 4706 scope.go:117] "RemoveContainer" containerID="4b42b744f794227f955c967e30a0ccb8dc7f089fce42817e02e89f7e0a3dfaed" Nov 25 12:01:52 crc kubenswrapper[4706]: I1125 12:01:52.169544 4706 scope.go:117] "RemoveContainer" containerID="7e431cb9c6bac547fc698b4496940ce1908f0b85b3d947d5dbee648b33a819c9" Nov 25 12:01:52 crc kubenswrapper[4706]: E1125 12:01:52.170098 4706 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7e431cb9c6bac547fc698b4496940ce1908f0b85b3d947d5dbee648b33a819c9\": container with ID starting with 7e431cb9c6bac547fc698b4496940ce1908f0b85b3d947d5dbee648b33a819c9 not found: ID does not exist" containerID="7e431cb9c6bac547fc698b4496940ce1908f0b85b3d947d5dbee648b33a819c9" Nov 25 12:01:52 crc kubenswrapper[4706]: I1125 12:01:52.170137 4706 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7e431cb9c6bac547fc698b4496940ce1908f0b85b3d947d5dbee648b33a819c9"} err="failed 
to get container status \"7e431cb9c6bac547fc698b4496940ce1908f0b85b3d947d5dbee648b33a819c9\": rpc error: code = NotFound desc = could not find container \"7e431cb9c6bac547fc698b4496940ce1908f0b85b3d947d5dbee648b33a819c9\": container with ID starting with 7e431cb9c6bac547fc698b4496940ce1908f0b85b3d947d5dbee648b33a819c9 not found: ID does not exist" Nov 25 12:01:52 crc kubenswrapper[4706]: I1125 12:01:52.170167 4706 scope.go:117] "RemoveContainer" containerID="4b42b744f794227f955c967e30a0ccb8dc7f089fce42817e02e89f7e0a3dfaed" Nov 25 12:01:52 crc kubenswrapper[4706]: E1125 12:01:52.171601 4706 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4b42b744f794227f955c967e30a0ccb8dc7f089fce42817e02e89f7e0a3dfaed\": container with ID starting with 4b42b744f794227f955c967e30a0ccb8dc7f089fce42817e02e89f7e0a3dfaed not found: ID does not exist" containerID="4b42b744f794227f955c967e30a0ccb8dc7f089fce42817e02e89f7e0a3dfaed" Nov 25 12:01:52 crc kubenswrapper[4706]: I1125 12:01:52.171655 4706 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4b42b744f794227f955c967e30a0ccb8dc7f089fce42817e02e89f7e0a3dfaed"} err="failed to get container status \"4b42b744f794227f955c967e30a0ccb8dc7f089fce42817e02e89f7e0a3dfaed\": rpc error: code = NotFound desc = could not find container \"4b42b744f794227f955c967e30a0ccb8dc7f089fce42817e02e89f7e0a3dfaed\": container with ID starting with 4b42b744f794227f955c967e30a0ccb8dc7f089fce42817e02e89f7e0a3dfaed not found: ID does not exist" Nov 25 12:01:52 crc kubenswrapper[4706]: I1125 12:01:52.172695 4706 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ccc8ee8d-5b46-49fa-b797-f4ae80cfe5da-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 25 12:01:52 crc kubenswrapper[4706]: I1125 12:01:52.172718 4706 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: 
\"kubernetes.io/configmap/ccc8ee8d-5b46-49fa-b797-f4ae80cfe5da-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 25 12:01:52 crc kubenswrapper[4706]: I1125 12:01:52.172730 4706 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ccc8ee8d-5b46-49fa-b797-f4ae80cfe5da-config\") on node \"crc\" DevicePath \"\"" Nov 25 12:01:52 crc kubenswrapper[4706]: I1125 12:01:52.172738 4706 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ccc8ee8d-5b46-49fa-b797-f4ae80cfe5da-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Nov 25 12:01:52 crc kubenswrapper[4706]: I1125 12:01:52.172749 4706 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ccc8ee8d-5b46-49fa-b797-f4ae80cfe5da-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 25 12:01:52 crc kubenswrapper[4706]: I1125 12:01:52.172757 4706 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/ccc8ee8d-5b46-49fa-b797-f4ae80cfe5da-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Nov 25 12:01:52 crc kubenswrapper[4706]: I1125 12:01:52.299793 4706 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-79bd4cc8c9-9s22r"] Nov 25 12:01:52 crc kubenswrapper[4706]: I1125 12:01:52.311749 4706 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-79bd4cc8c9-9s22r"] Nov 25 12:01:53 crc kubenswrapper[4706]: I1125 12:01:53.025284 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29401201-6qr5x" event={"ID":"6e578ce4-062a-47d6-ad7e-c1e36d257077","Type":"ContainerStarted","Data":"9194fe1e80a321af8d7dea100a0849b25e600c5a34e2ea4147a524f5c4dae0f3"} Nov 25 12:01:53 crc kubenswrapper[4706]: I1125 12:01:53.063282 4706 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack/keystone-cron-29401201-6qr5x" podStartSLOduration=3.063262149 podStartE2EDuration="3.063262149s" podCreationTimestamp="2025-11-25 12:01:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 12:01:53.059133975 +0000 UTC m=+1521.973691356" watchObservedRunningTime="2025-11-25 12:01:53.063262149 +0000 UTC m=+1521.977819530" Nov 25 12:01:53 crc kubenswrapper[4706]: I1125 12:01:53.151057 4706 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-xqn5d" Nov 25 12:01:53 crc kubenswrapper[4706]: I1125 12:01:53.301560 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/aee9d90c-4042-4e66-9535-cbc14bc710ec-utilities\") pod \"aee9d90c-4042-4e66-9535-cbc14bc710ec\" (UID: \"aee9d90c-4042-4e66-9535-cbc14bc710ec\") " Nov 25 12:01:53 crc kubenswrapper[4706]: I1125 12:01:53.301645 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/aee9d90c-4042-4e66-9535-cbc14bc710ec-catalog-content\") pod \"aee9d90c-4042-4e66-9535-cbc14bc710ec\" (UID: \"aee9d90c-4042-4e66-9535-cbc14bc710ec\") " Nov 25 12:01:53 crc kubenswrapper[4706]: I1125 12:01:53.301848 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j98t6\" (UniqueName: \"kubernetes.io/projected/aee9d90c-4042-4e66-9535-cbc14bc710ec-kube-api-access-j98t6\") pod \"aee9d90c-4042-4e66-9535-cbc14bc710ec\" (UID: \"aee9d90c-4042-4e66-9535-cbc14bc710ec\") " Nov 25 12:01:53 crc kubenswrapper[4706]: I1125 12:01:53.302583 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/aee9d90c-4042-4e66-9535-cbc14bc710ec-utilities" (OuterVolumeSpecName: "utilities") pod "aee9d90c-4042-4e66-9535-cbc14bc710ec" (UID: 
"aee9d90c-4042-4e66-9535-cbc14bc710ec"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 12:01:53 crc kubenswrapper[4706]: I1125 12:01:53.307862 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/aee9d90c-4042-4e66-9535-cbc14bc710ec-kube-api-access-j98t6" (OuterVolumeSpecName: "kube-api-access-j98t6") pod "aee9d90c-4042-4e66-9535-cbc14bc710ec" (UID: "aee9d90c-4042-4e66-9535-cbc14bc710ec"). InnerVolumeSpecName "kube-api-access-j98t6". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 12:01:53 crc kubenswrapper[4706]: I1125 12:01:53.404871 4706 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/aee9d90c-4042-4e66-9535-cbc14bc710ec-utilities\") on node \"crc\" DevicePath \"\"" Nov 25 12:01:53 crc kubenswrapper[4706]: I1125 12:01:53.405080 4706 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j98t6\" (UniqueName: \"kubernetes.io/projected/aee9d90c-4042-4e66-9535-cbc14bc710ec-kube-api-access-j98t6\") on node \"crc\" DevicePath \"\"" Nov 25 12:01:53 crc kubenswrapper[4706]: I1125 12:01:53.874177 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/aee9d90c-4042-4e66-9535-cbc14bc710ec-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "aee9d90c-4042-4e66-9535-cbc14bc710ec" (UID: "aee9d90c-4042-4e66-9535-cbc14bc710ec"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 12:01:53 crc kubenswrapper[4706]: I1125 12:01:53.913809 4706 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/aee9d90c-4042-4e66-9535-cbc14bc710ec-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 25 12:01:53 crc kubenswrapper[4706]: I1125 12:01:53.934146 4706 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ccc8ee8d-5b46-49fa-b797-f4ae80cfe5da" path="/var/lib/kubelet/pods/ccc8ee8d-5b46-49fa-b797-f4ae80cfe5da/volumes" Nov 25 12:01:54 crc kubenswrapper[4706]: I1125 12:01:54.047661 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xqn5d" event={"ID":"aee9d90c-4042-4e66-9535-cbc14bc710ec","Type":"ContainerDied","Data":"44d3f30b5e9ae492b6b1495383c6bf93d074ac20050c8023f906f05856ab0a9e"} Nov 25 12:01:54 crc kubenswrapper[4706]: I1125 12:01:54.047742 4706 scope.go:117] "RemoveContainer" containerID="599bb0331f0cc5cb3b3dc7964cf325fbdd2532c3ca3514db1208c0b3fb01ff9f" Nov 25 12:01:54 crc kubenswrapper[4706]: I1125 12:01:54.048397 4706 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-xqn5d" Nov 25 12:01:54 crc kubenswrapper[4706]: I1125 12:01:54.075549 4706 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-xqn5d"] Nov 25 12:01:54 crc kubenswrapper[4706]: I1125 12:01:54.086853 4706 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-xqn5d"] Nov 25 12:01:55 crc kubenswrapper[4706]: I1125 12:01:55.059764 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jq8c8" event={"ID":"272a2de0-ac52-46e5-aa78-569b642ad4bb","Type":"ContainerStarted","Data":"7775c45240a2a4bc223c9757202524614cc395ad3e71b37b44974d8d6a8515f3"} Nov 25 12:01:55 crc kubenswrapper[4706]: I1125 12:01:55.628347 4706 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-z6ffp" Nov 25 12:01:55 crc kubenswrapper[4706]: I1125 12:01:55.629633 4706 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-z6ffp" Nov 25 12:01:55 crc kubenswrapper[4706]: I1125 12:01:55.952741 4706 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="aee9d90c-4042-4e66-9535-cbc14bc710ec" path="/var/lib/kubelet/pods/aee9d90c-4042-4e66-9535-cbc14bc710ec/volumes" Nov 25 12:01:56 crc kubenswrapper[4706]: I1125 12:01:56.082390 4706 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-ctxms"] Nov 25 12:01:56 crc kubenswrapper[4706]: E1125 12:01:56.082899 4706 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aee9d90c-4042-4e66-9535-cbc14bc710ec" containerName="extract-content" Nov 25 12:01:56 crc kubenswrapper[4706]: I1125 12:01:56.082914 4706 state_mem.go:107] "Deleted CPUSet assignment" podUID="aee9d90c-4042-4e66-9535-cbc14bc710ec" containerName="extract-content" Nov 25 12:01:56 crc kubenswrapper[4706]: E1125 12:01:56.082934 4706 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ccc8ee8d-5b46-49fa-b797-f4ae80cfe5da" containerName="init" Nov 25 12:01:56 crc kubenswrapper[4706]: I1125 12:01:56.082944 4706 state_mem.go:107] "Deleted CPUSet assignment" podUID="ccc8ee8d-5b46-49fa-b797-f4ae80cfe5da" containerName="init" Nov 25 12:01:56 crc kubenswrapper[4706]: E1125 12:01:56.082980 4706 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aee9d90c-4042-4e66-9535-cbc14bc710ec" containerName="extract-utilities" Nov 25 12:01:56 crc kubenswrapper[4706]: I1125 12:01:56.082993 4706 state_mem.go:107] "Deleted CPUSet assignment" podUID="aee9d90c-4042-4e66-9535-cbc14bc710ec" containerName="extract-utilities" Nov 25 12:01:56 crc kubenswrapper[4706]: E1125 12:01:56.083011 4706 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aee9d90c-4042-4e66-9535-cbc14bc710ec" containerName="registry-server" Nov 25 12:01:56 crc kubenswrapper[4706]: I1125 12:01:56.083020 4706 state_mem.go:107] "Deleted CPUSet assignment" podUID="aee9d90c-4042-4e66-9535-cbc14bc710ec" containerName="registry-server" Nov 25 12:01:56 crc kubenswrapper[4706]: E1125 12:01:56.083035 4706 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ccc8ee8d-5b46-49fa-b797-f4ae80cfe5da" containerName="dnsmasq-dns" Nov 25 12:01:56 crc kubenswrapper[4706]: I1125 12:01:56.083043 4706 state_mem.go:107] "Deleted CPUSet assignment" podUID="ccc8ee8d-5b46-49fa-b797-f4ae80cfe5da" containerName="dnsmasq-dns" Nov 25 12:01:56 crc kubenswrapper[4706]: I1125 12:01:56.083335 4706 memory_manager.go:354] "RemoveStaleState removing state" podUID="ccc8ee8d-5b46-49fa-b797-f4ae80cfe5da" containerName="dnsmasq-dns" Nov 25 12:01:56 crc kubenswrapper[4706]: I1125 12:01:56.083365 4706 memory_manager.go:354] "RemoveStaleState removing state" podUID="aee9d90c-4042-4e66-9535-cbc14bc710ec" containerName="registry-server" Nov 25 12:01:56 crc kubenswrapper[4706]: I1125 12:01:56.085391 4706 util.go:30] "No sandbox for pod can 
be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-ctxms" Nov 25 12:01:56 crc kubenswrapper[4706]: I1125 12:01:56.106225 4706 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-ctxms"] Nov 25 12:01:56 crc kubenswrapper[4706]: I1125 12:01:56.161936 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/120d443a-be03-4d0e-a6a2-0d03ed708ba3-catalog-content\") pod \"redhat-operators-ctxms\" (UID: \"120d443a-be03-4d0e-a6a2-0d03ed708ba3\") " pod="openshift-marketplace/redhat-operators-ctxms" Nov 25 12:01:56 crc kubenswrapper[4706]: I1125 12:01:56.162068 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-89gvx\" (UniqueName: \"kubernetes.io/projected/120d443a-be03-4d0e-a6a2-0d03ed708ba3-kube-api-access-89gvx\") pod \"redhat-operators-ctxms\" (UID: \"120d443a-be03-4d0e-a6a2-0d03ed708ba3\") " pod="openshift-marketplace/redhat-operators-ctxms" Nov 25 12:01:56 crc kubenswrapper[4706]: I1125 12:01:56.162151 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/120d443a-be03-4d0e-a6a2-0d03ed708ba3-utilities\") pod \"redhat-operators-ctxms\" (UID: \"120d443a-be03-4d0e-a6a2-0d03ed708ba3\") " pod="openshift-marketplace/redhat-operators-ctxms" Nov 25 12:01:56 crc kubenswrapper[4706]: I1125 12:01:56.264175 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/120d443a-be03-4d0e-a6a2-0d03ed708ba3-utilities\") pod \"redhat-operators-ctxms\" (UID: \"120d443a-be03-4d0e-a6a2-0d03ed708ba3\") " pod="openshift-marketplace/redhat-operators-ctxms" Nov 25 12:01:56 crc kubenswrapper[4706]: I1125 12:01:56.264284 4706 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/120d443a-be03-4d0e-a6a2-0d03ed708ba3-catalog-content\") pod \"redhat-operators-ctxms\" (UID: \"120d443a-be03-4d0e-a6a2-0d03ed708ba3\") " pod="openshift-marketplace/redhat-operators-ctxms" Nov 25 12:01:56 crc kubenswrapper[4706]: I1125 12:01:56.264404 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-89gvx\" (UniqueName: \"kubernetes.io/projected/120d443a-be03-4d0e-a6a2-0d03ed708ba3-kube-api-access-89gvx\") pod \"redhat-operators-ctxms\" (UID: \"120d443a-be03-4d0e-a6a2-0d03ed708ba3\") " pod="openshift-marketplace/redhat-operators-ctxms" Nov 25 12:01:56 crc kubenswrapper[4706]: I1125 12:01:56.264673 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/120d443a-be03-4d0e-a6a2-0d03ed708ba3-utilities\") pod \"redhat-operators-ctxms\" (UID: \"120d443a-be03-4d0e-a6a2-0d03ed708ba3\") " pod="openshift-marketplace/redhat-operators-ctxms" Nov 25 12:01:56 crc kubenswrapper[4706]: I1125 12:01:56.264914 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/120d443a-be03-4d0e-a6a2-0d03ed708ba3-catalog-content\") pod \"redhat-operators-ctxms\" (UID: \"120d443a-be03-4d0e-a6a2-0d03ed708ba3\") " pod="openshift-marketplace/redhat-operators-ctxms" Nov 25 12:01:56 crc kubenswrapper[4706]: I1125 12:01:56.288974 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-89gvx\" (UniqueName: \"kubernetes.io/projected/120d443a-be03-4d0e-a6a2-0d03ed708ba3-kube-api-access-89gvx\") pod \"redhat-operators-ctxms\" (UID: \"120d443a-be03-4d0e-a6a2-0d03ed708ba3\") " pod="openshift-marketplace/redhat-operators-ctxms" Nov 25 12:01:56 crc kubenswrapper[4706]: I1125 12:01:56.407569 4706 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-ctxms" Nov 25 12:01:56 crc kubenswrapper[4706]: I1125 12:01:56.680448 4706 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-marketplace-z6ffp" podUID="47918fcd-d027-4db2-8964-4dbe4fb179f8" containerName="registry-server" probeResult="failure" output=< Nov 25 12:01:56 crc kubenswrapper[4706]: timeout: failed to connect service ":50051" within 1s Nov 25 12:01:56 crc kubenswrapper[4706]: > Nov 25 12:01:57 crc kubenswrapper[4706]: I1125 12:01:57.111143 4706 generic.go:334] "Generic (PLEG): container finished" podID="6e578ce4-062a-47d6-ad7e-c1e36d257077" containerID="9194fe1e80a321af8d7dea100a0849b25e600c5a34e2ea4147a524f5c4dae0f3" exitCode=0 Nov 25 12:01:57 crc kubenswrapper[4706]: I1125 12:01:57.111225 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29401201-6qr5x" event={"ID":"6e578ce4-062a-47d6-ad7e-c1e36d257077","Type":"ContainerDied","Data":"9194fe1e80a321af8d7dea100a0849b25e600c5a34e2ea4147a524f5c4dae0f3"} Nov 25 12:01:57 crc kubenswrapper[4706]: I1125 12:01:57.176308 4706 scope.go:117] "RemoveContainer" containerID="3a455349329ed70a60400c2655366076c791cc3d7d9672909906b441039b7c0c" Nov 25 12:01:57 crc kubenswrapper[4706]: I1125 12:01:57.197582 4706 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-jq8c8" podStartSLOduration=6.300020257 podStartE2EDuration="10.197564271s" podCreationTimestamp="2025-11-25 12:01:47 +0000 UTC" firstStartedPulling="2025-11-25 12:01:48.884128979 +0000 UTC m=+1517.798686360" lastFinishedPulling="2025-11-25 12:01:52.781672993 +0000 UTC m=+1521.696230374" observedRunningTime="2025-11-25 12:01:57.155994233 +0000 UTC m=+1526.070551624" watchObservedRunningTime="2025-11-25 12:01:57.197564271 +0000 UTC m=+1526.112121652" Nov 25 12:01:57 crc kubenswrapper[4706]: I1125 12:01:57.331946 4706 scope.go:117] "RemoveContainer" 
containerID="0be681322f62239c2dd15bba4134fb598ff81dc6202ca37487612e18681a10cf" Nov 25 12:01:57 crc kubenswrapper[4706]: I1125 12:01:57.419989 4706 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-jq8c8" Nov 25 12:01:57 crc kubenswrapper[4706]: I1125 12:01:57.420142 4706 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-jq8c8" Nov 25 12:01:58 crc kubenswrapper[4706]: I1125 12:01:58.067031 4706 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-ctxms"] Nov 25 12:01:58 crc kubenswrapper[4706]: I1125 12:01:58.123218 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-v4w8c" event={"ID":"2ee6af69-6304-4a7f-bfae-9e73272ce951","Type":"ContainerStarted","Data":"4b542dd8549bbc7790762bc2bc2bdf7eeb3699a8d3a84560f1173658325b8b4a"} Nov 25 12:01:58 crc kubenswrapper[4706]: I1125 12:01:58.128665 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ctxms" event={"ID":"120d443a-be03-4d0e-a6a2-0d03ed708ba3","Type":"ContainerStarted","Data":"8780592c697b3d97c1cc7c2854d6503d2b604149109d0f3b0ce2339cfcd10c79"} Nov 25 12:01:58 crc kubenswrapper[4706]: I1125 12:01:58.375828 4706 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29401201-6qr5x" Nov 25 12:01:58 crc kubenswrapper[4706]: I1125 12:01:58.467274 4706 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-jq8c8" podUID="272a2de0-ac52-46e5-aa78-569b642ad4bb" containerName="registry-server" probeResult="failure" output=< Nov 25 12:01:58 crc kubenswrapper[4706]: timeout: failed to connect service ":50051" within 1s Nov 25 12:01:58 crc kubenswrapper[4706]: > Nov 25 12:01:58 crc kubenswrapper[4706]: I1125 12:01:58.506488 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/6e578ce4-062a-47d6-ad7e-c1e36d257077-fernet-keys\") pod \"6e578ce4-062a-47d6-ad7e-c1e36d257077\" (UID: \"6e578ce4-062a-47d6-ad7e-c1e36d257077\") " Nov 25 12:01:58 crc kubenswrapper[4706]: I1125 12:01:58.506729 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6e578ce4-062a-47d6-ad7e-c1e36d257077-config-data\") pod \"6e578ce4-062a-47d6-ad7e-c1e36d257077\" (UID: \"6e578ce4-062a-47d6-ad7e-c1e36d257077\") " Nov 25 12:01:58 crc kubenswrapper[4706]: I1125 12:01:58.506898 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mz9jg\" (UniqueName: \"kubernetes.io/projected/6e578ce4-062a-47d6-ad7e-c1e36d257077-kube-api-access-mz9jg\") pod \"6e578ce4-062a-47d6-ad7e-c1e36d257077\" (UID: \"6e578ce4-062a-47d6-ad7e-c1e36d257077\") " Nov 25 12:01:58 crc kubenswrapper[4706]: I1125 12:01:58.506999 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6e578ce4-062a-47d6-ad7e-c1e36d257077-combined-ca-bundle\") pod \"6e578ce4-062a-47d6-ad7e-c1e36d257077\" (UID: \"6e578ce4-062a-47d6-ad7e-c1e36d257077\") " Nov 25 12:01:58 crc kubenswrapper[4706]: I1125 12:01:58.512047 4706 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6e578ce4-062a-47d6-ad7e-c1e36d257077-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "6e578ce4-062a-47d6-ad7e-c1e36d257077" (UID: "6e578ce4-062a-47d6-ad7e-c1e36d257077"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 12:01:58 crc kubenswrapper[4706]: I1125 12:01:58.512328 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6e578ce4-062a-47d6-ad7e-c1e36d257077-kube-api-access-mz9jg" (OuterVolumeSpecName: "kube-api-access-mz9jg") pod "6e578ce4-062a-47d6-ad7e-c1e36d257077" (UID: "6e578ce4-062a-47d6-ad7e-c1e36d257077"). InnerVolumeSpecName "kube-api-access-mz9jg". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 12:01:58 crc kubenswrapper[4706]: I1125 12:01:58.541999 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6e578ce4-062a-47d6-ad7e-c1e36d257077-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6e578ce4-062a-47d6-ad7e-c1e36d257077" (UID: "6e578ce4-062a-47d6-ad7e-c1e36d257077"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 12:01:58 crc kubenswrapper[4706]: I1125 12:01:58.587437 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6e578ce4-062a-47d6-ad7e-c1e36d257077-config-data" (OuterVolumeSpecName: "config-data") pod "6e578ce4-062a-47d6-ad7e-c1e36d257077" (UID: "6e578ce4-062a-47d6-ad7e-c1e36d257077"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 12:01:58 crc kubenswrapper[4706]: I1125 12:01:58.610575 4706 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/6e578ce4-062a-47d6-ad7e-c1e36d257077-fernet-keys\") on node \"crc\" DevicePath \"\"" Nov 25 12:01:58 crc kubenswrapper[4706]: I1125 12:01:58.610621 4706 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6e578ce4-062a-47d6-ad7e-c1e36d257077-config-data\") on node \"crc\" DevicePath \"\"" Nov 25 12:01:58 crc kubenswrapper[4706]: I1125 12:01:58.610645 4706 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mz9jg\" (UniqueName: \"kubernetes.io/projected/6e578ce4-062a-47d6-ad7e-c1e36d257077-kube-api-access-mz9jg\") on node \"crc\" DevicePath \"\"" Nov 25 12:01:58 crc kubenswrapper[4706]: I1125 12:01:58.610660 4706 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6e578ce4-062a-47d6-ad7e-c1e36d257077-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 12:01:59 crc kubenswrapper[4706]: I1125 12:01:59.142807 4706 generic.go:334] "Generic (PLEG): container finished" podID="120d443a-be03-4d0e-a6a2-0d03ed708ba3" containerID="8afd3b132510fa408fe9743db721f11892a455cfef223930966f90271a44312e" exitCode=0 Nov 25 12:01:59 crc kubenswrapper[4706]: I1125 12:01:59.142928 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ctxms" event={"ID":"120d443a-be03-4d0e-a6a2-0d03ed708ba3","Type":"ContainerDied","Data":"8afd3b132510fa408fe9743db721f11892a455cfef223930966f90271a44312e"} Nov 25 12:01:59 crc kubenswrapper[4706]: I1125 12:01:59.151367 4706 generic.go:334] "Generic (PLEG): container finished" podID="2ee6af69-6304-4a7f-bfae-9e73272ce951" containerID="4b542dd8549bbc7790762bc2bc2bdf7eeb3699a8d3a84560f1173658325b8b4a" exitCode=0 Nov 25 12:01:59 crc kubenswrapper[4706]: 
I1125 12:01:59.151396 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-v4w8c" event={"ID":"2ee6af69-6304-4a7f-bfae-9e73272ce951","Type":"ContainerDied","Data":"4b542dd8549bbc7790762bc2bc2bdf7eeb3699a8d3a84560f1173658325b8b4a"} Nov 25 12:01:59 crc kubenswrapper[4706]: I1125 12:01:59.157203 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29401201-6qr5x" event={"ID":"6e578ce4-062a-47d6-ad7e-c1e36d257077","Type":"ContainerDied","Data":"56e62bfda8ce3d477efba908fd4b71dce148510b11374c14eed3d97c8f2a3c44"} Nov 25 12:01:59 crc kubenswrapper[4706]: I1125 12:01:59.157241 4706 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29401201-6qr5x" Nov 25 12:01:59 crc kubenswrapper[4706]: I1125 12:01:59.157250 4706 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="56e62bfda8ce3d477efba908fd4b71dce148510b11374c14eed3d97c8f2a3c44" Nov 25 12:02:01 crc kubenswrapper[4706]: I1125 12:02:01.125058 4706 patch_prober.go:28] interesting pod/machine-config-daemon-dhfpm container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 25 12:02:01 crc kubenswrapper[4706]: I1125 12:02:01.125675 4706 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dhfpm" podUID="0930887a-320c-4506-8c9c-f94d6d64516a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 25 12:02:02 crc kubenswrapper[4706]: I1125 12:02:02.190139 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-v4w8c" 
event={"ID":"2ee6af69-6304-4a7f-bfae-9e73272ce951","Type":"ContainerStarted","Data":"d5fd2f826df8fa3a76559d110ce0854768023982e4301ba7497f66b407f6cf6d"} Nov 25 12:02:02 crc kubenswrapper[4706]: I1125 12:02:02.192351 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ctxms" event={"ID":"120d443a-be03-4d0e-a6a2-0d03ed708ba3","Type":"ContainerStarted","Data":"308b423ab01a467a7695d8132b2489ceda4c7d313a9d453e3da21a3958aa54b3"} Nov 25 12:02:02 crc kubenswrapper[4706]: I1125 12:02:02.216803 4706 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-v4w8c" podStartSLOduration=4.319758071 podStartE2EDuration="13.21678526s" podCreationTimestamp="2025-11-25 12:01:49 +0000 UTC" firstStartedPulling="2025-11-25 12:01:51.973889327 +0000 UTC m=+1520.888446698" lastFinishedPulling="2025-11-25 12:02:00.870916506 +0000 UTC m=+1529.785473887" observedRunningTime="2025-11-25 12:02:02.211446086 +0000 UTC m=+1531.126003487" watchObservedRunningTime="2025-11-25 12:02:02.21678526 +0000 UTC m=+1531.131342651" Nov 25 12:02:04 crc kubenswrapper[4706]: I1125 12:02:04.413608 4706 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-mtxnw"] Nov 25 12:02:04 crc kubenswrapper[4706]: E1125 12:02:04.414430 4706 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6e578ce4-062a-47d6-ad7e-c1e36d257077" containerName="keystone-cron" Nov 25 12:02:04 crc kubenswrapper[4706]: I1125 12:02:04.414447 4706 state_mem.go:107] "Deleted CPUSet assignment" podUID="6e578ce4-062a-47d6-ad7e-c1e36d257077" containerName="keystone-cron" Nov 25 12:02:04 crc kubenswrapper[4706]: I1125 12:02:04.414704 4706 memory_manager.go:354] "RemoveStaleState removing state" podUID="6e578ce4-062a-47d6-ad7e-c1e36d257077" containerName="keystone-cron" Nov 25 12:02:04 crc kubenswrapper[4706]: I1125 12:02:04.415619 4706 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-mtxnw" Nov 25 12:02:04 crc kubenswrapper[4706]: I1125 12:02:04.418602 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 25 12:02:04 crc kubenswrapper[4706]: I1125 12:02:04.419448 4706 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 25 12:02:04 crc kubenswrapper[4706]: I1125 12:02:04.419840 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 25 12:02:04 crc kubenswrapper[4706]: I1125 12:02:04.420019 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-r8qqp" Nov 25 12:02:04 crc kubenswrapper[4706]: I1125 12:02:04.431457 4706 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-mtxnw"] Nov 25 12:02:04 crc kubenswrapper[4706]: I1125 12:02:04.518827 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bm9sm\" (UniqueName: \"kubernetes.io/projected/e0e1584c-f1bf-45e7-ac6c-2768ffc5c1c3-kube-api-access-bm9sm\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-mtxnw\" (UID: \"e0e1584c-f1bf-45e7-ac6c-2768ffc5c1c3\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-mtxnw" Nov 25 12:02:04 crc kubenswrapper[4706]: I1125 12:02:04.518924 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e0e1584c-f1bf-45e7-ac6c-2768ffc5c1c3-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-mtxnw\" (UID: \"e0e1584c-f1bf-45e7-ac6c-2768ffc5c1c3\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-mtxnw" Nov 25 12:02:04 crc kubenswrapper[4706]: I1125 12:02:04.518955 4706 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e0e1584c-f1bf-45e7-ac6c-2768ffc5c1c3-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-mtxnw\" (UID: \"e0e1584c-f1bf-45e7-ac6c-2768ffc5c1c3\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-mtxnw" Nov 25 12:02:04 crc kubenswrapper[4706]: I1125 12:02:04.518991 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/e0e1584c-f1bf-45e7-ac6c-2768ffc5c1c3-ssh-key\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-mtxnw\" (UID: \"e0e1584c-f1bf-45e7-ac6c-2768ffc5c1c3\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-mtxnw" Nov 25 12:02:04 crc kubenswrapper[4706]: I1125 12:02:04.620514 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bm9sm\" (UniqueName: \"kubernetes.io/projected/e0e1584c-f1bf-45e7-ac6c-2768ffc5c1c3-kube-api-access-bm9sm\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-mtxnw\" (UID: \"e0e1584c-f1bf-45e7-ac6c-2768ffc5c1c3\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-mtxnw" Nov 25 12:02:04 crc kubenswrapper[4706]: I1125 12:02:04.620598 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e0e1584c-f1bf-45e7-ac6c-2768ffc5c1c3-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-mtxnw\" (UID: \"e0e1584c-f1bf-45e7-ac6c-2768ffc5c1c3\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-mtxnw" Nov 25 12:02:04 crc kubenswrapper[4706]: I1125 12:02:04.620622 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: 
\"kubernetes.io/secret/e0e1584c-f1bf-45e7-ac6c-2768ffc5c1c3-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-mtxnw\" (UID: \"e0e1584c-f1bf-45e7-ac6c-2768ffc5c1c3\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-mtxnw" Nov 25 12:02:04 crc kubenswrapper[4706]: I1125 12:02:04.620650 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/e0e1584c-f1bf-45e7-ac6c-2768ffc5c1c3-ssh-key\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-mtxnw\" (UID: \"e0e1584c-f1bf-45e7-ac6c-2768ffc5c1c3\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-mtxnw" Nov 25 12:02:04 crc kubenswrapper[4706]: I1125 12:02:04.626606 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e0e1584c-f1bf-45e7-ac6c-2768ffc5c1c3-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-mtxnw\" (UID: \"e0e1584c-f1bf-45e7-ac6c-2768ffc5c1c3\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-mtxnw" Nov 25 12:02:04 crc kubenswrapper[4706]: I1125 12:02:04.627887 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e0e1584c-f1bf-45e7-ac6c-2768ffc5c1c3-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-mtxnw\" (UID: \"e0e1584c-f1bf-45e7-ac6c-2768ffc5c1c3\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-mtxnw" Nov 25 12:02:04 crc kubenswrapper[4706]: I1125 12:02:04.629748 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/e0e1584c-f1bf-45e7-ac6c-2768ffc5c1c3-ssh-key\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-mtxnw\" (UID: \"e0e1584c-f1bf-45e7-ac6c-2768ffc5c1c3\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-mtxnw" Nov 25 12:02:04 crc kubenswrapper[4706]: 
I1125 12:02:04.640593 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bm9sm\" (UniqueName: \"kubernetes.io/projected/e0e1584c-f1bf-45e7-ac6c-2768ffc5c1c3-kube-api-access-bm9sm\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-mtxnw\" (UID: \"e0e1584c-f1bf-45e7-ac6c-2768ffc5c1c3\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-mtxnw" Nov 25 12:02:04 crc kubenswrapper[4706]: I1125 12:02:04.765762 4706 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-mtxnw" Nov 25 12:02:05 crc kubenswrapper[4706]: I1125 12:02:05.391592 4706 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-mtxnw"] Nov 25 12:02:06 crc kubenswrapper[4706]: I1125 12:02:06.247512 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-mtxnw" event={"ID":"e0e1584c-f1bf-45e7-ac6c-2768ffc5c1c3","Type":"ContainerStarted","Data":"78766f1c953168bcb9ea3c686c28d134456bf3fd455bf61a45bb524453e7ff4a"} Nov 25 12:02:06 crc kubenswrapper[4706]: I1125 12:02:06.713404 4706 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-marketplace-z6ffp" podUID="47918fcd-d027-4db2-8964-4dbe4fb179f8" containerName="registry-server" probeResult="failure" output=< Nov 25 12:02:06 crc kubenswrapper[4706]: timeout: failed to connect service ":50051" within 1s Nov 25 12:02:06 crc kubenswrapper[4706]: > Nov 25 12:02:07 crc kubenswrapper[4706]: I1125 12:02:07.501809 4706 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-jq8c8" Nov 25 12:02:07 crc kubenswrapper[4706]: I1125 12:02:07.562691 4706 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-jq8c8" Nov 25 12:02:07 crc kubenswrapper[4706]: I1125 12:02:07.769207 
4706 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-jq8c8"] Nov 25 12:02:09 crc kubenswrapper[4706]: I1125 12:02:09.284331 4706 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-jq8c8" podUID="272a2de0-ac52-46e5-aa78-569b642ad4bb" containerName="registry-server" containerID="cri-o://7775c45240a2a4bc223c9757202524614cc395ad3e71b37b44974d8d6a8515f3" gracePeriod=2 Nov 25 12:02:09 crc kubenswrapper[4706]: I1125 12:02:09.438507 4706 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-v4w8c" Nov 25 12:02:09 crc kubenswrapper[4706]: I1125 12:02:09.439143 4706 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-v4w8c" Nov 25 12:02:09 crc kubenswrapper[4706]: I1125 12:02:09.500646 4706 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-v4w8c" Nov 25 12:02:10 crc kubenswrapper[4706]: I1125 12:02:10.359428 4706 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-v4w8c" Nov 25 12:02:11 crc kubenswrapper[4706]: I1125 12:02:11.161851 4706 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-v4w8c"] Nov 25 12:02:11 crc kubenswrapper[4706]: I1125 12:02:11.312114 4706 generic.go:334] "Generic (PLEG): container finished" podID="272a2de0-ac52-46e5-aa78-569b642ad4bb" containerID="7775c45240a2a4bc223c9757202524614cc395ad3e71b37b44974d8d6a8515f3" exitCode=0 Nov 25 12:02:11 crc kubenswrapper[4706]: I1125 12:02:11.312203 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jq8c8" event={"ID":"272a2de0-ac52-46e5-aa78-569b642ad4bb","Type":"ContainerDied","Data":"7775c45240a2a4bc223c9757202524614cc395ad3e71b37b44974d8d6a8515f3"} Nov 25 
12:02:12 crc kubenswrapper[4706]: I1125 12:02:12.266679 4706 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-jq8c8" Nov 25 12:02:12 crc kubenswrapper[4706]: I1125 12:02:12.325960 4706 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-v4w8c" podUID="2ee6af69-6304-4a7f-bfae-9e73272ce951" containerName="registry-server" containerID="cri-o://d5fd2f826df8fa3a76559d110ce0854768023982e4301ba7497f66b407f6cf6d" gracePeriod=2 Nov 25 12:02:12 crc kubenswrapper[4706]: I1125 12:02:12.326774 4706 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-jq8c8" Nov 25 12:02:12 crc kubenswrapper[4706]: I1125 12:02:12.327201 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jq8c8" event={"ID":"272a2de0-ac52-46e5-aa78-569b642ad4bb","Type":"ContainerDied","Data":"bf67c7671ead3e0ba624e4cde60d4c809d7a302c846cda4b7b2a16f8327d79e1"} Nov 25 12:02:12 crc kubenswrapper[4706]: I1125 12:02:12.327233 4706 scope.go:117] "RemoveContainer" containerID="7775c45240a2a4bc223c9757202524614cc395ad3e71b37b44974d8d6a8515f3" Nov 25 12:02:12 crc kubenswrapper[4706]: I1125 12:02:12.392916 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/272a2de0-ac52-46e5-aa78-569b642ad4bb-utilities\") pod \"272a2de0-ac52-46e5-aa78-569b642ad4bb\" (UID: \"272a2de0-ac52-46e5-aa78-569b642ad4bb\") " Nov 25 12:02:12 crc kubenswrapper[4706]: I1125 12:02:12.393157 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/272a2de0-ac52-46e5-aa78-569b642ad4bb-catalog-content\") pod \"272a2de0-ac52-46e5-aa78-569b642ad4bb\" (UID: \"272a2de0-ac52-46e5-aa78-569b642ad4bb\") " Nov 25 12:02:12 crc 
kubenswrapper[4706]: I1125 12:02:12.393242 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nthjc\" (UniqueName: \"kubernetes.io/projected/272a2de0-ac52-46e5-aa78-569b642ad4bb-kube-api-access-nthjc\") pod \"272a2de0-ac52-46e5-aa78-569b642ad4bb\" (UID: \"272a2de0-ac52-46e5-aa78-569b642ad4bb\") " Nov 25 12:02:12 crc kubenswrapper[4706]: I1125 12:02:12.393930 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/272a2de0-ac52-46e5-aa78-569b642ad4bb-utilities" (OuterVolumeSpecName: "utilities") pod "272a2de0-ac52-46e5-aa78-569b642ad4bb" (UID: "272a2de0-ac52-46e5-aa78-569b642ad4bb"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 12:02:12 crc kubenswrapper[4706]: I1125 12:02:12.402391 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/272a2de0-ac52-46e5-aa78-569b642ad4bb-kube-api-access-nthjc" (OuterVolumeSpecName: "kube-api-access-nthjc") pod "272a2de0-ac52-46e5-aa78-569b642ad4bb" (UID: "272a2de0-ac52-46e5-aa78-569b642ad4bb"). InnerVolumeSpecName "kube-api-access-nthjc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 12:02:12 crc kubenswrapper[4706]: I1125 12:02:12.495799 4706 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/272a2de0-ac52-46e5-aa78-569b642ad4bb-utilities\") on node \"crc\" DevicePath \"\"" Nov 25 12:02:12 crc kubenswrapper[4706]: I1125 12:02:12.495827 4706 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nthjc\" (UniqueName: \"kubernetes.io/projected/272a2de0-ac52-46e5-aa78-569b642ad4bb-kube-api-access-nthjc\") on node \"crc\" DevicePath \"\"" Nov 25 12:02:12 crc kubenswrapper[4706]: I1125 12:02:12.780869 4706 scope.go:117] "RemoveContainer" containerID="b8bb8e5d05e0fab7c54f16cb26abc00fc80304ccb9025b748cffa1f851061ee2" Nov 25 12:02:12 crc kubenswrapper[4706]: I1125 12:02:12.819392 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/272a2de0-ac52-46e5-aa78-569b642ad4bb-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "272a2de0-ac52-46e5-aa78-569b642ad4bb" (UID: "272a2de0-ac52-46e5-aa78-569b642ad4bb"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 12:02:12 crc kubenswrapper[4706]: I1125 12:02:12.825244 4706 scope.go:117] "RemoveContainer" containerID="bec8ba699cab9b008cb3f499e9dfe7d52b12842dc86102ca750a0c6d41c5869e" Nov 25 12:02:12 crc kubenswrapper[4706]: I1125 12:02:12.904650 4706 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/272a2de0-ac52-46e5-aa78-569b642ad4bb-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 25 12:02:12 crc kubenswrapper[4706]: I1125 12:02:12.966221 4706 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-jq8c8"] Nov 25 12:02:12 crc kubenswrapper[4706]: I1125 12:02:12.977957 4706 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-jq8c8"] Nov 25 12:02:13 crc kubenswrapper[4706]: I1125 12:02:13.936907 4706 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="272a2de0-ac52-46e5-aa78-569b642ad4bb" path="/var/lib/kubelet/pods/272a2de0-ac52-46e5-aa78-569b642ad4bb/volumes" Nov 25 12:02:15 crc kubenswrapper[4706]: I1125 12:02:15.358259 4706 generic.go:334] "Generic (PLEG): container finished" podID="2ee6af69-6304-4a7f-bfae-9e73272ce951" containerID="d5fd2f826df8fa3a76559d110ce0854768023982e4301ba7497f66b407f6cf6d" exitCode=0 Nov 25 12:02:15 crc kubenswrapper[4706]: I1125 12:02:15.358339 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-v4w8c" event={"ID":"2ee6af69-6304-4a7f-bfae-9e73272ce951","Type":"ContainerDied","Data":"d5fd2f826df8fa3a76559d110ce0854768023982e4301ba7497f66b407f6cf6d"} Nov 25 12:02:16 crc kubenswrapper[4706]: I1125 12:02:16.372137 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-v4w8c" event={"ID":"2ee6af69-6304-4a7f-bfae-9e73272ce951","Type":"ContainerDied","Data":"4be139b9e74965e46620c4beda3f6bfd97b5c48f1ec0f114776c2209df68e55a"} 
Nov 25 12:02:16 crc kubenswrapper[4706]: I1125 12:02:16.372465 4706 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4be139b9e74965e46620c4beda3f6bfd97b5c48f1ec0f114776c2209df68e55a" Nov 25 12:02:16 crc kubenswrapper[4706]: I1125 12:02:16.374813 4706 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-v4w8c" Nov 25 12:02:16 crc kubenswrapper[4706]: I1125 12:02:16.484785 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2ee6af69-6304-4a7f-bfae-9e73272ce951-utilities\") pod \"2ee6af69-6304-4a7f-bfae-9e73272ce951\" (UID: \"2ee6af69-6304-4a7f-bfae-9e73272ce951\") " Nov 25 12:02:16 crc kubenswrapper[4706]: I1125 12:02:16.485428 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2ee6af69-6304-4a7f-bfae-9e73272ce951-catalog-content\") pod \"2ee6af69-6304-4a7f-bfae-9e73272ce951\" (UID: \"2ee6af69-6304-4a7f-bfae-9e73272ce951\") " Nov 25 12:02:16 crc kubenswrapper[4706]: I1125 12:02:16.485639 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5wjxf\" (UniqueName: \"kubernetes.io/projected/2ee6af69-6304-4a7f-bfae-9e73272ce951-kube-api-access-5wjxf\") pod \"2ee6af69-6304-4a7f-bfae-9e73272ce951\" (UID: \"2ee6af69-6304-4a7f-bfae-9e73272ce951\") " Nov 25 12:02:16 crc kubenswrapper[4706]: I1125 12:02:16.486096 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2ee6af69-6304-4a7f-bfae-9e73272ce951-utilities" (OuterVolumeSpecName: "utilities") pod "2ee6af69-6304-4a7f-bfae-9e73272ce951" (UID: "2ee6af69-6304-4a7f-bfae-9e73272ce951"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 12:02:16 crc kubenswrapper[4706]: I1125 12:02:16.499549 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2ee6af69-6304-4a7f-bfae-9e73272ce951-kube-api-access-5wjxf" (OuterVolumeSpecName: "kube-api-access-5wjxf") pod "2ee6af69-6304-4a7f-bfae-9e73272ce951" (UID: "2ee6af69-6304-4a7f-bfae-9e73272ce951"). InnerVolumeSpecName "kube-api-access-5wjxf". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 12:02:16 crc kubenswrapper[4706]: I1125 12:02:16.589720 4706 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5wjxf\" (UniqueName: \"kubernetes.io/projected/2ee6af69-6304-4a7f-bfae-9e73272ce951-kube-api-access-5wjxf\") on node \"crc\" DevicePath \"\"" Nov 25 12:02:16 crc kubenswrapper[4706]: I1125 12:02:16.589797 4706 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2ee6af69-6304-4a7f-bfae-9e73272ce951-utilities\") on node \"crc\" DevicePath \"\"" Nov 25 12:02:16 crc kubenswrapper[4706]: I1125 12:02:16.720199 4706 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-marketplace-z6ffp" podUID="47918fcd-d027-4db2-8964-4dbe4fb179f8" containerName="registry-server" probeResult="failure" output=< Nov 25 12:02:16 crc kubenswrapper[4706]: timeout: failed to connect service ":50051" within 1s Nov 25 12:02:16 crc kubenswrapper[4706]: > Nov 25 12:02:17 crc kubenswrapper[4706]: I1125 12:02:17.385643 4706 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-v4w8c" Nov 25 12:02:21 crc kubenswrapper[4706]: I1125 12:02:21.780846 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2ee6af69-6304-4a7f-bfae-9e73272ce951-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "2ee6af69-6304-4a7f-bfae-9e73272ce951" (UID: "2ee6af69-6304-4a7f-bfae-9e73272ce951"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 12:02:21 crc kubenswrapper[4706]: I1125 12:02:21.796125 4706 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2ee6af69-6304-4a7f-bfae-9e73272ce951-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 25 12:02:21 crc kubenswrapper[4706]: I1125 12:02:21.941746 4706 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-v4w8c"] Nov 25 12:02:21 crc kubenswrapper[4706]: I1125 12:02:21.941801 4706 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-v4w8c"] Nov 25 12:02:23 crc kubenswrapper[4706]: I1125 12:02:23.939231 4706 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2ee6af69-6304-4a7f-bfae-9e73272ce951" path="/var/lib/kubelet/pods/2ee6af69-6304-4a7f-bfae-9e73272ce951/volumes" Nov 25 12:02:26 crc kubenswrapper[4706]: I1125 12:02:26.714833 4706 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-marketplace-z6ffp" podUID="47918fcd-d027-4db2-8964-4dbe4fb179f8" containerName="registry-server" probeResult="failure" output=< Nov 25 12:02:26 crc kubenswrapper[4706]: timeout: failed to connect service ":50051" within 1s Nov 25 12:02:26 crc kubenswrapper[4706]: > Nov 25 12:02:30 crc kubenswrapper[4706]: E1125 12:02:30.114564 4706 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" 
image="quay.io/openstack-k8s-operators/openstack-ansibleee-runner:latest" Nov 25 12:02:30 crc kubenswrapper[4706]: E1125 12:02:30.115192 4706 kuberuntime_manager.go:1274] "Unhandled Error" err=< Nov 25 12:02:30 crc kubenswrapper[4706]: container &Container{Name:repo-setup-edpm-deployment-openstack-edpm-ipam,Image:quay.io/openstack-k8s-operators/openstack-ansibleee-runner:latest,Command:[],Args:[ansible-runner run /runner -p playbook.yaml -i repo-setup-edpm-deployment-openstack-edpm-ipam],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ANSIBLE_VERBOSITY,Value:2,ValueFrom:nil,},EnvVar{Name:RUNNER_PLAYBOOK,Value: Nov 25 12:02:30 crc kubenswrapper[4706]: - hosts: all Nov 25 12:02:30 crc kubenswrapper[4706]: strategy: linear Nov 25 12:02:30 crc kubenswrapper[4706]: tasks: Nov 25 12:02:30 crc kubenswrapper[4706]: - name: Enable podified-repos Nov 25 12:02:30 crc kubenswrapper[4706]: become: true Nov 25 12:02:30 crc kubenswrapper[4706]: ansible.builtin.shell: | Nov 25 12:02:30 crc kubenswrapper[4706]: set -euxo pipefail Nov 25 12:02:30 crc kubenswrapper[4706]: pushd /var/tmp Nov 25 12:02:30 crc kubenswrapper[4706]: curl -sL https://github.com/openstack-k8s-operators/repo-setup/archive/refs/heads/main.tar.gz | tar -xz Nov 25 12:02:30 crc kubenswrapper[4706]: pushd repo-setup-main Nov 25 12:02:30 crc kubenswrapper[4706]: python3 -m venv ./venv Nov 25 12:02:30 crc kubenswrapper[4706]: PBR_VERSION=0.0.0 ./venv/bin/pip install ./ Nov 25 12:02:30 crc kubenswrapper[4706]: ./venv/bin/repo-setup current-podified -b antelope Nov 25 12:02:30 crc kubenswrapper[4706]: popd Nov 25 12:02:30 crc kubenswrapper[4706]: rm -rf repo-setup-main Nov 25 12:02:30 crc kubenswrapper[4706]: Nov 25 12:02:30 crc kubenswrapper[4706]: Nov 25 12:02:30 crc kubenswrapper[4706]: ,ValueFrom:nil,},EnvVar{Name:RUNNER_EXTRA_VARS,Value: Nov 25 12:02:30 crc kubenswrapper[4706]: edpm_override_hosts: openstack-edpm-ipam Nov 25 12:02:30 crc kubenswrapper[4706]: edpm_service_type: repo-setup Nov 25 
12:02:30 crc kubenswrapper[4706]: Nov 25 12:02:30 crc kubenswrapper[4706]: Nov 25 12:02:30 crc kubenswrapper[4706]: ,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:repo-setup-combined-ca-bundle,ReadOnly:false,MountPath:/var/lib/openstack/cacerts/repo-setup,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ssh-key,ReadOnly:false,MountPath:/runner/env/ssh_key,SubPath:ssh_key,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:inventory,ReadOnly:false,MountPath:/runner/inventory/hosts,SubPath:inventory,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-bm9sm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{EnvFromSource{Prefix:,ConfigMapRef:&ConfigMapEnvSource{LocalObjectReference:LocalObjectReference{Name:openstack-aee-default-env,},Optional:*true,},SecretRef:nil,},},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod repo-setup-edpm-deployment-openstack-edpm-ipam-mtxnw_openstack(e0e1584c-f1bf-45e7-ac6c-2768ffc5c1c3): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled Nov 25 12:02:30 crc kubenswrapper[4706]: > logger="UnhandledError" Nov 25 12:02:30 crc 
kubenswrapper[4706]: E1125 12:02:30.116424 4706 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"repo-setup-edpm-deployment-openstack-edpm-ipam\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-mtxnw" podUID="e0e1584c-f1bf-45e7-ac6c-2768ffc5c1c3" Nov 25 12:02:30 crc kubenswrapper[4706]: E1125 12:02:30.528711 4706 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"repo-setup-edpm-deployment-openstack-edpm-ipam\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/openstack-ansibleee-runner:latest\\\"\"" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-mtxnw" podUID="e0e1584c-f1bf-45e7-ac6c-2768ffc5c1c3" Nov 25 12:02:31 crc kubenswrapper[4706]: I1125 12:02:31.125741 4706 patch_prober.go:28] interesting pod/machine-config-daemon-dhfpm container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 25 12:02:31 crc kubenswrapper[4706]: I1125 12:02:31.126120 4706 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dhfpm" podUID="0930887a-320c-4506-8c9c-f94d6d64516a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 25 12:02:31 crc kubenswrapper[4706]: I1125 12:02:31.538726 4706 generic.go:334] "Generic (PLEG): container finished" podID="120d443a-be03-4d0e-a6a2-0d03ed708ba3" containerID="308b423ab01a467a7695d8132b2489ceda4c7d313a9d453e3da21a3958aa54b3" exitCode=0 Nov 25 12:02:31 crc kubenswrapper[4706]: I1125 12:02:31.538780 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-operators-ctxms" event={"ID":"120d443a-be03-4d0e-a6a2-0d03ed708ba3","Type":"ContainerDied","Data":"308b423ab01a467a7695d8132b2489ceda4c7d313a9d453e3da21a3958aa54b3"} Nov 25 12:02:32 crc kubenswrapper[4706]: I1125 12:02:32.554622 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ctxms" event={"ID":"120d443a-be03-4d0e-a6a2-0d03ed708ba3","Type":"ContainerStarted","Data":"5acfa0c340f8eb231770849a1e7788bc951e78ca83c0f917b7c4e72c4aa8e9f9"} Nov 25 12:02:34 crc kubenswrapper[4706]: I1125 12:02:34.154953 4706 scope.go:117] "RemoveContainer" containerID="208bf2801a5486d50ebfd06ece5a6213f8ea35ba740aa0f51f6b82f0ceae874c" Nov 25 12:02:34 crc kubenswrapper[4706]: I1125 12:02:34.183060 4706 scope.go:117] "RemoveContainer" containerID="50fc19dbc12030830b7f9abe1db59f12002a214f5583433dbe4de236c044a6f1" Nov 25 12:02:34 crc kubenswrapper[4706]: I1125 12:02:34.208759 4706 scope.go:117] "RemoveContainer" containerID="1cd5443cc641ed5ad034f2ef8a5282a873c09693bb609a311ea6ea3f1ace6bcf" Nov 25 12:02:34 crc kubenswrapper[4706]: I1125 12:02:34.245711 4706 scope.go:117] "RemoveContainer" containerID="c7ce584d8ee77b8e5b732e12afed33cfd07f39407d3ad1a3693457c0fa7f717e" Nov 25 12:02:34 crc kubenswrapper[4706]: I1125 12:02:34.271870 4706 scope.go:117] "RemoveContainer" containerID="1b8345c5537388476a73513d1ba19833895f18c5c970fba92ca16f8e77697522" Nov 25 12:02:35 crc kubenswrapper[4706]: I1125 12:02:35.676889 4706 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-z6ffp" Nov 25 12:02:35 crc kubenswrapper[4706]: I1125 12:02:35.705137 4706 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-ctxms" podStartSLOduration=6.586767214 podStartE2EDuration="39.705118029s" podCreationTimestamp="2025-11-25 12:01:56 +0000 UTC" firstStartedPulling="2025-11-25 12:01:59.145203579 +0000 UTC m=+1528.059760960" 
lastFinishedPulling="2025-11-25 12:02:32.263554394 +0000 UTC m=+1561.178111775" observedRunningTime="2025-11-25 12:02:32.580600564 +0000 UTC m=+1561.495157945" watchObservedRunningTime="2025-11-25 12:02:35.705118029 +0000 UTC m=+1564.619675410" Nov 25 12:02:35 crc kubenswrapper[4706]: I1125 12:02:35.729901 4706 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-z6ffp" Nov 25 12:02:35 crc kubenswrapper[4706]: I1125 12:02:35.916272 4706 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-z6ffp"] Nov 25 12:02:36 crc kubenswrapper[4706]: I1125 12:02:36.408151 4706 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-ctxms" Nov 25 12:02:36 crc kubenswrapper[4706]: I1125 12:02:36.408230 4706 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-ctxms" Nov 25 12:02:37 crc kubenswrapper[4706]: I1125 12:02:37.463206 4706 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-ctxms" podUID="120d443a-be03-4d0e-a6a2-0d03ed708ba3" containerName="registry-server" probeResult="failure" output=< Nov 25 12:02:37 crc kubenswrapper[4706]: timeout: failed to connect service ":50051" within 1s Nov 25 12:02:37 crc kubenswrapper[4706]: > Nov 25 12:02:37 crc kubenswrapper[4706]: I1125 12:02:37.598896 4706 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-z6ffp" podUID="47918fcd-d027-4db2-8964-4dbe4fb179f8" containerName="registry-server" containerID="cri-o://1ddb1f2935269e7da687b66a142394fbbfe8cb26a2ce2fcd3bee191734165951" gracePeriod=2 Nov 25 12:02:38 crc kubenswrapper[4706]: I1125 12:02:38.101661 4706 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-z6ffp" Nov 25 12:02:38 crc kubenswrapper[4706]: I1125 12:02:38.116954 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hvj97\" (UniqueName: \"kubernetes.io/projected/47918fcd-d027-4db2-8964-4dbe4fb179f8-kube-api-access-hvj97\") pod \"47918fcd-d027-4db2-8964-4dbe4fb179f8\" (UID: \"47918fcd-d027-4db2-8964-4dbe4fb179f8\") " Nov 25 12:02:38 crc kubenswrapper[4706]: I1125 12:02:38.117039 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/47918fcd-d027-4db2-8964-4dbe4fb179f8-utilities\") pod \"47918fcd-d027-4db2-8964-4dbe4fb179f8\" (UID: \"47918fcd-d027-4db2-8964-4dbe4fb179f8\") " Nov 25 12:02:38 crc kubenswrapper[4706]: I1125 12:02:38.117192 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/47918fcd-d027-4db2-8964-4dbe4fb179f8-catalog-content\") pod \"47918fcd-d027-4db2-8964-4dbe4fb179f8\" (UID: \"47918fcd-d027-4db2-8964-4dbe4fb179f8\") " Nov 25 12:02:38 crc kubenswrapper[4706]: I1125 12:02:38.117907 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/47918fcd-d027-4db2-8964-4dbe4fb179f8-utilities" (OuterVolumeSpecName: "utilities") pod "47918fcd-d027-4db2-8964-4dbe4fb179f8" (UID: "47918fcd-d027-4db2-8964-4dbe4fb179f8"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 12:02:38 crc kubenswrapper[4706]: I1125 12:02:38.124876 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/47918fcd-d027-4db2-8964-4dbe4fb179f8-kube-api-access-hvj97" (OuterVolumeSpecName: "kube-api-access-hvj97") pod "47918fcd-d027-4db2-8964-4dbe4fb179f8" (UID: "47918fcd-d027-4db2-8964-4dbe4fb179f8"). InnerVolumeSpecName "kube-api-access-hvj97". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 12:02:38 crc kubenswrapper[4706]: I1125 12:02:38.149507 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/47918fcd-d027-4db2-8964-4dbe4fb179f8-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "47918fcd-d027-4db2-8964-4dbe4fb179f8" (UID: "47918fcd-d027-4db2-8964-4dbe4fb179f8"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 12:02:38 crc kubenswrapper[4706]: I1125 12:02:38.219076 4706 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hvj97\" (UniqueName: \"kubernetes.io/projected/47918fcd-d027-4db2-8964-4dbe4fb179f8-kube-api-access-hvj97\") on node \"crc\" DevicePath \"\"" Nov 25 12:02:38 crc kubenswrapper[4706]: I1125 12:02:38.219122 4706 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/47918fcd-d027-4db2-8964-4dbe4fb179f8-utilities\") on node \"crc\" DevicePath \"\"" Nov 25 12:02:38 crc kubenswrapper[4706]: I1125 12:02:38.219131 4706 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/47918fcd-d027-4db2-8964-4dbe4fb179f8-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 25 12:02:38 crc kubenswrapper[4706]: I1125 12:02:38.611063 4706 generic.go:334] "Generic (PLEG): container finished" podID="47918fcd-d027-4db2-8964-4dbe4fb179f8" containerID="1ddb1f2935269e7da687b66a142394fbbfe8cb26a2ce2fcd3bee191734165951" exitCode=0 Nov 25 12:02:38 crc kubenswrapper[4706]: I1125 12:02:38.611139 4706 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-z6ffp" Nov 25 12:02:38 crc kubenswrapper[4706]: I1125 12:02:38.611177 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-z6ffp" event={"ID":"47918fcd-d027-4db2-8964-4dbe4fb179f8","Type":"ContainerDied","Data":"1ddb1f2935269e7da687b66a142394fbbfe8cb26a2ce2fcd3bee191734165951"} Nov 25 12:02:38 crc kubenswrapper[4706]: I1125 12:02:38.611505 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-z6ffp" event={"ID":"47918fcd-d027-4db2-8964-4dbe4fb179f8","Type":"ContainerDied","Data":"d7cebda8839b392b872b21d2ceeb9137bb2a2b11379c0928a472c3558e3c669e"} Nov 25 12:02:38 crc kubenswrapper[4706]: I1125 12:02:38.611533 4706 scope.go:117] "RemoveContainer" containerID="1ddb1f2935269e7da687b66a142394fbbfe8cb26a2ce2fcd3bee191734165951" Nov 25 12:02:38 crc kubenswrapper[4706]: I1125 12:02:38.642021 4706 scope.go:117] "RemoveContainer" containerID="2460055270aa58f2ad90494b8f29c6ec2edce8b1a14f079acd28d448cdcf889c" Nov 25 12:02:38 crc kubenswrapper[4706]: I1125 12:02:38.648955 4706 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-z6ffp"] Nov 25 12:02:38 crc kubenswrapper[4706]: I1125 12:02:38.669059 4706 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-z6ffp"] Nov 25 12:02:38 crc kubenswrapper[4706]: I1125 12:02:38.675819 4706 scope.go:117] "RemoveContainer" containerID="0bbf10261049d2aa448733f4b32a4392f826fd43959440e30ade19fb45eaa927" Nov 25 12:02:38 crc kubenswrapper[4706]: I1125 12:02:38.706165 4706 scope.go:117] "RemoveContainer" containerID="1ddb1f2935269e7da687b66a142394fbbfe8cb26a2ce2fcd3bee191734165951" Nov 25 12:02:38 crc kubenswrapper[4706]: E1125 12:02:38.706587 4706 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"1ddb1f2935269e7da687b66a142394fbbfe8cb26a2ce2fcd3bee191734165951\": container with ID starting with 1ddb1f2935269e7da687b66a142394fbbfe8cb26a2ce2fcd3bee191734165951 not found: ID does not exist" containerID="1ddb1f2935269e7da687b66a142394fbbfe8cb26a2ce2fcd3bee191734165951" Nov 25 12:02:38 crc kubenswrapper[4706]: I1125 12:02:38.706614 4706 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1ddb1f2935269e7da687b66a142394fbbfe8cb26a2ce2fcd3bee191734165951"} err="failed to get container status \"1ddb1f2935269e7da687b66a142394fbbfe8cb26a2ce2fcd3bee191734165951\": rpc error: code = NotFound desc = could not find container \"1ddb1f2935269e7da687b66a142394fbbfe8cb26a2ce2fcd3bee191734165951\": container with ID starting with 1ddb1f2935269e7da687b66a142394fbbfe8cb26a2ce2fcd3bee191734165951 not found: ID does not exist" Nov 25 12:02:38 crc kubenswrapper[4706]: I1125 12:02:38.706635 4706 scope.go:117] "RemoveContainer" containerID="2460055270aa58f2ad90494b8f29c6ec2edce8b1a14f079acd28d448cdcf889c" Nov 25 12:02:38 crc kubenswrapper[4706]: E1125 12:02:38.707128 4706 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2460055270aa58f2ad90494b8f29c6ec2edce8b1a14f079acd28d448cdcf889c\": container with ID starting with 2460055270aa58f2ad90494b8f29c6ec2edce8b1a14f079acd28d448cdcf889c not found: ID does not exist" containerID="2460055270aa58f2ad90494b8f29c6ec2edce8b1a14f079acd28d448cdcf889c" Nov 25 12:02:38 crc kubenswrapper[4706]: I1125 12:02:38.707175 4706 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2460055270aa58f2ad90494b8f29c6ec2edce8b1a14f079acd28d448cdcf889c"} err="failed to get container status \"2460055270aa58f2ad90494b8f29c6ec2edce8b1a14f079acd28d448cdcf889c\": rpc error: code = NotFound desc = could not find container \"2460055270aa58f2ad90494b8f29c6ec2edce8b1a14f079acd28d448cdcf889c\": container with ID 
starting with 2460055270aa58f2ad90494b8f29c6ec2edce8b1a14f079acd28d448cdcf889c not found: ID does not exist" Nov 25 12:02:38 crc kubenswrapper[4706]: I1125 12:02:38.707207 4706 scope.go:117] "RemoveContainer" containerID="0bbf10261049d2aa448733f4b32a4392f826fd43959440e30ade19fb45eaa927" Nov 25 12:02:38 crc kubenswrapper[4706]: E1125 12:02:38.707758 4706 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0bbf10261049d2aa448733f4b32a4392f826fd43959440e30ade19fb45eaa927\": container with ID starting with 0bbf10261049d2aa448733f4b32a4392f826fd43959440e30ade19fb45eaa927 not found: ID does not exist" containerID="0bbf10261049d2aa448733f4b32a4392f826fd43959440e30ade19fb45eaa927" Nov 25 12:02:38 crc kubenswrapper[4706]: I1125 12:02:38.707786 4706 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0bbf10261049d2aa448733f4b32a4392f826fd43959440e30ade19fb45eaa927"} err="failed to get container status \"0bbf10261049d2aa448733f4b32a4392f826fd43959440e30ade19fb45eaa927\": rpc error: code = NotFound desc = could not find container \"0bbf10261049d2aa448733f4b32a4392f826fd43959440e30ade19fb45eaa927\": container with ID starting with 0bbf10261049d2aa448733f4b32a4392f826fd43959440e30ade19fb45eaa927 not found: ID does not exist" Nov 25 12:02:38 crc kubenswrapper[4706]: E1125 12:02:38.814824 4706 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod47918fcd_d027_4db2_8964_4dbe4fb179f8.slice/crio-d7cebda8839b392b872b21d2ceeb9137bb2a2b11379c0928a472c3558e3c669e\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod47918fcd_d027_4db2_8964_4dbe4fb179f8.slice\": RecentStats: unable to find data in memory cache]" Nov 25 12:02:39 crc kubenswrapper[4706]: I1125 12:02:39.934624 4706 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="47918fcd-d027-4db2-8964-4dbe4fb179f8" path="/var/lib/kubelet/pods/47918fcd-d027-4db2-8964-4dbe4fb179f8/volumes" Nov 25 12:02:41 crc kubenswrapper[4706]: I1125 12:02:41.371536 4706 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 25 12:02:41 crc kubenswrapper[4706]: I1125 12:02:41.644111 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-mtxnw" event={"ID":"e0e1584c-f1bf-45e7-ac6c-2768ffc5c1c3","Type":"ContainerStarted","Data":"cbc0602abc09131ee0519b2baedfee7645b211a5643c9ef3ca21598a42002499"} Nov 25 12:02:41 crc kubenswrapper[4706]: I1125 12:02:41.660425 4706 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-mtxnw" podStartSLOduration=1.698180059 podStartE2EDuration="37.660403477s" podCreationTimestamp="2025-11-25 12:02:04 +0000 UTC" firstStartedPulling="2025-11-25 12:02:05.407315479 +0000 UTC m=+1534.321872860" lastFinishedPulling="2025-11-25 12:02:41.369538897 +0000 UTC m=+1570.284096278" observedRunningTime="2025-11-25 12:02:41.660258343 +0000 UTC m=+1570.574815724" watchObservedRunningTime="2025-11-25 12:02:41.660403477 +0000 UTC m=+1570.574960858" Nov 25 12:02:46 crc kubenswrapper[4706]: I1125 12:02:46.464864 4706 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-ctxms" Nov 25 12:02:46 crc kubenswrapper[4706]: I1125 12:02:46.517003 4706 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-ctxms" Nov 25 12:02:46 crc kubenswrapper[4706]: I1125 12:02:46.702583 4706 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-ctxms"] Nov 25 12:02:47 crc kubenswrapper[4706]: I1125 12:02:47.699004 4706 kuberuntime_container.go:808] "Killing 
container with a grace period" pod="openshift-marketplace/redhat-operators-ctxms" podUID="120d443a-be03-4d0e-a6a2-0d03ed708ba3" containerName="registry-server" containerID="cri-o://5acfa0c340f8eb231770849a1e7788bc951e78ca83c0f917b7c4e72c4aa8e9f9" gracePeriod=2 Nov 25 12:02:48 crc kubenswrapper[4706]: I1125 12:02:48.210881 4706 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-ctxms" Nov 25 12:02:48 crc kubenswrapper[4706]: I1125 12:02:48.317078 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/120d443a-be03-4d0e-a6a2-0d03ed708ba3-catalog-content\") pod \"120d443a-be03-4d0e-a6a2-0d03ed708ba3\" (UID: \"120d443a-be03-4d0e-a6a2-0d03ed708ba3\") " Nov 25 12:02:48 crc kubenswrapper[4706]: I1125 12:02:48.317279 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/120d443a-be03-4d0e-a6a2-0d03ed708ba3-utilities\") pod \"120d443a-be03-4d0e-a6a2-0d03ed708ba3\" (UID: \"120d443a-be03-4d0e-a6a2-0d03ed708ba3\") " Nov 25 12:02:48 crc kubenswrapper[4706]: I1125 12:02:48.317422 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-89gvx\" (UniqueName: \"kubernetes.io/projected/120d443a-be03-4d0e-a6a2-0d03ed708ba3-kube-api-access-89gvx\") pod \"120d443a-be03-4d0e-a6a2-0d03ed708ba3\" (UID: \"120d443a-be03-4d0e-a6a2-0d03ed708ba3\") " Nov 25 12:02:48 crc kubenswrapper[4706]: I1125 12:02:48.318215 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/120d443a-be03-4d0e-a6a2-0d03ed708ba3-utilities" (OuterVolumeSpecName: "utilities") pod "120d443a-be03-4d0e-a6a2-0d03ed708ba3" (UID: "120d443a-be03-4d0e-a6a2-0d03ed708ba3"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 12:02:48 crc kubenswrapper[4706]: I1125 12:02:48.319239 4706 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/120d443a-be03-4d0e-a6a2-0d03ed708ba3-utilities\") on node \"crc\" DevicePath \"\"" Nov 25 12:02:48 crc kubenswrapper[4706]: I1125 12:02:48.324981 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/120d443a-be03-4d0e-a6a2-0d03ed708ba3-kube-api-access-89gvx" (OuterVolumeSpecName: "kube-api-access-89gvx") pod "120d443a-be03-4d0e-a6a2-0d03ed708ba3" (UID: "120d443a-be03-4d0e-a6a2-0d03ed708ba3"). InnerVolumeSpecName "kube-api-access-89gvx". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 12:02:48 crc kubenswrapper[4706]: I1125 12:02:48.420757 4706 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-89gvx\" (UniqueName: \"kubernetes.io/projected/120d443a-be03-4d0e-a6a2-0d03ed708ba3-kube-api-access-89gvx\") on node \"crc\" DevicePath \"\"" Nov 25 12:02:48 crc kubenswrapper[4706]: I1125 12:02:48.423961 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/120d443a-be03-4d0e-a6a2-0d03ed708ba3-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "120d443a-be03-4d0e-a6a2-0d03ed708ba3" (UID: "120d443a-be03-4d0e-a6a2-0d03ed708ba3"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 12:02:48 crc kubenswrapper[4706]: I1125 12:02:48.521495 4706 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/120d443a-be03-4d0e-a6a2-0d03ed708ba3-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 25 12:02:48 crc kubenswrapper[4706]: I1125 12:02:48.714531 4706 generic.go:334] "Generic (PLEG): container finished" podID="120d443a-be03-4d0e-a6a2-0d03ed708ba3" containerID="5acfa0c340f8eb231770849a1e7788bc951e78ca83c0f917b7c4e72c4aa8e9f9" exitCode=0 Nov 25 12:02:48 crc kubenswrapper[4706]: I1125 12:02:48.714568 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ctxms" event={"ID":"120d443a-be03-4d0e-a6a2-0d03ed708ba3","Type":"ContainerDied","Data":"5acfa0c340f8eb231770849a1e7788bc951e78ca83c0f917b7c4e72c4aa8e9f9"} Nov 25 12:02:48 crc kubenswrapper[4706]: I1125 12:02:48.714582 4706 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-ctxms" Nov 25 12:02:48 crc kubenswrapper[4706]: I1125 12:02:48.714606 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ctxms" event={"ID":"120d443a-be03-4d0e-a6a2-0d03ed708ba3","Type":"ContainerDied","Data":"8780592c697b3d97c1cc7c2854d6503d2b604149109d0f3b0ce2339cfcd10c79"} Nov 25 12:02:48 crc kubenswrapper[4706]: I1125 12:02:48.714627 4706 scope.go:117] "RemoveContainer" containerID="5acfa0c340f8eb231770849a1e7788bc951e78ca83c0f917b7c4e72c4aa8e9f9" Nov 25 12:02:48 crc kubenswrapper[4706]: I1125 12:02:48.748569 4706 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-ctxms"] Nov 25 12:02:48 crc kubenswrapper[4706]: I1125 12:02:48.748756 4706 scope.go:117] "RemoveContainer" containerID="308b423ab01a467a7695d8132b2489ceda4c7d313a9d453e3da21a3958aa54b3" Nov 25 12:02:48 crc kubenswrapper[4706]: I1125 12:02:48.759670 4706 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-ctxms"] Nov 25 12:02:48 crc kubenswrapper[4706]: I1125 12:02:48.779613 4706 scope.go:117] "RemoveContainer" containerID="8afd3b132510fa408fe9743db721f11892a455cfef223930966f90271a44312e" Nov 25 12:02:48 crc kubenswrapper[4706]: I1125 12:02:48.825789 4706 scope.go:117] "RemoveContainer" containerID="5acfa0c340f8eb231770849a1e7788bc951e78ca83c0f917b7c4e72c4aa8e9f9" Nov 25 12:02:48 crc kubenswrapper[4706]: E1125 12:02:48.826680 4706 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5acfa0c340f8eb231770849a1e7788bc951e78ca83c0f917b7c4e72c4aa8e9f9\": container with ID starting with 5acfa0c340f8eb231770849a1e7788bc951e78ca83c0f917b7c4e72c4aa8e9f9 not found: ID does not exist" containerID="5acfa0c340f8eb231770849a1e7788bc951e78ca83c0f917b7c4e72c4aa8e9f9" Nov 25 12:02:48 crc kubenswrapper[4706]: I1125 12:02:48.826720 4706 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5acfa0c340f8eb231770849a1e7788bc951e78ca83c0f917b7c4e72c4aa8e9f9"} err="failed to get container status \"5acfa0c340f8eb231770849a1e7788bc951e78ca83c0f917b7c4e72c4aa8e9f9\": rpc error: code = NotFound desc = could not find container \"5acfa0c340f8eb231770849a1e7788bc951e78ca83c0f917b7c4e72c4aa8e9f9\": container with ID starting with 5acfa0c340f8eb231770849a1e7788bc951e78ca83c0f917b7c4e72c4aa8e9f9 not found: ID does not exist" Nov 25 12:02:48 crc kubenswrapper[4706]: I1125 12:02:48.826748 4706 scope.go:117] "RemoveContainer" containerID="308b423ab01a467a7695d8132b2489ceda4c7d313a9d453e3da21a3958aa54b3" Nov 25 12:02:48 crc kubenswrapper[4706]: E1125 12:02:48.827099 4706 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"308b423ab01a467a7695d8132b2489ceda4c7d313a9d453e3da21a3958aa54b3\": container with ID starting with 308b423ab01a467a7695d8132b2489ceda4c7d313a9d453e3da21a3958aa54b3 not found: ID does not exist" containerID="308b423ab01a467a7695d8132b2489ceda4c7d313a9d453e3da21a3958aa54b3" Nov 25 12:02:48 crc kubenswrapper[4706]: I1125 12:02:48.827126 4706 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"308b423ab01a467a7695d8132b2489ceda4c7d313a9d453e3da21a3958aa54b3"} err="failed to get container status \"308b423ab01a467a7695d8132b2489ceda4c7d313a9d453e3da21a3958aa54b3\": rpc error: code = NotFound desc = could not find container \"308b423ab01a467a7695d8132b2489ceda4c7d313a9d453e3da21a3958aa54b3\": container with ID starting with 308b423ab01a467a7695d8132b2489ceda4c7d313a9d453e3da21a3958aa54b3 not found: ID does not exist" Nov 25 12:02:48 crc kubenswrapper[4706]: I1125 12:02:48.827154 4706 scope.go:117] "RemoveContainer" containerID="8afd3b132510fa408fe9743db721f11892a455cfef223930966f90271a44312e" Nov 25 12:02:48 crc kubenswrapper[4706]: E1125 
12:02:48.827472 4706 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8afd3b132510fa408fe9743db721f11892a455cfef223930966f90271a44312e\": container with ID starting with 8afd3b132510fa408fe9743db721f11892a455cfef223930966f90271a44312e not found: ID does not exist" containerID="8afd3b132510fa408fe9743db721f11892a455cfef223930966f90271a44312e" Nov 25 12:02:48 crc kubenswrapper[4706]: I1125 12:02:48.827490 4706 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8afd3b132510fa408fe9743db721f11892a455cfef223930966f90271a44312e"} err="failed to get container status \"8afd3b132510fa408fe9743db721f11892a455cfef223930966f90271a44312e\": rpc error: code = NotFound desc = could not find container \"8afd3b132510fa408fe9743db721f11892a455cfef223930966f90271a44312e\": container with ID starting with 8afd3b132510fa408fe9743db721f11892a455cfef223930966f90271a44312e not found: ID does not exist" Nov 25 12:02:49 crc kubenswrapper[4706]: I1125 12:02:49.933392 4706 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="120d443a-be03-4d0e-a6a2-0d03ed708ba3" path="/var/lib/kubelet/pods/120d443a-be03-4d0e-a6a2-0d03ed708ba3/volumes" Nov 25 12:02:55 crc kubenswrapper[4706]: I1125 12:02:55.784985 4706 generic.go:334] "Generic (PLEG): container finished" podID="e0e1584c-f1bf-45e7-ac6c-2768ffc5c1c3" containerID="cbc0602abc09131ee0519b2baedfee7645b211a5643c9ef3ca21598a42002499" exitCode=0 Nov 25 12:02:55 crc kubenswrapper[4706]: I1125 12:02:55.785775 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-mtxnw" event={"ID":"e0e1584c-f1bf-45e7-ac6c-2768ffc5c1c3","Type":"ContainerDied","Data":"cbc0602abc09131ee0519b2baedfee7645b211a5643c9ef3ca21598a42002499"} Nov 25 12:02:57 crc kubenswrapper[4706]: I1125 12:02:57.237683 4706 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-mtxnw" Nov 25 12:02:57 crc kubenswrapper[4706]: I1125 12:02:57.425609 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e0e1584c-f1bf-45e7-ac6c-2768ffc5c1c3-repo-setup-combined-ca-bundle\") pod \"e0e1584c-f1bf-45e7-ac6c-2768ffc5c1c3\" (UID: \"e0e1584c-f1bf-45e7-ac6c-2768ffc5c1c3\") " Nov 25 12:02:57 crc kubenswrapper[4706]: I1125 12:02:57.425775 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/e0e1584c-f1bf-45e7-ac6c-2768ffc5c1c3-ssh-key\") pod \"e0e1584c-f1bf-45e7-ac6c-2768ffc5c1c3\" (UID: \"e0e1584c-f1bf-45e7-ac6c-2768ffc5c1c3\") " Nov 25 12:02:57 crc kubenswrapper[4706]: I1125 12:02:57.426508 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bm9sm\" (UniqueName: \"kubernetes.io/projected/e0e1584c-f1bf-45e7-ac6c-2768ffc5c1c3-kube-api-access-bm9sm\") pod \"e0e1584c-f1bf-45e7-ac6c-2768ffc5c1c3\" (UID: \"e0e1584c-f1bf-45e7-ac6c-2768ffc5c1c3\") " Nov 25 12:02:57 crc kubenswrapper[4706]: I1125 12:02:57.426636 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e0e1584c-f1bf-45e7-ac6c-2768ffc5c1c3-inventory\") pod \"e0e1584c-f1bf-45e7-ac6c-2768ffc5c1c3\" (UID: \"e0e1584c-f1bf-45e7-ac6c-2768ffc5c1c3\") " Nov 25 12:02:57 crc kubenswrapper[4706]: I1125 12:02:57.431756 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e0e1584c-f1bf-45e7-ac6c-2768ffc5c1c3-kube-api-access-bm9sm" (OuterVolumeSpecName: "kube-api-access-bm9sm") pod "e0e1584c-f1bf-45e7-ac6c-2768ffc5c1c3" (UID: "e0e1584c-f1bf-45e7-ac6c-2768ffc5c1c3"). InnerVolumeSpecName "kube-api-access-bm9sm". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 12:02:57 crc kubenswrapper[4706]: I1125 12:02:57.431862 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e0e1584c-f1bf-45e7-ac6c-2768ffc5c1c3-repo-setup-combined-ca-bundle" (OuterVolumeSpecName: "repo-setup-combined-ca-bundle") pod "e0e1584c-f1bf-45e7-ac6c-2768ffc5c1c3" (UID: "e0e1584c-f1bf-45e7-ac6c-2768ffc5c1c3"). InnerVolumeSpecName "repo-setup-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 12:02:57 crc kubenswrapper[4706]: I1125 12:02:57.459573 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e0e1584c-f1bf-45e7-ac6c-2768ffc5c1c3-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "e0e1584c-f1bf-45e7-ac6c-2768ffc5c1c3" (UID: "e0e1584c-f1bf-45e7-ac6c-2768ffc5c1c3"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 12:02:57 crc kubenswrapper[4706]: I1125 12:02:57.464050 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e0e1584c-f1bf-45e7-ac6c-2768ffc5c1c3-inventory" (OuterVolumeSpecName: "inventory") pod "e0e1584c-f1bf-45e7-ac6c-2768ffc5c1c3" (UID: "e0e1584c-f1bf-45e7-ac6c-2768ffc5c1c3"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 12:02:57 crc kubenswrapper[4706]: I1125 12:02:57.530654 4706 reconciler_common.go:293] "Volume detached for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e0e1584c-f1bf-45e7-ac6c-2768ffc5c1c3-repo-setup-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 12:02:57 crc kubenswrapper[4706]: I1125 12:02:57.530782 4706 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/e0e1584c-f1bf-45e7-ac6c-2768ffc5c1c3-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 25 12:02:57 crc kubenswrapper[4706]: I1125 12:02:57.530801 4706 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bm9sm\" (UniqueName: \"kubernetes.io/projected/e0e1584c-f1bf-45e7-ac6c-2768ffc5c1c3-kube-api-access-bm9sm\") on node \"crc\" DevicePath \"\"" Nov 25 12:02:57 crc kubenswrapper[4706]: I1125 12:02:57.530817 4706 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e0e1584c-f1bf-45e7-ac6c-2768ffc5c1c3-inventory\") on node \"crc\" DevicePath \"\"" Nov 25 12:02:57 crc kubenswrapper[4706]: I1125 12:02:57.805757 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-mtxnw" event={"ID":"e0e1584c-f1bf-45e7-ac6c-2768ffc5c1c3","Type":"ContainerDied","Data":"78766f1c953168bcb9ea3c686c28d134456bf3fd455bf61a45bb524453e7ff4a"} Nov 25 12:02:57 crc kubenswrapper[4706]: I1125 12:02:57.805798 4706 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="78766f1c953168bcb9ea3c686c28d134456bf3fd455bf61a45bb524453e7ff4a" Nov 25 12:02:57 crc kubenswrapper[4706]: I1125 12:02:57.805817 4706 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-mtxnw" Nov 25 12:02:57 crc kubenswrapper[4706]: I1125 12:02:57.968387 4706 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-qn78f"] Nov 25 12:02:57 crc kubenswrapper[4706]: E1125 12:02:57.968923 4706 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2ee6af69-6304-4a7f-bfae-9e73272ce951" containerName="extract-utilities" Nov 25 12:02:57 crc kubenswrapper[4706]: I1125 12:02:57.968944 4706 state_mem.go:107] "Deleted CPUSet assignment" podUID="2ee6af69-6304-4a7f-bfae-9e73272ce951" containerName="extract-utilities" Nov 25 12:02:57 crc kubenswrapper[4706]: E1125 12:02:57.968958 4706 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="47918fcd-d027-4db2-8964-4dbe4fb179f8" containerName="extract-content" Nov 25 12:02:57 crc kubenswrapper[4706]: I1125 12:02:57.968965 4706 state_mem.go:107] "Deleted CPUSet assignment" podUID="47918fcd-d027-4db2-8964-4dbe4fb179f8" containerName="extract-content" Nov 25 12:02:57 crc kubenswrapper[4706]: E1125 12:02:57.968985 4706 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="120d443a-be03-4d0e-a6a2-0d03ed708ba3" containerName="registry-server" Nov 25 12:02:57 crc kubenswrapper[4706]: I1125 12:02:57.968993 4706 state_mem.go:107] "Deleted CPUSet assignment" podUID="120d443a-be03-4d0e-a6a2-0d03ed708ba3" containerName="registry-server" Nov 25 12:02:57 crc kubenswrapper[4706]: E1125 12:02:57.969016 4706 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="272a2de0-ac52-46e5-aa78-569b642ad4bb" containerName="extract-content" Nov 25 12:02:57 crc kubenswrapper[4706]: I1125 12:02:57.969024 4706 state_mem.go:107] "Deleted CPUSet assignment" podUID="272a2de0-ac52-46e5-aa78-569b642ad4bb" containerName="extract-content" Nov 25 12:02:57 crc kubenswrapper[4706]: E1125 12:02:57.969049 4706 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="120d443a-be03-4d0e-a6a2-0d03ed708ba3" containerName="extract-content" Nov 25 12:02:57 crc kubenswrapper[4706]: I1125 12:02:57.969057 4706 state_mem.go:107] "Deleted CPUSet assignment" podUID="120d443a-be03-4d0e-a6a2-0d03ed708ba3" containerName="extract-content" Nov 25 12:02:57 crc kubenswrapper[4706]: E1125 12:02:57.969071 4706 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e0e1584c-f1bf-45e7-ac6c-2768ffc5c1c3" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Nov 25 12:02:57 crc kubenswrapper[4706]: I1125 12:02:57.969097 4706 state_mem.go:107] "Deleted CPUSet assignment" podUID="e0e1584c-f1bf-45e7-ac6c-2768ffc5c1c3" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Nov 25 12:02:57 crc kubenswrapper[4706]: E1125 12:02:57.969120 4706 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="272a2de0-ac52-46e5-aa78-569b642ad4bb" containerName="extract-utilities" Nov 25 12:02:57 crc kubenswrapper[4706]: I1125 12:02:57.969128 4706 state_mem.go:107] "Deleted CPUSet assignment" podUID="272a2de0-ac52-46e5-aa78-569b642ad4bb" containerName="extract-utilities" Nov 25 12:02:57 crc kubenswrapper[4706]: E1125 12:02:57.969143 4706 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="272a2de0-ac52-46e5-aa78-569b642ad4bb" containerName="registry-server" Nov 25 12:02:57 crc kubenswrapper[4706]: I1125 12:02:57.969149 4706 state_mem.go:107] "Deleted CPUSet assignment" podUID="272a2de0-ac52-46e5-aa78-569b642ad4bb" containerName="registry-server" Nov 25 12:02:57 crc kubenswrapper[4706]: E1125 12:02:57.969162 4706 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="120d443a-be03-4d0e-a6a2-0d03ed708ba3" containerName="extract-utilities" Nov 25 12:02:57 crc kubenswrapper[4706]: I1125 12:02:57.969168 4706 state_mem.go:107] "Deleted CPUSet assignment" podUID="120d443a-be03-4d0e-a6a2-0d03ed708ba3" containerName="extract-utilities" Nov 25 12:02:57 crc kubenswrapper[4706]: E1125 12:02:57.969176 4706 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2ee6af69-6304-4a7f-bfae-9e73272ce951" containerName="extract-content" Nov 25 12:02:57 crc kubenswrapper[4706]: I1125 12:02:57.969183 4706 state_mem.go:107] "Deleted CPUSet assignment" podUID="2ee6af69-6304-4a7f-bfae-9e73272ce951" containerName="extract-content" Nov 25 12:02:57 crc kubenswrapper[4706]: E1125 12:02:57.969191 4706 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="47918fcd-d027-4db2-8964-4dbe4fb179f8" containerName="registry-server" Nov 25 12:02:57 crc kubenswrapper[4706]: I1125 12:02:57.969197 4706 state_mem.go:107] "Deleted CPUSet assignment" podUID="47918fcd-d027-4db2-8964-4dbe4fb179f8" containerName="registry-server" Nov 25 12:02:57 crc kubenswrapper[4706]: E1125 12:02:57.969216 4706 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2ee6af69-6304-4a7f-bfae-9e73272ce951" containerName="registry-server" Nov 25 12:02:57 crc kubenswrapper[4706]: I1125 12:02:57.969222 4706 state_mem.go:107] "Deleted CPUSet assignment" podUID="2ee6af69-6304-4a7f-bfae-9e73272ce951" containerName="registry-server" Nov 25 12:02:57 crc kubenswrapper[4706]: E1125 12:02:57.969232 4706 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="47918fcd-d027-4db2-8964-4dbe4fb179f8" containerName="extract-utilities" Nov 25 12:02:57 crc kubenswrapper[4706]: I1125 12:02:57.969238 4706 state_mem.go:107] "Deleted CPUSet assignment" podUID="47918fcd-d027-4db2-8964-4dbe4fb179f8" containerName="extract-utilities" Nov 25 12:02:57 crc kubenswrapper[4706]: I1125 12:02:57.969441 4706 memory_manager.go:354] "RemoveStaleState removing state" podUID="2ee6af69-6304-4a7f-bfae-9e73272ce951" containerName="registry-server" Nov 25 12:02:57 crc kubenswrapper[4706]: I1125 12:02:57.969455 4706 memory_manager.go:354] "RemoveStaleState removing state" podUID="272a2de0-ac52-46e5-aa78-569b642ad4bb" containerName="registry-server" Nov 25 12:02:57 crc kubenswrapper[4706]: I1125 12:02:57.969463 4706 
memory_manager.go:354] "RemoveStaleState removing state" podUID="47918fcd-d027-4db2-8964-4dbe4fb179f8" containerName="registry-server" Nov 25 12:02:57 crc kubenswrapper[4706]: I1125 12:02:57.969474 4706 memory_manager.go:354] "RemoveStaleState removing state" podUID="e0e1584c-f1bf-45e7-ac6c-2768ffc5c1c3" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Nov 25 12:02:57 crc kubenswrapper[4706]: I1125 12:02:57.969492 4706 memory_manager.go:354] "RemoveStaleState removing state" podUID="120d443a-be03-4d0e-a6a2-0d03ed708ba3" containerName="registry-server" Nov 25 12:02:57 crc kubenswrapper[4706]: I1125 12:02:57.970218 4706 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-qn78f" Nov 25 12:02:57 crc kubenswrapper[4706]: I1125 12:02:57.972647 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-r8qqp" Nov 25 12:02:57 crc kubenswrapper[4706]: I1125 12:02:57.972660 4706 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 25 12:02:57 crc kubenswrapper[4706]: I1125 12:02:57.972813 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 25 12:02:57 crc kubenswrapper[4706]: I1125 12:02:57.972842 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 25 12:02:57 crc kubenswrapper[4706]: I1125 12:02:57.981432 4706 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-qn78f"] Nov 25 12:02:58 crc kubenswrapper[4706]: I1125 12:02:58.038615 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/b86d7293-ea09-42c5-948d-27c51a31d886-ssh-key\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-qn78f\" 
(UID: \"b86d7293-ea09-42c5-948d-27c51a31d886\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-qn78f" Nov 25 12:02:58 crc kubenswrapper[4706]: I1125 12:02:58.038694 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ljzqc\" (UniqueName: \"kubernetes.io/projected/b86d7293-ea09-42c5-948d-27c51a31d886-kube-api-access-ljzqc\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-qn78f\" (UID: \"b86d7293-ea09-42c5-948d-27c51a31d886\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-qn78f" Nov 25 12:02:58 crc kubenswrapper[4706]: I1125 12:02:58.038728 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b86d7293-ea09-42c5-948d-27c51a31d886-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-qn78f\" (UID: \"b86d7293-ea09-42c5-948d-27c51a31d886\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-qn78f" Nov 25 12:02:58 crc kubenswrapper[4706]: I1125 12:02:58.140785 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/b86d7293-ea09-42c5-948d-27c51a31d886-ssh-key\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-qn78f\" (UID: \"b86d7293-ea09-42c5-948d-27c51a31d886\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-qn78f" Nov 25 12:02:58 crc kubenswrapper[4706]: I1125 12:02:58.140886 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ljzqc\" (UniqueName: \"kubernetes.io/projected/b86d7293-ea09-42c5-948d-27c51a31d886-kube-api-access-ljzqc\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-qn78f\" (UID: \"b86d7293-ea09-42c5-948d-27c51a31d886\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-qn78f" Nov 25 12:02:58 crc kubenswrapper[4706]: I1125 12:02:58.140916 4706 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b86d7293-ea09-42c5-948d-27c51a31d886-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-qn78f\" (UID: \"b86d7293-ea09-42c5-948d-27c51a31d886\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-qn78f" Nov 25 12:02:58 crc kubenswrapper[4706]: I1125 12:02:58.144622 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b86d7293-ea09-42c5-948d-27c51a31d886-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-qn78f\" (UID: \"b86d7293-ea09-42c5-948d-27c51a31d886\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-qn78f" Nov 25 12:02:58 crc kubenswrapper[4706]: I1125 12:02:58.145023 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/b86d7293-ea09-42c5-948d-27c51a31d886-ssh-key\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-qn78f\" (UID: \"b86d7293-ea09-42c5-948d-27c51a31d886\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-qn78f" Nov 25 12:02:58 crc kubenswrapper[4706]: I1125 12:02:58.157718 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ljzqc\" (UniqueName: \"kubernetes.io/projected/b86d7293-ea09-42c5-948d-27c51a31d886-kube-api-access-ljzqc\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-qn78f\" (UID: \"b86d7293-ea09-42c5-948d-27c51a31d886\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-qn78f" Nov 25 12:02:58 crc kubenswrapper[4706]: I1125 12:02:58.288832 4706 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-qn78f" Nov 25 12:02:58 crc kubenswrapper[4706]: I1125 12:02:58.799813 4706 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-qn78f"] Nov 25 12:02:58 crc kubenswrapper[4706]: I1125 12:02:58.818724 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-qn78f" event={"ID":"b86d7293-ea09-42c5-948d-27c51a31d886","Type":"ContainerStarted","Data":"b7398e76f96885c7f4bd98770bf98e36b4a68b066cfee180855bc9104a62ec2c"} Nov 25 12:02:59 crc kubenswrapper[4706]: I1125 12:02:59.836005 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-qn78f" event={"ID":"b86d7293-ea09-42c5-948d-27c51a31d886","Type":"ContainerStarted","Data":"d9615374d6a40ae58c6616730c38335411eccabedd22c4463dd54bf8f4ca410c"} Nov 25 12:02:59 crc kubenswrapper[4706]: I1125 12:02:59.859731 4706 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-qn78f" podStartSLOduration=2.402171125 podStartE2EDuration="2.859710994s" podCreationTimestamp="2025-11-25 12:02:57 +0000 UTC" firstStartedPulling="2025-11-25 12:02:58.804351711 +0000 UTC m=+1587.718909092" lastFinishedPulling="2025-11-25 12:02:59.26189158 +0000 UTC m=+1588.176448961" observedRunningTime="2025-11-25 12:02:59.854317939 +0000 UTC m=+1588.768875330" watchObservedRunningTime="2025-11-25 12:02:59.859710994 +0000 UTC m=+1588.774268375" Nov 25 12:03:01 crc kubenswrapper[4706]: I1125 12:03:01.125089 4706 patch_prober.go:28] interesting pod/machine-config-daemon-dhfpm container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 25 12:03:01 crc kubenswrapper[4706]: I1125 
12:03:01.125461 4706 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dhfpm" podUID="0930887a-320c-4506-8c9c-f94d6d64516a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 25 12:03:01 crc kubenswrapper[4706]: I1125 12:03:01.125513 4706 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-dhfpm" Nov 25 12:03:01 crc kubenswrapper[4706]: I1125 12:03:01.126279 4706 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"0a0bdee99cfe03b615e21edca20e8cd5d2aec43e4e69d2e5c17d3666e93d6426"} pod="openshift-machine-config-operator/machine-config-daemon-dhfpm" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 25 12:03:01 crc kubenswrapper[4706]: I1125 12:03:01.126359 4706 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-dhfpm" podUID="0930887a-320c-4506-8c9c-f94d6d64516a" containerName="machine-config-daemon" containerID="cri-o://0a0bdee99cfe03b615e21edca20e8cd5d2aec43e4e69d2e5c17d3666e93d6426" gracePeriod=600 Nov 25 12:03:01 crc kubenswrapper[4706]: E1125 12:03:01.769082 4706 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dhfpm_openshift-machine-config-operator(0930887a-320c-4506-8c9c-f94d6d64516a)\"" pod="openshift-machine-config-operator/machine-config-daemon-dhfpm" podUID="0930887a-320c-4506-8c9c-f94d6d64516a" Nov 25 12:03:01 crc kubenswrapper[4706]: I1125 12:03:01.858491 4706 generic.go:334] "Generic (PLEG): container finished" 
podID="0930887a-320c-4506-8c9c-f94d6d64516a" containerID="0a0bdee99cfe03b615e21edca20e8cd5d2aec43e4e69d2e5c17d3666e93d6426" exitCode=0 Nov 25 12:03:01 crc kubenswrapper[4706]: I1125 12:03:01.858538 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-dhfpm" event={"ID":"0930887a-320c-4506-8c9c-f94d6d64516a","Type":"ContainerDied","Data":"0a0bdee99cfe03b615e21edca20e8cd5d2aec43e4e69d2e5c17d3666e93d6426"} Nov 25 12:03:01 crc kubenswrapper[4706]: I1125 12:03:01.858578 4706 scope.go:117] "RemoveContainer" containerID="f685f0473c39af27d83f9b8acef23bb16392c6964cab02224e6cb60acc8e8ad1" Nov 25 12:03:01 crc kubenswrapper[4706]: I1125 12:03:01.859388 4706 scope.go:117] "RemoveContainer" containerID="0a0bdee99cfe03b615e21edca20e8cd5d2aec43e4e69d2e5c17d3666e93d6426" Nov 25 12:03:01 crc kubenswrapper[4706]: E1125 12:03:01.859823 4706 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dhfpm_openshift-machine-config-operator(0930887a-320c-4506-8c9c-f94d6d64516a)\"" pod="openshift-machine-config-operator/machine-config-daemon-dhfpm" podUID="0930887a-320c-4506-8c9c-f94d6d64516a" Nov 25 12:03:02 crc kubenswrapper[4706]: I1125 12:03:02.870208 4706 generic.go:334] "Generic (PLEG): container finished" podID="b86d7293-ea09-42c5-948d-27c51a31d886" containerID="d9615374d6a40ae58c6616730c38335411eccabedd22c4463dd54bf8f4ca410c" exitCode=0 Nov 25 12:03:02 crc kubenswrapper[4706]: I1125 12:03:02.870294 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-qn78f" event={"ID":"b86d7293-ea09-42c5-948d-27c51a31d886","Type":"ContainerDied","Data":"d9615374d6a40ae58c6616730c38335411eccabedd22c4463dd54bf8f4ca410c"} Nov 25 12:03:04 crc kubenswrapper[4706]: I1125 12:03:04.268091 4706 util.go:48] "No ready 
sandbox for pod can be found. Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-qn78f" Nov 25 12:03:04 crc kubenswrapper[4706]: I1125 12:03:04.464480 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ljzqc\" (UniqueName: \"kubernetes.io/projected/b86d7293-ea09-42c5-948d-27c51a31d886-kube-api-access-ljzqc\") pod \"b86d7293-ea09-42c5-948d-27c51a31d886\" (UID: \"b86d7293-ea09-42c5-948d-27c51a31d886\") " Nov 25 12:03:04 crc kubenswrapper[4706]: I1125 12:03:04.465696 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b86d7293-ea09-42c5-948d-27c51a31d886-inventory\") pod \"b86d7293-ea09-42c5-948d-27c51a31d886\" (UID: \"b86d7293-ea09-42c5-948d-27c51a31d886\") " Nov 25 12:03:04 crc kubenswrapper[4706]: I1125 12:03:04.465940 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/b86d7293-ea09-42c5-948d-27c51a31d886-ssh-key\") pod \"b86d7293-ea09-42c5-948d-27c51a31d886\" (UID: \"b86d7293-ea09-42c5-948d-27c51a31d886\") " Nov 25 12:03:04 crc kubenswrapper[4706]: I1125 12:03:04.470506 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b86d7293-ea09-42c5-948d-27c51a31d886-kube-api-access-ljzqc" (OuterVolumeSpecName: "kube-api-access-ljzqc") pod "b86d7293-ea09-42c5-948d-27c51a31d886" (UID: "b86d7293-ea09-42c5-948d-27c51a31d886"). InnerVolumeSpecName "kube-api-access-ljzqc". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 12:03:04 crc kubenswrapper[4706]: I1125 12:03:04.493878 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b86d7293-ea09-42c5-948d-27c51a31d886-inventory" (OuterVolumeSpecName: "inventory") pod "b86d7293-ea09-42c5-948d-27c51a31d886" (UID: "b86d7293-ea09-42c5-948d-27c51a31d886"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 12:03:04 crc kubenswrapper[4706]: I1125 12:03:04.499580 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b86d7293-ea09-42c5-948d-27c51a31d886-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "b86d7293-ea09-42c5-948d-27c51a31d886" (UID: "b86d7293-ea09-42c5-948d-27c51a31d886"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 12:03:04 crc kubenswrapper[4706]: I1125 12:03:04.569212 4706 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ljzqc\" (UniqueName: \"kubernetes.io/projected/b86d7293-ea09-42c5-948d-27c51a31d886-kube-api-access-ljzqc\") on node \"crc\" DevicePath \"\"" Nov 25 12:03:04 crc kubenswrapper[4706]: I1125 12:03:04.569271 4706 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b86d7293-ea09-42c5-948d-27c51a31d886-inventory\") on node \"crc\" DevicePath \"\"" Nov 25 12:03:04 crc kubenswrapper[4706]: I1125 12:03:04.569284 4706 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/b86d7293-ea09-42c5-948d-27c51a31d886-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 25 12:03:04 crc kubenswrapper[4706]: I1125 12:03:04.894622 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-qn78f" event={"ID":"b86d7293-ea09-42c5-948d-27c51a31d886","Type":"ContainerDied","Data":"b7398e76f96885c7f4bd98770bf98e36b4a68b066cfee180855bc9104a62ec2c"} Nov 25 12:03:04 crc kubenswrapper[4706]: I1125 12:03:04.894895 4706 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b7398e76f96885c7f4bd98770bf98e36b4a68b066cfee180855bc9104a62ec2c" Nov 25 12:03:04 crc kubenswrapper[4706]: I1125 12:03:04.894664 4706 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-qn78f" Nov 25 12:03:04 crc kubenswrapper[4706]: I1125 12:03:04.963438 4706 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-ntv4r"] Nov 25 12:03:04 crc kubenswrapper[4706]: E1125 12:03:04.963947 4706 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b86d7293-ea09-42c5-948d-27c51a31d886" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Nov 25 12:03:04 crc kubenswrapper[4706]: I1125 12:03:04.963972 4706 state_mem.go:107] "Deleted CPUSet assignment" podUID="b86d7293-ea09-42c5-948d-27c51a31d886" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Nov 25 12:03:04 crc kubenswrapper[4706]: I1125 12:03:04.964242 4706 memory_manager.go:354] "RemoveStaleState removing state" podUID="b86d7293-ea09-42c5-948d-27c51a31d886" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Nov 25 12:03:04 crc kubenswrapper[4706]: I1125 12:03:04.965106 4706 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-ntv4r" Nov 25 12:03:05 crc kubenswrapper[4706]: I1125 12:03:04.974563 4706 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-ntv4r"] Nov 25 12:03:05 crc kubenswrapper[4706]: I1125 12:03:05.017923 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 25 12:03:05 crc kubenswrapper[4706]: I1125 12:03:05.018210 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 25 12:03:05 crc kubenswrapper[4706]: I1125 12:03:05.018416 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-r8qqp" Nov 25 12:03:05 crc kubenswrapper[4706]: I1125 12:03:05.018582 4706 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 25 12:03:05 crc kubenswrapper[4706]: I1125 12:03:05.078052 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/50dff0a2-b50d-43ee-8951-e49958b3cd5a-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-ntv4r\" (UID: \"50dff0a2-b50d-43ee-8951-e49958b3cd5a\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-ntv4r" Nov 25 12:03:05 crc kubenswrapper[4706]: I1125 12:03:05.078129 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/50dff0a2-b50d-43ee-8951-e49958b3cd5a-ssh-key\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-ntv4r\" (UID: \"50dff0a2-b50d-43ee-8951-e49958b3cd5a\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-ntv4r" Nov 25 12:03:05 crc kubenswrapper[4706]: I1125 12:03:05.078192 4706 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/50dff0a2-b50d-43ee-8951-e49958b3cd5a-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-ntv4r\" (UID: \"50dff0a2-b50d-43ee-8951-e49958b3cd5a\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-ntv4r" Nov 25 12:03:05 crc kubenswrapper[4706]: I1125 12:03:05.078269 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bz4zk\" (UniqueName: \"kubernetes.io/projected/50dff0a2-b50d-43ee-8951-e49958b3cd5a-kube-api-access-bz4zk\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-ntv4r\" (UID: \"50dff0a2-b50d-43ee-8951-e49958b3cd5a\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-ntv4r" Nov 25 12:03:05 crc kubenswrapper[4706]: I1125 12:03:05.180479 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/50dff0a2-b50d-43ee-8951-e49958b3cd5a-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-ntv4r\" (UID: \"50dff0a2-b50d-43ee-8951-e49958b3cd5a\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-ntv4r" Nov 25 12:03:05 crc kubenswrapper[4706]: I1125 12:03:05.180874 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/50dff0a2-b50d-43ee-8951-e49958b3cd5a-ssh-key\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-ntv4r\" (UID: \"50dff0a2-b50d-43ee-8951-e49958b3cd5a\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-ntv4r" Nov 25 12:03:05 crc kubenswrapper[4706]: I1125 12:03:05.181035 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/50dff0a2-b50d-43ee-8951-e49958b3cd5a-inventory\") pod 
\"bootstrap-edpm-deployment-openstack-edpm-ipam-ntv4r\" (UID: \"50dff0a2-b50d-43ee-8951-e49958b3cd5a\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-ntv4r" Nov 25 12:03:05 crc kubenswrapper[4706]: I1125 12:03:05.181173 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bz4zk\" (UniqueName: \"kubernetes.io/projected/50dff0a2-b50d-43ee-8951-e49958b3cd5a-kube-api-access-bz4zk\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-ntv4r\" (UID: \"50dff0a2-b50d-43ee-8951-e49958b3cd5a\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-ntv4r" Nov 25 12:03:05 crc kubenswrapper[4706]: I1125 12:03:05.185345 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/50dff0a2-b50d-43ee-8951-e49958b3cd5a-ssh-key\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-ntv4r\" (UID: \"50dff0a2-b50d-43ee-8951-e49958b3cd5a\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-ntv4r" Nov 25 12:03:05 crc kubenswrapper[4706]: I1125 12:03:05.186239 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/50dff0a2-b50d-43ee-8951-e49958b3cd5a-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-ntv4r\" (UID: \"50dff0a2-b50d-43ee-8951-e49958b3cd5a\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-ntv4r" Nov 25 12:03:05 crc kubenswrapper[4706]: I1125 12:03:05.186776 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/50dff0a2-b50d-43ee-8951-e49958b3cd5a-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-ntv4r\" (UID: \"50dff0a2-b50d-43ee-8951-e49958b3cd5a\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-ntv4r" Nov 25 12:03:05 crc kubenswrapper[4706]: I1125 12:03:05.197756 4706 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-bz4zk\" (UniqueName: \"kubernetes.io/projected/50dff0a2-b50d-43ee-8951-e49958b3cd5a-kube-api-access-bz4zk\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-ntv4r\" (UID: \"50dff0a2-b50d-43ee-8951-e49958b3cd5a\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-ntv4r" Nov 25 12:03:05 crc kubenswrapper[4706]: I1125 12:03:05.331142 4706 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-ntv4r" Nov 25 12:03:05 crc kubenswrapper[4706]: I1125 12:03:05.844922 4706 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-ntv4r"] Nov 25 12:03:05 crc kubenswrapper[4706]: I1125 12:03:05.905100 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-ntv4r" event={"ID":"50dff0a2-b50d-43ee-8951-e49958b3cd5a","Type":"ContainerStarted","Data":"42affd4c74b9374c629871fb5ba5eb45a5822ea77c8aabbe559af6d110fd680a"} Nov 25 12:03:07 crc kubenswrapper[4706]: I1125 12:03:07.954118 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-ntv4r" event={"ID":"50dff0a2-b50d-43ee-8951-e49958b3cd5a","Type":"ContainerStarted","Data":"fbbcbf45f6e03ca44e598d7f255a31132d74782780346e4589288ab7db7b3bf4"} Nov 25 12:03:07 crc kubenswrapper[4706]: I1125 12:03:07.982382 4706 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-ntv4r" podStartSLOduration=2.949060911 podStartE2EDuration="3.982360769s" podCreationTimestamp="2025-11-25 12:03:04 +0000 UTC" firstStartedPulling="2025-11-25 12:03:05.852679363 +0000 UTC m=+1594.767236744" lastFinishedPulling="2025-11-25 12:03:06.885979221 +0000 UTC m=+1595.800536602" observedRunningTime="2025-11-25 12:03:07.970820309 +0000 UTC m=+1596.885377700" 
watchObservedRunningTime="2025-11-25 12:03:07.982360769 +0000 UTC m=+1596.896918150" Nov 25 12:03:14 crc kubenswrapper[4706]: I1125 12:03:14.923249 4706 scope.go:117] "RemoveContainer" containerID="0a0bdee99cfe03b615e21edca20e8cd5d2aec43e4e69d2e5c17d3666e93d6426" Nov 25 12:03:14 crc kubenswrapper[4706]: E1125 12:03:14.925258 4706 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dhfpm_openshift-machine-config-operator(0930887a-320c-4506-8c9c-f94d6d64516a)\"" pod="openshift-machine-config-operator/machine-config-daemon-dhfpm" podUID="0930887a-320c-4506-8c9c-f94d6d64516a" Nov 25 12:03:28 crc kubenswrapper[4706]: I1125 12:03:28.922507 4706 scope.go:117] "RemoveContainer" containerID="0a0bdee99cfe03b615e21edca20e8cd5d2aec43e4e69d2e5c17d3666e93d6426" Nov 25 12:03:28 crc kubenswrapper[4706]: E1125 12:03:28.923274 4706 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dhfpm_openshift-machine-config-operator(0930887a-320c-4506-8c9c-f94d6d64516a)\"" pod="openshift-machine-config-operator/machine-config-daemon-dhfpm" podUID="0930887a-320c-4506-8c9c-f94d6d64516a" Nov 25 12:03:40 crc kubenswrapper[4706]: I1125 12:03:40.922872 4706 scope.go:117] "RemoveContainer" containerID="0a0bdee99cfe03b615e21edca20e8cd5d2aec43e4e69d2e5c17d3666e93d6426" Nov 25 12:03:40 crc kubenswrapper[4706]: E1125 12:03:40.923637 4706 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dhfpm_openshift-machine-config-operator(0930887a-320c-4506-8c9c-f94d6d64516a)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-dhfpm" podUID="0930887a-320c-4506-8c9c-f94d6d64516a" Nov 25 12:03:51 crc kubenswrapper[4706]: I1125 12:03:51.929098 4706 scope.go:117] "RemoveContainer" containerID="0a0bdee99cfe03b615e21edca20e8cd5d2aec43e4e69d2e5c17d3666e93d6426" Nov 25 12:03:51 crc kubenswrapper[4706]: E1125 12:03:51.929934 4706 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dhfpm_openshift-machine-config-operator(0930887a-320c-4506-8c9c-f94d6d64516a)\"" pod="openshift-machine-config-operator/machine-config-daemon-dhfpm" podUID="0930887a-320c-4506-8c9c-f94d6d64516a" Nov 25 12:04:06 crc kubenswrapper[4706]: I1125 12:04:06.922452 4706 scope.go:117] "RemoveContainer" containerID="0a0bdee99cfe03b615e21edca20e8cd5d2aec43e4e69d2e5c17d3666e93d6426" Nov 25 12:04:06 crc kubenswrapper[4706]: E1125 12:04:06.923424 4706 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dhfpm_openshift-machine-config-operator(0930887a-320c-4506-8c9c-f94d6d64516a)\"" pod="openshift-machine-config-operator/machine-config-daemon-dhfpm" podUID="0930887a-320c-4506-8c9c-f94d6d64516a" Nov 25 12:04:20 crc kubenswrapper[4706]: I1125 12:04:20.923344 4706 scope.go:117] "RemoveContainer" containerID="0a0bdee99cfe03b615e21edca20e8cd5d2aec43e4e69d2e5c17d3666e93d6426" Nov 25 12:04:20 crc kubenswrapper[4706]: E1125 12:04:20.924222 4706 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-dhfpm_openshift-machine-config-operator(0930887a-320c-4506-8c9c-f94d6d64516a)\"" pod="openshift-machine-config-operator/machine-config-daemon-dhfpm" podUID="0930887a-320c-4506-8c9c-f94d6d64516a" Nov 25 12:04:31 crc kubenswrapper[4706]: I1125 12:04:31.069934 4706 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-create-bnr25"] Nov 25 12:04:31 crc kubenswrapper[4706]: I1125 12:04:31.084235 4706 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-96a5-account-create-54vg5"] Nov 25 12:04:31 crc kubenswrapper[4706]: I1125 12:04:31.095906 4706 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-96a5-account-create-54vg5"] Nov 25 12:04:31 crc kubenswrapper[4706]: I1125 12:04:31.107158 4706 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-create-bnr25"] Nov 25 12:04:31 crc kubenswrapper[4706]: I1125 12:04:31.933820 4706 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8bb8bf03-9489-462f-a011-ce81bd934976" path="/var/lib/kubelet/pods/8bb8bf03-9489-462f-a011-ce81bd934976/volumes" Nov 25 12:04:31 crc kubenswrapper[4706]: I1125 12:04:31.935639 4706 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e1d37c10-6fec-486b-9c0f-f28772cdd96a" path="/var/lib/kubelet/pods/e1d37c10-6fec-486b-9c0f-f28772cdd96a/volumes" Nov 25 12:04:34 crc kubenswrapper[4706]: I1125 12:04:34.029806 4706 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-745c-account-create-khc42"] Nov 25 12:04:34 crc kubenswrapper[4706]: I1125 12:04:34.042562 4706 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-745c-account-create-khc42"] Nov 25 12:04:34 crc kubenswrapper[4706]: I1125 12:04:34.558044 4706 scope.go:117] "RemoveContainer" containerID="b95f19efba2e9dc7131a123c020f52012e88bdeb845402fde55128529af192eb" Nov 25 12:04:34 crc kubenswrapper[4706]: I1125 12:04:34.601133 4706 scope.go:117] "RemoveContainer" 
containerID="c35ab05f881ad0005a2d6220cbb9ca002f4a2ef06da1ea0655e5b2f8eece3db4" Nov 25 12:04:34 crc kubenswrapper[4706]: I1125 12:04:34.657866 4706 scope.go:117] "RemoveContainer" containerID="99efbe8098bf623b67e50576be8330f62829843e5249a1ba174bb70397214b69" Nov 25 12:04:34 crc kubenswrapper[4706]: I1125 12:04:34.708954 4706 scope.go:117] "RemoveContainer" containerID="d768e616411dcdb6bd2fc471582c1976a7fac18d1247eba3676c8623b8d1ec65" Nov 25 12:04:34 crc kubenswrapper[4706]: I1125 12:04:34.730851 4706 scope.go:117] "RemoveContainer" containerID="7fcc2fade0cfd4ac61dc8eb95debe757d544a2b64a5ccc888c4bec81573ba0bc" Nov 25 12:04:35 crc kubenswrapper[4706]: I1125 12:04:35.035004 4706 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-0708-account-create-vmb99"] Nov 25 12:04:35 crc kubenswrapper[4706]: I1125 12:04:35.053740 4706 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-create-fvcgj"] Nov 25 12:04:35 crc kubenswrapper[4706]: I1125 12:04:35.063293 4706 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-create-mjpth"] Nov 25 12:04:35 crc kubenswrapper[4706]: I1125 12:04:35.072108 4706 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-0708-account-create-vmb99"] Nov 25 12:04:35 crc kubenswrapper[4706]: I1125 12:04:35.080891 4706 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-create-fvcgj"] Nov 25 12:04:35 crc kubenswrapper[4706]: I1125 12:04:35.091178 4706 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-create-mjpth"] Nov 25 12:04:35 crc kubenswrapper[4706]: I1125 12:04:35.923286 4706 scope.go:117] "RemoveContainer" containerID="0a0bdee99cfe03b615e21edca20e8cd5d2aec43e4e69d2e5c17d3666e93d6426" Nov 25 12:04:35 crc kubenswrapper[4706]: E1125 12:04:35.923869 4706 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 
5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dhfpm_openshift-machine-config-operator(0930887a-320c-4506-8c9c-f94d6d64516a)\"" pod="openshift-machine-config-operator/machine-config-daemon-dhfpm" podUID="0930887a-320c-4506-8c9c-f94d6d64516a" Nov 25 12:04:35 crc kubenswrapper[4706]: I1125 12:04:35.933196 4706 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="244a9875-4efd-40a6-8f29-745b385b516d" path="/var/lib/kubelet/pods/244a9875-4efd-40a6-8f29-745b385b516d/volumes" Nov 25 12:04:35 crc kubenswrapper[4706]: I1125 12:04:35.934072 4706 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="696b1c53-9d80-42b1-bc7d-4699620c019a" path="/var/lib/kubelet/pods/696b1c53-9d80-42b1-bc7d-4699620c019a/volumes" Nov 25 12:04:35 crc kubenswrapper[4706]: I1125 12:04:35.934598 4706 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a4f78f8e-f722-4335-8421-35d52edc3181" path="/var/lib/kubelet/pods/a4f78f8e-f722-4335-8421-35d52edc3181/volumes" Nov 25 12:04:35 crc kubenswrapper[4706]: I1125 12:04:35.935082 4706 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a6560bf6-0b62-465f-b3ef-f762b5eac76a" path="/var/lib/kubelet/pods/a6560bf6-0b62-465f-b3ef-f762b5eac76a/volumes" Nov 25 12:04:49 crc kubenswrapper[4706]: I1125 12:04:49.923107 4706 scope.go:117] "RemoveContainer" containerID="0a0bdee99cfe03b615e21edca20e8cd5d2aec43e4e69d2e5c17d3666e93d6426" Nov 25 12:04:49 crc kubenswrapper[4706]: E1125 12:04:49.924117 4706 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dhfpm_openshift-machine-config-operator(0930887a-320c-4506-8c9c-f94d6d64516a)\"" pod="openshift-machine-config-operator/machine-config-daemon-dhfpm" podUID="0930887a-320c-4506-8c9c-f94d6d64516a" Nov 25 12:04:58 crc kubenswrapper[4706]: I1125 
12:04:58.041755 4706 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-sync-v7ftf"] Nov 25 12:04:58 crc kubenswrapper[4706]: I1125 12:04:58.053105 4706 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-sync-v7ftf"] Nov 25 12:04:59 crc kubenswrapper[4706]: I1125 12:04:59.935582 4706 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a3c43e2c-68e2-4f5d-8c64-c9028a967f7f" path="/var/lib/kubelet/pods/a3c43e2c-68e2-4f5d-8c64-c9028a967f7f/volumes" Nov 25 12:05:02 crc kubenswrapper[4706]: I1125 12:05:02.037582 4706 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-create-rs7pp"] Nov 25 12:05:02 crc kubenswrapper[4706]: I1125 12:05:02.047662 4706 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-7ad8-account-create-vg4bf"] Nov 25 12:05:02 crc kubenswrapper[4706]: I1125 12:05:02.056614 4706 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-create-7lvvv"] Nov 25 12:05:02 crc kubenswrapper[4706]: I1125 12:05:02.065293 4706 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-create-rs7pp"] Nov 25 12:05:02 crc kubenswrapper[4706]: I1125 12:05:02.074256 4706 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-create-7lvvv"] Nov 25 12:05:02 crc kubenswrapper[4706]: I1125 12:05:02.083843 4706 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-7ad8-account-create-vg4bf"] Nov 25 12:05:03 crc kubenswrapper[4706]: I1125 12:05:03.046659 4706 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-create-hncd9"] Nov 25 12:05:03 crc kubenswrapper[4706]: I1125 12:05:03.060884 4706 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-30a4-account-create-wpgb6"] Nov 25 12:05:03 crc kubenswrapper[4706]: I1125 12:05:03.069948 4706 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-30a4-account-create-wpgb6"] Nov 25 
12:05:03 crc kubenswrapper[4706]: I1125 12:05:03.078258 4706 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-create-hncd9"] Nov 25 12:05:03 crc kubenswrapper[4706]: I1125 12:05:03.922156 4706 scope.go:117] "RemoveContainer" containerID="0a0bdee99cfe03b615e21edca20e8cd5d2aec43e4e69d2e5c17d3666e93d6426" Nov 25 12:05:03 crc kubenswrapper[4706]: E1125 12:05:03.922596 4706 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dhfpm_openshift-machine-config-operator(0930887a-320c-4506-8c9c-f94d6d64516a)\"" pod="openshift-machine-config-operator/machine-config-daemon-dhfpm" podUID="0930887a-320c-4506-8c9c-f94d6d64516a" Nov 25 12:05:03 crc kubenswrapper[4706]: I1125 12:05:03.935981 4706 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="054fda50-c263-45c4-9bde-2fc9d81c57b1" path="/var/lib/kubelet/pods/054fda50-c263-45c4-9bde-2fc9d81c57b1/volumes" Nov 25 12:05:03 crc kubenswrapper[4706]: I1125 12:05:03.937244 4706 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2048b4c8-b4e2-4961-992e-4ab7104ca1d3" path="/var/lib/kubelet/pods/2048b4c8-b4e2-4961-992e-4ab7104ca1d3/volumes" Nov 25 12:05:03 crc kubenswrapper[4706]: I1125 12:05:03.938461 4706 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4c2d1155-3724-4c94-a5fb-fcf88b53064e" path="/var/lib/kubelet/pods/4c2d1155-3724-4c94-a5fb-fcf88b53064e/volumes" Nov 25 12:05:03 crc kubenswrapper[4706]: I1125 12:05:03.939822 4706 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="562f2b9a-0768-4613-9711-8df28886eb32" path="/var/lib/kubelet/pods/562f2b9a-0768-4613-9711-8df28886eb32/volumes" Nov 25 12:05:03 crc kubenswrapper[4706]: I1125 12:05:03.941840 4706 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a3b54223-dba3-409f-a6dc-fc371e46ab31" 
path="/var/lib/kubelet/pods/a3b54223-dba3-409f-a6dc-fc371e46ab31/volumes" Nov 25 12:05:13 crc kubenswrapper[4706]: I1125 12:05:13.034011 4706 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-d4d1-account-create-lphvh"] Nov 25 12:05:13 crc kubenswrapper[4706]: I1125 12:05:13.042835 4706 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-d4d1-account-create-lphvh"] Nov 25 12:05:13 crc kubenswrapper[4706]: I1125 12:05:13.933023 4706 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="001d7afd-ffff-43e2-8463-3ebe29200b80" path="/var/lib/kubelet/pods/001d7afd-ffff-43e2-8463-3ebe29200b80/volumes" Nov 25 12:05:15 crc kubenswrapper[4706]: I1125 12:05:15.922877 4706 scope.go:117] "RemoveContainer" containerID="0a0bdee99cfe03b615e21edca20e8cd5d2aec43e4e69d2e5c17d3666e93d6426" Nov 25 12:05:15 crc kubenswrapper[4706]: E1125 12:05:15.923797 4706 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dhfpm_openshift-machine-config-operator(0930887a-320c-4506-8c9c-f94d6d64516a)\"" pod="openshift-machine-config-operator/machine-config-daemon-dhfpm" podUID="0930887a-320c-4506-8c9c-f94d6d64516a" Nov 25 12:05:28 crc kubenswrapper[4706]: I1125 12:05:28.922444 4706 scope.go:117] "RemoveContainer" containerID="0a0bdee99cfe03b615e21edca20e8cd5d2aec43e4e69d2e5c17d3666e93d6426" Nov 25 12:05:28 crc kubenswrapper[4706]: E1125 12:05:28.923110 4706 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dhfpm_openshift-machine-config-operator(0930887a-320c-4506-8c9c-f94d6d64516a)\"" pod="openshift-machine-config-operator/machine-config-daemon-dhfpm" 
podUID="0930887a-320c-4506-8c9c-f94d6d64516a" Nov 25 12:05:34 crc kubenswrapper[4706]: I1125 12:05:34.040503 4706 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-sync-r89ww"] Nov 25 12:05:34 crc kubenswrapper[4706]: I1125 12:05:34.053828 4706 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-sync-r89ww"] Nov 25 12:05:34 crc kubenswrapper[4706]: I1125 12:05:34.844917 4706 scope.go:117] "RemoveContainer" containerID="66e6568b2e32dd6e98388f8f63cd51ba450fc0656a9e433cd5c1306c071ae803" Nov 25 12:05:34 crc kubenswrapper[4706]: I1125 12:05:34.903451 4706 scope.go:117] "RemoveContainer" containerID="a9b22d077dc8d7251a770820974f5fca5e31586208ed1fd3433467b82d3ded33" Nov 25 12:05:34 crc kubenswrapper[4706]: I1125 12:05:34.930400 4706 scope.go:117] "RemoveContainer" containerID="4b05b750bd5e156e6419d06cfe9cccb45d24544adbd9d61912c0314da0e76c0a" Nov 25 12:05:34 crc kubenswrapper[4706]: I1125 12:05:34.982723 4706 scope.go:117] "RemoveContainer" containerID="19caabc0e2660fdd5ec42d86887749bbd1c96c6b400d26e5fb5ae61ba61d0e35" Nov 25 12:05:35 crc kubenswrapper[4706]: I1125 12:05:35.037741 4706 scope.go:117] "RemoveContainer" containerID="3d732de07d9f48d070985cccb3531cd141efc1c2c79f1004e80d44efc990f7ce" Nov 25 12:05:35 crc kubenswrapper[4706]: I1125 12:05:35.088947 4706 scope.go:117] "RemoveContainer" containerID="fd47cc12bff940b7738429622128cd1a4a7da6827de28a0cd21b35b4bc4a1a19" Nov 25 12:05:35 crc kubenswrapper[4706]: I1125 12:05:35.145531 4706 scope.go:117] "RemoveContainer" containerID="371f657ce63d0845cb468e81e285d773fc879c04e084353cb247f4bd6451f9e0" Nov 25 12:05:35 crc kubenswrapper[4706]: I1125 12:05:35.169665 4706 scope.go:117] "RemoveContainer" containerID="708280f842bd81c3ef09736c2d734c9f5267b8d7e3526224830848e6d3aed37c" Nov 25 12:05:35 crc kubenswrapper[4706]: I1125 12:05:35.202529 4706 scope.go:117] "RemoveContainer" containerID="9c58be95ca4b624911c56f14e8fc3aa990af582ea2f1f7f42502ceb6656e23da" Nov 25 12:05:35 crc 
kubenswrapper[4706]: I1125 12:05:35.229056 4706 scope.go:117] "RemoveContainer" containerID="b629c01e730bfb5919089131041fb4c64e0ce2e075ff2dbd6f5e5c35d450ba7f" Nov 25 12:05:35 crc kubenswrapper[4706]: I1125 12:05:35.252357 4706 scope.go:117] "RemoveContainer" containerID="f8f41500e05a3bb352954658e334fa9564af44a52176845951cf369e98ab2dfc" Nov 25 12:05:35 crc kubenswrapper[4706]: I1125 12:05:35.934205 4706 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ec71b1d-86a6-4028-959d-6097b0bc6ed2" path="/var/lib/kubelet/pods/3ec71b1d-86a6-4028-959d-6097b0bc6ed2/volumes" Nov 25 12:05:41 crc kubenswrapper[4706]: I1125 12:05:41.930164 4706 scope.go:117] "RemoveContainer" containerID="0a0bdee99cfe03b615e21edca20e8cd5d2aec43e4e69d2e5c17d3666e93d6426" Nov 25 12:05:41 crc kubenswrapper[4706]: E1125 12:05:41.931072 4706 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dhfpm_openshift-machine-config-operator(0930887a-320c-4506-8c9c-f94d6d64516a)\"" pod="openshift-machine-config-operator/machine-config-daemon-dhfpm" podUID="0930887a-320c-4506-8c9c-f94d6d64516a" Nov 25 12:05:56 crc kubenswrapper[4706]: I1125 12:05:56.922467 4706 scope.go:117] "RemoveContainer" containerID="0a0bdee99cfe03b615e21edca20e8cd5d2aec43e4e69d2e5c17d3666e93d6426" Nov 25 12:05:56 crc kubenswrapper[4706]: E1125 12:05:56.923950 4706 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dhfpm_openshift-machine-config-operator(0930887a-320c-4506-8c9c-f94d6d64516a)\"" pod="openshift-machine-config-operator/machine-config-daemon-dhfpm" podUID="0930887a-320c-4506-8c9c-f94d6d64516a" Nov 25 12:06:09 crc kubenswrapper[4706]: I1125 12:06:09.922770 4706 
scope.go:117] "RemoveContainer" containerID="0a0bdee99cfe03b615e21edca20e8cd5d2aec43e4e69d2e5c17d3666e93d6426" Nov 25 12:06:09 crc kubenswrapper[4706]: E1125 12:06:09.923741 4706 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dhfpm_openshift-machine-config-operator(0930887a-320c-4506-8c9c-f94d6d64516a)\"" pod="openshift-machine-config-operator/machine-config-daemon-dhfpm" podUID="0930887a-320c-4506-8c9c-f94d6d64516a" Nov 25 12:06:20 crc kubenswrapper[4706]: I1125 12:06:20.047646 4706 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-xbn9h"] Nov 25 12:06:20 crc kubenswrapper[4706]: I1125 12:06:20.058546 4706 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-xbn9h"] Nov 25 12:06:20 crc kubenswrapper[4706]: I1125 12:06:20.068620 4706 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-sync-ntkr9"] Nov 25 12:06:20 crc kubenswrapper[4706]: I1125 12:06:20.079871 4706 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-sync-ntkr9"] Nov 25 12:06:21 crc kubenswrapper[4706]: I1125 12:06:21.933769 4706 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4586fb7b-8269-4dca-87d4-f3c66518b999" path="/var/lib/kubelet/pods/4586fb7b-8269-4dca-87d4-f3c66518b999/volumes" Nov 25 12:06:21 crc kubenswrapper[4706]: I1125 12:06:21.934982 4706 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fff3e0d5-0608-4e15-9a92-376b6a2b7d17" path="/var/lib/kubelet/pods/fff3e0d5-0608-4e15-9a92-376b6a2b7d17/volumes" Nov 25 12:06:23 crc kubenswrapper[4706]: I1125 12:06:23.922713 4706 scope.go:117] "RemoveContainer" containerID="0a0bdee99cfe03b615e21edca20e8cd5d2aec43e4e69d2e5c17d3666e93d6426" Nov 25 12:06:23 crc kubenswrapper[4706]: E1125 12:06:23.923173 4706 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dhfpm_openshift-machine-config-operator(0930887a-320c-4506-8c9c-f94d6d64516a)\"" pod="openshift-machine-config-operator/machine-config-daemon-dhfpm" podUID="0930887a-320c-4506-8c9c-f94d6d64516a" Nov 25 12:06:28 crc kubenswrapper[4706]: I1125 12:06:28.029499 4706 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-sync-v6lvb"] Nov 25 12:06:28 crc kubenswrapper[4706]: I1125 12:06:28.038422 4706 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-sync-v6lvb"] Nov 25 12:06:29 crc kubenswrapper[4706]: I1125 12:06:29.987036 4706 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="08ef6ec0-ba09-40a2-94d0-a1ddbba8644a" path="/var/lib/kubelet/pods/08ef6ec0-ba09-40a2-94d0-a1ddbba8644a/volumes" Nov 25 12:06:31 crc kubenswrapper[4706]: I1125 12:06:31.037278 4706 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-sync-fd7sf"] Nov 25 12:06:31 crc kubenswrapper[4706]: I1125 12:06:31.046491 4706 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-sync-hdbbw"] Nov 25 12:06:31 crc kubenswrapper[4706]: I1125 12:06:31.057505 4706 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-sync-hdbbw"] Nov 25 12:06:31 crc kubenswrapper[4706]: I1125 12:06:31.068273 4706 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-sync-fd7sf"] Nov 25 12:06:31 crc kubenswrapper[4706]: I1125 12:06:31.940752 4706 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="27e5b2d0-6fcf-4fb5-8bc4-e086370f5eaf" path="/var/lib/kubelet/pods/27e5b2d0-6fcf-4fb5-8bc4-e086370f5eaf/volumes" Nov 25 12:06:31 crc kubenswrapper[4706]: I1125 12:06:31.941627 4706 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="424f303d-41b7-4fd6-be4a-017148ed95da" path="/var/lib/kubelet/pods/424f303d-41b7-4fd6-be4a-017148ed95da/volumes" Nov 25 12:06:35 crc kubenswrapper[4706]: I1125 12:06:35.460438 4706 scope.go:117] "RemoveContainer" containerID="797c773a68a2cefa511a2d83c42ec2cf0c6e8966351b19ccb7c9050e4a68b766" Nov 25 12:06:35 crc kubenswrapper[4706]: I1125 12:06:35.494442 4706 scope.go:117] "RemoveContainer" containerID="6b810764d35ead1f050b80c6c6624b912e1e9a1ea6ace0dac10af543213a2552" Nov 25 12:06:35 crc kubenswrapper[4706]: I1125 12:06:35.558739 4706 scope.go:117] "RemoveContainer" containerID="69b75dc8ced52c1b496484cab28676106b2584ed034f5af05537be0814a73094" Nov 25 12:06:35 crc kubenswrapper[4706]: I1125 12:06:35.606590 4706 scope.go:117] "RemoveContainer" containerID="b8cd4f92181148c7007b306dbbc97580d58c985b6efadc9a9ba7e404965311ab" Nov 25 12:06:35 crc kubenswrapper[4706]: I1125 12:06:35.648807 4706 scope.go:117] "RemoveContainer" containerID="5d06646f2e40933938174b706f1cbfb7279ba1f4da52a991d69893ade768872e" Nov 25 12:06:35 crc kubenswrapper[4706]: I1125 12:06:35.696886 4706 scope.go:117] "RemoveContainer" containerID="ed658060da60348d51178754a8fc3e5be804e83ded14e615faea142e1c49e58d" Nov 25 12:06:35 crc kubenswrapper[4706]: I1125 12:06:35.923264 4706 scope.go:117] "RemoveContainer" containerID="0a0bdee99cfe03b615e21edca20e8cd5d2aec43e4e69d2e5c17d3666e93d6426" Nov 25 12:06:35 crc kubenswrapper[4706]: E1125 12:06:35.923585 4706 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dhfpm_openshift-machine-config-operator(0930887a-320c-4506-8c9c-f94d6d64516a)\"" pod="openshift-machine-config-operator/machine-config-daemon-dhfpm" podUID="0930887a-320c-4506-8c9c-f94d6d64516a" Nov 25 12:06:40 crc kubenswrapper[4706]: I1125 12:06:40.068861 4706 generic.go:334] "Generic (PLEG): container finished" 
podID="50dff0a2-b50d-43ee-8951-e49958b3cd5a" containerID="fbbcbf45f6e03ca44e598d7f255a31132d74782780346e4589288ab7db7b3bf4" exitCode=0 Nov 25 12:06:40 crc kubenswrapper[4706]: I1125 12:06:40.068960 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-ntv4r" event={"ID":"50dff0a2-b50d-43ee-8951-e49958b3cd5a","Type":"ContainerDied","Data":"fbbcbf45f6e03ca44e598d7f255a31132d74782780346e4589288ab7db7b3bf4"} Nov 25 12:06:41 crc kubenswrapper[4706]: I1125 12:06:41.500470 4706 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-ntv4r" Nov 25 12:06:41 crc kubenswrapper[4706]: I1125 12:06:41.597934 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/50dff0a2-b50d-43ee-8951-e49958b3cd5a-bootstrap-combined-ca-bundle\") pod \"50dff0a2-b50d-43ee-8951-e49958b3cd5a\" (UID: \"50dff0a2-b50d-43ee-8951-e49958b3cd5a\") " Nov 25 12:06:41 crc kubenswrapper[4706]: I1125 12:06:41.597967 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/50dff0a2-b50d-43ee-8951-e49958b3cd5a-ssh-key\") pod \"50dff0a2-b50d-43ee-8951-e49958b3cd5a\" (UID: \"50dff0a2-b50d-43ee-8951-e49958b3cd5a\") " Nov 25 12:06:41 crc kubenswrapper[4706]: I1125 12:06:41.597990 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bz4zk\" (UniqueName: \"kubernetes.io/projected/50dff0a2-b50d-43ee-8951-e49958b3cd5a-kube-api-access-bz4zk\") pod \"50dff0a2-b50d-43ee-8951-e49958b3cd5a\" (UID: \"50dff0a2-b50d-43ee-8951-e49958b3cd5a\") " Nov 25 12:06:41 crc kubenswrapper[4706]: I1125 12:06:41.598044 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: 
\"kubernetes.io/secret/50dff0a2-b50d-43ee-8951-e49958b3cd5a-inventory\") pod \"50dff0a2-b50d-43ee-8951-e49958b3cd5a\" (UID: \"50dff0a2-b50d-43ee-8951-e49958b3cd5a\") " Nov 25 12:06:41 crc kubenswrapper[4706]: I1125 12:06:41.603411 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/50dff0a2-b50d-43ee-8951-e49958b3cd5a-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "50dff0a2-b50d-43ee-8951-e49958b3cd5a" (UID: "50dff0a2-b50d-43ee-8951-e49958b3cd5a"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 12:06:41 crc kubenswrapper[4706]: I1125 12:06:41.604708 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/50dff0a2-b50d-43ee-8951-e49958b3cd5a-kube-api-access-bz4zk" (OuterVolumeSpecName: "kube-api-access-bz4zk") pod "50dff0a2-b50d-43ee-8951-e49958b3cd5a" (UID: "50dff0a2-b50d-43ee-8951-e49958b3cd5a"). InnerVolumeSpecName "kube-api-access-bz4zk". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 12:06:41 crc kubenswrapper[4706]: I1125 12:06:41.627909 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/50dff0a2-b50d-43ee-8951-e49958b3cd5a-inventory" (OuterVolumeSpecName: "inventory") pod "50dff0a2-b50d-43ee-8951-e49958b3cd5a" (UID: "50dff0a2-b50d-43ee-8951-e49958b3cd5a"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 12:06:41 crc kubenswrapper[4706]: I1125 12:06:41.629091 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/50dff0a2-b50d-43ee-8951-e49958b3cd5a-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "50dff0a2-b50d-43ee-8951-e49958b3cd5a" (UID: "50dff0a2-b50d-43ee-8951-e49958b3cd5a"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 12:06:41 crc kubenswrapper[4706]: I1125 12:06:41.700131 4706 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bz4zk\" (UniqueName: \"kubernetes.io/projected/50dff0a2-b50d-43ee-8951-e49958b3cd5a-kube-api-access-bz4zk\") on node \"crc\" DevicePath \"\"" Nov 25 12:06:41 crc kubenswrapper[4706]: I1125 12:06:41.700164 4706 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/50dff0a2-b50d-43ee-8951-e49958b3cd5a-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 12:06:41 crc kubenswrapper[4706]: I1125 12:06:41.700173 4706 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/50dff0a2-b50d-43ee-8951-e49958b3cd5a-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 25 12:06:41 crc kubenswrapper[4706]: I1125 12:06:41.700184 4706 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/50dff0a2-b50d-43ee-8951-e49958b3cd5a-inventory\") on node \"crc\" DevicePath \"\"" Nov 25 12:06:42 crc kubenswrapper[4706]: I1125 12:06:42.091716 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-ntv4r" event={"ID":"50dff0a2-b50d-43ee-8951-e49958b3cd5a","Type":"ContainerDied","Data":"42affd4c74b9374c629871fb5ba5eb45a5822ea77c8aabbe559af6d110fd680a"} Nov 25 12:06:42 crc kubenswrapper[4706]: I1125 12:06:42.091772 4706 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="42affd4c74b9374c629871fb5ba5eb45a5822ea77c8aabbe559af6d110fd680a" Nov 25 12:06:42 crc kubenswrapper[4706]: I1125 12:06:42.091788 4706 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-ntv4r" Nov 25 12:06:42 crc kubenswrapper[4706]: I1125 12:06:42.165487 4706 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-9hvc8"] Nov 25 12:06:42 crc kubenswrapper[4706]: E1125 12:06:42.166039 4706 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="50dff0a2-b50d-43ee-8951-e49958b3cd5a" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Nov 25 12:06:42 crc kubenswrapper[4706]: I1125 12:06:42.166067 4706 state_mem.go:107] "Deleted CPUSet assignment" podUID="50dff0a2-b50d-43ee-8951-e49958b3cd5a" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Nov 25 12:06:42 crc kubenswrapper[4706]: I1125 12:06:42.166341 4706 memory_manager.go:354] "RemoveStaleState removing state" podUID="50dff0a2-b50d-43ee-8951-e49958b3cd5a" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Nov 25 12:06:42 crc kubenswrapper[4706]: I1125 12:06:42.167125 4706 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-9hvc8" Nov 25 12:06:42 crc kubenswrapper[4706]: I1125 12:06:42.169671 4706 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 25 12:06:42 crc kubenswrapper[4706]: I1125 12:06:42.169939 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 25 12:06:42 crc kubenswrapper[4706]: I1125 12:06:42.170115 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-r8qqp" Nov 25 12:06:42 crc kubenswrapper[4706]: I1125 12:06:42.171092 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 25 12:06:42 crc kubenswrapper[4706]: I1125 12:06:42.176094 4706 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-9hvc8"] Nov 25 12:06:42 crc kubenswrapper[4706]: I1125 12:06:42.209847 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c905bf42-3156-4c1f-8f93-4ab4c0141fdd-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-9hvc8\" (UID: \"c905bf42-3156-4c1f-8f93-4ab4c0141fdd\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-9hvc8" Nov 25 12:06:42 crc kubenswrapper[4706]: I1125 12:06:42.209984 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kc9dd\" (UniqueName: \"kubernetes.io/projected/c905bf42-3156-4c1f-8f93-4ab4c0141fdd-kube-api-access-kc9dd\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-9hvc8\" (UID: \"c905bf42-3156-4c1f-8f93-4ab4c0141fdd\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-9hvc8" Nov 25 12:06:42 crc kubenswrapper[4706]: I1125 
12:06:42.210047 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/c905bf42-3156-4c1f-8f93-4ab4c0141fdd-ssh-key\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-9hvc8\" (UID: \"c905bf42-3156-4c1f-8f93-4ab4c0141fdd\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-9hvc8" Nov 25 12:06:42 crc kubenswrapper[4706]: I1125 12:06:42.311863 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c905bf42-3156-4c1f-8f93-4ab4c0141fdd-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-9hvc8\" (UID: \"c905bf42-3156-4c1f-8f93-4ab4c0141fdd\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-9hvc8" Nov 25 12:06:42 crc kubenswrapper[4706]: I1125 12:06:42.311930 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kc9dd\" (UniqueName: \"kubernetes.io/projected/c905bf42-3156-4c1f-8f93-4ab4c0141fdd-kube-api-access-kc9dd\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-9hvc8\" (UID: \"c905bf42-3156-4c1f-8f93-4ab4c0141fdd\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-9hvc8" Nov 25 12:06:42 crc kubenswrapper[4706]: I1125 12:06:42.311964 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/c905bf42-3156-4c1f-8f93-4ab4c0141fdd-ssh-key\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-9hvc8\" (UID: \"c905bf42-3156-4c1f-8f93-4ab4c0141fdd\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-9hvc8" Nov 25 12:06:42 crc kubenswrapper[4706]: I1125 12:06:42.315775 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c905bf42-3156-4c1f-8f93-4ab4c0141fdd-inventory\") pod 
\"download-cache-edpm-deployment-openstack-edpm-ipam-9hvc8\" (UID: \"c905bf42-3156-4c1f-8f93-4ab4c0141fdd\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-9hvc8" Nov 25 12:06:42 crc kubenswrapper[4706]: I1125 12:06:42.315833 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/c905bf42-3156-4c1f-8f93-4ab4c0141fdd-ssh-key\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-9hvc8\" (UID: \"c905bf42-3156-4c1f-8f93-4ab4c0141fdd\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-9hvc8" Nov 25 12:06:42 crc kubenswrapper[4706]: I1125 12:06:42.331494 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kc9dd\" (UniqueName: \"kubernetes.io/projected/c905bf42-3156-4c1f-8f93-4ab4c0141fdd-kube-api-access-kc9dd\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-9hvc8\" (UID: \"c905bf42-3156-4c1f-8f93-4ab4c0141fdd\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-9hvc8" Nov 25 12:06:42 crc kubenswrapper[4706]: I1125 12:06:42.481770 4706 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-9hvc8" Nov 25 12:06:42 crc kubenswrapper[4706]: I1125 12:06:42.997488 4706 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-9hvc8"] Nov 25 12:06:43 crc kubenswrapper[4706]: I1125 12:06:43.005845 4706 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 25 12:06:43 crc kubenswrapper[4706]: I1125 12:06:43.101623 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-9hvc8" event={"ID":"c905bf42-3156-4c1f-8f93-4ab4c0141fdd","Type":"ContainerStarted","Data":"dfc79f079dee7a3d40efb535ebb1e1908ee78a11c0c37639f5804d792092b1c1"} Nov 25 12:06:44 crc kubenswrapper[4706]: I1125 12:06:44.111159 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-9hvc8" event={"ID":"c905bf42-3156-4c1f-8f93-4ab4c0141fdd","Type":"ContainerStarted","Data":"f592033c5d3008da921509fffcbea2514744b18e00c915786486225e058d2c1d"} Nov 25 12:06:44 crc kubenswrapper[4706]: I1125 12:06:44.130841 4706 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-9hvc8" podStartSLOduration=1.490117524 podStartE2EDuration="2.130823515s" podCreationTimestamp="2025-11-25 12:06:42 +0000 UTC" firstStartedPulling="2025-11-25 12:06:43.005561628 +0000 UTC m=+1811.920119009" lastFinishedPulling="2025-11-25 12:06:43.646267619 +0000 UTC m=+1812.560825000" observedRunningTime="2025-11-25 12:06:44.125001878 +0000 UTC m=+1813.039559259" watchObservedRunningTime="2025-11-25 12:06:44.130823515 +0000 UTC m=+1813.045380896" Nov 25 12:06:46 crc kubenswrapper[4706]: I1125 12:06:46.921880 4706 scope.go:117] "RemoveContainer" containerID="0a0bdee99cfe03b615e21edca20e8cd5d2aec43e4e69d2e5c17d3666e93d6426" Nov 25 12:06:46 crc 
kubenswrapper[4706]: E1125 12:06:46.922173 4706 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dhfpm_openshift-machine-config-operator(0930887a-320c-4506-8c9c-f94d6d64516a)\"" pod="openshift-machine-config-operator/machine-config-daemon-dhfpm" podUID="0930887a-320c-4506-8c9c-f94d6d64516a" Nov 25 12:06:59 crc kubenswrapper[4706]: I1125 12:06:59.923571 4706 scope.go:117] "RemoveContainer" containerID="0a0bdee99cfe03b615e21edca20e8cd5d2aec43e4e69d2e5c17d3666e93d6426" Nov 25 12:06:59 crc kubenswrapper[4706]: E1125 12:06:59.924384 4706 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dhfpm_openshift-machine-config-operator(0930887a-320c-4506-8c9c-f94d6d64516a)\"" pod="openshift-machine-config-operator/machine-config-daemon-dhfpm" podUID="0930887a-320c-4506-8c9c-f94d6d64516a" Nov 25 12:07:14 crc kubenswrapper[4706]: I1125 12:07:14.922558 4706 scope.go:117] "RemoveContainer" containerID="0a0bdee99cfe03b615e21edca20e8cd5d2aec43e4e69d2e5c17d3666e93d6426" Nov 25 12:07:14 crc kubenswrapper[4706]: E1125 12:07:14.923346 4706 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dhfpm_openshift-machine-config-operator(0930887a-320c-4506-8c9c-f94d6d64516a)\"" pod="openshift-machine-config-operator/machine-config-daemon-dhfpm" podUID="0930887a-320c-4506-8c9c-f94d6d64516a" Nov 25 12:07:25 crc kubenswrapper[4706]: I1125 12:07:25.922230 4706 scope.go:117] "RemoveContainer" containerID="0a0bdee99cfe03b615e21edca20e8cd5d2aec43e4e69d2e5c17d3666e93d6426" Nov 
25 12:07:25 crc kubenswrapper[4706]: E1125 12:07:25.923344 4706 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dhfpm_openshift-machine-config-operator(0930887a-320c-4506-8c9c-f94d6d64516a)\"" pod="openshift-machine-config-operator/machine-config-daemon-dhfpm" podUID="0930887a-320c-4506-8c9c-f94d6d64516a" Nov 25 12:07:26 crc kubenswrapper[4706]: I1125 12:07:26.066514 4706 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-db-create-ctmr9"] Nov 25 12:07:26 crc kubenswrapper[4706]: I1125 12:07:26.078174 4706 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-db-create-ctmr9"] Nov 25 12:07:27 crc kubenswrapper[4706]: I1125 12:07:27.034769 4706 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-7393-account-create-9cnk4"] Nov 25 12:07:27 crc kubenswrapper[4706]: I1125 12:07:27.048320 4706 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-db-create-p4np9"] Nov 25 12:07:27 crc kubenswrapper[4706]: I1125 12:07:27.064184 4706 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-db-create-j8qcn"] Nov 25 12:07:27 crc kubenswrapper[4706]: I1125 12:07:27.077894 4706 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-c6da-account-create-p9tnk"] Nov 25 12:07:27 crc kubenswrapper[4706]: I1125 12:07:27.092750 4706 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-db-create-j8qcn"] Nov 25 12:07:27 crc kubenswrapper[4706]: I1125 12:07:27.104867 4706 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-db-create-p4np9"] Nov 25 12:07:27 crc kubenswrapper[4706]: I1125 12:07:27.115376 4706 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-7393-account-create-9cnk4"] Nov 25 12:07:27 crc 
kubenswrapper[4706]: I1125 12:07:27.125098 4706 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-c6da-account-create-p9tnk"] Nov 25 12:07:27 crc kubenswrapper[4706]: I1125 12:07:27.936887 4706 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="030673ef-ec79-4f19-8f0e-765d6918cfc4" path="/var/lib/kubelet/pods/030673ef-ec79-4f19-8f0e-765d6918cfc4/volumes" Nov 25 12:07:27 crc kubenswrapper[4706]: I1125 12:07:27.937489 4706 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2b85308a-ef27-494f-9bd3-b06c25118779" path="/var/lib/kubelet/pods/2b85308a-ef27-494f-9bd3-b06c25118779/volumes" Nov 25 12:07:27 crc kubenswrapper[4706]: I1125 12:07:27.938053 4706 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5cf51224-9407-44c8-805f-fcf18fa531a3" path="/var/lib/kubelet/pods/5cf51224-9407-44c8-805f-fcf18fa531a3/volumes" Nov 25 12:07:27 crc kubenswrapper[4706]: I1125 12:07:27.938617 4706 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="64c51a3f-220f-4d41-a8ae-996c5d65da6a" path="/var/lib/kubelet/pods/64c51a3f-220f-4d41-a8ae-996c5d65da6a/volumes" Nov 25 12:07:27 crc kubenswrapper[4706]: I1125 12:07:27.939668 4706 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ed5f6b7c-b239-4aba-8c85-0ffdd29622da" path="/var/lib/kubelet/pods/ed5f6b7c-b239-4aba-8c85-0ffdd29622da/volumes" Nov 25 12:07:28 crc kubenswrapper[4706]: I1125 12:07:28.031678 4706 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-c017-account-create-lsfhl"] Nov 25 12:07:28 crc kubenswrapper[4706]: I1125 12:07:28.042923 4706 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-c017-account-create-lsfhl"] Nov 25 12:07:29 crc kubenswrapper[4706]: I1125 12:07:29.933486 4706 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="acb4725a-1a34-4a3a-b578-7bcf44ff0bef" path="/var/lib/kubelet/pods/acb4725a-1a34-4a3a-b578-7bcf44ff0bef/volumes" Nov 
25 12:07:35 crc kubenswrapper[4706]: I1125 12:07:35.861027 4706 scope.go:117] "RemoveContainer" containerID="a34f9431fa22b2dc3c7b7f13ce3cbec17941009dd68dc7fea7df7ae915f18e01" Nov 25 12:07:35 crc kubenswrapper[4706]: I1125 12:07:35.941551 4706 scope.go:117] "RemoveContainer" containerID="e7d3108737da713897d8ab0532f1849a9ad5b4268db2f845f4aa68e039fae815" Nov 25 12:07:36 crc kubenswrapper[4706]: I1125 12:07:36.055735 4706 scope.go:117] "RemoveContainer" containerID="e901939ebf66885634d91216cfaa95a1b9d4c974734e90d8c89c16138110de14" Nov 25 12:07:36 crc kubenswrapper[4706]: I1125 12:07:36.086761 4706 scope.go:117] "RemoveContainer" containerID="219047ea03fefd7c8435a03c86efcef55b6d92b6b896bfc95e1ef026d7e2a4a4" Nov 25 12:07:36 crc kubenswrapper[4706]: I1125 12:07:36.175668 4706 scope.go:117] "RemoveContainer" containerID="88f1e76352714ce8c872235ff5a399be70da0ef7ea1a185268b10a6a9af56bf5" Nov 25 12:07:36 crc kubenswrapper[4706]: I1125 12:07:36.202079 4706 scope.go:117] "RemoveContainer" containerID="0016a1025cca850a91bb34fc6f50a9212f3a65a4f5be1bbde437a244faffa0de" Nov 25 12:07:38 crc kubenswrapper[4706]: I1125 12:07:38.922082 4706 scope.go:117] "RemoveContainer" containerID="0a0bdee99cfe03b615e21edca20e8cd5d2aec43e4e69d2e5c17d3666e93d6426" Nov 25 12:07:38 crc kubenswrapper[4706]: E1125 12:07:38.922915 4706 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dhfpm_openshift-machine-config-operator(0930887a-320c-4506-8c9c-f94d6d64516a)\"" pod="openshift-machine-config-operator/machine-config-daemon-dhfpm" podUID="0930887a-320c-4506-8c9c-f94d6d64516a" Nov 25 12:07:53 crc kubenswrapper[4706]: I1125 12:07:53.924751 4706 scope.go:117] "RemoveContainer" containerID="0a0bdee99cfe03b615e21edca20e8cd5d2aec43e4e69d2e5c17d3666e93d6426" Nov 25 12:07:53 crc kubenswrapper[4706]: E1125 12:07:53.927162 4706 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dhfpm_openshift-machine-config-operator(0930887a-320c-4506-8c9c-f94d6d64516a)\"" pod="openshift-machine-config-operator/machine-config-daemon-dhfpm" podUID="0930887a-320c-4506-8c9c-f94d6d64516a" Nov 25 12:07:57 crc kubenswrapper[4706]: I1125 12:07:57.225357 4706 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-zbtll"] Nov 25 12:07:57 crc kubenswrapper[4706]: I1125 12:07:57.237086 4706 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-zbtll"] Nov 25 12:07:57 crc kubenswrapper[4706]: I1125 12:07:57.943715 4706 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="560816f0-4040-43a0-8a73-84500a0aad9c" path="/var/lib/kubelet/pods/560816f0-4040-43a0-8a73-84500a0aad9c/volumes" Nov 25 12:08:04 crc kubenswrapper[4706]: I1125 12:08:04.922695 4706 scope.go:117] "RemoveContainer" containerID="0a0bdee99cfe03b615e21edca20e8cd5d2aec43e4e69d2e5c17d3666e93d6426" Nov 25 12:08:06 crc kubenswrapper[4706]: I1125 12:08:06.017758 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-dhfpm" event={"ID":"0930887a-320c-4506-8c9c-f94d6d64516a","Type":"ContainerStarted","Data":"c3decbb72f251ff0268699ac4622382fd9d08b45caec2fd0b673ab3aae749803"} Nov 25 12:08:19 crc kubenswrapper[4706]: I1125 12:08:19.039001 4706 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-cell-mapping-cdzkl"] Nov 25 12:08:19 crc kubenswrapper[4706]: I1125 12:08:19.076597 4706 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-cell-mapping-cdzkl"] Nov 25 12:08:19 crc kubenswrapper[4706]: I1125 12:08:19.931985 4706 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="b100f787-7064-4cac-b5dc-0267ee51f1aa" path="/var/lib/kubelet/pods/b100f787-7064-4cac-b5dc-0267ee51f1aa/volumes" Nov 25 12:08:22 crc kubenswrapper[4706]: I1125 12:08:22.040923 4706 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-87sfg"] Nov 25 12:08:22 crc kubenswrapper[4706]: I1125 12:08:22.052046 4706 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-87sfg"] Nov 25 12:08:23 crc kubenswrapper[4706]: I1125 12:08:23.946950 4706 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ca66dab3-01b2-4fac-b6c9-c09b2704a670" path="/var/lib/kubelet/pods/ca66dab3-01b2-4fac-b6c9-c09b2704a670/volumes" Nov 25 12:08:36 crc kubenswrapper[4706]: I1125 12:08:36.378702 4706 scope.go:117] "RemoveContainer" containerID="e978fc4eb599d23fb4665edd61038317df66488740751866f98e330b61768338" Nov 25 12:08:36 crc kubenswrapper[4706]: I1125 12:08:36.408094 4706 scope.go:117] "RemoveContainer" containerID="4b542dd8549bbc7790762bc2bc2bdf7eeb3699a8d3a84560f1173658325b8b4a" Nov 25 12:08:36 crc kubenswrapper[4706]: I1125 12:08:36.473475 4706 scope.go:117] "RemoveContainer" containerID="3d698239778f79ff43be39ff91d4e11623e9e17b73d56d1ddfdf78cc933d6ca5" Nov 25 12:08:36 crc kubenswrapper[4706]: I1125 12:08:36.535165 4706 scope.go:117] "RemoveContainer" containerID="d5fd2f826df8fa3a76559d110ce0854768023982e4301ba7497f66b407f6cf6d" Nov 25 12:08:36 crc kubenswrapper[4706]: I1125 12:08:36.566173 4706 scope.go:117] "RemoveContainer" containerID="9b1df5c4ecad9cb3a75eba378c36c215fa265f87ab49ad1f7014d8f2630e77ff" Nov 25 12:08:36 crc kubenswrapper[4706]: I1125 12:08:36.604707 4706 scope.go:117] "RemoveContainer" containerID="3a709eace25238e86be74d86326ea1f6b1bf19eb76991c148775350e05599dbd" Nov 25 12:09:06 crc kubenswrapper[4706]: I1125 12:09:06.054332 4706 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-cell-mapping-8vfzt"] Nov 25 12:09:06 crc kubenswrapper[4706]: I1125 
12:09:06.067254 4706 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-cell-mapping-8vfzt"] Nov 25 12:09:07 crc kubenswrapper[4706]: I1125 12:09:07.934621 4706 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0c8ef478-be1a-4f0b-a052-aa2a2ad96cf0" path="/var/lib/kubelet/pods/0c8ef478-be1a-4f0b-a052-aa2a2ad96cf0/volumes" Nov 25 12:09:10 crc kubenswrapper[4706]: I1125 12:09:10.708610 4706 generic.go:334] "Generic (PLEG): container finished" podID="c905bf42-3156-4c1f-8f93-4ab4c0141fdd" containerID="f592033c5d3008da921509fffcbea2514744b18e00c915786486225e058d2c1d" exitCode=0 Nov 25 12:09:10 crc kubenswrapper[4706]: I1125 12:09:10.708755 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-9hvc8" event={"ID":"c905bf42-3156-4c1f-8f93-4ab4c0141fdd","Type":"ContainerDied","Data":"f592033c5d3008da921509fffcbea2514744b18e00c915786486225e058d2c1d"} Nov 25 12:09:12 crc kubenswrapper[4706]: I1125 12:09:12.101432 4706 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-9hvc8" Nov 25 12:09:12 crc kubenswrapper[4706]: I1125 12:09:12.198197 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c905bf42-3156-4c1f-8f93-4ab4c0141fdd-inventory\") pod \"c905bf42-3156-4c1f-8f93-4ab4c0141fdd\" (UID: \"c905bf42-3156-4c1f-8f93-4ab4c0141fdd\") " Nov 25 12:09:12 crc kubenswrapper[4706]: I1125 12:09:12.198538 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kc9dd\" (UniqueName: \"kubernetes.io/projected/c905bf42-3156-4c1f-8f93-4ab4c0141fdd-kube-api-access-kc9dd\") pod \"c905bf42-3156-4c1f-8f93-4ab4c0141fdd\" (UID: \"c905bf42-3156-4c1f-8f93-4ab4c0141fdd\") " Nov 25 12:09:12 crc kubenswrapper[4706]: I1125 12:09:12.198582 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/c905bf42-3156-4c1f-8f93-4ab4c0141fdd-ssh-key\") pod \"c905bf42-3156-4c1f-8f93-4ab4c0141fdd\" (UID: \"c905bf42-3156-4c1f-8f93-4ab4c0141fdd\") " Nov 25 12:09:12 crc kubenswrapper[4706]: I1125 12:09:12.203563 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c905bf42-3156-4c1f-8f93-4ab4c0141fdd-kube-api-access-kc9dd" (OuterVolumeSpecName: "kube-api-access-kc9dd") pod "c905bf42-3156-4c1f-8f93-4ab4c0141fdd" (UID: "c905bf42-3156-4c1f-8f93-4ab4c0141fdd"). InnerVolumeSpecName "kube-api-access-kc9dd". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 12:09:12 crc kubenswrapper[4706]: I1125 12:09:12.226036 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c905bf42-3156-4c1f-8f93-4ab4c0141fdd-inventory" (OuterVolumeSpecName: "inventory") pod "c905bf42-3156-4c1f-8f93-4ab4c0141fdd" (UID: "c905bf42-3156-4c1f-8f93-4ab4c0141fdd"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 12:09:12 crc kubenswrapper[4706]: I1125 12:09:12.228699 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c905bf42-3156-4c1f-8f93-4ab4c0141fdd-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "c905bf42-3156-4c1f-8f93-4ab4c0141fdd" (UID: "c905bf42-3156-4c1f-8f93-4ab4c0141fdd"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 12:09:12 crc kubenswrapper[4706]: I1125 12:09:12.301057 4706 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kc9dd\" (UniqueName: \"kubernetes.io/projected/c905bf42-3156-4c1f-8f93-4ab4c0141fdd-kube-api-access-kc9dd\") on node \"crc\" DevicePath \"\"" Nov 25 12:09:12 crc kubenswrapper[4706]: I1125 12:09:12.301100 4706 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/c905bf42-3156-4c1f-8f93-4ab4c0141fdd-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 25 12:09:12 crc kubenswrapper[4706]: I1125 12:09:12.301117 4706 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c905bf42-3156-4c1f-8f93-4ab4c0141fdd-inventory\") on node \"crc\" DevicePath \"\"" Nov 25 12:09:12 crc kubenswrapper[4706]: I1125 12:09:12.730404 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-9hvc8" event={"ID":"c905bf42-3156-4c1f-8f93-4ab4c0141fdd","Type":"ContainerDied","Data":"dfc79f079dee7a3d40efb535ebb1e1908ee78a11c0c37639f5804d792092b1c1"} Nov 25 12:09:12 crc kubenswrapper[4706]: I1125 12:09:12.730716 4706 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="dfc79f079dee7a3d40efb535ebb1e1908ee78a11c0c37639f5804d792092b1c1" Nov 25 12:09:12 crc kubenswrapper[4706]: I1125 12:09:12.730464 4706 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-9hvc8" Nov 25 12:09:12 crc kubenswrapper[4706]: I1125 12:09:12.823265 4706 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-wtp98"] Nov 25 12:09:12 crc kubenswrapper[4706]: E1125 12:09:12.823727 4706 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c905bf42-3156-4c1f-8f93-4ab4c0141fdd" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Nov 25 12:09:12 crc kubenswrapper[4706]: I1125 12:09:12.823749 4706 state_mem.go:107] "Deleted CPUSet assignment" podUID="c905bf42-3156-4c1f-8f93-4ab4c0141fdd" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Nov 25 12:09:12 crc kubenswrapper[4706]: I1125 12:09:12.823988 4706 memory_manager.go:354] "RemoveStaleState removing state" podUID="c905bf42-3156-4c1f-8f93-4ab4c0141fdd" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Nov 25 12:09:12 crc kubenswrapper[4706]: I1125 12:09:12.824821 4706 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-wtp98" Nov 25 12:09:12 crc kubenswrapper[4706]: I1125 12:09:12.831080 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 25 12:09:12 crc kubenswrapper[4706]: I1125 12:09:12.831103 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 25 12:09:12 crc kubenswrapper[4706]: I1125 12:09:12.831341 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-r8qqp" Nov 25 12:09:12 crc kubenswrapper[4706]: I1125 12:09:12.831386 4706 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 25 12:09:12 crc kubenswrapper[4706]: I1125 12:09:12.843766 4706 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-wtp98"] Nov 25 12:09:13 crc kubenswrapper[4706]: I1125 12:09:13.015210 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/81138548-0b1d-43b6-af7c-fdf31598a28d-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-wtp98\" (UID: \"81138548-0b1d-43b6-af7c-fdf31598a28d\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-wtp98" Nov 25 12:09:13 crc kubenswrapper[4706]: I1125 12:09:13.015744 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/81138548-0b1d-43b6-af7c-fdf31598a28d-ssh-key\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-wtp98\" (UID: \"81138548-0b1d-43b6-af7c-fdf31598a28d\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-wtp98" Nov 25 12:09:13 crc kubenswrapper[4706]: I1125 12:09:13.015820 4706 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n9rt6\" (UniqueName: \"kubernetes.io/projected/81138548-0b1d-43b6-af7c-fdf31598a28d-kube-api-access-n9rt6\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-wtp98\" (UID: \"81138548-0b1d-43b6-af7c-fdf31598a28d\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-wtp98" Nov 25 12:09:13 crc kubenswrapper[4706]: I1125 12:09:13.118059 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n9rt6\" (UniqueName: \"kubernetes.io/projected/81138548-0b1d-43b6-af7c-fdf31598a28d-kube-api-access-n9rt6\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-wtp98\" (UID: \"81138548-0b1d-43b6-af7c-fdf31598a28d\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-wtp98" Nov 25 12:09:13 crc kubenswrapper[4706]: I1125 12:09:13.118679 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/81138548-0b1d-43b6-af7c-fdf31598a28d-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-wtp98\" (UID: \"81138548-0b1d-43b6-af7c-fdf31598a28d\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-wtp98" Nov 25 12:09:13 crc kubenswrapper[4706]: I1125 12:09:13.119565 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/81138548-0b1d-43b6-af7c-fdf31598a28d-ssh-key\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-wtp98\" (UID: \"81138548-0b1d-43b6-af7c-fdf31598a28d\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-wtp98" Nov 25 12:09:13 crc kubenswrapper[4706]: I1125 12:09:13.133277 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/81138548-0b1d-43b6-af7c-fdf31598a28d-inventory\") pod 
\"configure-network-edpm-deployment-openstack-edpm-ipam-wtp98\" (UID: \"81138548-0b1d-43b6-af7c-fdf31598a28d\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-wtp98" Nov 25 12:09:13 crc kubenswrapper[4706]: I1125 12:09:13.133450 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/81138548-0b1d-43b6-af7c-fdf31598a28d-ssh-key\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-wtp98\" (UID: \"81138548-0b1d-43b6-af7c-fdf31598a28d\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-wtp98" Nov 25 12:09:13 crc kubenswrapper[4706]: I1125 12:09:13.136656 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n9rt6\" (UniqueName: \"kubernetes.io/projected/81138548-0b1d-43b6-af7c-fdf31598a28d-kube-api-access-n9rt6\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-wtp98\" (UID: \"81138548-0b1d-43b6-af7c-fdf31598a28d\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-wtp98" Nov 25 12:09:13 crc kubenswrapper[4706]: I1125 12:09:13.145800 4706 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-wtp98" Nov 25 12:09:13 crc kubenswrapper[4706]: I1125 12:09:13.683901 4706 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-wtp98"] Nov 25 12:09:13 crc kubenswrapper[4706]: I1125 12:09:13.740589 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-wtp98" event={"ID":"81138548-0b1d-43b6-af7c-fdf31598a28d","Type":"ContainerStarted","Data":"98d6af7d571e9309f2f557cea9b92481e0029b648fb37c26246906c7891dade3"} Nov 25 12:09:14 crc kubenswrapper[4706]: I1125 12:09:14.770134 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-wtp98" event={"ID":"81138548-0b1d-43b6-af7c-fdf31598a28d","Type":"ContainerStarted","Data":"d130b91928e273f744c1b512de839d832677247ce71c6d6213fd93233e06d134"} Nov 25 12:09:14 crc kubenswrapper[4706]: I1125 12:09:14.786860 4706 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-wtp98" podStartSLOduration=2.095158574 podStartE2EDuration="2.786843108s" podCreationTimestamp="2025-11-25 12:09:12 +0000 UTC" firstStartedPulling="2025-11-25 12:09:13.69492311 +0000 UTC m=+1962.609480501" lastFinishedPulling="2025-11-25 12:09:14.386607654 +0000 UTC m=+1963.301165035" observedRunningTime="2025-11-25 12:09:14.785931284 +0000 UTC m=+1963.700488665" watchObservedRunningTime="2025-11-25 12:09:14.786843108 +0000 UTC m=+1963.701400489" Nov 25 12:09:36 crc kubenswrapper[4706]: I1125 12:09:36.764511 4706 scope.go:117] "RemoveContainer" containerID="a341f1a73ca72b1d393cb86f7600862f027f84cc6c5a74fcd9888210c58daa4e" Nov 25 12:10:29 crc kubenswrapper[4706]: I1125 12:10:29.424957 4706 generic.go:334] "Generic (PLEG): container finished" podID="81138548-0b1d-43b6-af7c-fdf31598a28d" 
containerID="d130b91928e273f744c1b512de839d832677247ce71c6d6213fd93233e06d134" exitCode=0 Nov 25 12:10:29 crc kubenswrapper[4706]: I1125 12:10:29.426595 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-wtp98" event={"ID":"81138548-0b1d-43b6-af7c-fdf31598a28d","Type":"ContainerDied","Data":"d130b91928e273f744c1b512de839d832677247ce71c6d6213fd93233e06d134"} Nov 25 12:10:30 crc kubenswrapper[4706]: I1125 12:10:30.875959 4706 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-wtp98" Nov 25 12:10:30 crc kubenswrapper[4706]: I1125 12:10:30.985969 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/81138548-0b1d-43b6-af7c-fdf31598a28d-inventory\") pod \"81138548-0b1d-43b6-af7c-fdf31598a28d\" (UID: \"81138548-0b1d-43b6-af7c-fdf31598a28d\") " Nov 25 12:10:30 crc kubenswrapper[4706]: I1125 12:10:30.986161 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n9rt6\" (UniqueName: \"kubernetes.io/projected/81138548-0b1d-43b6-af7c-fdf31598a28d-kube-api-access-n9rt6\") pod \"81138548-0b1d-43b6-af7c-fdf31598a28d\" (UID: \"81138548-0b1d-43b6-af7c-fdf31598a28d\") " Nov 25 12:10:30 crc kubenswrapper[4706]: I1125 12:10:30.986266 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/81138548-0b1d-43b6-af7c-fdf31598a28d-ssh-key\") pod \"81138548-0b1d-43b6-af7c-fdf31598a28d\" (UID: \"81138548-0b1d-43b6-af7c-fdf31598a28d\") " Nov 25 12:10:30 crc kubenswrapper[4706]: I1125 12:10:30.991771 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/81138548-0b1d-43b6-af7c-fdf31598a28d-kube-api-access-n9rt6" (OuterVolumeSpecName: "kube-api-access-n9rt6") pod 
"81138548-0b1d-43b6-af7c-fdf31598a28d" (UID: "81138548-0b1d-43b6-af7c-fdf31598a28d"). InnerVolumeSpecName "kube-api-access-n9rt6". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 12:10:31 crc kubenswrapper[4706]: I1125 12:10:31.018172 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/81138548-0b1d-43b6-af7c-fdf31598a28d-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "81138548-0b1d-43b6-af7c-fdf31598a28d" (UID: "81138548-0b1d-43b6-af7c-fdf31598a28d"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 12:10:31 crc kubenswrapper[4706]: I1125 12:10:31.037654 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/81138548-0b1d-43b6-af7c-fdf31598a28d-inventory" (OuterVolumeSpecName: "inventory") pod "81138548-0b1d-43b6-af7c-fdf31598a28d" (UID: "81138548-0b1d-43b6-af7c-fdf31598a28d"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 12:10:31 crc kubenswrapper[4706]: I1125 12:10:31.088182 4706 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/81138548-0b1d-43b6-af7c-fdf31598a28d-inventory\") on node \"crc\" DevicePath \"\"" Nov 25 12:10:31 crc kubenswrapper[4706]: I1125 12:10:31.088212 4706 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n9rt6\" (UniqueName: \"kubernetes.io/projected/81138548-0b1d-43b6-af7c-fdf31598a28d-kube-api-access-n9rt6\") on node \"crc\" DevicePath \"\"" Nov 25 12:10:31 crc kubenswrapper[4706]: I1125 12:10:31.088221 4706 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/81138548-0b1d-43b6-af7c-fdf31598a28d-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 25 12:10:31 crc kubenswrapper[4706]: I1125 12:10:31.125266 4706 patch_prober.go:28] interesting pod/machine-config-daemon-dhfpm container/machine-config-daemon 
namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 25 12:10:31 crc kubenswrapper[4706]: I1125 12:10:31.125394 4706 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dhfpm" podUID="0930887a-320c-4506-8c9c-f94d6d64516a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 25 12:10:31 crc kubenswrapper[4706]: I1125 12:10:31.448118 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-wtp98" event={"ID":"81138548-0b1d-43b6-af7c-fdf31598a28d","Type":"ContainerDied","Data":"98d6af7d571e9309f2f557cea9b92481e0029b648fb37c26246906c7891dade3"} Nov 25 12:10:31 crc kubenswrapper[4706]: I1125 12:10:31.448160 4706 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="98d6af7d571e9309f2f557cea9b92481e0029b648fb37c26246906c7891dade3" Nov 25 12:10:31 crc kubenswrapper[4706]: I1125 12:10:31.448173 4706 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-wtp98" Nov 25 12:10:31 crc kubenswrapper[4706]: I1125 12:10:31.528119 4706 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-2j66d"] Nov 25 12:10:31 crc kubenswrapper[4706]: E1125 12:10:31.528881 4706 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="81138548-0b1d-43b6-af7c-fdf31598a28d" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Nov 25 12:10:31 crc kubenswrapper[4706]: I1125 12:10:31.528899 4706 state_mem.go:107] "Deleted CPUSet assignment" podUID="81138548-0b1d-43b6-af7c-fdf31598a28d" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Nov 25 12:10:31 crc kubenswrapper[4706]: I1125 12:10:31.529105 4706 memory_manager.go:354] "RemoveStaleState removing state" podUID="81138548-0b1d-43b6-af7c-fdf31598a28d" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Nov 25 12:10:31 crc kubenswrapper[4706]: I1125 12:10:31.529729 4706 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-2j66d" Nov 25 12:10:31 crc kubenswrapper[4706]: I1125 12:10:31.531940 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-r8qqp" Nov 25 12:10:31 crc kubenswrapper[4706]: I1125 12:10:31.532049 4706 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 25 12:10:31 crc kubenswrapper[4706]: I1125 12:10:31.532159 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 25 12:10:31 crc kubenswrapper[4706]: I1125 12:10:31.532775 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 25 12:10:31 crc kubenswrapper[4706]: I1125 12:10:31.546453 4706 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-2j66d"] Nov 25 12:10:31 crc kubenswrapper[4706]: I1125 12:10:31.598355 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/29e15319-39a4-4af6-869c-3f49b55997bc-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-2j66d\" (UID: \"29e15319-39a4-4af6-869c-3f49b55997bc\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-2j66d" Nov 25 12:10:31 crc kubenswrapper[4706]: I1125 12:10:31.598424 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/29e15319-39a4-4af6-869c-3f49b55997bc-ssh-key\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-2j66d\" (UID: \"29e15319-39a4-4af6-869c-3f49b55997bc\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-2j66d" Nov 25 12:10:31 crc kubenswrapper[4706]: I1125 12:10:31.598576 4706 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vxtp7\" (UniqueName: \"kubernetes.io/projected/29e15319-39a4-4af6-869c-3f49b55997bc-kube-api-access-vxtp7\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-2j66d\" (UID: \"29e15319-39a4-4af6-869c-3f49b55997bc\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-2j66d" Nov 25 12:10:31 crc kubenswrapper[4706]: I1125 12:10:31.700440 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/29e15319-39a4-4af6-869c-3f49b55997bc-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-2j66d\" (UID: \"29e15319-39a4-4af6-869c-3f49b55997bc\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-2j66d" Nov 25 12:10:31 crc kubenswrapper[4706]: I1125 12:10:31.700567 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/29e15319-39a4-4af6-869c-3f49b55997bc-ssh-key\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-2j66d\" (UID: \"29e15319-39a4-4af6-869c-3f49b55997bc\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-2j66d" Nov 25 12:10:31 crc kubenswrapper[4706]: I1125 12:10:31.700620 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vxtp7\" (UniqueName: \"kubernetes.io/projected/29e15319-39a4-4af6-869c-3f49b55997bc-kube-api-access-vxtp7\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-2j66d\" (UID: \"29e15319-39a4-4af6-869c-3f49b55997bc\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-2j66d" Nov 25 12:10:31 crc kubenswrapper[4706]: I1125 12:10:31.705417 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/29e15319-39a4-4af6-869c-3f49b55997bc-inventory\") pod 
\"validate-network-edpm-deployment-openstack-edpm-ipam-2j66d\" (UID: \"29e15319-39a4-4af6-869c-3f49b55997bc\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-2j66d" Nov 25 12:10:31 crc kubenswrapper[4706]: I1125 12:10:31.706325 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/29e15319-39a4-4af6-869c-3f49b55997bc-ssh-key\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-2j66d\" (UID: \"29e15319-39a4-4af6-869c-3f49b55997bc\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-2j66d" Nov 25 12:10:31 crc kubenswrapper[4706]: I1125 12:10:31.723771 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vxtp7\" (UniqueName: \"kubernetes.io/projected/29e15319-39a4-4af6-869c-3f49b55997bc-kube-api-access-vxtp7\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-2j66d\" (UID: \"29e15319-39a4-4af6-869c-3f49b55997bc\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-2j66d" Nov 25 12:10:31 crc kubenswrapper[4706]: I1125 12:10:31.846753 4706 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-2j66d" Nov 25 12:10:32 crc kubenswrapper[4706]: I1125 12:10:32.438504 4706 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-2j66d"] Nov 25 12:10:32 crc kubenswrapper[4706]: I1125 12:10:32.456647 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-2j66d" event={"ID":"29e15319-39a4-4af6-869c-3f49b55997bc","Type":"ContainerStarted","Data":"70a8fa423d2f23dd9b777c5f3ecf7aae949293bdf5e84de2a60eaa303d4ef6c8"} Nov 25 12:10:33 crc kubenswrapper[4706]: I1125 12:10:33.265325 4706 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 25 12:10:34 crc kubenswrapper[4706]: I1125 12:10:34.479068 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-2j66d" event={"ID":"29e15319-39a4-4af6-869c-3f49b55997bc","Type":"ContainerStarted","Data":"18d83607eba40f625c57a33a8fa8131eff5973db6d46062f2e3914202295511c"} Nov 25 12:10:34 crc kubenswrapper[4706]: I1125 12:10:34.496971 4706 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-2j66d" podStartSLOduration=2.6755565900000002 podStartE2EDuration="3.496949091s" podCreationTimestamp="2025-11-25 12:10:31 +0000 UTC" firstStartedPulling="2025-11-25 12:10:32.441537011 +0000 UTC m=+2041.356094392" lastFinishedPulling="2025-11-25 12:10:33.262929512 +0000 UTC m=+2042.177486893" observedRunningTime="2025-11-25 12:10:34.495213086 +0000 UTC m=+2043.409770467" watchObservedRunningTime="2025-11-25 12:10:34.496949091 +0000 UTC m=+2043.411506482" Nov 25 12:10:38 crc kubenswrapper[4706]: I1125 12:10:38.512853 4706 generic.go:334] "Generic (PLEG): container finished" podID="29e15319-39a4-4af6-869c-3f49b55997bc" 
containerID="18d83607eba40f625c57a33a8fa8131eff5973db6d46062f2e3914202295511c" exitCode=0 Nov 25 12:10:38 crc kubenswrapper[4706]: I1125 12:10:38.512934 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-2j66d" event={"ID":"29e15319-39a4-4af6-869c-3f49b55997bc","Type":"ContainerDied","Data":"18d83607eba40f625c57a33a8fa8131eff5973db6d46062f2e3914202295511c"} Nov 25 12:10:39 crc kubenswrapper[4706]: I1125 12:10:39.990257 4706 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-2j66d" Nov 25 12:10:40 crc kubenswrapper[4706]: I1125 12:10:40.064218 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/29e15319-39a4-4af6-869c-3f49b55997bc-inventory\") pod \"29e15319-39a4-4af6-869c-3f49b55997bc\" (UID: \"29e15319-39a4-4af6-869c-3f49b55997bc\") " Nov 25 12:10:40 crc kubenswrapper[4706]: I1125 12:10:40.064345 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vxtp7\" (UniqueName: \"kubernetes.io/projected/29e15319-39a4-4af6-869c-3f49b55997bc-kube-api-access-vxtp7\") pod \"29e15319-39a4-4af6-869c-3f49b55997bc\" (UID: \"29e15319-39a4-4af6-869c-3f49b55997bc\") " Nov 25 12:10:40 crc kubenswrapper[4706]: I1125 12:10:40.064473 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/29e15319-39a4-4af6-869c-3f49b55997bc-ssh-key\") pod \"29e15319-39a4-4af6-869c-3f49b55997bc\" (UID: \"29e15319-39a4-4af6-869c-3f49b55997bc\") " Nov 25 12:10:40 crc kubenswrapper[4706]: I1125 12:10:40.069781 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/29e15319-39a4-4af6-869c-3f49b55997bc-kube-api-access-vxtp7" (OuterVolumeSpecName: "kube-api-access-vxtp7") pod 
"29e15319-39a4-4af6-869c-3f49b55997bc" (UID: "29e15319-39a4-4af6-869c-3f49b55997bc"). InnerVolumeSpecName "kube-api-access-vxtp7". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 12:10:40 crc kubenswrapper[4706]: I1125 12:10:40.091120 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/29e15319-39a4-4af6-869c-3f49b55997bc-inventory" (OuterVolumeSpecName: "inventory") pod "29e15319-39a4-4af6-869c-3f49b55997bc" (UID: "29e15319-39a4-4af6-869c-3f49b55997bc"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 12:10:40 crc kubenswrapper[4706]: I1125 12:10:40.091599 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/29e15319-39a4-4af6-869c-3f49b55997bc-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "29e15319-39a4-4af6-869c-3f49b55997bc" (UID: "29e15319-39a4-4af6-869c-3f49b55997bc"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 12:10:40 crc kubenswrapper[4706]: I1125 12:10:40.167118 4706 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/29e15319-39a4-4af6-869c-3f49b55997bc-inventory\") on node \"crc\" DevicePath \"\"" Nov 25 12:10:40 crc kubenswrapper[4706]: I1125 12:10:40.167152 4706 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vxtp7\" (UniqueName: \"kubernetes.io/projected/29e15319-39a4-4af6-869c-3f49b55997bc-kube-api-access-vxtp7\") on node \"crc\" DevicePath \"\"" Nov 25 12:10:40 crc kubenswrapper[4706]: I1125 12:10:40.167163 4706 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/29e15319-39a4-4af6-869c-3f49b55997bc-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 25 12:10:40 crc kubenswrapper[4706]: I1125 12:10:40.535528 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-2j66d" event={"ID":"29e15319-39a4-4af6-869c-3f49b55997bc","Type":"ContainerDied","Data":"70a8fa423d2f23dd9b777c5f3ecf7aae949293bdf5e84de2a60eaa303d4ef6c8"} Nov 25 12:10:40 crc kubenswrapper[4706]: I1125 12:10:40.535569 4706 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="70a8fa423d2f23dd9b777c5f3ecf7aae949293bdf5e84de2a60eaa303d4ef6c8" Nov 25 12:10:40 crc kubenswrapper[4706]: I1125 12:10:40.535582 4706 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-2j66d" Nov 25 12:10:40 crc kubenswrapper[4706]: I1125 12:10:40.610894 4706 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-zlncj"] Nov 25 12:10:40 crc kubenswrapper[4706]: E1125 12:10:40.611315 4706 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="29e15319-39a4-4af6-869c-3f49b55997bc" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Nov 25 12:10:40 crc kubenswrapper[4706]: I1125 12:10:40.611340 4706 state_mem.go:107] "Deleted CPUSet assignment" podUID="29e15319-39a4-4af6-869c-3f49b55997bc" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Nov 25 12:10:40 crc kubenswrapper[4706]: I1125 12:10:40.611579 4706 memory_manager.go:354] "RemoveStaleState removing state" podUID="29e15319-39a4-4af6-869c-3f49b55997bc" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Nov 25 12:10:40 crc kubenswrapper[4706]: I1125 12:10:40.612373 4706 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-zlncj" Nov 25 12:10:40 crc kubenswrapper[4706]: I1125 12:10:40.614472 4706 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 25 12:10:40 crc kubenswrapper[4706]: I1125 12:10:40.615015 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-r8qqp" Nov 25 12:10:40 crc kubenswrapper[4706]: I1125 12:10:40.615082 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 25 12:10:40 crc kubenswrapper[4706]: I1125 12:10:40.615263 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 25 12:10:40 crc kubenswrapper[4706]: I1125 12:10:40.631349 4706 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-zlncj"] Nov 25 12:10:40 crc kubenswrapper[4706]: I1125 12:10:40.677386 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d5z94\" (UniqueName: \"kubernetes.io/projected/5f5a244b-95ce-4443-9951-780763117499-kube-api-access-d5z94\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-zlncj\" (UID: \"5f5a244b-95ce-4443-9951-780763117499\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-zlncj" Nov 25 12:10:40 crc kubenswrapper[4706]: I1125 12:10:40.677918 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5f5a244b-95ce-4443-9951-780763117499-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-zlncj\" (UID: \"5f5a244b-95ce-4443-9951-780763117499\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-zlncj" Nov 25 12:10:40 crc kubenswrapper[4706]: I1125 12:10:40.678471 4706 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/5f5a244b-95ce-4443-9951-780763117499-ssh-key\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-zlncj\" (UID: \"5f5a244b-95ce-4443-9951-780763117499\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-zlncj" Nov 25 12:10:40 crc kubenswrapper[4706]: I1125 12:10:40.779990 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/5f5a244b-95ce-4443-9951-780763117499-ssh-key\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-zlncj\" (UID: \"5f5a244b-95ce-4443-9951-780763117499\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-zlncj" Nov 25 12:10:40 crc kubenswrapper[4706]: I1125 12:10:40.780130 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d5z94\" (UniqueName: \"kubernetes.io/projected/5f5a244b-95ce-4443-9951-780763117499-kube-api-access-d5z94\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-zlncj\" (UID: \"5f5a244b-95ce-4443-9951-780763117499\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-zlncj" Nov 25 12:10:40 crc kubenswrapper[4706]: I1125 12:10:40.780207 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5f5a244b-95ce-4443-9951-780763117499-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-zlncj\" (UID: \"5f5a244b-95ce-4443-9951-780763117499\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-zlncj" Nov 25 12:10:40 crc kubenswrapper[4706]: I1125 12:10:40.784569 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5f5a244b-95ce-4443-9951-780763117499-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-zlncj\" (UID: 
\"5f5a244b-95ce-4443-9951-780763117499\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-zlncj" Nov 25 12:10:40 crc kubenswrapper[4706]: I1125 12:10:40.784887 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/5f5a244b-95ce-4443-9951-780763117499-ssh-key\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-zlncj\" (UID: \"5f5a244b-95ce-4443-9951-780763117499\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-zlncj" Nov 25 12:10:40 crc kubenswrapper[4706]: I1125 12:10:40.804668 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d5z94\" (UniqueName: \"kubernetes.io/projected/5f5a244b-95ce-4443-9951-780763117499-kube-api-access-d5z94\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-zlncj\" (UID: \"5f5a244b-95ce-4443-9951-780763117499\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-zlncj" Nov 25 12:10:40 crc kubenswrapper[4706]: I1125 12:10:40.929955 4706 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-zlncj" Nov 25 12:10:41 crc kubenswrapper[4706]: I1125 12:10:41.488958 4706 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-zlncj"] Nov 25 12:10:41 crc kubenswrapper[4706]: I1125 12:10:41.562238 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-zlncj" event={"ID":"5f5a244b-95ce-4443-9951-780763117499","Type":"ContainerStarted","Data":"4003c9bdc202331dfd6cebcc7f48373d08df50e1b6e2bd4f705589cfd1b71845"} Nov 25 12:10:43 crc kubenswrapper[4706]: I1125 12:10:43.584284 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-zlncj" event={"ID":"5f5a244b-95ce-4443-9951-780763117499","Type":"ContainerStarted","Data":"779af982d7574fe76ec99c5822cec732e3be33f1180eab5d29f5e742d3aa1db3"} Nov 25 12:10:43 crc kubenswrapper[4706]: I1125 12:10:43.617774 4706 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-zlncj" podStartSLOduration=2.772978266 podStartE2EDuration="3.617746262s" podCreationTimestamp="2025-11-25 12:10:40 +0000 UTC" firstStartedPulling="2025-11-25 12:10:41.501025349 +0000 UTC m=+2050.415582730" lastFinishedPulling="2025-11-25 12:10:42.345793345 +0000 UTC m=+2051.260350726" observedRunningTime="2025-11-25 12:10:43.600856791 +0000 UTC m=+2052.515414192" watchObservedRunningTime="2025-11-25 12:10:43.617746262 +0000 UTC m=+2052.532303663" Nov 25 12:11:01 crc kubenswrapper[4706]: I1125 12:11:01.124885 4706 patch_prober.go:28] interesting pod/machine-config-daemon-dhfpm container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 25 12:11:01 crc 
kubenswrapper[4706]: I1125 12:11:01.125720 4706 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dhfpm" podUID="0930887a-320c-4506-8c9c-f94d6d64516a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 25 12:11:21 crc kubenswrapper[4706]: I1125 12:11:21.916933 4706 generic.go:334] "Generic (PLEG): container finished" podID="5f5a244b-95ce-4443-9951-780763117499" containerID="779af982d7574fe76ec99c5822cec732e3be33f1180eab5d29f5e742d3aa1db3" exitCode=0 Nov 25 12:11:21 crc kubenswrapper[4706]: I1125 12:11:21.917045 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-zlncj" event={"ID":"5f5a244b-95ce-4443-9951-780763117499","Type":"ContainerDied","Data":"779af982d7574fe76ec99c5822cec732e3be33f1180eab5d29f5e742d3aa1db3"} Nov 25 12:11:23 crc kubenswrapper[4706]: I1125 12:11:23.321910 4706 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-zlncj" Nov 25 12:11:23 crc kubenswrapper[4706]: I1125 12:11:23.418594 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d5z94\" (UniqueName: \"kubernetes.io/projected/5f5a244b-95ce-4443-9951-780763117499-kube-api-access-d5z94\") pod \"5f5a244b-95ce-4443-9951-780763117499\" (UID: \"5f5a244b-95ce-4443-9951-780763117499\") " Nov 25 12:11:23 crc kubenswrapper[4706]: I1125 12:11:23.418782 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5f5a244b-95ce-4443-9951-780763117499-inventory\") pod \"5f5a244b-95ce-4443-9951-780763117499\" (UID: \"5f5a244b-95ce-4443-9951-780763117499\") " Nov 25 12:11:23 crc kubenswrapper[4706]: I1125 12:11:23.418821 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/5f5a244b-95ce-4443-9951-780763117499-ssh-key\") pod \"5f5a244b-95ce-4443-9951-780763117499\" (UID: \"5f5a244b-95ce-4443-9951-780763117499\") " Nov 25 12:11:23 crc kubenswrapper[4706]: I1125 12:11:23.424657 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5f5a244b-95ce-4443-9951-780763117499-kube-api-access-d5z94" (OuterVolumeSpecName: "kube-api-access-d5z94") pod "5f5a244b-95ce-4443-9951-780763117499" (UID: "5f5a244b-95ce-4443-9951-780763117499"). InnerVolumeSpecName "kube-api-access-d5z94". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 12:11:23 crc kubenswrapper[4706]: I1125 12:11:23.447520 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5f5a244b-95ce-4443-9951-780763117499-inventory" (OuterVolumeSpecName: "inventory") pod "5f5a244b-95ce-4443-9951-780763117499" (UID: "5f5a244b-95ce-4443-9951-780763117499"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 12:11:23 crc kubenswrapper[4706]: I1125 12:11:23.456462 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5f5a244b-95ce-4443-9951-780763117499-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "5f5a244b-95ce-4443-9951-780763117499" (UID: "5f5a244b-95ce-4443-9951-780763117499"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 12:11:23 crc kubenswrapper[4706]: I1125 12:11:23.520898 4706 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/5f5a244b-95ce-4443-9951-780763117499-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 25 12:11:23 crc kubenswrapper[4706]: I1125 12:11:23.520929 4706 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d5z94\" (UniqueName: \"kubernetes.io/projected/5f5a244b-95ce-4443-9951-780763117499-kube-api-access-d5z94\") on node \"crc\" DevicePath \"\"" Nov 25 12:11:23 crc kubenswrapper[4706]: I1125 12:11:23.520941 4706 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5f5a244b-95ce-4443-9951-780763117499-inventory\") on node \"crc\" DevicePath \"\"" Nov 25 12:11:23 crc kubenswrapper[4706]: I1125 12:11:23.946292 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-zlncj" event={"ID":"5f5a244b-95ce-4443-9951-780763117499","Type":"ContainerDied","Data":"4003c9bdc202331dfd6cebcc7f48373d08df50e1b6e2bd4f705589cfd1b71845"} Nov 25 12:11:23 crc kubenswrapper[4706]: I1125 12:11:23.946377 4706 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4003c9bdc202331dfd6cebcc7f48373d08df50e1b6e2bd4f705589cfd1b71845" Nov 25 12:11:23 crc kubenswrapper[4706]: I1125 12:11:23.946474 4706 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-zlncj" Nov 25 12:11:24 crc kubenswrapper[4706]: I1125 12:11:24.030796 4706 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-h4crd"] Nov 25 12:11:24 crc kubenswrapper[4706]: E1125 12:11:24.031278 4706 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5f5a244b-95ce-4443-9951-780763117499" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Nov 25 12:11:24 crc kubenswrapper[4706]: I1125 12:11:24.031319 4706 state_mem.go:107] "Deleted CPUSet assignment" podUID="5f5a244b-95ce-4443-9951-780763117499" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Nov 25 12:11:24 crc kubenswrapper[4706]: I1125 12:11:24.031573 4706 memory_manager.go:354] "RemoveStaleState removing state" podUID="5f5a244b-95ce-4443-9951-780763117499" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Nov 25 12:11:24 crc kubenswrapper[4706]: I1125 12:11:24.032374 4706 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-h4crd" Nov 25 12:11:24 crc kubenswrapper[4706]: I1125 12:11:24.036362 4706 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 25 12:11:24 crc kubenswrapper[4706]: I1125 12:11:24.036591 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 25 12:11:24 crc kubenswrapper[4706]: I1125 12:11:24.036736 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 25 12:11:24 crc kubenswrapper[4706]: I1125 12:11:24.036915 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-r8qqp" Nov 25 12:11:24 crc kubenswrapper[4706]: I1125 12:11:24.041780 4706 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-h4crd"] Nov 25 12:11:24 crc kubenswrapper[4706]: I1125 12:11:24.131013 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/04cc6fd1-5a4f-4d7d-aed4-849709bb005d-ssh-key\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-h4crd\" (UID: \"04cc6fd1-5a4f-4d7d-aed4-849709bb005d\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-h4crd" Nov 25 12:11:24 crc kubenswrapper[4706]: I1125 12:11:24.131219 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/04cc6fd1-5a4f-4d7d-aed4-849709bb005d-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-h4crd\" (UID: \"04cc6fd1-5a4f-4d7d-aed4-849709bb005d\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-h4crd" Nov 25 12:11:24 crc kubenswrapper[4706]: I1125 12:11:24.131388 4706 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z95jz\" (UniqueName: \"kubernetes.io/projected/04cc6fd1-5a4f-4d7d-aed4-849709bb005d-kube-api-access-z95jz\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-h4crd\" (UID: \"04cc6fd1-5a4f-4d7d-aed4-849709bb005d\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-h4crd" Nov 25 12:11:24 crc kubenswrapper[4706]: I1125 12:11:24.233340 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/04cc6fd1-5a4f-4d7d-aed4-849709bb005d-ssh-key\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-h4crd\" (UID: \"04cc6fd1-5a4f-4d7d-aed4-849709bb005d\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-h4crd" Nov 25 12:11:24 crc kubenswrapper[4706]: I1125 12:11:24.233412 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/04cc6fd1-5a4f-4d7d-aed4-849709bb005d-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-h4crd\" (UID: \"04cc6fd1-5a4f-4d7d-aed4-849709bb005d\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-h4crd" Nov 25 12:11:24 crc kubenswrapper[4706]: I1125 12:11:24.233463 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z95jz\" (UniqueName: \"kubernetes.io/projected/04cc6fd1-5a4f-4d7d-aed4-849709bb005d-kube-api-access-z95jz\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-h4crd\" (UID: \"04cc6fd1-5a4f-4d7d-aed4-849709bb005d\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-h4crd" Nov 25 12:11:24 crc kubenswrapper[4706]: I1125 12:11:24.238451 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/04cc6fd1-5a4f-4d7d-aed4-849709bb005d-ssh-key\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-h4crd\" (UID: 
\"04cc6fd1-5a4f-4d7d-aed4-849709bb005d\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-h4crd" Nov 25 12:11:24 crc kubenswrapper[4706]: I1125 12:11:24.238674 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/04cc6fd1-5a4f-4d7d-aed4-849709bb005d-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-h4crd\" (UID: \"04cc6fd1-5a4f-4d7d-aed4-849709bb005d\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-h4crd" Nov 25 12:11:24 crc kubenswrapper[4706]: I1125 12:11:24.259794 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z95jz\" (UniqueName: \"kubernetes.io/projected/04cc6fd1-5a4f-4d7d-aed4-849709bb005d-kube-api-access-z95jz\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-h4crd\" (UID: \"04cc6fd1-5a4f-4d7d-aed4-849709bb005d\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-h4crd" Nov 25 12:11:24 crc kubenswrapper[4706]: I1125 12:11:24.352386 4706 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-h4crd" Nov 25 12:11:24 crc kubenswrapper[4706]: I1125 12:11:24.873089 4706 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-h4crd"] Nov 25 12:11:24 crc kubenswrapper[4706]: I1125 12:11:24.958141 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-h4crd" event={"ID":"04cc6fd1-5a4f-4d7d-aed4-849709bb005d","Type":"ContainerStarted","Data":"14d070ae0a3142f52040104912f20b9e32a9de15c5b83644ee8b974991ef4c4a"} Nov 25 12:11:25 crc kubenswrapper[4706]: I1125 12:11:25.971025 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-h4crd" event={"ID":"04cc6fd1-5a4f-4d7d-aed4-849709bb005d","Type":"ContainerStarted","Data":"8be33e92f1dd97eb8c4a1871e695bf48e432b8dd1dc8460c03d1e0e97f5d9837"} Nov 25 12:11:25 crc kubenswrapper[4706]: I1125 12:11:25.997820 4706 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-h4crd" podStartSLOduration=1.490012528 podStartE2EDuration="1.997794734s" podCreationTimestamp="2025-11-25 12:11:24 +0000 UTC" firstStartedPulling="2025-11-25 12:11:24.878130319 +0000 UTC m=+2093.792687690" lastFinishedPulling="2025-11-25 12:11:25.385912525 +0000 UTC m=+2094.300469896" observedRunningTime="2025-11-25 12:11:25.986497916 +0000 UTC m=+2094.901055297" watchObservedRunningTime="2025-11-25 12:11:25.997794734 +0000 UTC m=+2094.912352115" Nov 25 12:11:31 crc kubenswrapper[4706]: I1125 12:11:31.125642 4706 patch_prober.go:28] interesting pod/machine-config-daemon-dhfpm container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 25 12:11:31 crc 
kubenswrapper[4706]: I1125 12:11:31.126182 4706 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dhfpm" podUID="0930887a-320c-4506-8c9c-f94d6d64516a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 25 12:11:31 crc kubenswrapper[4706]: I1125 12:11:31.126233 4706 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-dhfpm" Nov 25 12:11:31 crc kubenswrapper[4706]: I1125 12:11:31.126954 4706 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"c3decbb72f251ff0268699ac4622382fd9d08b45caec2fd0b673ab3aae749803"} pod="openshift-machine-config-operator/machine-config-daemon-dhfpm" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 25 12:11:31 crc kubenswrapper[4706]: I1125 12:11:31.127007 4706 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-dhfpm" podUID="0930887a-320c-4506-8c9c-f94d6d64516a" containerName="machine-config-daemon" containerID="cri-o://c3decbb72f251ff0268699ac4622382fd9d08b45caec2fd0b673ab3aae749803" gracePeriod=600 Nov 25 12:11:32 crc kubenswrapper[4706]: I1125 12:11:32.024210 4706 generic.go:334] "Generic (PLEG): container finished" podID="0930887a-320c-4506-8c9c-f94d6d64516a" containerID="c3decbb72f251ff0268699ac4622382fd9d08b45caec2fd0b673ab3aae749803" exitCode=0 Nov 25 12:11:32 crc kubenswrapper[4706]: I1125 12:11:32.024314 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-dhfpm" event={"ID":"0930887a-320c-4506-8c9c-f94d6d64516a","Type":"ContainerDied","Data":"c3decbb72f251ff0268699ac4622382fd9d08b45caec2fd0b673ab3aae749803"} 
Nov 25 12:11:32 crc kubenswrapper[4706]: I1125 12:11:32.024947 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-dhfpm" event={"ID":"0930887a-320c-4506-8c9c-f94d6d64516a","Type":"ContainerStarted","Data":"02f070302d64fff80ca8166389e9c6c4cebd1119d10a5d1848c1ade4b03a9e54"} Nov 25 12:11:32 crc kubenswrapper[4706]: I1125 12:11:32.024977 4706 scope.go:117] "RemoveContainer" containerID="0a0bdee99cfe03b615e21edca20e8cd5d2aec43e4e69d2e5c17d3666e93d6426" Nov 25 12:12:20 crc kubenswrapper[4706]: I1125 12:12:20.449596 4706 generic.go:334] "Generic (PLEG): container finished" podID="04cc6fd1-5a4f-4d7d-aed4-849709bb005d" containerID="8be33e92f1dd97eb8c4a1871e695bf48e432b8dd1dc8460c03d1e0e97f5d9837" exitCode=0 Nov 25 12:12:20 crc kubenswrapper[4706]: I1125 12:12:20.449715 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-h4crd" event={"ID":"04cc6fd1-5a4f-4d7d-aed4-849709bb005d","Type":"ContainerDied","Data":"8be33e92f1dd97eb8c4a1871e695bf48e432b8dd1dc8460c03d1e0e97f5d9837"} Nov 25 12:12:21 crc kubenswrapper[4706]: I1125 12:12:21.906140 4706 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-h4crd" Nov 25 12:12:22 crc kubenswrapper[4706]: I1125 12:12:22.088382 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/04cc6fd1-5a4f-4d7d-aed4-849709bb005d-inventory\") pod \"04cc6fd1-5a4f-4d7d-aed4-849709bb005d\" (UID: \"04cc6fd1-5a4f-4d7d-aed4-849709bb005d\") " Nov 25 12:12:22 crc kubenswrapper[4706]: I1125 12:12:22.088480 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z95jz\" (UniqueName: \"kubernetes.io/projected/04cc6fd1-5a4f-4d7d-aed4-849709bb005d-kube-api-access-z95jz\") pod \"04cc6fd1-5a4f-4d7d-aed4-849709bb005d\" (UID: \"04cc6fd1-5a4f-4d7d-aed4-849709bb005d\") " Nov 25 12:12:22 crc kubenswrapper[4706]: I1125 12:12:22.088623 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/04cc6fd1-5a4f-4d7d-aed4-849709bb005d-ssh-key\") pod \"04cc6fd1-5a4f-4d7d-aed4-849709bb005d\" (UID: \"04cc6fd1-5a4f-4d7d-aed4-849709bb005d\") " Nov 25 12:12:22 crc kubenswrapper[4706]: I1125 12:12:22.094906 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/04cc6fd1-5a4f-4d7d-aed4-849709bb005d-kube-api-access-z95jz" (OuterVolumeSpecName: "kube-api-access-z95jz") pod "04cc6fd1-5a4f-4d7d-aed4-849709bb005d" (UID: "04cc6fd1-5a4f-4d7d-aed4-849709bb005d"). InnerVolumeSpecName "kube-api-access-z95jz". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 12:12:22 crc kubenswrapper[4706]: I1125 12:12:22.126588 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/04cc6fd1-5a4f-4d7d-aed4-849709bb005d-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "04cc6fd1-5a4f-4d7d-aed4-849709bb005d" (UID: "04cc6fd1-5a4f-4d7d-aed4-849709bb005d"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 12:12:22 crc kubenswrapper[4706]: I1125 12:12:22.135408 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/04cc6fd1-5a4f-4d7d-aed4-849709bb005d-inventory" (OuterVolumeSpecName: "inventory") pod "04cc6fd1-5a4f-4d7d-aed4-849709bb005d" (UID: "04cc6fd1-5a4f-4d7d-aed4-849709bb005d"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 12:12:22 crc kubenswrapper[4706]: I1125 12:12:22.190934 4706 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/04cc6fd1-5a4f-4d7d-aed4-849709bb005d-inventory\") on node \"crc\" DevicePath \"\"" Nov 25 12:12:22 crc kubenswrapper[4706]: I1125 12:12:22.190997 4706 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z95jz\" (UniqueName: \"kubernetes.io/projected/04cc6fd1-5a4f-4d7d-aed4-849709bb005d-kube-api-access-z95jz\") on node \"crc\" DevicePath \"\"" Nov 25 12:12:22 crc kubenswrapper[4706]: I1125 12:12:22.191014 4706 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/04cc6fd1-5a4f-4d7d-aed4-849709bb005d-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 25 12:12:22 crc kubenswrapper[4706]: I1125 12:12:22.476267 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-h4crd" event={"ID":"04cc6fd1-5a4f-4d7d-aed4-849709bb005d","Type":"ContainerDied","Data":"14d070ae0a3142f52040104912f20b9e32a9de15c5b83644ee8b974991ef4c4a"} Nov 25 12:12:22 crc kubenswrapper[4706]: I1125 12:12:22.476336 4706 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="14d070ae0a3142f52040104912f20b9e32a9de15c5b83644ee8b974991ef4c4a" Nov 25 12:12:22 crc kubenswrapper[4706]: I1125 12:12:22.476342 4706 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-h4crd" Nov 25 12:12:22 crc kubenswrapper[4706]: I1125 12:12:22.569974 4706 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-d2qht"] Nov 25 12:12:22 crc kubenswrapper[4706]: E1125 12:12:22.570465 4706 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="04cc6fd1-5a4f-4d7d-aed4-849709bb005d" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Nov 25 12:12:22 crc kubenswrapper[4706]: I1125 12:12:22.570492 4706 state_mem.go:107] "Deleted CPUSet assignment" podUID="04cc6fd1-5a4f-4d7d-aed4-849709bb005d" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Nov 25 12:12:22 crc kubenswrapper[4706]: I1125 12:12:22.570750 4706 memory_manager.go:354] "RemoveStaleState removing state" podUID="04cc6fd1-5a4f-4d7d-aed4-849709bb005d" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Nov 25 12:12:22 crc kubenswrapper[4706]: I1125 12:12:22.571554 4706 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-d2qht" Nov 25 12:12:22 crc kubenswrapper[4706]: I1125 12:12:22.574073 4706 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 25 12:12:22 crc kubenswrapper[4706]: I1125 12:12:22.574108 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 25 12:12:22 crc kubenswrapper[4706]: I1125 12:12:22.574657 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-r8qqp" Nov 25 12:12:22 crc kubenswrapper[4706]: I1125 12:12:22.577073 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 25 12:12:22 crc kubenswrapper[4706]: I1125 12:12:22.604928 4706 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-d2qht"] Nov 25 12:12:22 crc kubenswrapper[4706]: I1125 12:12:22.700861 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ab590c42-c26e-49b8-8fd1-e1c535dd7e8c-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-d2qht\" (UID: \"ab590c42-c26e-49b8-8fd1-e1c535dd7e8c\") " pod="openstack/ssh-known-hosts-edpm-deployment-d2qht" Nov 25 12:12:22 crc kubenswrapper[4706]: I1125 12:12:22.700954 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m4jf9\" (UniqueName: \"kubernetes.io/projected/ab590c42-c26e-49b8-8fd1-e1c535dd7e8c-kube-api-access-m4jf9\") pod \"ssh-known-hosts-edpm-deployment-d2qht\" (UID: \"ab590c42-c26e-49b8-8fd1-e1c535dd7e8c\") " pod="openstack/ssh-known-hosts-edpm-deployment-d2qht" Nov 25 12:12:22 crc kubenswrapper[4706]: I1125 12:12:22.700991 4706 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/ab590c42-c26e-49b8-8fd1-e1c535dd7e8c-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-d2qht\" (UID: \"ab590c42-c26e-49b8-8fd1-e1c535dd7e8c\") " pod="openstack/ssh-known-hosts-edpm-deployment-d2qht" Nov 25 12:12:22 crc kubenswrapper[4706]: I1125 12:12:22.803588 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ab590c42-c26e-49b8-8fd1-e1c535dd7e8c-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-d2qht\" (UID: \"ab590c42-c26e-49b8-8fd1-e1c535dd7e8c\") " pod="openstack/ssh-known-hosts-edpm-deployment-d2qht" Nov 25 12:12:22 crc kubenswrapper[4706]: I1125 12:12:22.803723 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m4jf9\" (UniqueName: \"kubernetes.io/projected/ab590c42-c26e-49b8-8fd1-e1c535dd7e8c-kube-api-access-m4jf9\") pod \"ssh-known-hosts-edpm-deployment-d2qht\" (UID: \"ab590c42-c26e-49b8-8fd1-e1c535dd7e8c\") " pod="openstack/ssh-known-hosts-edpm-deployment-d2qht" Nov 25 12:12:22 crc kubenswrapper[4706]: I1125 12:12:22.803768 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/ab590c42-c26e-49b8-8fd1-e1c535dd7e8c-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-d2qht\" (UID: \"ab590c42-c26e-49b8-8fd1-e1c535dd7e8c\") " pod="openstack/ssh-known-hosts-edpm-deployment-d2qht" Nov 25 12:12:22 crc kubenswrapper[4706]: I1125 12:12:22.812208 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/ab590c42-c26e-49b8-8fd1-e1c535dd7e8c-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-d2qht\" (UID: \"ab590c42-c26e-49b8-8fd1-e1c535dd7e8c\") " pod="openstack/ssh-known-hosts-edpm-deployment-d2qht" Nov 25 12:12:22 crc 
kubenswrapper[4706]: I1125 12:12:22.814358 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ab590c42-c26e-49b8-8fd1-e1c535dd7e8c-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-d2qht\" (UID: \"ab590c42-c26e-49b8-8fd1-e1c535dd7e8c\") " pod="openstack/ssh-known-hosts-edpm-deployment-d2qht" Nov 25 12:12:22 crc kubenswrapper[4706]: I1125 12:12:22.831288 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m4jf9\" (UniqueName: \"kubernetes.io/projected/ab590c42-c26e-49b8-8fd1-e1c535dd7e8c-kube-api-access-m4jf9\") pod \"ssh-known-hosts-edpm-deployment-d2qht\" (UID: \"ab590c42-c26e-49b8-8fd1-e1c535dd7e8c\") " pod="openstack/ssh-known-hosts-edpm-deployment-d2qht" Nov 25 12:12:22 crc kubenswrapper[4706]: I1125 12:12:22.917594 4706 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-d2qht" Nov 25 12:12:23 crc kubenswrapper[4706]: I1125 12:12:23.514702 4706 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-d2qht"] Nov 25 12:12:23 crc kubenswrapper[4706]: I1125 12:12:23.518973 4706 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 25 12:12:24 crc kubenswrapper[4706]: I1125 12:12:24.497348 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-d2qht" event={"ID":"ab590c42-c26e-49b8-8fd1-e1c535dd7e8c","Type":"ContainerStarted","Data":"50778bf0efb448ee5d6b177168ad6dfe5e85ab7b90a2f2d8e4c043dd95a2188a"} Nov 25 12:12:24 crc kubenswrapper[4706]: I1125 12:12:24.497720 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-d2qht" event={"ID":"ab590c42-c26e-49b8-8fd1-e1c535dd7e8c","Type":"ContainerStarted","Data":"5df115a57dafc94eb851b3e7cd368b7956f4ac77ac0e100530f97bd8a442bc55"} Nov 25 
12:12:24 crc kubenswrapper[4706]: I1125 12:12:24.516736 4706 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ssh-known-hosts-edpm-deployment-d2qht" podStartSLOduration=2.067654276 podStartE2EDuration="2.51671883s" podCreationTimestamp="2025-11-25 12:12:22 +0000 UTC" firstStartedPulling="2025-11-25 12:12:23.518759078 +0000 UTC m=+2152.433316459" lastFinishedPulling="2025-11-25 12:12:23.967823632 +0000 UTC m=+2152.882381013" observedRunningTime="2025-11-25 12:12:24.515321885 +0000 UTC m=+2153.429879316" watchObservedRunningTime="2025-11-25 12:12:24.51671883 +0000 UTC m=+2153.431276211" Nov 25 12:12:31 crc kubenswrapper[4706]: I1125 12:12:31.563886 4706 generic.go:334] "Generic (PLEG): container finished" podID="ab590c42-c26e-49b8-8fd1-e1c535dd7e8c" containerID="50778bf0efb448ee5d6b177168ad6dfe5e85ab7b90a2f2d8e4c043dd95a2188a" exitCode=0 Nov 25 12:12:31 crc kubenswrapper[4706]: I1125 12:12:31.563986 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-d2qht" event={"ID":"ab590c42-c26e-49b8-8fd1-e1c535dd7e8c","Type":"ContainerDied","Data":"50778bf0efb448ee5d6b177168ad6dfe5e85ab7b90a2f2d8e4c043dd95a2188a"} Nov 25 12:12:32 crc kubenswrapper[4706]: I1125 12:12:32.969874 4706 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-d2qht" Nov 25 12:12:33 crc kubenswrapper[4706]: I1125 12:12:33.021525 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ab590c42-c26e-49b8-8fd1-e1c535dd7e8c-ssh-key-openstack-edpm-ipam\") pod \"ab590c42-c26e-49b8-8fd1-e1c535dd7e8c\" (UID: \"ab590c42-c26e-49b8-8fd1-e1c535dd7e8c\") " Nov 25 12:12:33 crc kubenswrapper[4706]: I1125 12:12:33.021620 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/ab590c42-c26e-49b8-8fd1-e1c535dd7e8c-inventory-0\") pod \"ab590c42-c26e-49b8-8fd1-e1c535dd7e8c\" (UID: \"ab590c42-c26e-49b8-8fd1-e1c535dd7e8c\") " Nov 25 12:12:33 crc kubenswrapper[4706]: I1125 12:12:33.021972 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m4jf9\" (UniqueName: \"kubernetes.io/projected/ab590c42-c26e-49b8-8fd1-e1c535dd7e8c-kube-api-access-m4jf9\") pod \"ab590c42-c26e-49b8-8fd1-e1c535dd7e8c\" (UID: \"ab590c42-c26e-49b8-8fd1-e1c535dd7e8c\") " Nov 25 12:12:33 crc kubenswrapper[4706]: I1125 12:12:33.028535 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ab590c42-c26e-49b8-8fd1-e1c535dd7e8c-kube-api-access-m4jf9" (OuterVolumeSpecName: "kube-api-access-m4jf9") pod "ab590c42-c26e-49b8-8fd1-e1c535dd7e8c" (UID: "ab590c42-c26e-49b8-8fd1-e1c535dd7e8c"). InnerVolumeSpecName "kube-api-access-m4jf9". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 12:12:33 crc kubenswrapper[4706]: I1125 12:12:33.059117 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ab590c42-c26e-49b8-8fd1-e1c535dd7e8c-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "ab590c42-c26e-49b8-8fd1-e1c535dd7e8c" (UID: "ab590c42-c26e-49b8-8fd1-e1c535dd7e8c"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 12:12:33 crc kubenswrapper[4706]: I1125 12:12:33.064475 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ab590c42-c26e-49b8-8fd1-e1c535dd7e8c-inventory-0" (OuterVolumeSpecName: "inventory-0") pod "ab590c42-c26e-49b8-8fd1-e1c535dd7e8c" (UID: "ab590c42-c26e-49b8-8fd1-e1c535dd7e8c"). InnerVolumeSpecName "inventory-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 12:12:33 crc kubenswrapper[4706]: I1125 12:12:33.123986 4706 reconciler_common.go:293] "Volume detached for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/ab590c42-c26e-49b8-8fd1-e1c535dd7e8c-inventory-0\") on node \"crc\" DevicePath \"\"" Nov 25 12:12:33 crc kubenswrapper[4706]: I1125 12:12:33.124028 4706 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m4jf9\" (UniqueName: \"kubernetes.io/projected/ab590c42-c26e-49b8-8fd1-e1c535dd7e8c-kube-api-access-m4jf9\") on node \"crc\" DevicePath \"\"" Nov 25 12:12:33 crc kubenswrapper[4706]: I1125 12:12:33.124044 4706 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ab590c42-c26e-49b8-8fd1-e1c535dd7e8c-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Nov 25 12:12:33 crc kubenswrapper[4706]: I1125 12:12:33.584689 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-d2qht" 
event={"ID":"ab590c42-c26e-49b8-8fd1-e1c535dd7e8c","Type":"ContainerDied","Data":"5df115a57dafc94eb851b3e7cd368b7956f4ac77ac0e100530f97bd8a442bc55"} Nov 25 12:12:33 crc kubenswrapper[4706]: I1125 12:12:33.584748 4706 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-d2qht" Nov 25 12:12:33 crc kubenswrapper[4706]: I1125 12:12:33.584760 4706 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5df115a57dafc94eb851b3e7cd368b7956f4ac77ac0e100530f97bd8a442bc55" Nov 25 12:12:33 crc kubenswrapper[4706]: I1125 12:12:33.677045 4706 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-4j6mw"] Nov 25 12:12:33 crc kubenswrapper[4706]: E1125 12:12:33.677528 4706 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ab590c42-c26e-49b8-8fd1-e1c535dd7e8c" containerName="ssh-known-hosts-edpm-deployment" Nov 25 12:12:33 crc kubenswrapper[4706]: I1125 12:12:33.677552 4706 state_mem.go:107] "Deleted CPUSet assignment" podUID="ab590c42-c26e-49b8-8fd1-e1c535dd7e8c" containerName="ssh-known-hosts-edpm-deployment" Nov 25 12:12:33 crc kubenswrapper[4706]: I1125 12:12:33.677752 4706 memory_manager.go:354] "RemoveStaleState removing state" podUID="ab590c42-c26e-49b8-8fd1-e1c535dd7e8c" containerName="ssh-known-hosts-edpm-deployment" Nov 25 12:12:33 crc kubenswrapper[4706]: I1125 12:12:33.678588 4706 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-4j6mw" Nov 25 12:12:33 crc kubenswrapper[4706]: I1125 12:12:33.680953 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 25 12:12:33 crc kubenswrapper[4706]: I1125 12:12:33.681064 4706 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 25 12:12:33 crc kubenswrapper[4706]: I1125 12:12:33.681165 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 25 12:12:33 crc kubenswrapper[4706]: I1125 12:12:33.681441 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-r8qqp" Nov 25 12:12:33 crc kubenswrapper[4706]: I1125 12:12:33.701538 4706 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-4j6mw"] Nov 25 12:12:33 crc kubenswrapper[4706]: I1125 12:12:33.839355 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qb2ks\" (UniqueName: \"kubernetes.io/projected/2976f69c-c134-429f-98c4-f7d54d9245b1-kube-api-access-qb2ks\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-4j6mw\" (UID: \"2976f69c-c134-429f-98c4-f7d54d9245b1\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-4j6mw" Nov 25 12:12:33 crc kubenswrapper[4706]: I1125 12:12:33.839450 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2976f69c-c134-429f-98c4-f7d54d9245b1-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-4j6mw\" (UID: \"2976f69c-c134-429f-98c4-f7d54d9245b1\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-4j6mw" Nov 25 12:12:33 crc kubenswrapper[4706]: I1125 12:12:33.839704 4706 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/2976f69c-c134-429f-98c4-f7d54d9245b1-ssh-key\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-4j6mw\" (UID: \"2976f69c-c134-429f-98c4-f7d54d9245b1\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-4j6mw" Nov 25 12:12:33 crc kubenswrapper[4706]: I1125 12:12:33.941023 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/2976f69c-c134-429f-98c4-f7d54d9245b1-ssh-key\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-4j6mw\" (UID: \"2976f69c-c134-429f-98c4-f7d54d9245b1\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-4j6mw" Nov 25 12:12:33 crc kubenswrapper[4706]: I1125 12:12:33.941135 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qb2ks\" (UniqueName: \"kubernetes.io/projected/2976f69c-c134-429f-98c4-f7d54d9245b1-kube-api-access-qb2ks\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-4j6mw\" (UID: \"2976f69c-c134-429f-98c4-f7d54d9245b1\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-4j6mw" Nov 25 12:12:33 crc kubenswrapper[4706]: I1125 12:12:33.941184 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2976f69c-c134-429f-98c4-f7d54d9245b1-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-4j6mw\" (UID: \"2976f69c-c134-429f-98c4-f7d54d9245b1\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-4j6mw" Nov 25 12:12:33 crc kubenswrapper[4706]: I1125 12:12:33.946266 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2976f69c-c134-429f-98c4-f7d54d9245b1-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-4j6mw\" (UID: \"2976f69c-c134-429f-98c4-f7d54d9245b1\") " 
pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-4j6mw" Nov 25 12:12:33 crc kubenswrapper[4706]: I1125 12:12:33.946910 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/2976f69c-c134-429f-98c4-f7d54d9245b1-ssh-key\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-4j6mw\" (UID: \"2976f69c-c134-429f-98c4-f7d54d9245b1\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-4j6mw" Nov 25 12:12:33 crc kubenswrapper[4706]: I1125 12:12:33.962349 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qb2ks\" (UniqueName: \"kubernetes.io/projected/2976f69c-c134-429f-98c4-f7d54d9245b1-kube-api-access-qb2ks\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-4j6mw\" (UID: \"2976f69c-c134-429f-98c4-f7d54d9245b1\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-4j6mw" Nov 25 12:12:34 crc kubenswrapper[4706]: I1125 12:12:34.001531 4706 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-4j6mw" Nov 25 12:12:34 crc kubenswrapper[4706]: I1125 12:12:34.526422 4706 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-4j6mw"] Nov 25 12:12:34 crc kubenswrapper[4706]: I1125 12:12:34.594476 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-4j6mw" event={"ID":"2976f69c-c134-429f-98c4-f7d54d9245b1","Type":"ContainerStarted","Data":"73dca9f7858db2388ff0258d677b9197a59656bb13fa5f4e40952cd5dfadc896"} Nov 25 12:12:35 crc kubenswrapper[4706]: I1125 12:12:35.604007 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-4j6mw" event={"ID":"2976f69c-c134-429f-98c4-f7d54d9245b1","Type":"ContainerStarted","Data":"58a0ba16935e4372061f23bceda514c68cc2422d392af87204915e0fccac573c"} Nov 25 12:12:35 crc kubenswrapper[4706]: I1125 12:12:35.630998 4706 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-4j6mw" podStartSLOduration=2.188539407 podStartE2EDuration="2.630979801s" podCreationTimestamp="2025-11-25 12:12:33 +0000 UTC" firstStartedPulling="2025-11-25 12:12:34.533062259 +0000 UTC m=+2163.447619640" lastFinishedPulling="2025-11-25 12:12:34.975502653 +0000 UTC m=+2163.890060034" observedRunningTime="2025-11-25 12:12:35.623639683 +0000 UTC m=+2164.538197064" watchObservedRunningTime="2025-11-25 12:12:35.630979801 +0000 UTC m=+2164.545537172" Nov 25 12:12:43 crc kubenswrapper[4706]: I1125 12:12:43.698412 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-4j6mw" event={"ID":"2976f69c-c134-429f-98c4-f7d54d9245b1","Type":"ContainerDied","Data":"58a0ba16935e4372061f23bceda514c68cc2422d392af87204915e0fccac573c"} Nov 25 12:12:43 crc kubenswrapper[4706]: I1125 12:12:43.698482 4706 
generic.go:334] "Generic (PLEG): container finished" podID="2976f69c-c134-429f-98c4-f7d54d9245b1" containerID="58a0ba16935e4372061f23bceda514c68cc2422d392af87204915e0fccac573c" exitCode=0 Nov 25 12:12:44 crc kubenswrapper[4706]: I1125 12:12:44.509756 4706 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-rw597"] Nov 25 12:12:44 crc kubenswrapper[4706]: I1125 12:12:44.515552 4706 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rw597" Nov 25 12:12:44 crc kubenswrapper[4706]: I1125 12:12:44.524504 4706 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-rw597"] Nov 25 12:12:44 crc kubenswrapper[4706]: I1125 12:12:44.642765 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/30be0ca4-4dbb-46c5-8b2a-cbd7f8f2621d-utilities\") pod \"redhat-marketplace-rw597\" (UID: \"30be0ca4-4dbb-46c5-8b2a-cbd7f8f2621d\") " pod="openshift-marketplace/redhat-marketplace-rw597" Nov 25 12:12:44 crc kubenswrapper[4706]: I1125 12:12:44.642986 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5pgvg\" (UniqueName: \"kubernetes.io/projected/30be0ca4-4dbb-46c5-8b2a-cbd7f8f2621d-kube-api-access-5pgvg\") pod \"redhat-marketplace-rw597\" (UID: \"30be0ca4-4dbb-46c5-8b2a-cbd7f8f2621d\") " pod="openshift-marketplace/redhat-marketplace-rw597" Nov 25 12:12:44 crc kubenswrapper[4706]: I1125 12:12:44.643027 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/30be0ca4-4dbb-46c5-8b2a-cbd7f8f2621d-catalog-content\") pod \"redhat-marketplace-rw597\" (UID: \"30be0ca4-4dbb-46c5-8b2a-cbd7f8f2621d\") " pod="openshift-marketplace/redhat-marketplace-rw597" Nov 25 12:12:44 crc 
kubenswrapper[4706]: I1125 12:12:44.745238 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/30be0ca4-4dbb-46c5-8b2a-cbd7f8f2621d-utilities\") pod \"redhat-marketplace-rw597\" (UID: \"30be0ca4-4dbb-46c5-8b2a-cbd7f8f2621d\") " pod="openshift-marketplace/redhat-marketplace-rw597" Nov 25 12:12:44 crc kubenswrapper[4706]: I1125 12:12:44.745449 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5pgvg\" (UniqueName: \"kubernetes.io/projected/30be0ca4-4dbb-46c5-8b2a-cbd7f8f2621d-kube-api-access-5pgvg\") pod \"redhat-marketplace-rw597\" (UID: \"30be0ca4-4dbb-46c5-8b2a-cbd7f8f2621d\") " pod="openshift-marketplace/redhat-marketplace-rw597" Nov 25 12:12:44 crc kubenswrapper[4706]: I1125 12:12:44.745478 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/30be0ca4-4dbb-46c5-8b2a-cbd7f8f2621d-catalog-content\") pod \"redhat-marketplace-rw597\" (UID: \"30be0ca4-4dbb-46c5-8b2a-cbd7f8f2621d\") " pod="openshift-marketplace/redhat-marketplace-rw597" Nov 25 12:12:44 crc kubenswrapper[4706]: I1125 12:12:44.746406 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/30be0ca4-4dbb-46c5-8b2a-cbd7f8f2621d-utilities\") pod \"redhat-marketplace-rw597\" (UID: \"30be0ca4-4dbb-46c5-8b2a-cbd7f8f2621d\") " pod="openshift-marketplace/redhat-marketplace-rw597" Nov 25 12:12:44 crc kubenswrapper[4706]: I1125 12:12:44.746428 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/30be0ca4-4dbb-46c5-8b2a-cbd7f8f2621d-catalog-content\") pod \"redhat-marketplace-rw597\" (UID: \"30be0ca4-4dbb-46c5-8b2a-cbd7f8f2621d\") " pod="openshift-marketplace/redhat-marketplace-rw597" Nov 25 12:12:44 crc kubenswrapper[4706]: I1125 12:12:44.776267 4706 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5pgvg\" (UniqueName: \"kubernetes.io/projected/30be0ca4-4dbb-46c5-8b2a-cbd7f8f2621d-kube-api-access-5pgvg\") pod \"redhat-marketplace-rw597\" (UID: \"30be0ca4-4dbb-46c5-8b2a-cbd7f8f2621d\") " pod="openshift-marketplace/redhat-marketplace-rw597" Nov 25 12:12:44 crc kubenswrapper[4706]: I1125 12:12:44.835204 4706 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rw597" Nov 25 12:12:45 crc kubenswrapper[4706]: I1125 12:12:45.172216 4706 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-4j6mw" Nov 25 12:12:45 crc kubenswrapper[4706]: I1125 12:12:45.262640 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2976f69c-c134-429f-98c4-f7d54d9245b1-inventory\") pod \"2976f69c-c134-429f-98c4-f7d54d9245b1\" (UID: \"2976f69c-c134-429f-98c4-f7d54d9245b1\") " Nov 25 12:12:45 crc kubenswrapper[4706]: I1125 12:12:45.262806 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qb2ks\" (UniqueName: \"kubernetes.io/projected/2976f69c-c134-429f-98c4-f7d54d9245b1-kube-api-access-qb2ks\") pod \"2976f69c-c134-429f-98c4-f7d54d9245b1\" (UID: \"2976f69c-c134-429f-98c4-f7d54d9245b1\") " Nov 25 12:12:45 crc kubenswrapper[4706]: I1125 12:12:45.262988 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/2976f69c-c134-429f-98c4-f7d54d9245b1-ssh-key\") pod \"2976f69c-c134-429f-98c4-f7d54d9245b1\" (UID: \"2976f69c-c134-429f-98c4-f7d54d9245b1\") " Nov 25 12:12:45 crc kubenswrapper[4706]: I1125 12:12:45.269817 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/2976f69c-c134-429f-98c4-f7d54d9245b1-kube-api-access-qb2ks" (OuterVolumeSpecName: "kube-api-access-qb2ks") pod "2976f69c-c134-429f-98c4-f7d54d9245b1" (UID: "2976f69c-c134-429f-98c4-f7d54d9245b1"). InnerVolumeSpecName "kube-api-access-qb2ks". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 12:12:45 crc kubenswrapper[4706]: I1125 12:12:45.293149 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2976f69c-c134-429f-98c4-f7d54d9245b1-inventory" (OuterVolumeSpecName: "inventory") pod "2976f69c-c134-429f-98c4-f7d54d9245b1" (UID: "2976f69c-c134-429f-98c4-f7d54d9245b1"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 12:12:45 crc kubenswrapper[4706]: I1125 12:12:45.296048 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2976f69c-c134-429f-98c4-f7d54d9245b1-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "2976f69c-c134-429f-98c4-f7d54d9245b1" (UID: "2976f69c-c134-429f-98c4-f7d54d9245b1"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 12:12:45 crc kubenswrapper[4706]: I1125 12:12:45.365043 4706 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/2976f69c-c134-429f-98c4-f7d54d9245b1-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 25 12:12:45 crc kubenswrapper[4706]: I1125 12:12:45.365074 4706 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2976f69c-c134-429f-98c4-f7d54d9245b1-inventory\") on node \"crc\" DevicePath \"\"" Nov 25 12:12:45 crc kubenswrapper[4706]: I1125 12:12:45.365084 4706 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qb2ks\" (UniqueName: \"kubernetes.io/projected/2976f69c-c134-429f-98c4-f7d54d9245b1-kube-api-access-qb2ks\") on node \"crc\" DevicePath \"\"" Nov 25 12:12:45 crc kubenswrapper[4706]: I1125 12:12:45.381565 4706 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-rw597"] Nov 25 12:12:45 crc kubenswrapper[4706]: I1125 12:12:45.717750 4706 generic.go:334] "Generic (PLEG): container finished" podID="30be0ca4-4dbb-46c5-8b2a-cbd7f8f2621d" containerID="0ec50854186b9d6e5f7de836272778a0cae63b1494ae4d2391ee3275066c35e0" exitCode=0 Nov 25 12:12:45 crc kubenswrapper[4706]: I1125 12:12:45.718808 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rw597" event={"ID":"30be0ca4-4dbb-46c5-8b2a-cbd7f8f2621d","Type":"ContainerDied","Data":"0ec50854186b9d6e5f7de836272778a0cae63b1494ae4d2391ee3275066c35e0"} Nov 25 12:12:45 crc kubenswrapper[4706]: I1125 12:12:45.718857 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rw597" event={"ID":"30be0ca4-4dbb-46c5-8b2a-cbd7f8f2621d","Type":"ContainerStarted","Data":"52717bcf8f5bb2fdf4b302fb3f4cc1153b8074cb159f669fc9a75c631d517a3a"} Nov 25 12:12:45 crc kubenswrapper[4706]: I1125 12:12:45.721567 4706 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-4j6mw" event={"ID":"2976f69c-c134-429f-98c4-f7d54d9245b1","Type":"ContainerDied","Data":"73dca9f7858db2388ff0258d677b9197a59656bb13fa5f4e40952cd5dfadc896"} Nov 25 12:12:45 crc kubenswrapper[4706]: I1125 12:12:45.721587 4706 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="73dca9f7858db2388ff0258d677b9197a59656bb13fa5f4e40952cd5dfadc896" Nov 25 12:12:45 crc kubenswrapper[4706]: I1125 12:12:45.721627 4706 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-4j6mw" Nov 25 12:12:45 crc kubenswrapper[4706]: I1125 12:12:45.785342 4706 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-29gdm"] Nov 25 12:12:45 crc kubenswrapper[4706]: E1125 12:12:45.785813 4706 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2976f69c-c134-429f-98c4-f7d54d9245b1" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Nov 25 12:12:45 crc kubenswrapper[4706]: I1125 12:12:45.785827 4706 state_mem.go:107] "Deleted CPUSet assignment" podUID="2976f69c-c134-429f-98c4-f7d54d9245b1" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Nov 25 12:12:45 crc kubenswrapper[4706]: I1125 12:12:45.786018 4706 memory_manager.go:354] "RemoveStaleState removing state" podUID="2976f69c-c134-429f-98c4-f7d54d9245b1" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Nov 25 12:12:45 crc kubenswrapper[4706]: I1125 12:12:45.786781 4706 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-29gdm" Nov 25 12:12:45 crc kubenswrapper[4706]: I1125 12:12:45.791557 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 25 12:12:45 crc kubenswrapper[4706]: I1125 12:12:45.792051 4706 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 25 12:12:45 crc kubenswrapper[4706]: I1125 12:12:45.792162 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-r8qqp" Nov 25 12:12:45 crc kubenswrapper[4706]: I1125 12:12:45.792609 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 25 12:12:45 crc kubenswrapper[4706]: I1125 12:12:45.797743 4706 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-29gdm"] Nov 25 12:12:45 crc kubenswrapper[4706]: I1125 12:12:45.873557 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9s645\" (UniqueName: \"kubernetes.io/projected/9357f592-809a-450b-b052-fbb438c6d98f-kube-api-access-9s645\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-29gdm\" (UID: \"9357f592-809a-450b-b052-fbb438c6d98f\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-29gdm" Nov 25 12:12:45 crc kubenswrapper[4706]: I1125 12:12:45.873937 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/9357f592-809a-450b-b052-fbb438c6d98f-ssh-key\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-29gdm\" (UID: \"9357f592-809a-450b-b052-fbb438c6d98f\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-29gdm" Nov 25 12:12:45 crc kubenswrapper[4706]: I1125 12:12:45.874102 4706 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9357f592-809a-450b-b052-fbb438c6d98f-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-29gdm\" (UID: \"9357f592-809a-450b-b052-fbb438c6d98f\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-29gdm" Nov 25 12:12:45 crc kubenswrapper[4706]: I1125 12:12:45.976667 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9357f592-809a-450b-b052-fbb438c6d98f-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-29gdm\" (UID: \"9357f592-809a-450b-b052-fbb438c6d98f\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-29gdm" Nov 25 12:12:45 crc kubenswrapper[4706]: I1125 12:12:45.976836 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9s645\" (UniqueName: \"kubernetes.io/projected/9357f592-809a-450b-b052-fbb438c6d98f-kube-api-access-9s645\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-29gdm\" (UID: \"9357f592-809a-450b-b052-fbb438c6d98f\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-29gdm" Nov 25 12:12:45 crc kubenswrapper[4706]: I1125 12:12:45.976938 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/9357f592-809a-450b-b052-fbb438c6d98f-ssh-key\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-29gdm\" (UID: \"9357f592-809a-450b-b052-fbb438c6d98f\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-29gdm" Nov 25 12:12:45 crc kubenswrapper[4706]: I1125 12:12:45.982279 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9357f592-809a-450b-b052-fbb438c6d98f-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-29gdm\" (UID: \"9357f592-809a-450b-b052-fbb438c6d98f\") " 
pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-29gdm" Nov 25 12:12:45 crc kubenswrapper[4706]: I1125 12:12:45.983092 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/9357f592-809a-450b-b052-fbb438c6d98f-ssh-key\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-29gdm\" (UID: \"9357f592-809a-450b-b052-fbb438c6d98f\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-29gdm" Nov 25 12:12:46 crc kubenswrapper[4706]: I1125 12:12:46.000314 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9s645\" (UniqueName: \"kubernetes.io/projected/9357f592-809a-450b-b052-fbb438c6d98f-kube-api-access-9s645\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-29gdm\" (UID: \"9357f592-809a-450b-b052-fbb438c6d98f\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-29gdm" Nov 25 12:12:46 crc kubenswrapper[4706]: I1125 12:12:46.140618 4706 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-29gdm" Nov 25 12:12:46 crc kubenswrapper[4706]: I1125 12:12:46.632239 4706 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-29gdm"] Nov 25 12:12:46 crc kubenswrapper[4706]: W1125 12:12:46.696695 4706 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9357f592_809a_450b_b052_fbb438c6d98f.slice/crio-1de8f4c063eed1cde342094b77211c607f89c2adf49ed8417463bde07f766b73 WatchSource:0}: Error finding container 1de8f4c063eed1cde342094b77211c607f89c2adf49ed8417463bde07f766b73: Status 404 returned error can't find the container with id 1de8f4c063eed1cde342094b77211c607f89c2adf49ed8417463bde07f766b73 Nov 25 12:12:46 crc kubenswrapper[4706]: I1125 12:12:46.731390 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-29gdm" event={"ID":"9357f592-809a-450b-b052-fbb438c6d98f","Type":"ContainerStarted","Data":"1de8f4c063eed1cde342094b77211c607f89c2adf49ed8417463bde07f766b73"} Nov 25 12:12:46 crc kubenswrapper[4706]: I1125 12:12:46.734016 4706 generic.go:334] "Generic (PLEG): container finished" podID="30be0ca4-4dbb-46c5-8b2a-cbd7f8f2621d" containerID="370e5915d6a9e54784e66198cb088192d7ffff846835d9d7133c91431eb9c17c" exitCode=0 Nov 25 12:12:46 crc kubenswrapper[4706]: I1125 12:12:46.734101 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rw597" event={"ID":"30be0ca4-4dbb-46c5-8b2a-cbd7f8f2621d","Type":"ContainerDied","Data":"370e5915d6a9e54784e66198cb088192d7ffff846835d9d7133c91431eb9c17c"} Nov 25 12:12:47 crc kubenswrapper[4706]: I1125 12:12:47.745888 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-29gdm" 
event={"ID":"9357f592-809a-450b-b052-fbb438c6d98f","Type":"ContainerStarted","Data":"c20afc3b32ec8b6a9deb247f1fa0818c20b6f267c17d647ecc52348b13110ce3"} Nov 25 12:12:47 crc kubenswrapper[4706]: I1125 12:12:47.749339 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rw597" event={"ID":"30be0ca4-4dbb-46c5-8b2a-cbd7f8f2621d","Type":"ContainerStarted","Data":"fb56fa38b605b3b11321209caedb2ab6b17ed91c8c03603c8f485455b54168bb"} Nov 25 12:12:47 crc kubenswrapper[4706]: I1125 12:12:47.765943 4706 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-29gdm" podStartSLOduration=2.232079498 podStartE2EDuration="2.765922024s" podCreationTimestamp="2025-11-25 12:12:45 +0000 UTC" firstStartedPulling="2025-11-25 12:12:46.698272383 +0000 UTC m=+2175.612829764" lastFinishedPulling="2025-11-25 12:12:47.232114909 +0000 UTC m=+2176.146672290" observedRunningTime="2025-11-25 12:12:47.758894734 +0000 UTC m=+2176.673452125" watchObservedRunningTime="2025-11-25 12:12:47.765922024 +0000 UTC m=+2176.680479405" Nov 25 12:12:47 crc kubenswrapper[4706]: I1125 12:12:47.789894 4706 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-rw597" podStartSLOduration=2.386663567 podStartE2EDuration="3.789875685s" podCreationTimestamp="2025-11-25 12:12:44 +0000 UTC" firstStartedPulling="2025-11-25 12:12:45.719533031 +0000 UTC m=+2174.634090412" lastFinishedPulling="2025-11-25 12:12:47.122745149 +0000 UTC m=+2176.037302530" observedRunningTime="2025-11-25 12:12:47.783720588 +0000 UTC m=+2176.698277969" watchObservedRunningTime="2025-11-25 12:12:47.789875685 +0000 UTC m=+2176.704433066" Nov 25 12:12:54 crc kubenswrapper[4706]: I1125 12:12:54.835624 4706 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-rw597" Nov 25 12:12:54 crc kubenswrapper[4706]: I1125 
12:12:54.836222 4706 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-rw597" Nov 25 12:12:54 crc kubenswrapper[4706]: I1125 12:12:54.881395 4706 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-rw597" Nov 25 12:12:55 crc kubenswrapper[4706]: I1125 12:12:55.877652 4706 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-rw597" Nov 25 12:12:55 crc kubenswrapper[4706]: I1125 12:12:55.941596 4706 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-rw597"] Nov 25 12:12:57 crc kubenswrapper[4706]: I1125 12:12:57.546119 4706 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-dv8c7"] Nov 25 12:12:57 crc kubenswrapper[4706]: I1125 12:12:57.549364 4706 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-dv8c7" Nov 25 12:12:57 crc kubenswrapper[4706]: I1125 12:12:57.565794 4706 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-dv8c7"] Nov 25 12:12:57 crc kubenswrapper[4706]: I1125 12:12:57.718724 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fa689175-4255-45b5-8720-3b774731c07c-utilities\") pod \"certified-operators-dv8c7\" (UID: \"fa689175-4255-45b5-8720-3b774731c07c\") " pod="openshift-marketplace/certified-operators-dv8c7" Nov 25 12:12:57 crc kubenswrapper[4706]: I1125 12:12:57.718843 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fa689175-4255-45b5-8720-3b774731c07c-catalog-content\") pod \"certified-operators-dv8c7\" (UID: \"fa689175-4255-45b5-8720-3b774731c07c\") " 
pod="openshift-marketplace/certified-operators-dv8c7" Nov 25 12:12:57 crc kubenswrapper[4706]: I1125 12:12:57.718928 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qljrm\" (UniqueName: \"kubernetes.io/projected/fa689175-4255-45b5-8720-3b774731c07c-kube-api-access-qljrm\") pod \"certified-operators-dv8c7\" (UID: \"fa689175-4255-45b5-8720-3b774731c07c\") " pod="openshift-marketplace/certified-operators-dv8c7" Nov 25 12:12:57 crc kubenswrapper[4706]: I1125 12:12:57.820506 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fa689175-4255-45b5-8720-3b774731c07c-utilities\") pod \"certified-operators-dv8c7\" (UID: \"fa689175-4255-45b5-8720-3b774731c07c\") " pod="openshift-marketplace/certified-operators-dv8c7" Nov 25 12:12:57 crc kubenswrapper[4706]: I1125 12:12:57.820653 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fa689175-4255-45b5-8720-3b774731c07c-catalog-content\") pod \"certified-operators-dv8c7\" (UID: \"fa689175-4255-45b5-8720-3b774731c07c\") " pod="openshift-marketplace/certified-operators-dv8c7" Nov 25 12:12:57 crc kubenswrapper[4706]: I1125 12:12:57.820745 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qljrm\" (UniqueName: \"kubernetes.io/projected/fa689175-4255-45b5-8720-3b774731c07c-kube-api-access-qljrm\") pod \"certified-operators-dv8c7\" (UID: \"fa689175-4255-45b5-8720-3b774731c07c\") " pod="openshift-marketplace/certified-operators-dv8c7" Nov 25 12:12:57 crc kubenswrapper[4706]: I1125 12:12:57.821179 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fa689175-4255-45b5-8720-3b774731c07c-utilities\") pod \"certified-operators-dv8c7\" (UID: \"fa689175-4255-45b5-8720-3b774731c07c\") " 
pod="openshift-marketplace/certified-operators-dv8c7" Nov 25 12:12:57 crc kubenswrapper[4706]: I1125 12:12:57.821410 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fa689175-4255-45b5-8720-3b774731c07c-catalog-content\") pod \"certified-operators-dv8c7\" (UID: \"fa689175-4255-45b5-8720-3b774731c07c\") " pod="openshift-marketplace/certified-operators-dv8c7" Nov 25 12:12:57 crc kubenswrapper[4706]: I1125 12:12:57.845168 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qljrm\" (UniqueName: \"kubernetes.io/projected/fa689175-4255-45b5-8720-3b774731c07c-kube-api-access-qljrm\") pod \"certified-operators-dv8c7\" (UID: \"fa689175-4255-45b5-8720-3b774731c07c\") " pod="openshift-marketplace/certified-operators-dv8c7" Nov 25 12:12:57 crc kubenswrapper[4706]: I1125 12:12:57.852272 4706 generic.go:334] "Generic (PLEG): container finished" podID="9357f592-809a-450b-b052-fbb438c6d98f" containerID="c20afc3b32ec8b6a9deb247f1fa0818c20b6f267c17d647ecc52348b13110ce3" exitCode=0 Nov 25 12:12:57 crc kubenswrapper[4706]: I1125 12:12:57.852330 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-29gdm" event={"ID":"9357f592-809a-450b-b052-fbb438c6d98f","Type":"ContainerDied","Data":"c20afc3b32ec8b6a9deb247f1fa0818c20b6f267c17d647ecc52348b13110ce3"} Nov 25 12:12:57 crc kubenswrapper[4706]: I1125 12:12:57.852582 4706 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-rw597" podUID="30be0ca4-4dbb-46c5-8b2a-cbd7f8f2621d" containerName="registry-server" containerID="cri-o://fb56fa38b605b3b11321209caedb2ab6b17ed91c8c03603c8f485455b54168bb" gracePeriod=2 Nov 25 12:12:57 crc kubenswrapper[4706]: I1125 12:12:57.907602 4706 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-dv8c7" Nov 25 12:12:58 crc kubenswrapper[4706]: I1125 12:12:58.695984 4706 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-dv8c7"] Nov 25 12:12:58 crc kubenswrapper[4706]: I1125 12:12:58.723794 4706 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rw597" Nov 25 12:12:58 crc kubenswrapper[4706]: I1125 12:12:58.866089 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dv8c7" event={"ID":"fa689175-4255-45b5-8720-3b774731c07c","Type":"ContainerStarted","Data":"06f3057afbaacca9f9d272a5ad1854e801a6609e23b764d9353207964e5b6c50"} Nov 25 12:12:58 crc kubenswrapper[4706]: I1125 12:12:58.872675 4706 generic.go:334] "Generic (PLEG): container finished" podID="30be0ca4-4dbb-46c5-8b2a-cbd7f8f2621d" containerID="fb56fa38b605b3b11321209caedb2ab6b17ed91c8c03603c8f485455b54168bb" exitCode=0 Nov 25 12:12:58 crc kubenswrapper[4706]: I1125 12:12:58.872753 4706 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rw597" Nov 25 12:12:58 crc kubenswrapper[4706]: I1125 12:12:58.872800 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rw597" event={"ID":"30be0ca4-4dbb-46c5-8b2a-cbd7f8f2621d","Type":"ContainerDied","Data":"fb56fa38b605b3b11321209caedb2ab6b17ed91c8c03603c8f485455b54168bb"} Nov 25 12:12:58 crc kubenswrapper[4706]: I1125 12:12:58.872862 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rw597" event={"ID":"30be0ca4-4dbb-46c5-8b2a-cbd7f8f2621d","Type":"ContainerDied","Data":"52717bcf8f5bb2fdf4b302fb3f4cc1153b8074cb159f669fc9a75c631d517a3a"} Nov 25 12:12:58 crc kubenswrapper[4706]: I1125 12:12:58.872884 4706 scope.go:117] "RemoveContainer" containerID="fb56fa38b605b3b11321209caedb2ab6b17ed91c8c03603c8f485455b54168bb" Nov 25 12:12:58 crc kubenswrapper[4706]: I1125 12:12:58.896581 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/30be0ca4-4dbb-46c5-8b2a-cbd7f8f2621d-catalog-content\") pod \"30be0ca4-4dbb-46c5-8b2a-cbd7f8f2621d\" (UID: \"30be0ca4-4dbb-46c5-8b2a-cbd7f8f2621d\") " Nov 25 12:12:58 crc kubenswrapper[4706]: I1125 12:12:58.896678 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5pgvg\" (UniqueName: \"kubernetes.io/projected/30be0ca4-4dbb-46c5-8b2a-cbd7f8f2621d-kube-api-access-5pgvg\") pod \"30be0ca4-4dbb-46c5-8b2a-cbd7f8f2621d\" (UID: \"30be0ca4-4dbb-46c5-8b2a-cbd7f8f2621d\") " Nov 25 12:12:58 crc kubenswrapper[4706]: I1125 12:12:58.896731 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/30be0ca4-4dbb-46c5-8b2a-cbd7f8f2621d-utilities\") pod \"30be0ca4-4dbb-46c5-8b2a-cbd7f8f2621d\" (UID: \"30be0ca4-4dbb-46c5-8b2a-cbd7f8f2621d\") " Nov 25 12:12:58 crc 
kubenswrapper[4706]: I1125 12:12:58.897928 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/30be0ca4-4dbb-46c5-8b2a-cbd7f8f2621d-utilities" (OuterVolumeSpecName: "utilities") pod "30be0ca4-4dbb-46c5-8b2a-cbd7f8f2621d" (UID: "30be0ca4-4dbb-46c5-8b2a-cbd7f8f2621d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 12:12:58 crc kubenswrapper[4706]: I1125 12:12:58.910039 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/30be0ca4-4dbb-46c5-8b2a-cbd7f8f2621d-kube-api-access-5pgvg" (OuterVolumeSpecName: "kube-api-access-5pgvg") pod "30be0ca4-4dbb-46c5-8b2a-cbd7f8f2621d" (UID: "30be0ca4-4dbb-46c5-8b2a-cbd7f8f2621d"). InnerVolumeSpecName "kube-api-access-5pgvg". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 12:12:58 crc kubenswrapper[4706]: I1125 12:12:58.916939 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/30be0ca4-4dbb-46c5-8b2a-cbd7f8f2621d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "30be0ca4-4dbb-46c5-8b2a-cbd7f8f2621d" (UID: "30be0ca4-4dbb-46c5-8b2a-cbd7f8f2621d"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 12:12:58 crc kubenswrapper[4706]: I1125 12:12:58.923731 4706 scope.go:117] "RemoveContainer" containerID="370e5915d6a9e54784e66198cb088192d7ffff846835d9d7133c91431eb9c17c" Nov 25 12:12:58 crc kubenswrapper[4706]: I1125 12:12:58.973812 4706 scope.go:117] "RemoveContainer" containerID="0ec50854186b9d6e5f7de836272778a0cae63b1494ae4d2391ee3275066c35e0" Nov 25 12:12:58 crc kubenswrapper[4706]: I1125 12:12:58.999156 4706 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/30be0ca4-4dbb-46c5-8b2a-cbd7f8f2621d-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 25 12:12:58 crc kubenswrapper[4706]: I1125 12:12:58.999662 4706 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5pgvg\" (UniqueName: \"kubernetes.io/projected/30be0ca4-4dbb-46c5-8b2a-cbd7f8f2621d-kube-api-access-5pgvg\") on node \"crc\" DevicePath \"\"" Nov 25 12:12:58 crc kubenswrapper[4706]: I1125 12:12:58.999696 4706 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/30be0ca4-4dbb-46c5-8b2a-cbd7f8f2621d-utilities\") on node \"crc\" DevicePath \"\"" Nov 25 12:12:59 crc kubenswrapper[4706]: I1125 12:12:59.096619 4706 scope.go:117] "RemoveContainer" containerID="fb56fa38b605b3b11321209caedb2ab6b17ed91c8c03603c8f485455b54168bb" Nov 25 12:12:59 crc kubenswrapper[4706]: E1125 12:12:59.097136 4706 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fb56fa38b605b3b11321209caedb2ab6b17ed91c8c03603c8f485455b54168bb\": container with ID starting with fb56fa38b605b3b11321209caedb2ab6b17ed91c8c03603c8f485455b54168bb not found: ID does not exist" containerID="fb56fa38b605b3b11321209caedb2ab6b17ed91c8c03603c8f485455b54168bb" Nov 25 12:12:59 crc kubenswrapper[4706]: I1125 12:12:59.097262 4706 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"fb56fa38b605b3b11321209caedb2ab6b17ed91c8c03603c8f485455b54168bb"} err="failed to get container status \"fb56fa38b605b3b11321209caedb2ab6b17ed91c8c03603c8f485455b54168bb\": rpc error: code = NotFound desc = could not find container \"fb56fa38b605b3b11321209caedb2ab6b17ed91c8c03603c8f485455b54168bb\": container with ID starting with fb56fa38b605b3b11321209caedb2ab6b17ed91c8c03603c8f485455b54168bb not found: ID does not exist" Nov 25 12:12:59 crc kubenswrapper[4706]: I1125 12:12:59.097284 4706 scope.go:117] "RemoveContainer" containerID="370e5915d6a9e54784e66198cb088192d7ffff846835d9d7133c91431eb9c17c" Nov 25 12:12:59 crc kubenswrapper[4706]: E1125 12:12:59.097630 4706 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"370e5915d6a9e54784e66198cb088192d7ffff846835d9d7133c91431eb9c17c\": container with ID starting with 370e5915d6a9e54784e66198cb088192d7ffff846835d9d7133c91431eb9c17c not found: ID does not exist" containerID="370e5915d6a9e54784e66198cb088192d7ffff846835d9d7133c91431eb9c17c" Nov 25 12:12:59 crc kubenswrapper[4706]: I1125 12:12:59.097650 4706 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"370e5915d6a9e54784e66198cb088192d7ffff846835d9d7133c91431eb9c17c"} err="failed to get container status \"370e5915d6a9e54784e66198cb088192d7ffff846835d9d7133c91431eb9c17c\": rpc error: code = NotFound desc = could not find container \"370e5915d6a9e54784e66198cb088192d7ffff846835d9d7133c91431eb9c17c\": container with ID starting with 370e5915d6a9e54784e66198cb088192d7ffff846835d9d7133c91431eb9c17c not found: ID does not exist" Nov 25 12:12:59 crc kubenswrapper[4706]: I1125 12:12:59.097666 4706 scope.go:117] "RemoveContainer" containerID="0ec50854186b9d6e5f7de836272778a0cae63b1494ae4d2391ee3275066c35e0" Nov 25 12:12:59 crc kubenswrapper[4706]: E1125 12:12:59.097864 4706 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"0ec50854186b9d6e5f7de836272778a0cae63b1494ae4d2391ee3275066c35e0\": container with ID starting with 0ec50854186b9d6e5f7de836272778a0cae63b1494ae4d2391ee3275066c35e0 not found: ID does not exist" containerID="0ec50854186b9d6e5f7de836272778a0cae63b1494ae4d2391ee3275066c35e0" Nov 25 12:12:59 crc kubenswrapper[4706]: I1125 12:12:59.097882 4706 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0ec50854186b9d6e5f7de836272778a0cae63b1494ae4d2391ee3275066c35e0"} err="failed to get container status \"0ec50854186b9d6e5f7de836272778a0cae63b1494ae4d2391ee3275066c35e0\": rpc error: code = NotFound desc = could not find container \"0ec50854186b9d6e5f7de836272778a0cae63b1494ae4d2391ee3275066c35e0\": container with ID starting with 0ec50854186b9d6e5f7de836272778a0cae63b1494ae4d2391ee3275066c35e0 not found: ID does not exist" Nov 25 12:12:59 crc kubenswrapper[4706]: I1125 12:12:59.217638 4706 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-rw597"] Nov 25 12:12:59 crc kubenswrapper[4706]: I1125 12:12:59.233399 4706 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-rw597"] Nov 25 12:12:59 crc kubenswrapper[4706]: I1125 12:12:59.272509 4706 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-29gdm" Nov 25 12:12:59 crc kubenswrapper[4706]: I1125 12:12:59.413085 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9357f592-809a-450b-b052-fbb438c6d98f-inventory\") pod \"9357f592-809a-450b-b052-fbb438c6d98f\" (UID: \"9357f592-809a-450b-b052-fbb438c6d98f\") " Nov 25 12:12:59 crc kubenswrapper[4706]: I1125 12:12:59.413246 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9s645\" (UniqueName: \"kubernetes.io/projected/9357f592-809a-450b-b052-fbb438c6d98f-kube-api-access-9s645\") pod \"9357f592-809a-450b-b052-fbb438c6d98f\" (UID: \"9357f592-809a-450b-b052-fbb438c6d98f\") " Nov 25 12:12:59 crc kubenswrapper[4706]: I1125 12:12:59.413361 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/9357f592-809a-450b-b052-fbb438c6d98f-ssh-key\") pod \"9357f592-809a-450b-b052-fbb438c6d98f\" (UID: \"9357f592-809a-450b-b052-fbb438c6d98f\") " Nov 25 12:12:59 crc kubenswrapper[4706]: I1125 12:12:59.417770 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9357f592-809a-450b-b052-fbb438c6d98f-kube-api-access-9s645" (OuterVolumeSpecName: "kube-api-access-9s645") pod "9357f592-809a-450b-b052-fbb438c6d98f" (UID: "9357f592-809a-450b-b052-fbb438c6d98f"). InnerVolumeSpecName "kube-api-access-9s645". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 12:12:59 crc kubenswrapper[4706]: I1125 12:12:59.440747 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9357f592-809a-450b-b052-fbb438c6d98f-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "9357f592-809a-450b-b052-fbb438c6d98f" (UID: "9357f592-809a-450b-b052-fbb438c6d98f"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 12:12:59 crc kubenswrapper[4706]: I1125 12:12:59.442183 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9357f592-809a-450b-b052-fbb438c6d98f-inventory" (OuterVolumeSpecName: "inventory") pod "9357f592-809a-450b-b052-fbb438c6d98f" (UID: "9357f592-809a-450b-b052-fbb438c6d98f"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 12:12:59 crc kubenswrapper[4706]: I1125 12:12:59.515819 4706 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9s645\" (UniqueName: \"kubernetes.io/projected/9357f592-809a-450b-b052-fbb438c6d98f-kube-api-access-9s645\") on node \"crc\" DevicePath \"\"" Nov 25 12:12:59 crc kubenswrapper[4706]: I1125 12:12:59.516130 4706 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/9357f592-809a-450b-b052-fbb438c6d98f-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 25 12:12:59 crc kubenswrapper[4706]: I1125 12:12:59.516145 4706 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9357f592-809a-450b-b052-fbb438c6d98f-inventory\") on node \"crc\" DevicePath \"\"" Nov 25 12:12:59 crc kubenswrapper[4706]: I1125 12:12:59.887701 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-29gdm" event={"ID":"9357f592-809a-450b-b052-fbb438c6d98f","Type":"ContainerDied","Data":"1de8f4c063eed1cde342094b77211c607f89c2adf49ed8417463bde07f766b73"} Nov 25 12:12:59 crc kubenswrapper[4706]: I1125 12:12:59.887753 4706 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1de8f4c063eed1cde342094b77211c607f89c2adf49ed8417463bde07f766b73" Nov 25 12:12:59 crc kubenswrapper[4706]: I1125 12:12:59.887776 4706 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-29gdm" Nov 25 12:12:59 crc kubenswrapper[4706]: I1125 12:12:59.889921 4706 generic.go:334] "Generic (PLEG): container finished" podID="fa689175-4255-45b5-8720-3b774731c07c" containerID="85599b8a470843a1e10443ee4bdd17c338d69c8383b52a9e65ea0700654eb5e9" exitCode=0 Nov 25 12:12:59 crc kubenswrapper[4706]: I1125 12:12:59.889957 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dv8c7" event={"ID":"fa689175-4255-45b5-8720-3b774731c07c","Type":"ContainerDied","Data":"85599b8a470843a1e10443ee4bdd17c338d69c8383b52a9e65ea0700654eb5e9"} Nov 25 12:12:59 crc kubenswrapper[4706]: I1125 12:12:59.934164 4706 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="30be0ca4-4dbb-46c5-8b2a-cbd7f8f2621d" path="/var/lib/kubelet/pods/30be0ca4-4dbb-46c5-8b2a-cbd7f8f2621d/volumes" Nov 25 12:12:59 crc kubenswrapper[4706]: I1125 12:12:59.969323 4706 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-595gj"] Nov 25 12:12:59 crc kubenswrapper[4706]: E1125 12:12:59.969720 4706 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="30be0ca4-4dbb-46c5-8b2a-cbd7f8f2621d" containerName="extract-utilities" Nov 25 12:12:59 crc kubenswrapper[4706]: I1125 12:12:59.969737 4706 state_mem.go:107] "Deleted CPUSet assignment" podUID="30be0ca4-4dbb-46c5-8b2a-cbd7f8f2621d" containerName="extract-utilities" Nov 25 12:12:59 crc kubenswrapper[4706]: E1125 12:12:59.969754 4706 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9357f592-809a-450b-b052-fbb438c6d98f" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Nov 25 12:12:59 crc kubenswrapper[4706]: I1125 12:12:59.969764 4706 state_mem.go:107] "Deleted CPUSet assignment" podUID="9357f592-809a-450b-b052-fbb438c6d98f" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Nov 25 12:12:59 crc 
kubenswrapper[4706]: E1125 12:12:59.969782 4706 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="30be0ca4-4dbb-46c5-8b2a-cbd7f8f2621d" containerName="registry-server" Nov 25 12:12:59 crc kubenswrapper[4706]: I1125 12:12:59.969788 4706 state_mem.go:107] "Deleted CPUSet assignment" podUID="30be0ca4-4dbb-46c5-8b2a-cbd7f8f2621d" containerName="registry-server" Nov 25 12:12:59 crc kubenswrapper[4706]: E1125 12:12:59.969801 4706 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="30be0ca4-4dbb-46c5-8b2a-cbd7f8f2621d" containerName="extract-content" Nov 25 12:12:59 crc kubenswrapper[4706]: I1125 12:12:59.969807 4706 state_mem.go:107] "Deleted CPUSet assignment" podUID="30be0ca4-4dbb-46c5-8b2a-cbd7f8f2621d" containerName="extract-content" Nov 25 12:12:59 crc kubenswrapper[4706]: I1125 12:12:59.969991 4706 memory_manager.go:354] "RemoveStaleState removing state" podUID="9357f592-809a-450b-b052-fbb438c6d98f" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Nov 25 12:12:59 crc kubenswrapper[4706]: I1125 12:12:59.970014 4706 memory_manager.go:354] "RemoveStaleState removing state" podUID="30be0ca4-4dbb-46c5-8b2a-cbd7f8f2621d" containerName="registry-server" Nov 25 12:12:59 crc kubenswrapper[4706]: I1125 12:12:59.970742 4706 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-595gj" Nov 25 12:12:59 crc kubenswrapper[4706]: I1125 12:12:59.973134 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-ovn-default-certs-0" Nov 25 12:12:59 crc kubenswrapper[4706]: I1125 12:12:59.974153 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-libvirt-default-certs-0" Nov 25 12:12:59 crc kubenswrapper[4706]: I1125 12:12:59.974329 4706 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 25 12:12:59 crc kubenswrapper[4706]: I1125 12:12:59.974654 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 25 12:12:59 crc kubenswrapper[4706]: I1125 12:12:59.974894 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 25 12:12:59 crc kubenswrapper[4706]: I1125 12:12:59.975066 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-r8qqp" Nov 25 12:12:59 crc kubenswrapper[4706]: I1125 12:12:59.975283 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-telemetry-default-certs-0" Nov 25 12:12:59 crc kubenswrapper[4706]: I1125 12:12:59.979977 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-neutron-metadata-default-certs-0" Nov 25 12:12:59 crc kubenswrapper[4706]: I1125 12:12:59.991947 4706 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-595gj"] Nov 25 12:13:00 crc kubenswrapper[4706]: I1125 12:13:00.125839 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/baaa73b2-135d-4ce5-8e1a-4c7ffde4e639-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-595gj\" (UID: \"baaa73b2-135d-4ce5-8e1a-4c7ffde4e639\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-595gj" Nov 25 12:13:00 crc kubenswrapper[4706]: I1125 12:13:00.125914 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/baaa73b2-135d-4ce5-8e1a-4c7ffde4e639-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-595gj\" (UID: \"baaa73b2-135d-4ce5-8e1a-4c7ffde4e639\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-595gj" Nov 25 12:13:00 crc kubenswrapper[4706]: I1125 12:13:00.125950 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/baaa73b2-135d-4ce5-8e1a-4c7ffde4e639-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-595gj\" (UID: \"baaa73b2-135d-4ce5-8e1a-4c7ffde4e639\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-595gj" Nov 25 12:13:00 crc kubenswrapper[4706]: I1125 12:13:00.125982 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/baaa73b2-135d-4ce5-8e1a-4c7ffde4e639-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-595gj\" (UID: \"baaa73b2-135d-4ce5-8e1a-4c7ffde4e639\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-595gj" Nov 25 12:13:00 crc kubenswrapper[4706]: I1125 12:13:00.126096 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/baaa73b2-135d-4ce5-8e1a-4c7ffde4e639-ssh-key\") pod 
\"install-certs-edpm-deployment-openstack-edpm-ipam-595gj\" (UID: \"baaa73b2-135d-4ce5-8e1a-4c7ffde4e639\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-595gj" Nov 25 12:13:00 crc kubenswrapper[4706]: I1125 12:13:00.126202 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/baaa73b2-135d-4ce5-8e1a-4c7ffde4e639-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-595gj\" (UID: \"baaa73b2-135d-4ce5-8e1a-4c7ffde4e639\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-595gj" Nov 25 12:13:00 crc kubenswrapper[4706]: I1125 12:13:00.126230 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8kd5w\" (UniqueName: \"kubernetes.io/projected/baaa73b2-135d-4ce5-8e1a-4c7ffde4e639-kube-api-access-8kd5w\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-595gj\" (UID: \"baaa73b2-135d-4ce5-8e1a-4c7ffde4e639\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-595gj" Nov 25 12:13:00 crc kubenswrapper[4706]: I1125 12:13:00.126386 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/baaa73b2-135d-4ce5-8e1a-4c7ffde4e639-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-595gj\" (UID: \"baaa73b2-135d-4ce5-8e1a-4c7ffde4e639\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-595gj" Nov 25 12:13:00 crc kubenswrapper[4706]: I1125 12:13:00.126433 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/baaa73b2-135d-4ce5-8e1a-4c7ffde4e639-openstack-edpm-ipam-ovn-default-certs-0\") pod 
\"install-certs-edpm-deployment-openstack-edpm-ipam-595gj\" (UID: \"baaa73b2-135d-4ce5-8e1a-4c7ffde4e639\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-595gj" Nov 25 12:13:00 crc kubenswrapper[4706]: I1125 12:13:00.126469 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/baaa73b2-135d-4ce5-8e1a-4c7ffde4e639-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-595gj\" (UID: \"baaa73b2-135d-4ce5-8e1a-4c7ffde4e639\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-595gj" Nov 25 12:13:00 crc kubenswrapper[4706]: I1125 12:13:00.126542 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/baaa73b2-135d-4ce5-8e1a-4c7ffde4e639-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-595gj\" (UID: \"baaa73b2-135d-4ce5-8e1a-4c7ffde4e639\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-595gj" Nov 25 12:13:00 crc kubenswrapper[4706]: I1125 12:13:00.126669 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/baaa73b2-135d-4ce5-8e1a-4c7ffde4e639-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-595gj\" (UID: \"baaa73b2-135d-4ce5-8e1a-4c7ffde4e639\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-595gj" Nov 25 12:13:00 crc kubenswrapper[4706]: I1125 12:13:00.126738 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: 
\"kubernetes.io/projected/baaa73b2-135d-4ce5-8e1a-4c7ffde4e639-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-595gj\" (UID: \"baaa73b2-135d-4ce5-8e1a-4c7ffde4e639\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-595gj" Nov 25 12:13:00 crc kubenswrapper[4706]: I1125 12:13:00.126818 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/baaa73b2-135d-4ce5-8e1a-4c7ffde4e639-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-595gj\" (UID: \"baaa73b2-135d-4ce5-8e1a-4c7ffde4e639\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-595gj" Nov 25 12:13:00 crc kubenswrapper[4706]: I1125 12:13:00.229063 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/baaa73b2-135d-4ce5-8e1a-4c7ffde4e639-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-595gj\" (UID: \"baaa73b2-135d-4ce5-8e1a-4c7ffde4e639\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-595gj" Nov 25 12:13:00 crc kubenswrapper[4706]: I1125 12:13:00.229154 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/baaa73b2-135d-4ce5-8e1a-4c7ffde4e639-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-595gj\" (UID: \"baaa73b2-135d-4ce5-8e1a-4c7ffde4e639\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-595gj" Nov 25 12:13:00 crc kubenswrapper[4706]: I1125 12:13:00.229186 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: 
\"kubernetes.io/projected/baaa73b2-135d-4ce5-8e1a-4c7ffde4e639-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-595gj\" (UID: \"baaa73b2-135d-4ce5-8e1a-4c7ffde4e639\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-595gj" Nov 25 12:13:00 crc kubenswrapper[4706]: I1125 12:13:00.229224 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/baaa73b2-135d-4ce5-8e1a-4c7ffde4e639-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-595gj\" (UID: \"baaa73b2-135d-4ce5-8e1a-4c7ffde4e639\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-595gj" Nov 25 12:13:00 crc kubenswrapper[4706]: I1125 12:13:00.229263 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/baaa73b2-135d-4ce5-8e1a-4c7ffde4e639-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-595gj\" (UID: \"baaa73b2-135d-4ce5-8e1a-4c7ffde4e639\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-595gj" Nov 25 12:13:00 crc kubenswrapper[4706]: I1125 12:13:00.229341 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/baaa73b2-135d-4ce5-8e1a-4c7ffde4e639-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-595gj\" (UID: \"baaa73b2-135d-4ce5-8e1a-4c7ffde4e639\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-595gj" Nov 25 12:13:00 crc kubenswrapper[4706]: I1125 12:13:00.229381 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/baaa73b2-135d-4ce5-8e1a-4c7ffde4e639-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-595gj\" 
(UID: \"baaa73b2-135d-4ce5-8e1a-4c7ffde4e639\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-595gj" Nov 25 12:13:00 crc kubenswrapper[4706]: I1125 12:13:00.229412 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/baaa73b2-135d-4ce5-8e1a-4c7ffde4e639-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-595gj\" (UID: \"baaa73b2-135d-4ce5-8e1a-4c7ffde4e639\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-595gj" Nov 25 12:13:00 crc kubenswrapper[4706]: I1125 12:13:00.229440 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/baaa73b2-135d-4ce5-8e1a-4c7ffde4e639-ssh-key\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-595gj\" (UID: \"baaa73b2-135d-4ce5-8e1a-4c7ffde4e639\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-595gj" Nov 25 12:13:00 crc kubenswrapper[4706]: I1125 12:13:00.229478 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/baaa73b2-135d-4ce5-8e1a-4c7ffde4e639-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-595gj\" (UID: \"baaa73b2-135d-4ce5-8e1a-4c7ffde4e639\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-595gj" Nov 25 12:13:00 crc kubenswrapper[4706]: I1125 12:13:00.229503 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8kd5w\" (UniqueName: \"kubernetes.io/projected/baaa73b2-135d-4ce5-8e1a-4c7ffde4e639-kube-api-access-8kd5w\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-595gj\" (UID: \"baaa73b2-135d-4ce5-8e1a-4c7ffde4e639\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-595gj" Nov 25 12:13:00 crc kubenswrapper[4706]: I1125 12:13:00.229568 4706 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/baaa73b2-135d-4ce5-8e1a-4c7ffde4e639-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-595gj\" (UID: \"baaa73b2-135d-4ce5-8e1a-4c7ffde4e639\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-595gj" Nov 25 12:13:00 crc kubenswrapper[4706]: I1125 12:13:00.229599 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/baaa73b2-135d-4ce5-8e1a-4c7ffde4e639-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-595gj\" (UID: \"baaa73b2-135d-4ce5-8e1a-4c7ffde4e639\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-595gj" Nov 25 12:13:00 crc kubenswrapper[4706]: I1125 12:13:00.229628 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/baaa73b2-135d-4ce5-8e1a-4c7ffde4e639-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-595gj\" (UID: \"baaa73b2-135d-4ce5-8e1a-4c7ffde4e639\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-595gj" Nov 25 12:13:00 crc kubenswrapper[4706]: I1125 12:13:00.235100 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/baaa73b2-135d-4ce5-8e1a-4c7ffde4e639-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-595gj\" (UID: \"baaa73b2-135d-4ce5-8e1a-4c7ffde4e639\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-595gj" Nov 25 12:13:00 crc kubenswrapper[4706]: I1125 12:13:00.235211 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"inventory\" (UniqueName: \"kubernetes.io/secret/baaa73b2-135d-4ce5-8e1a-4c7ffde4e639-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-595gj\" (UID: \"baaa73b2-135d-4ce5-8e1a-4c7ffde4e639\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-595gj" Nov 25 12:13:00 crc kubenswrapper[4706]: I1125 12:13:00.235673 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/baaa73b2-135d-4ce5-8e1a-4c7ffde4e639-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-595gj\" (UID: \"baaa73b2-135d-4ce5-8e1a-4c7ffde4e639\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-595gj" Nov 25 12:13:00 crc kubenswrapper[4706]: I1125 12:13:00.236106 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/baaa73b2-135d-4ce5-8e1a-4c7ffde4e639-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-595gj\" (UID: \"baaa73b2-135d-4ce5-8e1a-4c7ffde4e639\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-595gj" Nov 25 12:13:00 crc kubenswrapper[4706]: I1125 12:13:00.236448 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/baaa73b2-135d-4ce5-8e1a-4c7ffde4e639-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-595gj\" (UID: \"baaa73b2-135d-4ce5-8e1a-4c7ffde4e639\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-595gj" Nov 25 12:13:00 crc kubenswrapper[4706]: I1125 12:13:00.237017 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/baaa73b2-135d-4ce5-8e1a-4c7ffde4e639-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-595gj\" (UID: \"baaa73b2-135d-4ce5-8e1a-4c7ffde4e639\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-595gj" Nov 25 12:13:00 crc kubenswrapper[4706]: I1125 12:13:00.237083 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/baaa73b2-135d-4ce5-8e1a-4c7ffde4e639-ssh-key\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-595gj\" (UID: \"baaa73b2-135d-4ce5-8e1a-4c7ffde4e639\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-595gj" Nov 25 12:13:00 crc kubenswrapper[4706]: I1125 12:13:00.237116 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/baaa73b2-135d-4ce5-8e1a-4c7ffde4e639-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-595gj\" (UID: \"baaa73b2-135d-4ce5-8e1a-4c7ffde4e639\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-595gj" Nov 25 12:13:00 crc kubenswrapper[4706]: I1125 12:13:00.237643 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/baaa73b2-135d-4ce5-8e1a-4c7ffde4e639-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-595gj\" (UID: \"baaa73b2-135d-4ce5-8e1a-4c7ffde4e639\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-595gj" Nov 25 12:13:00 crc kubenswrapper[4706]: I1125 12:13:00.239063 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/baaa73b2-135d-4ce5-8e1a-4c7ffde4e639-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-595gj\" (UID: \"baaa73b2-135d-4ce5-8e1a-4c7ffde4e639\") " 
pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-595gj" Nov 25 12:13:00 crc kubenswrapper[4706]: I1125 12:13:00.239115 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/baaa73b2-135d-4ce5-8e1a-4c7ffde4e639-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-595gj\" (UID: \"baaa73b2-135d-4ce5-8e1a-4c7ffde4e639\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-595gj" Nov 25 12:13:00 crc kubenswrapper[4706]: I1125 12:13:00.240836 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/baaa73b2-135d-4ce5-8e1a-4c7ffde4e639-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-595gj\" (UID: \"baaa73b2-135d-4ce5-8e1a-4c7ffde4e639\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-595gj" Nov 25 12:13:00 crc kubenswrapper[4706]: I1125 12:13:00.240856 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/baaa73b2-135d-4ce5-8e1a-4c7ffde4e639-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-595gj\" (UID: \"baaa73b2-135d-4ce5-8e1a-4c7ffde4e639\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-595gj" Nov 25 12:13:00 crc kubenswrapper[4706]: I1125 12:13:00.244678 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8kd5w\" (UniqueName: \"kubernetes.io/projected/baaa73b2-135d-4ce5-8e1a-4c7ffde4e639-kube-api-access-8kd5w\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-595gj\" (UID: \"baaa73b2-135d-4ce5-8e1a-4c7ffde4e639\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-595gj" Nov 25 
12:13:00 crc kubenswrapper[4706]: I1125 12:13:00.290925 4706 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-595gj" Nov 25 12:13:00 crc kubenswrapper[4706]: I1125 12:13:00.794732 4706 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-595gj"] Nov 25 12:13:00 crc kubenswrapper[4706]: W1125 12:13:00.797545 4706 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbaaa73b2_135d_4ce5_8e1a_4c7ffde4e639.slice/crio-0fb2c976e756b3a010eaf35475039fb78d0faa9a4125abed185a523f3fcbfd91 WatchSource:0}: Error finding container 0fb2c976e756b3a010eaf35475039fb78d0faa9a4125abed185a523f3fcbfd91: Status 404 returned error can't find the container with id 0fb2c976e756b3a010eaf35475039fb78d0faa9a4125abed185a523f3fcbfd91 Nov 25 12:13:00 crc kubenswrapper[4706]: I1125 12:13:00.918359 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-595gj" event={"ID":"baaa73b2-135d-4ce5-8e1a-4c7ffde4e639","Type":"ContainerStarted","Data":"0fb2c976e756b3a010eaf35475039fb78d0faa9a4125abed185a523f3fcbfd91"} Nov 25 12:13:01 crc kubenswrapper[4706]: I1125 12:13:01.940069 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dv8c7" event={"ID":"fa689175-4255-45b5-8720-3b774731c07c","Type":"ContainerStarted","Data":"ce18a4dd69fd3a4ff8e4ca03daf61d4102369540086b037ac7daf40a10cab926"} Nov 25 12:13:02 crc kubenswrapper[4706]: I1125 12:13:02.961333 4706 generic.go:334] "Generic (PLEG): container finished" podID="fa689175-4255-45b5-8720-3b774731c07c" containerID="ce18a4dd69fd3a4ff8e4ca03daf61d4102369540086b037ac7daf40a10cab926" exitCode=0 Nov 25 12:13:02 crc kubenswrapper[4706]: I1125 12:13:02.961531 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/certified-operators-dv8c7" event={"ID":"fa689175-4255-45b5-8720-3b774731c07c","Type":"ContainerDied","Data":"ce18a4dd69fd3a4ff8e4ca03daf61d4102369540086b037ac7daf40a10cab926"} Nov 25 12:13:02 crc kubenswrapper[4706]: I1125 12:13:02.965400 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-595gj" event={"ID":"baaa73b2-135d-4ce5-8e1a-4c7ffde4e639","Type":"ContainerStarted","Data":"a7c7cdad87df0dfb52daf5e6517632ed55cd688511f09d2889e0cacd3d4d7cbb"} Nov 25 12:13:03 crc kubenswrapper[4706]: I1125 12:13:03.000223 4706 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-595gj" podStartSLOduration=2.630431138 podStartE2EDuration="4.000201815s" podCreationTimestamp="2025-11-25 12:12:59 +0000 UTC" firstStartedPulling="2025-11-25 12:13:00.799910776 +0000 UTC m=+2189.714468157" lastFinishedPulling="2025-11-25 12:13:02.169681453 +0000 UTC m=+2191.084238834" observedRunningTime="2025-11-25 12:13:02.992645233 +0000 UTC m=+2191.907202614" watchObservedRunningTime="2025-11-25 12:13:03.000201815 +0000 UTC m=+2191.914759196" Nov 25 12:13:04 crc kubenswrapper[4706]: I1125 12:13:04.985935 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dv8c7" event={"ID":"fa689175-4255-45b5-8720-3b774731c07c","Type":"ContainerStarted","Data":"48c1fbce19577ca76712ceb56116473b63a24f25f8eef30063c14ae7e87f13e9"} Nov 25 12:13:05 crc kubenswrapper[4706]: I1125 12:13:05.006994 4706 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-dv8c7" podStartSLOduration=3.993822063 podStartE2EDuration="8.006973268s" podCreationTimestamp="2025-11-25 12:12:57 +0000 UTC" firstStartedPulling="2025-11-25 12:12:59.892723079 +0000 UTC m=+2188.807280460" lastFinishedPulling="2025-11-25 12:13:03.905874284 +0000 UTC m=+2192.820431665" 
observedRunningTime="2025-11-25 12:13:05.004400572 +0000 UTC m=+2193.918957953" watchObservedRunningTime="2025-11-25 12:13:05.006973268 +0000 UTC m=+2193.921530649" Nov 25 12:13:07 crc kubenswrapper[4706]: I1125 12:13:07.908475 4706 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-dv8c7" Nov 25 12:13:07 crc kubenswrapper[4706]: I1125 12:13:07.909132 4706 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-dv8c7" Nov 25 12:13:07 crc kubenswrapper[4706]: I1125 12:13:07.956561 4706 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-dv8c7" Nov 25 12:13:09 crc kubenswrapper[4706]: I1125 12:13:09.009194 4706 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-rk6bn"] Nov 25 12:13:09 crc kubenswrapper[4706]: I1125 12:13:09.022262 4706 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-rk6bn" Nov 25 12:13:09 crc kubenswrapper[4706]: I1125 12:13:09.023469 4706 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-rk6bn"] Nov 25 12:13:09 crc kubenswrapper[4706]: I1125 12:13:09.207717 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/02035d98-7804-41cf-932a-64477769ac19-catalog-content\") pod \"community-operators-rk6bn\" (UID: \"02035d98-7804-41cf-932a-64477769ac19\") " pod="openshift-marketplace/community-operators-rk6bn" Nov 25 12:13:09 crc kubenswrapper[4706]: I1125 12:13:09.208057 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gd5jj\" (UniqueName: \"kubernetes.io/projected/02035d98-7804-41cf-932a-64477769ac19-kube-api-access-gd5jj\") pod \"community-operators-rk6bn\" (UID: \"02035d98-7804-41cf-932a-64477769ac19\") " pod="openshift-marketplace/community-operators-rk6bn" Nov 25 12:13:09 crc kubenswrapper[4706]: I1125 12:13:09.208197 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/02035d98-7804-41cf-932a-64477769ac19-utilities\") pod \"community-operators-rk6bn\" (UID: \"02035d98-7804-41cf-932a-64477769ac19\") " pod="openshift-marketplace/community-operators-rk6bn" Nov 25 12:13:09 crc kubenswrapper[4706]: I1125 12:13:09.309952 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gd5jj\" (UniqueName: \"kubernetes.io/projected/02035d98-7804-41cf-932a-64477769ac19-kube-api-access-gd5jj\") pod \"community-operators-rk6bn\" (UID: \"02035d98-7804-41cf-932a-64477769ac19\") " pod="openshift-marketplace/community-operators-rk6bn" Nov 25 12:13:09 crc kubenswrapper[4706]: I1125 12:13:09.310023 4706 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/02035d98-7804-41cf-932a-64477769ac19-utilities\") pod \"community-operators-rk6bn\" (UID: \"02035d98-7804-41cf-932a-64477769ac19\") " pod="openshift-marketplace/community-operators-rk6bn" Nov 25 12:13:09 crc kubenswrapper[4706]: I1125 12:13:09.310140 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/02035d98-7804-41cf-932a-64477769ac19-catalog-content\") pod \"community-operators-rk6bn\" (UID: \"02035d98-7804-41cf-932a-64477769ac19\") " pod="openshift-marketplace/community-operators-rk6bn" Nov 25 12:13:09 crc kubenswrapper[4706]: I1125 12:13:09.310605 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/02035d98-7804-41cf-932a-64477769ac19-catalog-content\") pod \"community-operators-rk6bn\" (UID: \"02035d98-7804-41cf-932a-64477769ac19\") " pod="openshift-marketplace/community-operators-rk6bn" Nov 25 12:13:09 crc kubenswrapper[4706]: I1125 12:13:09.311057 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/02035d98-7804-41cf-932a-64477769ac19-utilities\") pod \"community-operators-rk6bn\" (UID: \"02035d98-7804-41cf-932a-64477769ac19\") " pod="openshift-marketplace/community-operators-rk6bn" Nov 25 12:13:09 crc kubenswrapper[4706]: I1125 12:13:09.331641 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gd5jj\" (UniqueName: \"kubernetes.io/projected/02035d98-7804-41cf-932a-64477769ac19-kube-api-access-gd5jj\") pod \"community-operators-rk6bn\" (UID: \"02035d98-7804-41cf-932a-64477769ac19\") " pod="openshift-marketplace/community-operators-rk6bn" Nov 25 12:13:09 crc kubenswrapper[4706]: I1125 12:13:09.350520 4706 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-rk6bn" Nov 25 12:13:09 crc kubenswrapper[4706]: I1125 12:13:09.939394 4706 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-rk6bn"] Nov 25 12:13:10 crc kubenswrapper[4706]: I1125 12:13:10.043204 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rk6bn" event={"ID":"02035d98-7804-41cf-932a-64477769ac19","Type":"ContainerStarted","Data":"d3580fb866ac06c9cb87da87813457dcaaa27e2bf9d7f3a2d98a7a1f90bee352"} Nov 25 12:13:11 crc kubenswrapper[4706]: I1125 12:13:11.064330 4706 generic.go:334] "Generic (PLEG): container finished" podID="02035d98-7804-41cf-932a-64477769ac19" containerID="7bc1ebf56e83f9dd67ba22bbaef292057bfc72e52a9c1717b1def9f4a1c03f6c" exitCode=0 Nov 25 12:13:11 crc kubenswrapper[4706]: I1125 12:13:11.064426 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rk6bn" event={"ID":"02035d98-7804-41cf-932a-64477769ac19","Type":"ContainerDied","Data":"7bc1ebf56e83f9dd67ba22bbaef292057bfc72e52a9c1717b1def9f4a1c03f6c"} Nov 25 12:13:13 crc kubenswrapper[4706]: I1125 12:13:13.087169 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rk6bn" event={"ID":"02035d98-7804-41cf-932a-64477769ac19","Type":"ContainerStarted","Data":"9de92af6911674c96e7c10a960cbae2f0214f3cbaecbfa8d9413e045073800ea"} Nov 25 12:13:14 crc kubenswrapper[4706]: I1125 12:13:14.098320 4706 generic.go:334] "Generic (PLEG): container finished" podID="02035d98-7804-41cf-932a-64477769ac19" containerID="9de92af6911674c96e7c10a960cbae2f0214f3cbaecbfa8d9413e045073800ea" exitCode=0 Nov 25 12:13:14 crc kubenswrapper[4706]: I1125 12:13:14.098373 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rk6bn" 
event={"ID":"02035d98-7804-41cf-932a-64477769ac19","Type":"ContainerDied","Data":"9de92af6911674c96e7c10a960cbae2f0214f3cbaecbfa8d9413e045073800ea"} Nov 25 12:13:15 crc kubenswrapper[4706]: I1125 12:13:15.113711 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rk6bn" event={"ID":"02035d98-7804-41cf-932a-64477769ac19","Type":"ContainerStarted","Data":"26ee2158b14ec7e196bafd2e2ef0f4c9c0316721cca10f3f4f030ceb03946602"} Nov 25 12:13:15 crc kubenswrapper[4706]: I1125 12:13:15.136756 4706 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-rk6bn" podStartSLOduration=3.674185084 podStartE2EDuration="7.136738148s" podCreationTimestamp="2025-11-25 12:13:08 +0000 UTC" firstStartedPulling="2025-11-25 12:13:11.067875501 +0000 UTC m=+2199.982432882" lastFinishedPulling="2025-11-25 12:13:14.530428565 +0000 UTC m=+2203.444985946" observedRunningTime="2025-11-25 12:13:15.136687326 +0000 UTC m=+2204.051244737" watchObservedRunningTime="2025-11-25 12:13:15.136738148 +0000 UTC m=+2204.051295539" Nov 25 12:13:17 crc kubenswrapper[4706]: I1125 12:13:17.961764 4706 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-dv8c7" Nov 25 12:13:18 crc kubenswrapper[4706]: I1125 12:13:18.007632 4706 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-dv8c7"] Nov 25 12:13:18 crc kubenswrapper[4706]: I1125 12:13:18.142511 4706 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-dv8c7" podUID="fa689175-4255-45b5-8720-3b774731c07c" containerName="registry-server" containerID="cri-o://48c1fbce19577ca76712ceb56116473b63a24f25f8eef30063c14ae7e87f13e9" gracePeriod=2 Nov 25 12:13:19 crc kubenswrapper[4706]: I1125 12:13:19.154396 4706 generic.go:334] "Generic (PLEG): container finished" 
podID="fa689175-4255-45b5-8720-3b774731c07c" containerID="48c1fbce19577ca76712ceb56116473b63a24f25f8eef30063c14ae7e87f13e9" exitCode=0 Nov 25 12:13:19 crc kubenswrapper[4706]: I1125 12:13:19.154444 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dv8c7" event={"ID":"fa689175-4255-45b5-8720-3b774731c07c","Type":"ContainerDied","Data":"48c1fbce19577ca76712ceb56116473b63a24f25f8eef30063c14ae7e87f13e9"} Nov 25 12:13:19 crc kubenswrapper[4706]: I1125 12:13:19.351610 4706 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-rk6bn" Nov 25 12:13:19 crc kubenswrapper[4706]: I1125 12:13:19.351895 4706 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-rk6bn" Nov 25 12:13:19 crc kubenswrapper[4706]: I1125 12:13:19.414666 4706 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-rk6bn" Nov 25 12:13:19 crc kubenswrapper[4706]: I1125 12:13:19.687664 4706 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-dv8c7" Nov 25 12:13:19 crc kubenswrapper[4706]: I1125 12:13:19.837032 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fa689175-4255-45b5-8720-3b774731c07c-catalog-content\") pod \"fa689175-4255-45b5-8720-3b774731c07c\" (UID: \"fa689175-4255-45b5-8720-3b774731c07c\") " Nov 25 12:13:19 crc kubenswrapper[4706]: I1125 12:13:19.837293 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qljrm\" (UniqueName: \"kubernetes.io/projected/fa689175-4255-45b5-8720-3b774731c07c-kube-api-access-qljrm\") pod \"fa689175-4255-45b5-8720-3b774731c07c\" (UID: \"fa689175-4255-45b5-8720-3b774731c07c\") " Nov 25 12:13:19 crc kubenswrapper[4706]: I1125 12:13:19.837363 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fa689175-4255-45b5-8720-3b774731c07c-utilities\") pod \"fa689175-4255-45b5-8720-3b774731c07c\" (UID: \"fa689175-4255-45b5-8720-3b774731c07c\") " Nov 25 12:13:19 crc kubenswrapper[4706]: I1125 12:13:19.839043 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fa689175-4255-45b5-8720-3b774731c07c-utilities" (OuterVolumeSpecName: "utilities") pod "fa689175-4255-45b5-8720-3b774731c07c" (UID: "fa689175-4255-45b5-8720-3b774731c07c"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 12:13:19 crc kubenswrapper[4706]: I1125 12:13:19.844124 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fa689175-4255-45b5-8720-3b774731c07c-kube-api-access-qljrm" (OuterVolumeSpecName: "kube-api-access-qljrm") pod "fa689175-4255-45b5-8720-3b774731c07c" (UID: "fa689175-4255-45b5-8720-3b774731c07c"). InnerVolumeSpecName "kube-api-access-qljrm". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 12:13:19 crc kubenswrapper[4706]: I1125 12:13:19.887667 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fa689175-4255-45b5-8720-3b774731c07c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "fa689175-4255-45b5-8720-3b774731c07c" (UID: "fa689175-4255-45b5-8720-3b774731c07c"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 12:13:19 crc kubenswrapper[4706]: I1125 12:13:19.940230 4706 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fa689175-4255-45b5-8720-3b774731c07c-utilities\") on node \"crc\" DevicePath \"\"" Nov 25 12:13:19 crc kubenswrapper[4706]: I1125 12:13:19.940282 4706 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fa689175-4255-45b5-8720-3b774731c07c-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 25 12:13:19 crc kubenswrapper[4706]: I1125 12:13:19.940302 4706 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qljrm\" (UniqueName: \"kubernetes.io/projected/fa689175-4255-45b5-8720-3b774731c07c-kube-api-access-qljrm\") on node \"crc\" DevicePath \"\"" Nov 25 12:13:20 crc kubenswrapper[4706]: I1125 12:13:20.175995 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dv8c7" event={"ID":"fa689175-4255-45b5-8720-3b774731c07c","Type":"ContainerDied","Data":"06f3057afbaacca9f9d272a5ad1854e801a6609e23b764d9353207964e5b6c50"} Nov 25 12:13:20 crc kubenswrapper[4706]: I1125 12:13:20.176029 4706 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-dv8c7" Nov 25 12:13:20 crc kubenswrapper[4706]: I1125 12:13:20.176079 4706 scope.go:117] "RemoveContainer" containerID="48c1fbce19577ca76712ceb56116473b63a24f25f8eef30063c14ae7e87f13e9" Nov 25 12:13:20 crc kubenswrapper[4706]: I1125 12:13:20.208687 4706 scope.go:117] "RemoveContainer" containerID="ce18a4dd69fd3a4ff8e4ca03daf61d4102369540086b037ac7daf40a10cab926" Nov 25 12:13:20 crc kubenswrapper[4706]: I1125 12:13:20.213187 4706 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-dv8c7"] Nov 25 12:13:20 crc kubenswrapper[4706]: I1125 12:13:20.226063 4706 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-dv8c7"] Nov 25 12:13:20 crc kubenswrapper[4706]: I1125 12:13:20.227653 4706 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-rk6bn" Nov 25 12:13:20 crc kubenswrapper[4706]: I1125 12:13:20.230828 4706 scope.go:117] "RemoveContainer" containerID="85599b8a470843a1e10443ee4bdd17c338d69c8383b52a9e65ea0700654eb5e9" Nov 25 12:13:21 crc kubenswrapper[4706]: I1125 12:13:21.933994 4706 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fa689175-4255-45b5-8720-3b774731c07c" path="/var/lib/kubelet/pods/fa689175-4255-45b5-8720-3b774731c07c/volumes" Nov 25 12:13:22 crc kubenswrapper[4706]: I1125 12:13:22.591135 4706 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-rk6bn"] Nov 25 12:13:22 crc kubenswrapper[4706]: I1125 12:13:22.591728 4706 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-rk6bn" podUID="02035d98-7804-41cf-932a-64477769ac19" containerName="registry-server" containerID="cri-o://26ee2158b14ec7e196bafd2e2ef0f4c9c0316721cca10f3f4f030ceb03946602" gracePeriod=2 Nov 25 12:13:23 crc kubenswrapper[4706]: I1125 
12:13:23.192893 4706 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-rk6bn" Nov 25 12:13:23 crc kubenswrapper[4706]: I1125 12:13:23.211694 4706 generic.go:334] "Generic (PLEG): container finished" podID="02035d98-7804-41cf-932a-64477769ac19" containerID="26ee2158b14ec7e196bafd2e2ef0f4c9c0316721cca10f3f4f030ceb03946602" exitCode=0 Nov 25 12:13:23 crc kubenswrapper[4706]: I1125 12:13:23.212068 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rk6bn" event={"ID":"02035d98-7804-41cf-932a-64477769ac19","Type":"ContainerDied","Data":"26ee2158b14ec7e196bafd2e2ef0f4c9c0316721cca10f3f4f030ceb03946602"} Nov 25 12:13:23 crc kubenswrapper[4706]: I1125 12:13:23.212117 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rk6bn" event={"ID":"02035d98-7804-41cf-932a-64477769ac19","Type":"ContainerDied","Data":"d3580fb866ac06c9cb87da87813457dcaaa27e2bf9d7f3a2d98a7a1f90bee352"} Nov 25 12:13:23 crc kubenswrapper[4706]: I1125 12:13:23.212139 4706 scope.go:117] "RemoveContainer" containerID="26ee2158b14ec7e196bafd2e2ef0f4c9c0316721cca10f3f4f030ceb03946602" Nov 25 12:13:23 crc kubenswrapper[4706]: I1125 12:13:23.212346 4706 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-rk6bn" Nov 25 12:13:23 crc kubenswrapper[4706]: I1125 12:13:23.228551 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/02035d98-7804-41cf-932a-64477769ac19-catalog-content\") pod \"02035d98-7804-41cf-932a-64477769ac19\" (UID: \"02035d98-7804-41cf-932a-64477769ac19\") " Nov 25 12:13:23 crc kubenswrapper[4706]: I1125 12:13:23.228598 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/02035d98-7804-41cf-932a-64477769ac19-utilities\") pod \"02035d98-7804-41cf-932a-64477769ac19\" (UID: \"02035d98-7804-41cf-932a-64477769ac19\") " Nov 25 12:13:23 crc kubenswrapper[4706]: I1125 12:13:23.228657 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gd5jj\" (UniqueName: \"kubernetes.io/projected/02035d98-7804-41cf-932a-64477769ac19-kube-api-access-gd5jj\") pod \"02035d98-7804-41cf-932a-64477769ac19\" (UID: \"02035d98-7804-41cf-932a-64477769ac19\") " Nov 25 12:13:23 crc kubenswrapper[4706]: I1125 12:13:23.237440 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/02035d98-7804-41cf-932a-64477769ac19-kube-api-access-gd5jj" (OuterVolumeSpecName: "kube-api-access-gd5jj") pod "02035d98-7804-41cf-932a-64477769ac19" (UID: "02035d98-7804-41cf-932a-64477769ac19"). InnerVolumeSpecName "kube-api-access-gd5jj". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 12:13:23 crc kubenswrapper[4706]: I1125 12:13:23.239398 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/02035d98-7804-41cf-932a-64477769ac19-utilities" (OuterVolumeSpecName: "utilities") pod "02035d98-7804-41cf-932a-64477769ac19" (UID: "02035d98-7804-41cf-932a-64477769ac19"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 12:13:23 crc kubenswrapper[4706]: I1125 12:13:23.255473 4706 scope.go:117] "RemoveContainer" containerID="9de92af6911674c96e7c10a960cbae2f0214f3cbaecbfa8d9413e045073800ea" Nov 25 12:13:23 crc kubenswrapper[4706]: I1125 12:13:23.304246 4706 scope.go:117] "RemoveContainer" containerID="7bc1ebf56e83f9dd67ba22bbaef292057bfc72e52a9c1717b1def9f4a1c03f6c" Nov 25 12:13:23 crc kubenswrapper[4706]: I1125 12:13:23.305460 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/02035d98-7804-41cf-932a-64477769ac19-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "02035d98-7804-41cf-932a-64477769ac19" (UID: "02035d98-7804-41cf-932a-64477769ac19"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 12:13:23 crc kubenswrapper[4706]: I1125 12:13:23.331528 4706 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/02035d98-7804-41cf-932a-64477769ac19-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 25 12:13:23 crc kubenswrapper[4706]: I1125 12:13:23.331548 4706 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/02035d98-7804-41cf-932a-64477769ac19-utilities\") on node \"crc\" DevicePath \"\"" Nov 25 12:13:23 crc kubenswrapper[4706]: I1125 12:13:23.331558 4706 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gd5jj\" (UniqueName: \"kubernetes.io/projected/02035d98-7804-41cf-932a-64477769ac19-kube-api-access-gd5jj\") on node \"crc\" DevicePath \"\"" Nov 25 12:13:23 crc kubenswrapper[4706]: I1125 12:13:23.351189 4706 scope.go:117] "RemoveContainer" containerID="26ee2158b14ec7e196bafd2e2ef0f4c9c0316721cca10f3f4f030ceb03946602" Nov 25 12:13:23 crc kubenswrapper[4706]: E1125 12:13:23.352439 4706 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = 
NotFound desc = could not find container \"26ee2158b14ec7e196bafd2e2ef0f4c9c0316721cca10f3f4f030ceb03946602\": container with ID starting with 26ee2158b14ec7e196bafd2e2ef0f4c9c0316721cca10f3f4f030ceb03946602 not found: ID does not exist" containerID="26ee2158b14ec7e196bafd2e2ef0f4c9c0316721cca10f3f4f030ceb03946602" Nov 25 12:13:23 crc kubenswrapper[4706]: I1125 12:13:23.352516 4706 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"26ee2158b14ec7e196bafd2e2ef0f4c9c0316721cca10f3f4f030ceb03946602"} err="failed to get container status \"26ee2158b14ec7e196bafd2e2ef0f4c9c0316721cca10f3f4f030ceb03946602\": rpc error: code = NotFound desc = could not find container \"26ee2158b14ec7e196bafd2e2ef0f4c9c0316721cca10f3f4f030ceb03946602\": container with ID starting with 26ee2158b14ec7e196bafd2e2ef0f4c9c0316721cca10f3f4f030ceb03946602 not found: ID does not exist" Nov 25 12:13:23 crc kubenswrapper[4706]: I1125 12:13:23.352576 4706 scope.go:117] "RemoveContainer" containerID="9de92af6911674c96e7c10a960cbae2f0214f3cbaecbfa8d9413e045073800ea" Nov 25 12:13:23 crc kubenswrapper[4706]: E1125 12:13:23.352963 4706 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9de92af6911674c96e7c10a960cbae2f0214f3cbaecbfa8d9413e045073800ea\": container with ID starting with 9de92af6911674c96e7c10a960cbae2f0214f3cbaecbfa8d9413e045073800ea not found: ID does not exist" containerID="9de92af6911674c96e7c10a960cbae2f0214f3cbaecbfa8d9413e045073800ea" Nov 25 12:13:23 crc kubenswrapper[4706]: I1125 12:13:23.353009 4706 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9de92af6911674c96e7c10a960cbae2f0214f3cbaecbfa8d9413e045073800ea"} err="failed to get container status \"9de92af6911674c96e7c10a960cbae2f0214f3cbaecbfa8d9413e045073800ea\": rpc error: code = NotFound desc = could not find container 
\"9de92af6911674c96e7c10a960cbae2f0214f3cbaecbfa8d9413e045073800ea\": container with ID starting with 9de92af6911674c96e7c10a960cbae2f0214f3cbaecbfa8d9413e045073800ea not found: ID does not exist" Nov 25 12:13:23 crc kubenswrapper[4706]: I1125 12:13:23.353040 4706 scope.go:117] "RemoveContainer" containerID="7bc1ebf56e83f9dd67ba22bbaef292057bfc72e52a9c1717b1def9f4a1c03f6c" Nov 25 12:13:23 crc kubenswrapper[4706]: E1125 12:13:23.353366 4706 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7bc1ebf56e83f9dd67ba22bbaef292057bfc72e52a9c1717b1def9f4a1c03f6c\": container with ID starting with 7bc1ebf56e83f9dd67ba22bbaef292057bfc72e52a9c1717b1def9f4a1c03f6c not found: ID does not exist" containerID="7bc1ebf56e83f9dd67ba22bbaef292057bfc72e52a9c1717b1def9f4a1c03f6c" Nov 25 12:13:23 crc kubenswrapper[4706]: I1125 12:13:23.353420 4706 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7bc1ebf56e83f9dd67ba22bbaef292057bfc72e52a9c1717b1def9f4a1c03f6c"} err="failed to get container status \"7bc1ebf56e83f9dd67ba22bbaef292057bfc72e52a9c1717b1def9f4a1c03f6c\": rpc error: code = NotFound desc = could not find container \"7bc1ebf56e83f9dd67ba22bbaef292057bfc72e52a9c1717b1def9f4a1c03f6c\": container with ID starting with 7bc1ebf56e83f9dd67ba22bbaef292057bfc72e52a9c1717b1def9f4a1c03f6c not found: ID does not exist" Nov 25 12:13:23 crc kubenswrapper[4706]: I1125 12:13:23.546238 4706 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-rk6bn"] Nov 25 12:13:23 crc kubenswrapper[4706]: I1125 12:13:23.554169 4706 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-rk6bn"] Nov 25 12:13:23 crc kubenswrapper[4706]: I1125 12:13:23.935090 4706 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="02035d98-7804-41cf-932a-64477769ac19" 
path="/var/lib/kubelet/pods/02035d98-7804-41cf-932a-64477769ac19/volumes" Nov 25 12:13:31 crc kubenswrapper[4706]: I1125 12:13:31.125259 4706 patch_prober.go:28] interesting pod/machine-config-daemon-dhfpm container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 25 12:13:31 crc kubenswrapper[4706]: I1125 12:13:31.125864 4706 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dhfpm" podUID="0930887a-320c-4506-8c9c-f94d6d64516a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 25 12:13:41 crc kubenswrapper[4706]: I1125 12:13:41.383803 4706 generic.go:334] "Generic (PLEG): container finished" podID="baaa73b2-135d-4ce5-8e1a-4c7ffde4e639" containerID="a7c7cdad87df0dfb52daf5e6517632ed55cd688511f09d2889e0cacd3d4d7cbb" exitCode=0 Nov 25 12:13:41 crc kubenswrapper[4706]: I1125 12:13:41.383885 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-595gj" event={"ID":"baaa73b2-135d-4ce5-8e1a-4c7ffde4e639","Type":"ContainerDied","Data":"a7c7cdad87df0dfb52daf5e6517632ed55cd688511f09d2889e0cacd3d4d7cbb"} Nov 25 12:13:42 crc kubenswrapper[4706]: I1125 12:13:42.816822 4706 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-595gj" Nov 25 12:13:42 crc kubenswrapper[4706]: I1125 12:13:42.892999 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/baaa73b2-135d-4ce5-8e1a-4c7ffde4e639-ssh-key\") pod \"baaa73b2-135d-4ce5-8e1a-4c7ffde4e639\" (UID: \"baaa73b2-135d-4ce5-8e1a-4c7ffde4e639\") " Nov 25 12:13:42 crc kubenswrapper[4706]: I1125 12:13:42.893076 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/baaa73b2-135d-4ce5-8e1a-4c7ffde4e639-openstack-edpm-ipam-ovn-default-certs-0\") pod \"baaa73b2-135d-4ce5-8e1a-4c7ffde4e639\" (UID: \"baaa73b2-135d-4ce5-8e1a-4c7ffde4e639\") " Nov 25 12:13:42 crc kubenswrapper[4706]: I1125 12:13:42.893144 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/baaa73b2-135d-4ce5-8e1a-4c7ffde4e639-neutron-metadata-combined-ca-bundle\") pod \"baaa73b2-135d-4ce5-8e1a-4c7ffde4e639\" (UID: \"baaa73b2-135d-4ce5-8e1a-4c7ffde4e639\") " Nov 25 12:13:42 crc kubenswrapper[4706]: I1125 12:13:42.893173 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8kd5w\" (UniqueName: \"kubernetes.io/projected/baaa73b2-135d-4ce5-8e1a-4c7ffde4e639-kube-api-access-8kd5w\") pod \"baaa73b2-135d-4ce5-8e1a-4c7ffde4e639\" (UID: \"baaa73b2-135d-4ce5-8e1a-4c7ffde4e639\") " Nov 25 12:13:42 crc kubenswrapper[4706]: I1125 12:13:42.893197 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/baaa73b2-135d-4ce5-8e1a-4c7ffde4e639-inventory\") pod \"baaa73b2-135d-4ce5-8e1a-4c7ffde4e639\" (UID: \"baaa73b2-135d-4ce5-8e1a-4c7ffde4e639\") " Nov 25 12:13:42 crc kubenswrapper[4706]: I1125 
12:13:42.893229 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/baaa73b2-135d-4ce5-8e1a-4c7ffde4e639-nova-combined-ca-bundle\") pod \"baaa73b2-135d-4ce5-8e1a-4c7ffde4e639\" (UID: \"baaa73b2-135d-4ce5-8e1a-4c7ffde4e639\") " Nov 25 12:13:42 crc kubenswrapper[4706]: I1125 12:13:42.893264 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/baaa73b2-135d-4ce5-8e1a-4c7ffde4e639-telemetry-combined-ca-bundle\") pod \"baaa73b2-135d-4ce5-8e1a-4c7ffde4e639\" (UID: \"baaa73b2-135d-4ce5-8e1a-4c7ffde4e639\") " Nov 25 12:13:42 crc kubenswrapper[4706]: I1125 12:13:42.893292 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/baaa73b2-135d-4ce5-8e1a-4c7ffde4e639-repo-setup-combined-ca-bundle\") pod \"baaa73b2-135d-4ce5-8e1a-4c7ffde4e639\" (UID: \"baaa73b2-135d-4ce5-8e1a-4c7ffde4e639\") " Nov 25 12:13:42 crc kubenswrapper[4706]: I1125 12:13:42.893398 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/baaa73b2-135d-4ce5-8e1a-4c7ffde4e639-libvirt-combined-ca-bundle\") pod \"baaa73b2-135d-4ce5-8e1a-4c7ffde4e639\" (UID: \"baaa73b2-135d-4ce5-8e1a-4c7ffde4e639\") " Nov 25 12:13:42 crc kubenswrapper[4706]: I1125 12:13:42.893454 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/baaa73b2-135d-4ce5-8e1a-4c7ffde4e639-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"baaa73b2-135d-4ce5-8e1a-4c7ffde4e639\" (UID: \"baaa73b2-135d-4ce5-8e1a-4c7ffde4e639\") " Nov 25 12:13:42 crc kubenswrapper[4706]: I1125 12:13:42.893481 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/baaa73b2-135d-4ce5-8e1a-4c7ffde4e639-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"baaa73b2-135d-4ce5-8e1a-4c7ffde4e639\" (UID: \"baaa73b2-135d-4ce5-8e1a-4c7ffde4e639\") " Nov 25 12:13:42 crc kubenswrapper[4706]: I1125 12:13:42.893532 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/baaa73b2-135d-4ce5-8e1a-4c7ffde4e639-bootstrap-combined-ca-bundle\") pod \"baaa73b2-135d-4ce5-8e1a-4c7ffde4e639\" (UID: \"baaa73b2-135d-4ce5-8e1a-4c7ffde4e639\") " Nov 25 12:13:42 crc kubenswrapper[4706]: I1125 12:13:42.893556 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/baaa73b2-135d-4ce5-8e1a-4c7ffde4e639-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"baaa73b2-135d-4ce5-8e1a-4c7ffde4e639\" (UID: \"baaa73b2-135d-4ce5-8e1a-4c7ffde4e639\") " Nov 25 12:13:42 crc kubenswrapper[4706]: I1125 12:13:42.893578 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/baaa73b2-135d-4ce5-8e1a-4c7ffde4e639-ovn-combined-ca-bundle\") pod \"baaa73b2-135d-4ce5-8e1a-4c7ffde4e639\" (UID: \"baaa73b2-135d-4ce5-8e1a-4c7ffde4e639\") " Nov 25 12:13:42 crc kubenswrapper[4706]: I1125 12:13:42.901152 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/baaa73b2-135d-4ce5-8e1a-4c7ffde4e639-ovn-combined-ca-bundle" (OuterVolumeSpecName: "ovn-combined-ca-bundle") pod "baaa73b2-135d-4ce5-8e1a-4c7ffde4e639" (UID: "baaa73b2-135d-4ce5-8e1a-4c7ffde4e639"). InnerVolumeSpecName "ovn-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 12:13:42 crc kubenswrapper[4706]: I1125 12:13:42.901522 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/baaa73b2-135d-4ce5-8e1a-4c7ffde4e639-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "baaa73b2-135d-4ce5-8e1a-4c7ffde4e639" (UID: "baaa73b2-135d-4ce5-8e1a-4c7ffde4e639"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 12:13:42 crc kubenswrapper[4706]: I1125 12:13:42.901970 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/baaa73b2-135d-4ce5-8e1a-4c7ffde4e639-telemetry-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-combined-ca-bundle") pod "baaa73b2-135d-4ce5-8e1a-4c7ffde4e639" (UID: "baaa73b2-135d-4ce5-8e1a-4c7ffde4e639"). InnerVolumeSpecName "telemetry-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 12:13:42 crc kubenswrapper[4706]: I1125 12:13:42.902189 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/baaa73b2-135d-4ce5-8e1a-4c7ffde4e639-libvirt-combined-ca-bundle" (OuterVolumeSpecName: "libvirt-combined-ca-bundle") pod "baaa73b2-135d-4ce5-8e1a-4c7ffde4e639" (UID: "baaa73b2-135d-4ce5-8e1a-4c7ffde4e639"). InnerVolumeSpecName "libvirt-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 12:13:42 crc kubenswrapper[4706]: I1125 12:13:42.902505 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/baaa73b2-135d-4ce5-8e1a-4c7ffde4e639-openstack-edpm-ipam-telemetry-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-telemetry-default-certs-0") pod "baaa73b2-135d-4ce5-8e1a-4c7ffde4e639" (UID: "baaa73b2-135d-4ce5-8e1a-4c7ffde4e639"). InnerVolumeSpecName "openstack-edpm-ipam-telemetry-default-certs-0". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 12:13:42 crc kubenswrapper[4706]: I1125 12:13:42.902783 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/baaa73b2-135d-4ce5-8e1a-4c7ffde4e639-openstack-edpm-ipam-libvirt-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-libvirt-default-certs-0") pod "baaa73b2-135d-4ce5-8e1a-4c7ffde4e639" (UID: "baaa73b2-135d-4ce5-8e1a-4c7ffde4e639"). InnerVolumeSpecName "openstack-edpm-ipam-libvirt-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 12:13:42 crc kubenswrapper[4706]: I1125 12:13:42.903021 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/baaa73b2-135d-4ce5-8e1a-4c7ffde4e639-nova-combined-ca-bundle" (OuterVolumeSpecName: "nova-combined-ca-bundle") pod "baaa73b2-135d-4ce5-8e1a-4c7ffde4e639" (UID: "baaa73b2-135d-4ce5-8e1a-4c7ffde4e639"). InnerVolumeSpecName "nova-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 12:13:42 crc kubenswrapper[4706]: I1125 12:13:42.905981 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/baaa73b2-135d-4ce5-8e1a-4c7ffde4e639-openstack-edpm-ipam-ovn-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-ovn-default-certs-0") pod "baaa73b2-135d-4ce5-8e1a-4c7ffde4e639" (UID: "baaa73b2-135d-4ce5-8e1a-4c7ffde4e639"). InnerVolumeSpecName "openstack-edpm-ipam-ovn-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 12:13:42 crc kubenswrapper[4706]: I1125 12:13:42.914242 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/baaa73b2-135d-4ce5-8e1a-4c7ffde4e639-openstack-edpm-ipam-neutron-metadata-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-neutron-metadata-default-certs-0") pod "baaa73b2-135d-4ce5-8e1a-4c7ffde4e639" (UID: "baaa73b2-135d-4ce5-8e1a-4c7ffde4e639"). 
InnerVolumeSpecName "openstack-edpm-ipam-neutron-metadata-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 12:13:42 crc kubenswrapper[4706]: I1125 12:13:42.914367 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/baaa73b2-135d-4ce5-8e1a-4c7ffde4e639-repo-setup-combined-ca-bundle" (OuterVolumeSpecName: "repo-setup-combined-ca-bundle") pod "baaa73b2-135d-4ce5-8e1a-4c7ffde4e639" (UID: "baaa73b2-135d-4ce5-8e1a-4c7ffde4e639"). InnerVolumeSpecName "repo-setup-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 12:13:42 crc kubenswrapper[4706]: I1125 12:13:42.914460 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/baaa73b2-135d-4ce5-8e1a-4c7ffde4e639-kube-api-access-8kd5w" (OuterVolumeSpecName: "kube-api-access-8kd5w") pod "baaa73b2-135d-4ce5-8e1a-4c7ffde4e639" (UID: "baaa73b2-135d-4ce5-8e1a-4c7ffde4e639"). InnerVolumeSpecName "kube-api-access-8kd5w". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 12:13:42 crc kubenswrapper[4706]: I1125 12:13:42.914520 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/baaa73b2-135d-4ce5-8e1a-4c7ffde4e639-neutron-metadata-combined-ca-bundle" (OuterVolumeSpecName: "neutron-metadata-combined-ca-bundle") pod "baaa73b2-135d-4ce5-8e1a-4c7ffde4e639" (UID: "baaa73b2-135d-4ce5-8e1a-4c7ffde4e639"). InnerVolumeSpecName "neutron-metadata-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 12:13:42 crc kubenswrapper[4706]: I1125 12:13:42.929721 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/baaa73b2-135d-4ce5-8e1a-4c7ffde4e639-inventory" (OuterVolumeSpecName: "inventory") pod "baaa73b2-135d-4ce5-8e1a-4c7ffde4e639" (UID: "baaa73b2-135d-4ce5-8e1a-4c7ffde4e639"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 12:13:42 crc kubenswrapper[4706]: I1125 12:13:42.941831 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/baaa73b2-135d-4ce5-8e1a-4c7ffde4e639-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "baaa73b2-135d-4ce5-8e1a-4c7ffde4e639" (UID: "baaa73b2-135d-4ce5-8e1a-4c7ffde4e639"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 12:13:42 crc kubenswrapper[4706]: I1125 12:13:42.996501 4706 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/baaa73b2-135d-4ce5-8e1a-4c7ffde4e639-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 25 12:13:42 crc kubenswrapper[4706]: I1125 12:13:42.996546 4706 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/baaa73b2-135d-4ce5-8e1a-4c7ffde4e639-openstack-edpm-ipam-ovn-default-certs-0\") on node \"crc\" DevicePath \"\"" Nov 25 12:13:42 crc kubenswrapper[4706]: I1125 12:13:42.996560 4706 reconciler_common.go:293] "Volume detached for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/baaa73b2-135d-4ce5-8e1a-4c7ffde4e639-neutron-metadata-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 12:13:42 crc kubenswrapper[4706]: I1125 12:13:42.996573 4706 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8kd5w\" (UniqueName: \"kubernetes.io/projected/baaa73b2-135d-4ce5-8e1a-4c7ffde4e639-kube-api-access-8kd5w\") on node \"crc\" DevicePath \"\"" Nov 25 12:13:42 crc kubenswrapper[4706]: I1125 12:13:42.996586 4706 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/baaa73b2-135d-4ce5-8e1a-4c7ffde4e639-inventory\") on node \"crc\" DevicePath \"\"" Nov 25 12:13:42 crc kubenswrapper[4706]: I1125 12:13:42.996598 4706 reconciler_common.go:293] "Volume detached for 
volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/baaa73b2-135d-4ce5-8e1a-4c7ffde4e639-nova-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 12:13:42 crc kubenswrapper[4706]: I1125 12:13:42.996609 4706 reconciler_common.go:293] "Volume detached for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/baaa73b2-135d-4ce5-8e1a-4c7ffde4e639-telemetry-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 12:13:42 crc kubenswrapper[4706]: I1125 12:13:42.996620 4706 reconciler_common.go:293] "Volume detached for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/baaa73b2-135d-4ce5-8e1a-4c7ffde4e639-repo-setup-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 12:13:42 crc kubenswrapper[4706]: I1125 12:13:42.996631 4706 reconciler_common.go:293] "Volume detached for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/baaa73b2-135d-4ce5-8e1a-4c7ffde4e639-libvirt-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 12:13:42 crc kubenswrapper[4706]: I1125 12:13:42.996700 4706 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/baaa73b2-135d-4ce5-8e1a-4c7ffde4e639-openstack-edpm-ipam-telemetry-default-certs-0\") on node \"crc\" DevicePath \"\"" Nov 25 12:13:42 crc kubenswrapper[4706]: I1125 12:13:42.996718 4706 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/baaa73b2-135d-4ce5-8e1a-4c7ffde4e639-openstack-edpm-ipam-libvirt-default-certs-0\") on node \"crc\" DevicePath \"\"" Nov 25 12:13:42 crc kubenswrapper[4706]: I1125 12:13:42.996732 4706 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/baaa73b2-135d-4ce5-8e1a-4c7ffde4e639-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 
25 12:13:42 crc kubenswrapper[4706]: I1125 12:13:42.996748 4706 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/baaa73b2-135d-4ce5-8e1a-4c7ffde4e639-openstack-edpm-ipam-neutron-metadata-default-certs-0\") on node \"crc\" DevicePath \"\"" Nov 25 12:13:42 crc kubenswrapper[4706]: I1125 12:13:42.996761 4706 reconciler_common.go:293] "Volume detached for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/baaa73b2-135d-4ce5-8e1a-4c7ffde4e639-ovn-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 12:13:43 crc kubenswrapper[4706]: I1125 12:13:43.406881 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-595gj" event={"ID":"baaa73b2-135d-4ce5-8e1a-4c7ffde4e639","Type":"ContainerDied","Data":"0fb2c976e756b3a010eaf35475039fb78d0faa9a4125abed185a523f3fcbfd91"} Nov 25 12:13:43 crc kubenswrapper[4706]: I1125 12:13:43.406942 4706 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0fb2c976e756b3a010eaf35475039fb78d0faa9a4125abed185a523f3fcbfd91" Nov 25 12:13:43 crc kubenswrapper[4706]: I1125 12:13:43.406963 4706 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-595gj" Nov 25 12:13:43 crc kubenswrapper[4706]: I1125 12:13:43.540506 4706 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-6kxnq"] Nov 25 12:13:43 crc kubenswrapper[4706]: E1125 12:13:43.540943 4706 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="02035d98-7804-41cf-932a-64477769ac19" containerName="extract-utilities" Nov 25 12:13:43 crc kubenswrapper[4706]: I1125 12:13:43.540963 4706 state_mem.go:107] "Deleted CPUSet assignment" podUID="02035d98-7804-41cf-932a-64477769ac19" containerName="extract-utilities" Nov 25 12:13:43 crc kubenswrapper[4706]: E1125 12:13:43.540981 4706 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fa689175-4255-45b5-8720-3b774731c07c" containerName="extract-content" Nov 25 12:13:43 crc kubenswrapper[4706]: I1125 12:13:43.540987 4706 state_mem.go:107] "Deleted CPUSet assignment" podUID="fa689175-4255-45b5-8720-3b774731c07c" containerName="extract-content" Nov 25 12:13:43 crc kubenswrapper[4706]: E1125 12:13:43.540995 4706 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="02035d98-7804-41cf-932a-64477769ac19" containerName="registry-server" Nov 25 12:13:43 crc kubenswrapper[4706]: I1125 12:13:43.541001 4706 state_mem.go:107] "Deleted CPUSet assignment" podUID="02035d98-7804-41cf-932a-64477769ac19" containerName="registry-server" Nov 25 12:13:43 crc kubenswrapper[4706]: E1125 12:13:43.541013 4706 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fa689175-4255-45b5-8720-3b774731c07c" containerName="registry-server" Nov 25 12:13:43 crc kubenswrapper[4706]: I1125 12:13:43.541019 4706 state_mem.go:107] "Deleted CPUSet assignment" podUID="fa689175-4255-45b5-8720-3b774731c07c" containerName="registry-server" Nov 25 12:13:43 crc kubenswrapper[4706]: E1125 12:13:43.541053 4706 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="baaa73b2-135d-4ce5-8e1a-4c7ffde4e639" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Nov 25 12:13:43 crc kubenswrapper[4706]: I1125 12:13:43.541066 4706 state_mem.go:107] "Deleted CPUSet assignment" podUID="baaa73b2-135d-4ce5-8e1a-4c7ffde4e639" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Nov 25 12:13:43 crc kubenswrapper[4706]: E1125 12:13:43.541079 4706 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fa689175-4255-45b5-8720-3b774731c07c" containerName="extract-utilities" Nov 25 12:13:43 crc kubenswrapper[4706]: I1125 12:13:43.541088 4706 state_mem.go:107] "Deleted CPUSet assignment" podUID="fa689175-4255-45b5-8720-3b774731c07c" containerName="extract-utilities" Nov 25 12:13:43 crc kubenswrapper[4706]: E1125 12:13:43.541102 4706 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="02035d98-7804-41cf-932a-64477769ac19" containerName="extract-content" Nov 25 12:13:43 crc kubenswrapper[4706]: I1125 12:13:43.541110 4706 state_mem.go:107] "Deleted CPUSet assignment" podUID="02035d98-7804-41cf-932a-64477769ac19" containerName="extract-content" Nov 25 12:13:43 crc kubenswrapper[4706]: I1125 12:13:43.541290 4706 memory_manager.go:354] "RemoveStaleState removing state" podUID="baaa73b2-135d-4ce5-8e1a-4c7ffde4e639" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Nov 25 12:13:43 crc kubenswrapper[4706]: I1125 12:13:43.541333 4706 memory_manager.go:354] "RemoveStaleState removing state" podUID="fa689175-4255-45b5-8720-3b774731c07c" containerName="registry-server" Nov 25 12:13:43 crc kubenswrapper[4706]: I1125 12:13:43.541360 4706 memory_manager.go:354] "RemoveStaleState removing state" podUID="02035d98-7804-41cf-932a-64477769ac19" containerName="registry-server" Nov 25 12:13:43 crc kubenswrapper[4706]: I1125 12:13:43.541980 4706 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-6kxnq" Nov 25 12:13:43 crc kubenswrapper[4706]: I1125 12:13:43.544782 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 25 12:13:43 crc kubenswrapper[4706]: I1125 12:13:43.546266 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 25 12:13:43 crc kubenswrapper[4706]: I1125 12:13:43.546488 4706 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 25 12:13:43 crc kubenswrapper[4706]: W1125 12:13:43.546915 4706 reflector.go:561] object-"openstack"/"ovncontroller-config": failed to list *v1.ConfigMap: configmaps "ovncontroller-config" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openstack": no relationship found between node 'crc' and this object Nov 25 12:13:43 crc kubenswrapper[4706]: E1125 12:13:43.546960 4706 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"ovncontroller-config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"ovncontroller-config\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openstack\": no relationship found between node 'crc' and this object" logger="UnhandledError" Nov 25 12:13:43 crc kubenswrapper[4706]: I1125 12:13:43.549748 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-r8qqp" Nov 25 12:13:43 crc kubenswrapper[4706]: I1125 12:13:43.608296 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/97dd7a8b-3605-49a2-ad4d-72dd946605aa-ssh-key\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-6kxnq\" (UID: \"97dd7a8b-3605-49a2-ad4d-72dd946605aa\") " 
pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-6kxnq" Nov 25 12:13:43 crc kubenswrapper[4706]: I1125 12:13:43.608624 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/97dd7a8b-3605-49a2-ad4d-72dd946605aa-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-6kxnq\" (UID: \"97dd7a8b-3605-49a2-ad4d-72dd946605aa\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-6kxnq" Nov 25 12:13:43 crc kubenswrapper[4706]: I1125 12:13:43.608677 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nwww4\" (UniqueName: \"kubernetes.io/projected/97dd7a8b-3605-49a2-ad4d-72dd946605aa-kube-api-access-nwww4\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-6kxnq\" (UID: \"97dd7a8b-3605-49a2-ad4d-72dd946605aa\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-6kxnq" Nov 25 12:13:43 crc kubenswrapper[4706]: I1125 12:13:43.608729 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/97dd7a8b-3605-49a2-ad4d-72dd946605aa-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-6kxnq\" (UID: \"97dd7a8b-3605-49a2-ad4d-72dd946605aa\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-6kxnq" Nov 25 12:13:43 crc kubenswrapper[4706]: I1125 12:13:43.608755 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/97dd7a8b-3605-49a2-ad4d-72dd946605aa-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-6kxnq\" (UID: \"97dd7a8b-3605-49a2-ad4d-72dd946605aa\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-6kxnq" Nov 25 12:13:43 crc kubenswrapper[4706]: I1125 12:13:43.620900 4706 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-6kxnq"] Nov 25 12:13:43 crc kubenswrapper[4706]: I1125 12:13:43.710951 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/97dd7a8b-3605-49a2-ad4d-72dd946605aa-ssh-key\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-6kxnq\" (UID: \"97dd7a8b-3605-49a2-ad4d-72dd946605aa\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-6kxnq" Nov 25 12:13:43 crc kubenswrapper[4706]: I1125 12:13:43.711408 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/97dd7a8b-3605-49a2-ad4d-72dd946605aa-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-6kxnq\" (UID: \"97dd7a8b-3605-49a2-ad4d-72dd946605aa\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-6kxnq" Nov 25 12:13:43 crc kubenswrapper[4706]: I1125 12:13:43.711516 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nwww4\" (UniqueName: \"kubernetes.io/projected/97dd7a8b-3605-49a2-ad4d-72dd946605aa-kube-api-access-nwww4\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-6kxnq\" (UID: \"97dd7a8b-3605-49a2-ad4d-72dd946605aa\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-6kxnq" Nov 25 12:13:43 crc kubenswrapper[4706]: I1125 12:13:43.711602 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/97dd7a8b-3605-49a2-ad4d-72dd946605aa-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-6kxnq\" (UID: \"97dd7a8b-3605-49a2-ad4d-72dd946605aa\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-6kxnq" Nov 25 12:13:43 crc kubenswrapper[4706]: I1125 12:13:43.711687 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/97dd7a8b-3605-49a2-ad4d-72dd946605aa-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-6kxnq\" (UID: \"97dd7a8b-3605-49a2-ad4d-72dd946605aa\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-6kxnq" Nov 25 12:13:43 crc kubenswrapper[4706]: I1125 12:13:43.717109 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/97dd7a8b-3605-49a2-ad4d-72dd946605aa-ssh-key\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-6kxnq\" (UID: \"97dd7a8b-3605-49a2-ad4d-72dd946605aa\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-6kxnq" Nov 25 12:13:43 crc kubenswrapper[4706]: I1125 12:13:43.718032 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/97dd7a8b-3605-49a2-ad4d-72dd946605aa-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-6kxnq\" (UID: \"97dd7a8b-3605-49a2-ad4d-72dd946605aa\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-6kxnq" Nov 25 12:13:43 crc kubenswrapper[4706]: I1125 12:13:43.718928 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/97dd7a8b-3605-49a2-ad4d-72dd946605aa-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-6kxnq\" (UID: \"97dd7a8b-3605-49a2-ad4d-72dd946605aa\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-6kxnq" Nov 25 12:13:43 crc kubenswrapper[4706]: I1125 12:13:43.733616 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nwww4\" (UniqueName: \"kubernetes.io/projected/97dd7a8b-3605-49a2-ad4d-72dd946605aa-kube-api-access-nwww4\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-6kxnq\" (UID: \"97dd7a8b-3605-49a2-ad4d-72dd946605aa\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-6kxnq" Nov 25 12:13:44 crc kubenswrapper[4706]: E1125 12:13:44.712617 4706 configmap.go:193] 
Couldn't get configMap openstack/ovncontroller-config: failed to sync configmap cache: timed out waiting for the condition Nov 25 12:13:44 crc kubenswrapper[4706]: E1125 12:13:44.712711 4706 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/97dd7a8b-3605-49a2-ad4d-72dd946605aa-ovncontroller-config-0 podName:97dd7a8b-3605-49a2-ad4d-72dd946605aa nodeName:}" failed. No retries permitted until 2025-11-25 12:13:45.212691265 +0000 UTC m=+2234.127248646 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "ovncontroller-config-0" (UniqueName: "kubernetes.io/configmap/97dd7a8b-3605-49a2-ad4d-72dd946605aa-ovncontroller-config-0") pod "ovn-edpm-deployment-openstack-edpm-ipam-6kxnq" (UID: "97dd7a8b-3605-49a2-ad4d-72dd946605aa") : failed to sync configmap cache: timed out waiting for the condition Nov 25 12:13:44 crc kubenswrapper[4706]: I1125 12:13:44.766188 4706 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-config" Nov 25 12:13:45 crc kubenswrapper[4706]: I1125 12:13:45.240846 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/97dd7a8b-3605-49a2-ad4d-72dd946605aa-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-6kxnq\" (UID: \"97dd7a8b-3605-49a2-ad4d-72dd946605aa\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-6kxnq" Nov 25 12:13:45 crc kubenswrapper[4706]: I1125 12:13:45.241718 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/97dd7a8b-3605-49a2-ad4d-72dd946605aa-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-6kxnq\" (UID: \"97dd7a8b-3605-49a2-ad4d-72dd946605aa\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-6kxnq" Nov 25 12:13:45 crc kubenswrapper[4706]: I1125 12:13:45.360080 4706 util.go:30] "No sandbox for pod can be 
found. Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-6kxnq" Nov 25 12:13:45 crc kubenswrapper[4706]: I1125 12:13:45.958379 4706 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-6kxnq"] Nov 25 12:13:46 crc kubenswrapper[4706]: I1125 12:13:46.443793 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-6kxnq" event={"ID":"97dd7a8b-3605-49a2-ad4d-72dd946605aa","Type":"ContainerStarted","Data":"98d8dac7e732de1010aaec61c06a0d0c5be0856b238155243442cf1a84ae3500"} Nov 25 12:13:47 crc kubenswrapper[4706]: I1125 12:13:47.459355 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-6kxnq" event={"ID":"97dd7a8b-3605-49a2-ad4d-72dd946605aa","Type":"ContainerStarted","Data":"3678c9e3f4787f1f8f052762730c23ae83ae8a34bbbadb14772b928471a05c86"} Nov 25 12:13:47 crc kubenswrapper[4706]: I1125 12:13:47.479226 4706 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-6kxnq" podStartSLOduration=3.904936848 podStartE2EDuration="4.479205465s" podCreationTimestamp="2025-11-25 12:13:43 +0000 UTC" firstStartedPulling="2025-11-25 12:13:45.962765518 +0000 UTC m=+2234.877322899" lastFinishedPulling="2025-11-25 12:13:46.537034125 +0000 UTC m=+2235.451591516" observedRunningTime="2025-11-25 12:13:47.476030144 +0000 UTC m=+2236.390587545" watchObservedRunningTime="2025-11-25 12:13:47.479205465 +0000 UTC m=+2236.393762846" Nov 25 12:13:58 crc kubenswrapper[4706]: I1125 12:13:58.727518 4706 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-sgrbj"] Nov 25 12:13:58 crc kubenswrapper[4706]: I1125 12:13:58.730534 4706 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-sgrbj" Nov 25 12:13:58 crc kubenswrapper[4706]: I1125 12:13:58.743692 4706 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-sgrbj"] Nov 25 12:13:58 crc kubenswrapper[4706]: I1125 12:13:58.844274 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5ddcv\" (UniqueName: \"kubernetes.io/projected/7203cd75-4cd1-4aa8-8d6a-0e882e8c2887-kube-api-access-5ddcv\") pod \"redhat-operators-sgrbj\" (UID: \"7203cd75-4cd1-4aa8-8d6a-0e882e8c2887\") " pod="openshift-marketplace/redhat-operators-sgrbj" Nov 25 12:13:58 crc kubenswrapper[4706]: I1125 12:13:58.844437 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7203cd75-4cd1-4aa8-8d6a-0e882e8c2887-catalog-content\") pod \"redhat-operators-sgrbj\" (UID: \"7203cd75-4cd1-4aa8-8d6a-0e882e8c2887\") " pod="openshift-marketplace/redhat-operators-sgrbj" Nov 25 12:13:58 crc kubenswrapper[4706]: I1125 12:13:58.844490 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7203cd75-4cd1-4aa8-8d6a-0e882e8c2887-utilities\") pod \"redhat-operators-sgrbj\" (UID: \"7203cd75-4cd1-4aa8-8d6a-0e882e8c2887\") " pod="openshift-marketplace/redhat-operators-sgrbj" Nov 25 12:13:58 crc kubenswrapper[4706]: I1125 12:13:58.946776 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5ddcv\" (UniqueName: \"kubernetes.io/projected/7203cd75-4cd1-4aa8-8d6a-0e882e8c2887-kube-api-access-5ddcv\") pod \"redhat-operators-sgrbj\" (UID: \"7203cd75-4cd1-4aa8-8d6a-0e882e8c2887\") " pod="openshift-marketplace/redhat-operators-sgrbj" Nov 25 12:13:58 crc kubenswrapper[4706]: I1125 12:13:58.946878 4706 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7203cd75-4cd1-4aa8-8d6a-0e882e8c2887-catalog-content\") pod \"redhat-operators-sgrbj\" (UID: \"7203cd75-4cd1-4aa8-8d6a-0e882e8c2887\") " pod="openshift-marketplace/redhat-operators-sgrbj" Nov 25 12:13:58 crc kubenswrapper[4706]: I1125 12:13:58.946923 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7203cd75-4cd1-4aa8-8d6a-0e882e8c2887-utilities\") pod \"redhat-operators-sgrbj\" (UID: \"7203cd75-4cd1-4aa8-8d6a-0e882e8c2887\") " pod="openshift-marketplace/redhat-operators-sgrbj" Nov 25 12:13:58 crc kubenswrapper[4706]: I1125 12:13:58.947403 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7203cd75-4cd1-4aa8-8d6a-0e882e8c2887-utilities\") pod \"redhat-operators-sgrbj\" (UID: \"7203cd75-4cd1-4aa8-8d6a-0e882e8c2887\") " pod="openshift-marketplace/redhat-operators-sgrbj" Nov 25 12:13:58 crc kubenswrapper[4706]: I1125 12:13:58.947986 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7203cd75-4cd1-4aa8-8d6a-0e882e8c2887-catalog-content\") pod \"redhat-operators-sgrbj\" (UID: \"7203cd75-4cd1-4aa8-8d6a-0e882e8c2887\") " pod="openshift-marketplace/redhat-operators-sgrbj" Nov 25 12:13:58 crc kubenswrapper[4706]: I1125 12:13:58.967289 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5ddcv\" (UniqueName: \"kubernetes.io/projected/7203cd75-4cd1-4aa8-8d6a-0e882e8c2887-kube-api-access-5ddcv\") pod \"redhat-operators-sgrbj\" (UID: \"7203cd75-4cd1-4aa8-8d6a-0e882e8c2887\") " pod="openshift-marketplace/redhat-operators-sgrbj" Nov 25 12:13:59 crc kubenswrapper[4706]: I1125 12:13:59.114038 4706 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-sgrbj" Nov 25 12:13:59 crc kubenswrapper[4706]: W1125 12:13:59.613952 4706 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7203cd75_4cd1_4aa8_8d6a_0e882e8c2887.slice/crio-8d4458bc5bf0f632649b7e20f1440ae9154fef7d71956ebbf81fb0d277509229 WatchSource:0}: Error finding container 8d4458bc5bf0f632649b7e20f1440ae9154fef7d71956ebbf81fb0d277509229: Status 404 returned error can't find the container with id 8d4458bc5bf0f632649b7e20f1440ae9154fef7d71956ebbf81fb0d277509229 Nov 25 12:13:59 crc kubenswrapper[4706]: I1125 12:13:59.631228 4706 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-sgrbj"] Nov 25 12:14:00 crc kubenswrapper[4706]: I1125 12:14:00.604639 4706 generic.go:334] "Generic (PLEG): container finished" podID="7203cd75-4cd1-4aa8-8d6a-0e882e8c2887" containerID="a0c808c9e534f11af864fd6af549d6ed8d3483c2765ebb535316c3d3147eaa49" exitCode=0 Nov 25 12:14:00 crc kubenswrapper[4706]: I1125 12:14:00.604692 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-sgrbj" event={"ID":"7203cd75-4cd1-4aa8-8d6a-0e882e8c2887","Type":"ContainerDied","Data":"a0c808c9e534f11af864fd6af549d6ed8d3483c2765ebb535316c3d3147eaa49"} Nov 25 12:14:00 crc kubenswrapper[4706]: I1125 12:14:00.604950 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-sgrbj" event={"ID":"7203cd75-4cd1-4aa8-8d6a-0e882e8c2887","Type":"ContainerStarted","Data":"8d4458bc5bf0f632649b7e20f1440ae9154fef7d71956ebbf81fb0d277509229"} Nov 25 12:14:01 crc kubenswrapper[4706]: I1125 12:14:01.124888 4706 patch_prober.go:28] interesting pod/machine-config-daemon-dhfpm container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: 
connection refused" start-of-body= Nov 25 12:14:01 crc kubenswrapper[4706]: I1125 12:14:01.125165 4706 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dhfpm" podUID="0930887a-320c-4506-8c9c-f94d6d64516a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 25 12:14:02 crc kubenswrapper[4706]: I1125 12:14:02.630213 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-sgrbj" event={"ID":"7203cd75-4cd1-4aa8-8d6a-0e882e8c2887","Type":"ContainerStarted","Data":"5759752f0ed5f292625659bf9bac06ba658cdc141adaf579ec4c347ac1540735"} Nov 25 12:14:12 crc kubenswrapper[4706]: I1125 12:14:12.756000 4706 generic.go:334] "Generic (PLEG): container finished" podID="7203cd75-4cd1-4aa8-8d6a-0e882e8c2887" containerID="5759752f0ed5f292625659bf9bac06ba658cdc141adaf579ec4c347ac1540735" exitCode=0 Nov 25 12:14:12 crc kubenswrapper[4706]: I1125 12:14:12.756100 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-sgrbj" event={"ID":"7203cd75-4cd1-4aa8-8d6a-0e882e8c2887","Type":"ContainerDied","Data":"5759752f0ed5f292625659bf9bac06ba658cdc141adaf579ec4c347ac1540735"} Nov 25 12:14:14 crc kubenswrapper[4706]: I1125 12:14:14.778065 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-sgrbj" event={"ID":"7203cd75-4cd1-4aa8-8d6a-0e882e8c2887","Type":"ContainerStarted","Data":"8c14d18951a981cb771f315932577bfe10e73cee432cbf13e9e51a9c50f9f677"} Nov 25 12:14:14 crc kubenswrapper[4706]: I1125 12:14:14.803778 4706 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-sgrbj" podStartSLOduration=3.918118061 podStartE2EDuration="16.80375784s" podCreationTimestamp="2025-11-25 12:13:58 +0000 UTC" firstStartedPulling="2025-11-25 
12:14:00.60616905 +0000 UTC m=+2249.520726431" lastFinishedPulling="2025-11-25 12:14:13.491808819 +0000 UTC m=+2262.406366210" observedRunningTime="2025-11-25 12:14:14.799562033 +0000 UTC m=+2263.714119424" watchObservedRunningTime="2025-11-25 12:14:14.80375784 +0000 UTC m=+2263.718315221" Nov 25 12:14:19 crc kubenswrapper[4706]: I1125 12:14:19.115679 4706 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-sgrbj" Nov 25 12:14:19 crc kubenswrapper[4706]: I1125 12:14:19.116391 4706 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-sgrbj" Nov 25 12:14:19 crc kubenswrapper[4706]: I1125 12:14:19.166841 4706 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-sgrbj" Nov 25 12:14:19 crc kubenswrapper[4706]: I1125 12:14:19.881486 4706 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-sgrbj" Nov 25 12:14:19 crc kubenswrapper[4706]: I1125 12:14:19.939768 4706 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-sgrbj"] Nov 25 12:14:21 crc kubenswrapper[4706]: I1125 12:14:21.842125 4706 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-sgrbj" podUID="7203cd75-4cd1-4aa8-8d6a-0e882e8c2887" containerName="registry-server" containerID="cri-o://8c14d18951a981cb771f315932577bfe10e73cee432cbf13e9e51a9c50f9f677" gracePeriod=2 Nov 25 12:14:22 crc kubenswrapper[4706]: I1125 12:14:22.291921 4706 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-sgrbj" Nov 25 12:14:22 crc kubenswrapper[4706]: I1125 12:14:22.323081 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5ddcv\" (UniqueName: \"kubernetes.io/projected/7203cd75-4cd1-4aa8-8d6a-0e882e8c2887-kube-api-access-5ddcv\") pod \"7203cd75-4cd1-4aa8-8d6a-0e882e8c2887\" (UID: \"7203cd75-4cd1-4aa8-8d6a-0e882e8c2887\") " Nov 25 12:14:22 crc kubenswrapper[4706]: I1125 12:14:22.323168 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7203cd75-4cd1-4aa8-8d6a-0e882e8c2887-utilities\") pod \"7203cd75-4cd1-4aa8-8d6a-0e882e8c2887\" (UID: \"7203cd75-4cd1-4aa8-8d6a-0e882e8c2887\") " Nov 25 12:14:22 crc kubenswrapper[4706]: I1125 12:14:22.323376 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7203cd75-4cd1-4aa8-8d6a-0e882e8c2887-catalog-content\") pod \"7203cd75-4cd1-4aa8-8d6a-0e882e8c2887\" (UID: \"7203cd75-4cd1-4aa8-8d6a-0e882e8c2887\") " Nov 25 12:14:22 crc kubenswrapper[4706]: I1125 12:14:22.324279 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7203cd75-4cd1-4aa8-8d6a-0e882e8c2887-utilities" (OuterVolumeSpecName: "utilities") pod "7203cd75-4cd1-4aa8-8d6a-0e882e8c2887" (UID: "7203cd75-4cd1-4aa8-8d6a-0e882e8c2887"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 12:14:22 crc kubenswrapper[4706]: I1125 12:14:22.329855 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7203cd75-4cd1-4aa8-8d6a-0e882e8c2887-kube-api-access-5ddcv" (OuterVolumeSpecName: "kube-api-access-5ddcv") pod "7203cd75-4cd1-4aa8-8d6a-0e882e8c2887" (UID: "7203cd75-4cd1-4aa8-8d6a-0e882e8c2887"). InnerVolumeSpecName "kube-api-access-5ddcv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 12:14:22 crc kubenswrapper[4706]: I1125 12:14:22.425486 4706 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5ddcv\" (UniqueName: \"kubernetes.io/projected/7203cd75-4cd1-4aa8-8d6a-0e882e8c2887-kube-api-access-5ddcv\") on node \"crc\" DevicePath \"\"" Nov 25 12:14:22 crc kubenswrapper[4706]: I1125 12:14:22.425538 4706 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7203cd75-4cd1-4aa8-8d6a-0e882e8c2887-utilities\") on node \"crc\" DevicePath \"\"" Nov 25 12:14:22 crc kubenswrapper[4706]: I1125 12:14:22.446211 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7203cd75-4cd1-4aa8-8d6a-0e882e8c2887-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "7203cd75-4cd1-4aa8-8d6a-0e882e8c2887" (UID: "7203cd75-4cd1-4aa8-8d6a-0e882e8c2887"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 12:14:22 crc kubenswrapper[4706]: I1125 12:14:22.527782 4706 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7203cd75-4cd1-4aa8-8d6a-0e882e8c2887-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 25 12:14:22 crc kubenswrapper[4706]: I1125 12:14:22.854044 4706 generic.go:334] "Generic (PLEG): container finished" podID="7203cd75-4cd1-4aa8-8d6a-0e882e8c2887" containerID="8c14d18951a981cb771f315932577bfe10e73cee432cbf13e9e51a9c50f9f677" exitCode=0 Nov 25 12:14:22 crc kubenswrapper[4706]: I1125 12:14:22.854134 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-sgrbj" event={"ID":"7203cd75-4cd1-4aa8-8d6a-0e882e8c2887","Type":"ContainerDied","Data":"8c14d18951a981cb771f315932577bfe10e73cee432cbf13e9e51a9c50f9f677"} Nov 25 12:14:22 crc kubenswrapper[4706]: I1125 12:14:22.854453 4706 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-marketplace/redhat-operators-sgrbj" event={"ID":"7203cd75-4cd1-4aa8-8d6a-0e882e8c2887","Type":"ContainerDied","Data":"8d4458bc5bf0f632649b7e20f1440ae9154fef7d71956ebbf81fb0d277509229"} Nov 25 12:14:22 crc kubenswrapper[4706]: I1125 12:14:22.854479 4706 scope.go:117] "RemoveContainer" containerID="8c14d18951a981cb771f315932577bfe10e73cee432cbf13e9e51a9c50f9f677" Nov 25 12:14:22 crc kubenswrapper[4706]: I1125 12:14:22.854166 4706 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-sgrbj" Nov 25 12:14:22 crc kubenswrapper[4706]: I1125 12:14:22.887171 4706 scope.go:117] "RemoveContainer" containerID="5759752f0ed5f292625659bf9bac06ba658cdc141adaf579ec4c347ac1540735" Nov 25 12:14:22 crc kubenswrapper[4706]: I1125 12:14:22.893318 4706 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-sgrbj"] Nov 25 12:14:22 crc kubenswrapper[4706]: I1125 12:14:22.902336 4706 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-sgrbj"] Nov 25 12:14:22 crc kubenswrapper[4706]: I1125 12:14:22.910701 4706 scope.go:117] "RemoveContainer" containerID="a0c808c9e534f11af864fd6af549d6ed8d3483c2765ebb535316c3d3147eaa49" Nov 25 12:14:22 crc kubenswrapper[4706]: I1125 12:14:22.955518 4706 scope.go:117] "RemoveContainer" containerID="8c14d18951a981cb771f315932577bfe10e73cee432cbf13e9e51a9c50f9f677" Nov 25 12:14:22 crc kubenswrapper[4706]: E1125 12:14:22.955970 4706 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8c14d18951a981cb771f315932577bfe10e73cee432cbf13e9e51a9c50f9f677\": container with ID starting with 8c14d18951a981cb771f315932577bfe10e73cee432cbf13e9e51a9c50f9f677 not found: ID does not exist" containerID="8c14d18951a981cb771f315932577bfe10e73cee432cbf13e9e51a9c50f9f677" Nov 25 12:14:22 crc kubenswrapper[4706]: I1125 12:14:22.956010 4706 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8c14d18951a981cb771f315932577bfe10e73cee432cbf13e9e51a9c50f9f677"} err="failed to get container status \"8c14d18951a981cb771f315932577bfe10e73cee432cbf13e9e51a9c50f9f677\": rpc error: code = NotFound desc = could not find container \"8c14d18951a981cb771f315932577bfe10e73cee432cbf13e9e51a9c50f9f677\": container with ID starting with 8c14d18951a981cb771f315932577bfe10e73cee432cbf13e9e51a9c50f9f677 not found: ID does not exist" Nov 25 12:14:22 crc kubenswrapper[4706]: I1125 12:14:22.956037 4706 scope.go:117] "RemoveContainer" containerID="5759752f0ed5f292625659bf9bac06ba658cdc141adaf579ec4c347ac1540735" Nov 25 12:14:22 crc kubenswrapper[4706]: E1125 12:14:22.956579 4706 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5759752f0ed5f292625659bf9bac06ba658cdc141adaf579ec4c347ac1540735\": container with ID starting with 5759752f0ed5f292625659bf9bac06ba658cdc141adaf579ec4c347ac1540735 not found: ID does not exist" containerID="5759752f0ed5f292625659bf9bac06ba658cdc141adaf579ec4c347ac1540735" Nov 25 12:14:22 crc kubenswrapper[4706]: I1125 12:14:22.956621 4706 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5759752f0ed5f292625659bf9bac06ba658cdc141adaf579ec4c347ac1540735"} err="failed to get container status \"5759752f0ed5f292625659bf9bac06ba658cdc141adaf579ec4c347ac1540735\": rpc error: code = NotFound desc = could not find container \"5759752f0ed5f292625659bf9bac06ba658cdc141adaf579ec4c347ac1540735\": container with ID starting with 5759752f0ed5f292625659bf9bac06ba658cdc141adaf579ec4c347ac1540735 not found: ID does not exist" Nov 25 12:14:22 crc kubenswrapper[4706]: I1125 12:14:22.956649 4706 scope.go:117] "RemoveContainer" containerID="a0c808c9e534f11af864fd6af549d6ed8d3483c2765ebb535316c3d3147eaa49" Nov 25 12:14:22 crc kubenswrapper[4706]: E1125 
12:14:22.956922 4706 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a0c808c9e534f11af864fd6af549d6ed8d3483c2765ebb535316c3d3147eaa49\": container with ID starting with a0c808c9e534f11af864fd6af549d6ed8d3483c2765ebb535316c3d3147eaa49 not found: ID does not exist" containerID="a0c808c9e534f11af864fd6af549d6ed8d3483c2765ebb535316c3d3147eaa49" Nov 25 12:14:22 crc kubenswrapper[4706]: I1125 12:14:22.956947 4706 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a0c808c9e534f11af864fd6af549d6ed8d3483c2765ebb535316c3d3147eaa49"} err="failed to get container status \"a0c808c9e534f11af864fd6af549d6ed8d3483c2765ebb535316c3d3147eaa49\": rpc error: code = NotFound desc = could not find container \"a0c808c9e534f11af864fd6af549d6ed8d3483c2765ebb535316c3d3147eaa49\": container with ID starting with a0c808c9e534f11af864fd6af549d6ed8d3483c2765ebb535316c3d3147eaa49 not found: ID does not exist" Nov 25 12:14:23 crc kubenswrapper[4706]: I1125 12:14:23.933977 4706 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7203cd75-4cd1-4aa8-8d6a-0e882e8c2887" path="/var/lib/kubelet/pods/7203cd75-4cd1-4aa8-8d6a-0e882e8c2887/volumes" Nov 25 12:14:31 crc kubenswrapper[4706]: I1125 12:14:31.124867 4706 patch_prober.go:28] interesting pod/machine-config-daemon-dhfpm container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 25 12:14:31 crc kubenswrapper[4706]: I1125 12:14:31.125456 4706 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dhfpm" podUID="0930887a-320c-4506-8c9c-f94d6d64516a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection 
refused" Nov 25 12:14:31 crc kubenswrapper[4706]: I1125 12:14:31.125515 4706 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-dhfpm" Nov 25 12:14:31 crc kubenswrapper[4706]: I1125 12:14:31.126425 4706 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"02f070302d64fff80ca8166389e9c6c4cebd1119d10a5d1848c1ade4b03a9e54"} pod="openshift-machine-config-operator/machine-config-daemon-dhfpm" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 25 12:14:31 crc kubenswrapper[4706]: I1125 12:14:31.126486 4706 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-dhfpm" podUID="0930887a-320c-4506-8c9c-f94d6d64516a" containerName="machine-config-daemon" containerID="cri-o://02f070302d64fff80ca8166389e9c6c4cebd1119d10a5d1848c1ade4b03a9e54" gracePeriod=600 Nov 25 12:14:31 crc kubenswrapper[4706]: E1125 12:14:31.264241 4706 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dhfpm_openshift-machine-config-operator(0930887a-320c-4506-8c9c-f94d6d64516a)\"" pod="openshift-machine-config-operator/machine-config-daemon-dhfpm" podUID="0930887a-320c-4506-8c9c-f94d6d64516a" Nov 25 12:14:31 crc kubenswrapper[4706]: I1125 12:14:31.942521 4706 generic.go:334] "Generic (PLEG): container finished" podID="0930887a-320c-4506-8c9c-f94d6d64516a" containerID="02f070302d64fff80ca8166389e9c6c4cebd1119d10a5d1848c1ade4b03a9e54" exitCode=0 Nov 25 12:14:31 crc kubenswrapper[4706]: I1125 12:14:31.942569 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-dhfpm" 
event={"ID":"0930887a-320c-4506-8c9c-f94d6d64516a","Type":"ContainerDied","Data":"02f070302d64fff80ca8166389e9c6c4cebd1119d10a5d1848c1ade4b03a9e54"} Nov 25 12:14:31 crc kubenswrapper[4706]: I1125 12:14:31.942609 4706 scope.go:117] "RemoveContainer" containerID="c3decbb72f251ff0268699ac4622382fd9d08b45caec2fd0b673ab3aae749803" Nov 25 12:14:31 crc kubenswrapper[4706]: I1125 12:14:31.943288 4706 scope.go:117] "RemoveContainer" containerID="02f070302d64fff80ca8166389e9c6c4cebd1119d10a5d1848c1ade4b03a9e54" Nov 25 12:14:31 crc kubenswrapper[4706]: E1125 12:14:31.943592 4706 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dhfpm_openshift-machine-config-operator(0930887a-320c-4506-8c9c-f94d6d64516a)\"" pod="openshift-machine-config-operator/machine-config-daemon-dhfpm" podUID="0930887a-320c-4506-8c9c-f94d6d64516a" Nov 25 12:14:43 crc kubenswrapper[4706]: I1125 12:14:43.922818 4706 scope.go:117] "RemoveContainer" containerID="02f070302d64fff80ca8166389e9c6c4cebd1119d10a5d1848c1ade4b03a9e54" Nov 25 12:14:43 crc kubenswrapper[4706]: E1125 12:14:43.923733 4706 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dhfpm_openshift-machine-config-operator(0930887a-320c-4506-8c9c-f94d6d64516a)\"" pod="openshift-machine-config-operator/machine-config-daemon-dhfpm" podUID="0930887a-320c-4506-8c9c-f94d6d64516a" Nov 25 12:14:49 crc kubenswrapper[4706]: I1125 12:14:49.121636 4706 generic.go:334] "Generic (PLEG): container finished" podID="97dd7a8b-3605-49a2-ad4d-72dd946605aa" containerID="3678c9e3f4787f1f8f052762730c23ae83ae8a34bbbadb14772b928471a05c86" exitCode=0 Nov 25 12:14:49 crc kubenswrapper[4706]: I1125 12:14:49.121741 4706 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-6kxnq" event={"ID":"97dd7a8b-3605-49a2-ad4d-72dd946605aa","Type":"ContainerDied","Data":"3678c9e3f4787f1f8f052762730c23ae83ae8a34bbbadb14772b928471a05c86"} Nov 25 12:14:50 crc kubenswrapper[4706]: I1125 12:14:50.540591 4706 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-6kxnq" Nov 25 12:14:50 crc kubenswrapper[4706]: I1125 12:14:50.620508 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nwww4\" (UniqueName: \"kubernetes.io/projected/97dd7a8b-3605-49a2-ad4d-72dd946605aa-kube-api-access-nwww4\") pod \"97dd7a8b-3605-49a2-ad4d-72dd946605aa\" (UID: \"97dd7a8b-3605-49a2-ad4d-72dd946605aa\") " Nov 25 12:14:50 crc kubenswrapper[4706]: I1125 12:14:50.620679 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/97dd7a8b-3605-49a2-ad4d-72dd946605aa-ovn-combined-ca-bundle\") pod \"97dd7a8b-3605-49a2-ad4d-72dd946605aa\" (UID: \"97dd7a8b-3605-49a2-ad4d-72dd946605aa\") " Nov 25 12:14:50 crc kubenswrapper[4706]: I1125 12:14:50.620698 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/97dd7a8b-3605-49a2-ad4d-72dd946605aa-ssh-key\") pod \"97dd7a8b-3605-49a2-ad4d-72dd946605aa\" (UID: \"97dd7a8b-3605-49a2-ad4d-72dd946605aa\") " Nov 25 12:14:50 crc kubenswrapper[4706]: I1125 12:14:50.620722 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/97dd7a8b-3605-49a2-ad4d-72dd946605aa-inventory\") pod \"97dd7a8b-3605-49a2-ad4d-72dd946605aa\" (UID: \"97dd7a8b-3605-49a2-ad4d-72dd946605aa\") " Nov 25 12:14:50 crc kubenswrapper[4706]: I1125 12:14:50.620788 4706 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/97dd7a8b-3605-49a2-ad4d-72dd946605aa-ovncontroller-config-0\") pod \"97dd7a8b-3605-49a2-ad4d-72dd946605aa\" (UID: \"97dd7a8b-3605-49a2-ad4d-72dd946605aa\") " Nov 25 12:14:50 crc kubenswrapper[4706]: I1125 12:14:50.627319 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/97dd7a8b-3605-49a2-ad4d-72dd946605aa-kube-api-access-nwww4" (OuterVolumeSpecName: "kube-api-access-nwww4") pod "97dd7a8b-3605-49a2-ad4d-72dd946605aa" (UID: "97dd7a8b-3605-49a2-ad4d-72dd946605aa"). InnerVolumeSpecName "kube-api-access-nwww4". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 12:14:50 crc kubenswrapper[4706]: I1125 12:14:50.628525 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/97dd7a8b-3605-49a2-ad4d-72dd946605aa-ovn-combined-ca-bundle" (OuterVolumeSpecName: "ovn-combined-ca-bundle") pod "97dd7a8b-3605-49a2-ad4d-72dd946605aa" (UID: "97dd7a8b-3605-49a2-ad4d-72dd946605aa"). InnerVolumeSpecName "ovn-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 12:14:50 crc kubenswrapper[4706]: I1125 12:14:50.649110 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/97dd7a8b-3605-49a2-ad4d-72dd946605aa-ovncontroller-config-0" (OuterVolumeSpecName: "ovncontroller-config-0") pod "97dd7a8b-3605-49a2-ad4d-72dd946605aa" (UID: "97dd7a8b-3605-49a2-ad4d-72dd946605aa"). InnerVolumeSpecName "ovncontroller-config-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 12:14:50 crc kubenswrapper[4706]: I1125 12:14:50.653814 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/97dd7a8b-3605-49a2-ad4d-72dd946605aa-inventory" (OuterVolumeSpecName: "inventory") pod "97dd7a8b-3605-49a2-ad4d-72dd946605aa" (UID: "97dd7a8b-3605-49a2-ad4d-72dd946605aa"). 
InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 12:14:50 crc kubenswrapper[4706]: I1125 12:14:50.654023 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/97dd7a8b-3605-49a2-ad4d-72dd946605aa-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "97dd7a8b-3605-49a2-ad4d-72dd946605aa" (UID: "97dd7a8b-3605-49a2-ad4d-72dd946605aa"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 12:14:50 crc kubenswrapper[4706]: I1125 12:14:50.724199 4706 reconciler_common.go:293] "Volume detached for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/97dd7a8b-3605-49a2-ad4d-72dd946605aa-ovn-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 12:14:50 crc kubenswrapper[4706]: I1125 12:14:50.724270 4706 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/97dd7a8b-3605-49a2-ad4d-72dd946605aa-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 25 12:14:50 crc kubenswrapper[4706]: I1125 12:14:50.724281 4706 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/97dd7a8b-3605-49a2-ad4d-72dd946605aa-inventory\") on node \"crc\" DevicePath \"\"" Nov 25 12:14:50 crc kubenswrapper[4706]: I1125 12:14:50.724292 4706 reconciler_common.go:293] "Volume detached for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/97dd7a8b-3605-49a2-ad4d-72dd946605aa-ovncontroller-config-0\") on node \"crc\" DevicePath \"\"" Nov 25 12:14:50 crc kubenswrapper[4706]: I1125 12:14:50.724318 4706 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nwww4\" (UniqueName: \"kubernetes.io/projected/97dd7a8b-3605-49a2-ad4d-72dd946605aa-kube-api-access-nwww4\") on node \"crc\" DevicePath \"\"" Nov 25 12:14:51 crc kubenswrapper[4706]: I1125 12:14:51.144890 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-6kxnq" event={"ID":"97dd7a8b-3605-49a2-ad4d-72dd946605aa","Type":"ContainerDied","Data":"98d8dac7e732de1010aaec61c06a0d0c5be0856b238155243442cf1a84ae3500"} Nov 25 12:14:51 crc kubenswrapper[4706]: I1125 12:14:51.144949 4706 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-6kxnq" Nov 25 12:14:51 crc kubenswrapper[4706]: I1125 12:14:51.144958 4706 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="98d8dac7e732de1010aaec61c06a0d0c5be0856b238155243442cf1a84ae3500" Nov 25 12:14:51 crc kubenswrapper[4706]: I1125 12:14:51.262785 4706 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-q68jk"] Nov 25 12:14:51 crc kubenswrapper[4706]: E1125 12:14:51.263151 4706 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="97dd7a8b-3605-49a2-ad4d-72dd946605aa" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Nov 25 12:14:51 crc kubenswrapper[4706]: I1125 12:14:51.263169 4706 state_mem.go:107] "Deleted CPUSet assignment" podUID="97dd7a8b-3605-49a2-ad4d-72dd946605aa" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Nov 25 12:14:51 crc kubenswrapper[4706]: E1125 12:14:51.263325 4706 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7203cd75-4cd1-4aa8-8d6a-0e882e8c2887" containerName="registry-server" Nov 25 12:14:51 crc kubenswrapper[4706]: I1125 12:14:51.263337 4706 state_mem.go:107] "Deleted CPUSet assignment" podUID="7203cd75-4cd1-4aa8-8d6a-0e882e8c2887" containerName="registry-server" Nov 25 12:14:51 crc kubenswrapper[4706]: E1125 12:14:51.263346 4706 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7203cd75-4cd1-4aa8-8d6a-0e882e8c2887" containerName="extract-utilities" Nov 25 12:14:51 crc kubenswrapper[4706]: I1125 12:14:51.263353 4706 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="7203cd75-4cd1-4aa8-8d6a-0e882e8c2887" containerName="extract-utilities" Nov 25 12:14:51 crc kubenswrapper[4706]: E1125 12:14:51.263379 4706 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7203cd75-4cd1-4aa8-8d6a-0e882e8c2887" containerName="extract-content" Nov 25 12:14:51 crc kubenswrapper[4706]: I1125 12:14:51.263385 4706 state_mem.go:107] "Deleted CPUSet assignment" podUID="7203cd75-4cd1-4aa8-8d6a-0e882e8c2887" containerName="extract-content" Nov 25 12:14:51 crc kubenswrapper[4706]: I1125 12:14:51.263558 4706 memory_manager.go:354] "RemoveStaleState removing state" podUID="7203cd75-4cd1-4aa8-8d6a-0e882e8c2887" containerName="registry-server" Nov 25 12:14:51 crc kubenswrapper[4706]: I1125 12:14:51.263568 4706 memory_manager.go:354] "RemoveStaleState removing state" podUID="97dd7a8b-3605-49a2-ad4d-72dd946605aa" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Nov 25 12:14:51 crc kubenswrapper[4706]: I1125 12:14:51.264279 4706 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-q68jk" Nov 25 12:14:51 crc kubenswrapper[4706]: I1125 12:14:51.268413 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 25 12:14:51 crc kubenswrapper[4706]: I1125 12:14:51.268741 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-r8qqp" Nov 25 12:14:51 crc kubenswrapper[4706]: I1125 12:14:51.268938 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 25 12:14:51 crc kubenswrapper[4706]: I1125 12:14:51.269098 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-ovn-metadata-agent-neutron-config" Nov 25 12:14:51 crc kubenswrapper[4706]: I1125 12:14:51.269253 4706 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 25 12:14:51 crc kubenswrapper[4706]: I1125 12:14:51.269425 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-neutron-config" Nov 25 12:14:51 crc kubenswrapper[4706]: I1125 12:14:51.321486 4706 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-q68jk"] Nov 25 12:14:51 crc kubenswrapper[4706]: I1125 12:14:51.341603 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-drcdf\" (UniqueName: \"kubernetes.io/projected/5686661c-4510-41ab-aed3-7ab5fa576b60-kube-api-access-drcdf\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-q68jk\" (UID: \"5686661c-4510-41ab-aed3-7ab5fa576b60\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-q68jk" Nov 25 12:14:51 crc kubenswrapper[4706]: I1125 12:14:51.341685 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/5686661c-4510-41ab-aed3-7ab5fa576b60-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-q68jk\" (UID: \"5686661c-4510-41ab-aed3-7ab5fa576b60\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-q68jk" Nov 25 12:14:51 crc kubenswrapper[4706]: I1125 12:14:51.341734 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/5686661c-4510-41ab-aed3-7ab5fa576b60-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-q68jk\" (UID: \"5686661c-4510-41ab-aed3-7ab5fa576b60\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-q68jk" Nov 25 12:14:51 crc kubenswrapper[4706]: I1125 12:14:51.341760 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/5686661c-4510-41ab-aed3-7ab5fa576b60-ssh-key\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-q68jk\" (UID: \"5686661c-4510-41ab-aed3-7ab5fa576b60\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-q68jk" Nov 25 12:14:51 crc kubenswrapper[4706]: I1125 12:14:51.341781 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5686661c-4510-41ab-aed3-7ab5fa576b60-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-q68jk\" (UID: \"5686661c-4510-41ab-aed3-7ab5fa576b60\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-q68jk" Nov 25 12:14:51 crc kubenswrapper[4706]: I1125 12:14:51.341875 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"inventory\" (UniqueName: \"kubernetes.io/secret/5686661c-4510-41ab-aed3-7ab5fa576b60-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-q68jk\" (UID: \"5686661c-4510-41ab-aed3-7ab5fa576b60\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-q68jk" Nov 25 12:14:51 crc kubenswrapper[4706]: I1125 12:14:51.444522 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/5686661c-4510-41ab-aed3-7ab5fa576b60-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-q68jk\" (UID: \"5686661c-4510-41ab-aed3-7ab5fa576b60\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-q68jk" Nov 25 12:14:51 crc kubenswrapper[4706]: I1125 12:14:51.444630 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/5686661c-4510-41ab-aed3-7ab5fa576b60-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-q68jk\" (UID: \"5686661c-4510-41ab-aed3-7ab5fa576b60\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-q68jk" Nov 25 12:14:51 crc kubenswrapper[4706]: I1125 12:14:51.444663 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/5686661c-4510-41ab-aed3-7ab5fa576b60-ssh-key\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-q68jk\" (UID: \"5686661c-4510-41ab-aed3-7ab5fa576b60\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-q68jk" Nov 25 12:14:51 crc kubenswrapper[4706]: I1125 12:14:51.444696 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/5686661c-4510-41ab-aed3-7ab5fa576b60-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-q68jk\" (UID: \"5686661c-4510-41ab-aed3-7ab5fa576b60\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-q68jk" Nov 25 12:14:51 crc kubenswrapper[4706]: I1125 12:14:51.444814 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5686661c-4510-41ab-aed3-7ab5fa576b60-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-q68jk\" (UID: \"5686661c-4510-41ab-aed3-7ab5fa576b60\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-q68jk" Nov 25 12:14:51 crc kubenswrapper[4706]: I1125 12:14:51.444854 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-drcdf\" (UniqueName: \"kubernetes.io/projected/5686661c-4510-41ab-aed3-7ab5fa576b60-kube-api-access-drcdf\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-q68jk\" (UID: \"5686661c-4510-41ab-aed3-7ab5fa576b60\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-q68jk" Nov 25 12:14:51 crc kubenswrapper[4706]: I1125 12:14:51.450150 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5686661c-4510-41ab-aed3-7ab5fa576b60-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-q68jk\" (UID: \"5686661c-4510-41ab-aed3-7ab5fa576b60\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-q68jk" Nov 25 12:14:51 crc kubenswrapper[4706]: I1125 12:14:51.452130 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5686661c-4510-41ab-aed3-7ab5fa576b60-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-q68jk\" (UID: 
\"5686661c-4510-41ab-aed3-7ab5fa576b60\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-q68jk" Nov 25 12:14:51 crc kubenswrapper[4706]: I1125 12:14:51.452920 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/5686661c-4510-41ab-aed3-7ab5fa576b60-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-q68jk\" (UID: \"5686661c-4510-41ab-aed3-7ab5fa576b60\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-q68jk" Nov 25 12:14:51 crc kubenswrapper[4706]: I1125 12:14:51.454087 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/5686661c-4510-41ab-aed3-7ab5fa576b60-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-q68jk\" (UID: \"5686661c-4510-41ab-aed3-7ab5fa576b60\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-q68jk" Nov 25 12:14:51 crc kubenswrapper[4706]: I1125 12:14:51.464926 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/5686661c-4510-41ab-aed3-7ab5fa576b60-ssh-key\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-q68jk\" (UID: \"5686661c-4510-41ab-aed3-7ab5fa576b60\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-q68jk" Nov 25 12:14:51 crc kubenswrapper[4706]: I1125 12:14:51.466100 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-drcdf\" (UniqueName: \"kubernetes.io/projected/5686661c-4510-41ab-aed3-7ab5fa576b60-kube-api-access-drcdf\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-q68jk\" (UID: \"5686661c-4510-41ab-aed3-7ab5fa576b60\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-q68jk" Nov 25 12:14:51 crc 
kubenswrapper[4706]: I1125 12:14:51.627079 4706 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-q68jk" Nov 25 12:14:52 crc kubenswrapper[4706]: I1125 12:14:52.194590 4706 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-q68jk"] Nov 25 12:14:53 crc kubenswrapper[4706]: I1125 12:14:53.165574 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-q68jk" event={"ID":"5686661c-4510-41ab-aed3-7ab5fa576b60","Type":"ContainerStarted","Data":"13510e9717f3103610ccacb1cb41e82b295d6cc8be8c12278bce0065cae66331"} Nov 25 12:14:53 crc kubenswrapper[4706]: I1125 12:14:53.165892 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-q68jk" event={"ID":"5686661c-4510-41ab-aed3-7ab5fa576b60","Type":"ContainerStarted","Data":"b0f5551ce5fc348beb5c8f3d0d95ed4a473d8d430699d4394fd509a880518666"} Nov 25 12:14:53 crc kubenswrapper[4706]: I1125 12:14:53.210277 4706 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-q68jk" podStartSLOduration=1.779896163 podStartE2EDuration="2.21025347s" podCreationTimestamp="2025-11-25 12:14:51 +0000 UTC" firstStartedPulling="2025-11-25 12:14:52.196232037 +0000 UTC m=+2301.110789418" lastFinishedPulling="2025-11-25 12:14:52.626589354 +0000 UTC m=+2301.541146725" observedRunningTime="2025-11-25 12:14:53.187475659 +0000 UTC m=+2302.102033050" watchObservedRunningTime="2025-11-25 12:14:53.21025347 +0000 UTC m=+2302.124810861" Nov 25 12:14:56 crc kubenswrapper[4706]: I1125 12:14:56.922894 4706 scope.go:117] "RemoveContainer" containerID="02f070302d64fff80ca8166389e9c6c4cebd1119d10a5d1848c1ade4b03a9e54" Nov 25 12:14:56 crc kubenswrapper[4706]: E1125 12:14:56.923734 4706 pod_workers.go:1301] 
"Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dhfpm_openshift-machine-config-operator(0930887a-320c-4506-8c9c-f94d6d64516a)\"" pod="openshift-machine-config-operator/machine-config-daemon-dhfpm" podUID="0930887a-320c-4506-8c9c-f94d6d64516a" Nov 25 12:15:00 crc kubenswrapper[4706]: I1125 12:15:00.135195 4706 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29401215-kq4f8"] Nov 25 12:15:00 crc kubenswrapper[4706]: I1125 12:15:00.136956 4706 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29401215-kq4f8" Nov 25 12:15:00 crc kubenswrapper[4706]: I1125 12:15:00.139432 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Nov 25 12:15:00 crc kubenswrapper[4706]: I1125 12:15:00.139781 4706 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Nov 25 12:15:00 crc kubenswrapper[4706]: I1125 12:15:00.153598 4706 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29401215-kq4f8"] Nov 25 12:15:00 crc kubenswrapper[4706]: I1125 12:15:00.191945 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c7c1ff73-5f35-4493-bec3-42f26c2112bb-config-volume\") pod \"collect-profiles-29401215-kq4f8\" (UID: \"c7c1ff73-5f35-4493-bec3-42f26c2112bb\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29401215-kq4f8" Nov 25 12:15:00 crc kubenswrapper[4706]: I1125 12:15:00.192069 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"kube-api-access-4rcmv\" (UniqueName: \"kubernetes.io/projected/c7c1ff73-5f35-4493-bec3-42f26c2112bb-kube-api-access-4rcmv\") pod \"collect-profiles-29401215-kq4f8\" (UID: \"c7c1ff73-5f35-4493-bec3-42f26c2112bb\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29401215-kq4f8" Nov 25 12:15:00 crc kubenswrapper[4706]: I1125 12:15:00.192098 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c7c1ff73-5f35-4493-bec3-42f26c2112bb-secret-volume\") pod \"collect-profiles-29401215-kq4f8\" (UID: \"c7c1ff73-5f35-4493-bec3-42f26c2112bb\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29401215-kq4f8" Nov 25 12:15:00 crc kubenswrapper[4706]: I1125 12:15:00.293910 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4rcmv\" (UniqueName: \"kubernetes.io/projected/c7c1ff73-5f35-4493-bec3-42f26c2112bb-kube-api-access-4rcmv\") pod \"collect-profiles-29401215-kq4f8\" (UID: \"c7c1ff73-5f35-4493-bec3-42f26c2112bb\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29401215-kq4f8" Nov 25 12:15:00 crc kubenswrapper[4706]: I1125 12:15:00.293971 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c7c1ff73-5f35-4493-bec3-42f26c2112bb-secret-volume\") pod \"collect-profiles-29401215-kq4f8\" (UID: \"c7c1ff73-5f35-4493-bec3-42f26c2112bb\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29401215-kq4f8" Nov 25 12:15:00 crc kubenswrapper[4706]: I1125 12:15:00.294143 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c7c1ff73-5f35-4493-bec3-42f26c2112bb-config-volume\") pod \"collect-profiles-29401215-kq4f8\" (UID: \"c7c1ff73-5f35-4493-bec3-42f26c2112bb\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29401215-kq4f8" Nov 25 12:15:00 crc kubenswrapper[4706]: I1125 12:15:00.295282 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c7c1ff73-5f35-4493-bec3-42f26c2112bb-config-volume\") pod \"collect-profiles-29401215-kq4f8\" (UID: \"c7c1ff73-5f35-4493-bec3-42f26c2112bb\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29401215-kq4f8" Nov 25 12:15:00 crc kubenswrapper[4706]: I1125 12:15:00.304009 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c7c1ff73-5f35-4493-bec3-42f26c2112bb-secret-volume\") pod \"collect-profiles-29401215-kq4f8\" (UID: \"c7c1ff73-5f35-4493-bec3-42f26c2112bb\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29401215-kq4f8" Nov 25 12:15:00 crc kubenswrapper[4706]: I1125 12:15:00.315427 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4rcmv\" (UniqueName: \"kubernetes.io/projected/c7c1ff73-5f35-4493-bec3-42f26c2112bb-kube-api-access-4rcmv\") pod \"collect-profiles-29401215-kq4f8\" (UID: \"c7c1ff73-5f35-4493-bec3-42f26c2112bb\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29401215-kq4f8" Nov 25 12:15:00 crc kubenswrapper[4706]: I1125 12:15:00.458727 4706 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29401215-kq4f8" Nov 25 12:15:00 crc kubenswrapper[4706]: I1125 12:15:00.920521 4706 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29401215-kq4f8"] Nov 25 12:15:01 crc kubenswrapper[4706]: I1125 12:15:01.244177 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29401215-kq4f8" event={"ID":"c7c1ff73-5f35-4493-bec3-42f26c2112bb","Type":"ContainerStarted","Data":"02674ee2a183cd83356c4b3ff43a9bbf66c77f226e6ca0554780779507dc198c"} Nov 25 12:15:01 crc kubenswrapper[4706]: I1125 12:15:01.244229 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29401215-kq4f8" event={"ID":"c7c1ff73-5f35-4493-bec3-42f26c2112bb","Type":"ContainerStarted","Data":"d1ad627258cde44090c6850b5e5692d7a2d728fb1efd3182b95ff5facb65c46c"} Nov 25 12:15:01 crc kubenswrapper[4706]: I1125 12:15:01.259476 4706 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29401215-kq4f8" podStartSLOduration=1.259453986 podStartE2EDuration="1.259453986s" podCreationTimestamp="2025-11-25 12:15:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 12:15:01.258263865 +0000 UTC m=+2310.172821256" watchObservedRunningTime="2025-11-25 12:15:01.259453986 +0000 UTC m=+2310.174011367" Nov 25 12:15:02 crc kubenswrapper[4706]: I1125 12:15:02.281972 4706 generic.go:334] "Generic (PLEG): container finished" podID="c7c1ff73-5f35-4493-bec3-42f26c2112bb" containerID="02674ee2a183cd83356c4b3ff43a9bbf66c77f226e6ca0554780779507dc198c" exitCode=0 Nov 25 12:15:02 crc kubenswrapper[4706]: I1125 12:15:02.282100 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-operator-lifecycle-manager/collect-profiles-29401215-kq4f8" event={"ID":"c7c1ff73-5f35-4493-bec3-42f26c2112bb","Type":"ContainerDied","Data":"02674ee2a183cd83356c4b3ff43a9bbf66c77f226e6ca0554780779507dc198c"} Nov 25 12:15:03 crc kubenswrapper[4706]: I1125 12:15:03.632440 4706 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29401215-kq4f8" Nov 25 12:15:03 crc kubenswrapper[4706]: I1125 12:15:03.664221 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4rcmv\" (UniqueName: \"kubernetes.io/projected/c7c1ff73-5f35-4493-bec3-42f26c2112bb-kube-api-access-4rcmv\") pod \"c7c1ff73-5f35-4493-bec3-42f26c2112bb\" (UID: \"c7c1ff73-5f35-4493-bec3-42f26c2112bb\") " Nov 25 12:15:03 crc kubenswrapper[4706]: I1125 12:15:03.664333 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c7c1ff73-5f35-4493-bec3-42f26c2112bb-config-volume\") pod \"c7c1ff73-5f35-4493-bec3-42f26c2112bb\" (UID: \"c7c1ff73-5f35-4493-bec3-42f26c2112bb\") " Nov 25 12:15:03 crc kubenswrapper[4706]: I1125 12:15:03.664412 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c7c1ff73-5f35-4493-bec3-42f26c2112bb-secret-volume\") pod \"c7c1ff73-5f35-4493-bec3-42f26c2112bb\" (UID: \"c7c1ff73-5f35-4493-bec3-42f26c2112bb\") " Nov 25 12:15:03 crc kubenswrapper[4706]: I1125 12:15:03.665588 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c7c1ff73-5f35-4493-bec3-42f26c2112bb-config-volume" (OuterVolumeSpecName: "config-volume") pod "c7c1ff73-5f35-4493-bec3-42f26c2112bb" (UID: "c7c1ff73-5f35-4493-bec3-42f26c2112bb"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 12:15:03 crc kubenswrapper[4706]: I1125 12:15:03.671326 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c7c1ff73-5f35-4493-bec3-42f26c2112bb-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "c7c1ff73-5f35-4493-bec3-42f26c2112bb" (UID: "c7c1ff73-5f35-4493-bec3-42f26c2112bb"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 12:15:03 crc kubenswrapper[4706]: I1125 12:15:03.671616 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c7c1ff73-5f35-4493-bec3-42f26c2112bb-kube-api-access-4rcmv" (OuterVolumeSpecName: "kube-api-access-4rcmv") pod "c7c1ff73-5f35-4493-bec3-42f26c2112bb" (UID: "c7c1ff73-5f35-4493-bec3-42f26c2112bb"). InnerVolumeSpecName "kube-api-access-4rcmv". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 12:15:03 crc kubenswrapper[4706]: I1125 12:15:03.766986 4706 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c7c1ff73-5f35-4493-bec3-42f26c2112bb-secret-volume\") on node \"crc\" DevicePath \"\"" Nov 25 12:15:03 crc kubenswrapper[4706]: I1125 12:15:03.767425 4706 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4rcmv\" (UniqueName: \"kubernetes.io/projected/c7c1ff73-5f35-4493-bec3-42f26c2112bb-kube-api-access-4rcmv\") on node \"crc\" DevicePath \"\"" Nov 25 12:15:03 crc kubenswrapper[4706]: I1125 12:15:03.767439 4706 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c7c1ff73-5f35-4493-bec3-42f26c2112bb-config-volume\") on node \"crc\" DevicePath \"\"" Nov 25 12:15:04 crc kubenswrapper[4706]: I1125 12:15:04.304825 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29401215-kq4f8" 
event={"ID":"c7c1ff73-5f35-4493-bec3-42f26c2112bb","Type":"ContainerDied","Data":"d1ad627258cde44090c6850b5e5692d7a2d728fb1efd3182b95ff5facb65c46c"} Nov 25 12:15:04 crc kubenswrapper[4706]: I1125 12:15:04.304887 4706 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d1ad627258cde44090c6850b5e5692d7a2d728fb1efd3182b95ff5facb65c46c" Nov 25 12:15:04 crc kubenswrapper[4706]: I1125 12:15:04.304928 4706 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29401215-kq4f8" Nov 25 12:15:04 crc kubenswrapper[4706]: I1125 12:15:04.349958 4706 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29401170-s4f7r"] Nov 25 12:15:04 crc kubenswrapper[4706]: I1125 12:15:04.363523 4706 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29401170-s4f7r"] Nov 25 12:15:05 crc kubenswrapper[4706]: I1125 12:15:05.941568 4706 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="51a87a4e-3d58-48e0-b455-292aa206e149" path="/var/lib/kubelet/pods/51a87a4e-3d58-48e0-b455-292aa206e149/volumes" Nov 25 12:15:09 crc kubenswrapper[4706]: I1125 12:15:09.922284 4706 scope.go:117] "RemoveContainer" containerID="02f070302d64fff80ca8166389e9c6c4cebd1119d10a5d1848c1ade4b03a9e54" Nov 25 12:15:09 crc kubenswrapper[4706]: E1125 12:15:09.923160 4706 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dhfpm_openshift-machine-config-operator(0930887a-320c-4506-8c9c-f94d6d64516a)\"" pod="openshift-machine-config-operator/machine-config-daemon-dhfpm" podUID="0930887a-320c-4506-8c9c-f94d6d64516a" Nov 25 12:15:24 crc kubenswrapper[4706]: I1125 12:15:24.922747 4706 scope.go:117] "RemoveContainer" 
containerID="02f070302d64fff80ca8166389e9c6c4cebd1119d10a5d1848c1ade4b03a9e54" Nov 25 12:15:24 crc kubenswrapper[4706]: E1125 12:15:24.923489 4706 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dhfpm_openshift-machine-config-operator(0930887a-320c-4506-8c9c-f94d6d64516a)\"" pod="openshift-machine-config-operator/machine-config-daemon-dhfpm" podUID="0930887a-320c-4506-8c9c-f94d6d64516a" Nov 25 12:15:37 crc kubenswrapper[4706]: I1125 12:15:37.020629 4706 scope.go:117] "RemoveContainer" containerID="2c5dfa9cb2ce5d6cbb777e4b005be38591922269782460a54c83a0a317b49885" Nov 25 12:15:38 crc kubenswrapper[4706]: I1125 12:15:38.922710 4706 scope.go:117] "RemoveContainer" containerID="02f070302d64fff80ca8166389e9c6c4cebd1119d10a5d1848c1ade4b03a9e54" Nov 25 12:15:38 crc kubenswrapper[4706]: E1125 12:15:38.923278 4706 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dhfpm_openshift-machine-config-operator(0930887a-320c-4506-8c9c-f94d6d64516a)\"" pod="openshift-machine-config-operator/machine-config-daemon-dhfpm" podUID="0930887a-320c-4506-8c9c-f94d6d64516a" Nov 25 12:15:40 crc kubenswrapper[4706]: I1125 12:15:40.649323 4706 generic.go:334] "Generic (PLEG): container finished" podID="5686661c-4510-41ab-aed3-7ab5fa576b60" containerID="13510e9717f3103610ccacb1cb41e82b295d6cc8be8c12278bce0065cae66331" exitCode=0 Nov 25 12:15:40 crc kubenswrapper[4706]: I1125 12:15:40.649359 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-q68jk" 
event={"ID":"5686661c-4510-41ab-aed3-7ab5fa576b60","Type":"ContainerDied","Data":"13510e9717f3103610ccacb1cb41e82b295d6cc8be8c12278bce0065cae66331"} Nov 25 12:15:42 crc kubenswrapper[4706]: I1125 12:15:42.063879 4706 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-q68jk" Nov 25 12:15:42 crc kubenswrapper[4706]: I1125 12:15:42.210668 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5686661c-4510-41ab-aed3-7ab5fa576b60-inventory\") pod \"5686661c-4510-41ab-aed3-7ab5fa576b60\" (UID: \"5686661c-4510-41ab-aed3-7ab5fa576b60\") " Nov 25 12:15:42 crc kubenswrapper[4706]: I1125 12:15:42.210720 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-drcdf\" (UniqueName: \"kubernetes.io/projected/5686661c-4510-41ab-aed3-7ab5fa576b60-kube-api-access-drcdf\") pod \"5686661c-4510-41ab-aed3-7ab5fa576b60\" (UID: \"5686661c-4510-41ab-aed3-7ab5fa576b60\") " Nov 25 12:15:42 crc kubenswrapper[4706]: I1125 12:15:42.210845 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/5686661c-4510-41ab-aed3-7ab5fa576b60-nova-metadata-neutron-config-0\") pod \"5686661c-4510-41ab-aed3-7ab5fa576b60\" (UID: \"5686661c-4510-41ab-aed3-7ab5fa576b60\") " Nov 25 12:15:42 crc kubenswrapper[4706]: I1125 12:15:42.210953 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5686661c-4510-41ab-aed3-7ab5fa576b60-neutron-metadata-combined-ca-bundle\") pod \"5686661c-4510-41ab-aed3-7ab5fa576b60\" (UID: \"5686661c-4510-41ab-aed3-7ab5fa576b60\") " Nov 25 12:15:42 crc kubenswrapper[4706]: I1125 12:15:42.210992 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/5686661c-4510-41ab-aed3-7ab5fa576b60-neutron-ovn-metadata-agent-neutron-config-0\") pod \"5686661c-4510-41ab-aed3-7ab5fa576b60\" (UID: \"5686661c-4510-41ab-aed3-7ab5fa576b60\") " Nov 25 12:15:42 crc kubenswrapper[4706]: I1125 12:15:42.211051 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/5686661c-4510-41ab-aed3-7ab5fa576b60-ssh-key\") pod \"5686661c-4510-41ab-aed3-7ab5fa576b60\" (UID: \"5686661c-4510-41ab-aed3-7ab5fa576b60\") " Nov 25 12:15:42 crc kubenswrapper[4706]: I1125 12:15:42.217138 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5686661c-4510-41ab-aed3-7ab5fa576b60-kube-api-access-drcdf" (OuterVolumeSpecName: "kube-api-access-drcdf") pod "5686661c-4510-41ab-aed3-7ab5fa576b60" (UID: "5686661c-4510-41ab-aed3-7ab5fa576b60"). InnerVolumeSpecName "kube-api-access-drcdf". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 12:15:42 crc kubenswrapper[4706]: I1125 12:15:42.223065 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5686661c-4510-41ab-aed3-7ab5fa576b60-neutron-metadata-combined-ca-bundle" (OuterVolumeSpecName: "neutron-metadata-combined-ca-bundle") pod "5686661c-4510-41ab-aed3-7ab5fa576b60" (UID: "5686661c-4510-41ab-aed3-7ab5fa576b60"). InnerVolumeSpecName "neutron-metadata-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 12:15:42 crc kubenswrapper[4706]: I1125 12:15:42.241459 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5686661c-4510-41ab-aed3-7ab5fa576b60-inventory" (OuterVolumeSpecName: "inventory") pod "5686661c-4510-41ab-aed3-7ab5fa576b60" (UID: "5686661c-4510-41ab-aed3-7ab5fa576b60"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 12:15:42 crc kubenswrapper[4706]: I1125 12:15:42.245418 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5686661c-4510-41ab-aed3-7ab5fa576b60-nova-metadata-neutron-config-0" (OuterVolumeSpecName: "nova-metadata-neutron-config-0") pod "5686661c-4510-41ab-aed3-7ab5fa576b60" (UID: "5686661c-4510-41ab-aed3-7ab5fa576b60"). InnerVolumeSpecName "nova-metadata-neutron-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 12:15:42 crc kubenswrapper[4706]: I1125 12:15:42.252458 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5686661c-4510-41ab-aed3-7ab5fa576b60-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "5686661c-4510-41ab-aed3-7ab5fa576b60" (UID: "5686661c-4510-41ab-aed3-7ab5fa576b60"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 12:15:42 crc kubenswrapper[4706]: I1125 12:15:42.255079 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5686661c-4510-41ab-aed3-7ab5fa576b60-neutron-ovn-metadata-agent-neutron-config-0" (OuterVolumeSpecName: "neutron-ovn-metadata-agent-neutron-config-0") pod "5686661c-4510-41ab-aed3-7ab5fa576b60" (UID: "5686661c-4510-41ab-aed3-7ab5fa576b60"). InnerVolumeSpecName "neutron-ovn-metadata-agent-neutron-config-0". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 12:15:42 crc kubenswrapper[4706]: I1125 12:15:42.313660 4706 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/5686661c-4510-41ab-aed3-7ab5fa576b60-nova-metadata-neutron-config-0\") on node \"crc\" DevicePath \"\"" Nov 25 12:15:42 crc kubenswrapper[4706]: I1125 12:15:42.313702 4706 reconciler_common.go:293] "Volume detached for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5686661c-4510-41ab-aed3-7ab5fa576b60-neutron-metadata-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 12:15:42 crc kubenswrapper[4706]: I1125 12:15:42.313717 4706 reconciler_common.go:293] "Volume detached for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/5686661c-4510-41ab-aed3-7ab5fa576b60-neutron-ovn-metadata-agent-neutron-config-0\") on node \"crc\" DevicePath \"\"" Nov 25 12:15:42 crc kubenswrapper[4706]: I1125 12:15:42.313730 4706 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/5686661c-4510-41ab-aed3-7ab5fa576b60-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 25 12:15:42 crc kubenswrapper[4706]: I1125 12:15:42.313745 4706 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5686661c-4510-41ab-aed3-7ab5fa576b60-inventory\") on node \"crc\" DevicePath \"\"" Nov 25 12:15:42 crc kubenswrapper[4706]: I1125 12:15:42.313755 4706 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-drcdf\" (UniqueName: \"kubernetes.io/projected/5686661c-4510-41ab-aed3-7ab5fa576b60-kube-api-access-drcdf\") on node \"crc\" DevicePath \"\"" Nov 25 12:15:42 crc kubenswrapper[4706]: I1125 12:15:42.669594 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-q68jk" 
event={"ID":"5686661c-4510-41ab-aed3-7ab5fa576b60","Type":"ContainerDied","Data":"b0f5551ce5fc348beb5c8f3d0d95ed4a473d8d430699d4394fd509a880518666"} Nov 25 12:15:42 crc kubenswrapper[4706]: I1125 12:15:42.669637 4706 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b0f5551ce5fc348beb5c8f3d0d95ed4a473d8d430699d4394fd509a880518666" Nov 25 12:15:42 crc kubenswrapper[4706]: I1125 12:15:42.669654 4706 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-q68jk" Nov 25 12:15:42 crc kubenswrapper[4706]: I1125 12:15:42.767963 4706 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-g6fp7"] Nov 25 12:15:42 crc kubenswrapper[4706]: E1125 12:15:42.768515 4706 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5686661c-4510-41ab-aed3-7ab5fa576b60" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Nov 25 12:15:42 crc kubenswrapper[4706]: I1125 12:15:42.768541 4706 state_mem.go:107] "Deleted CPUSet assignment" podUID="5686661c-4510-41ab-aed3-7ab5fa576b60" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Nov 25 12:15:42 crc kubenswrapper[4706]: E1125 12:15:42.768563 4706 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c7c1ff73-5f35-4493-bec3-42f26c2112bb" containerName="collect-profiles" Nov 25 12:15:42 crc kubenswrapper[4706]: I1125 12:15:42.768571 4706 state_mem.go:107] "Deleted CPUSet assignment" podUID="c7c1ff73-5f35-4493-bec3-42f26c2112bb" containerName="collect-profiles" Nov 25 12:15:42 crc kubenswrapper[4706]: I1125 12:15:42.768885 4706 memory_manager.go:354] "RemoveStaleState removing state" podUID="c7c1ff73-5f35-4493-bec3-42f26c2112bb" containerName="collect-profiles" Nov 25 12:15:42 crc kubenswrapper[4706]: I1125 12:15:42.768919 4706 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="5686661c-4510-41ab-aed3-7ab5fa576b60" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Nov 25 12:15:42 crc kubenswrapper[4706]: I1125 12:15:42.769779 4706 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-g6fp7" Nov 25 12:15:42 crc kubenswrapper[4706]: I1125 12:15:42.774801 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 25 12:15:42 crc kubenswrapper[4706]: I1125 12:15:42.775006 4706 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 25 12:15:42 crc kubenswrapper[4706]: I1125 12:15:42.775118 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"libvirt-secret" Nov 25 12:15:42 crc kubenswrapper[4706]: I1125 12:15:42.775258 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 25 12:15:42 crc kubenswrapper[4706]: I1125 12:15:42.775535 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-r8qqp" Nov 25 12:15:42 crc kubenswrapper[4706]: I1125 12:15:42.777223 4706 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-g6fp7"] Nov 25 12:15:42 crc kubenswrapper[4706]: I1125 12:15:42.929106 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/90e48cbb-dd1b-466b-a72f-5e2913554a5b-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-g6fp7\" (UID: \"90e48cbb-dd1b-466b-a72f-5e2913554a5b\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-g6fp7" Nov 25 12:15:42 crc kubenswrapper[4706]: I1125 12:15:42.929213 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"ssh-key\" (UniqueName: \"kubernetes.io/secret/90e48cbb-dd1b-466b-a72f-5e2913554a5b-ssh-key\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-g6fp7\" (UID: \"90e48cbb-dd1b-466b-a72f-5e2913554a5b\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-g6fp7" Nov 25 12:15:42 crc kubenswrapper[4706]: I1125 12:15:42.929243 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/90e48cbb-dd1b-466b-a72f-5e2913554a5b-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-g6fp7\" (UID: \"90e48cbb-dd1b-466b-a72f-5e2913554a5b\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-g6fp7" Nov 25 12:15:42 crc kubenswrapper[4706]: I1125 12:15:42.929326 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/90e48cbb-dd1b-466b-a72f-5e2913554a5b-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-g6fp7\" (UID: \"90e48cbb-dd1b-466b-a72f-5e2913554a5b\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-g6fp7" Nov 25 12:15:42 crc kubenswrapper[4706]: I1125 12:15:42.929449 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-prdxk\" (UniqueName: \"kubernetes.io/projected/90e48cbb-dd1b-466b-a72f-5e2913554a5b-kube-api-access-prdxk\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-g6fp7\" (UID: \"90e48cbb-dd1b-466b-a72f-5e2913554a5b\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-g6fp7" Nov 25 12:15:43 crc kubenswrapper[4706]: I1125 12:15:43.031540 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-prdxk\" (UniqueName: \"kubernetes.io/projected/90e48cbb-dd1b-466b-a72f-5e2913554a5b-kube-api-access-prdxk\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-g6fp7\" (UID: 
\"90e48cbb-dd1b-466b-a72f-5e2913554a5b\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-g6fp7" Nov 25 12:15:43 crc kubenswrapper[4706]: I1125 12:15:43.031690 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/90e48cbb-dd1b-466b-a72f-5e2913554a5b-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-g6fp7\" (UID: \"90e48cbb-dd1b-466b-a72f-5e2913554a5b\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-g6fp7" Nov 25 12:15:43 crc kubenswrapper[4706]: I1125 12:15:43.031781 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/90e48cbb-dd1b-466b-a72f-5e2913554a5b-ssh-key\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-g6fp7\" (UID: \"90e48cbb-dd1b-466b-a72f-5e2913554a5b\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-g6fp7" Nov 25 12:15:43 crc kubenswrapper[4706]: I1125 12:15:43.031803 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/90e48cbb-dd1b-466b-a72f-5e2913554a5b-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-g6fp7\" (UID: \"90e48cbb-dd1b-466b-a72f-5e2913554a5b\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-g6fp7" Nov 25 12:15:43 crc kubenswrapper[4706]: I1125 12:15:43.031839 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/90e48cbb-dd1b-466b-a72f-5e2913554a5b-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-g6fp7\" (UID: \"90e48cbb-dd1b-466b-a72f-5e2913554a5b\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-g6fp7" Nov 25 12:15:43 crc kubenswrapper[4706]: I1125 12:15:43.036758 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/90e48cbb-dd1b-466b-a72f-5e2913554a5b-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-g6fp7\" (UID: \"90e48cbb-dd1b-466b-a72f-5e2913554a5b\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-g6fp7" Nov 25 12:15:43 crc kubenswrapper[4706]: I1125 12:15:43.036757 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/90e48cbb-dd1b-466b-a72f-5e2913554a5b-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-g6fp7\" (UID: \"90e48cbb-dd1b-466b-a72f-5e2913554a5b\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-g6fp7" Nov 25 12:15:43 crc kubenswrapper[4706]: I1125 12:15:43.045113 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/90e48cbb-dd1b-466b-a72f-5e2913554a5b-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-g6fp7\" (UID: \"90e48cbb-dd1b-466b-a72f-5e2913554a5b\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-g6fp7" Nov 25 12:15:43 crc kubenswrapper[4706]: I1125 12:15:43.046027 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/90e48cbb-dd1b-466b-a72f-5e2913554a5b-ssh-key\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-g6fp7\" (UID: \"90e48cbb-dd1b-466b-a72f-5e2913554a5b\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-g6fp7" Nov 25 12:15:43 crc kubenswrapper[4706]: I1125 12:15:43.054577 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-prdxk\" (UniqueName: \"kubernetes.io/projected/90e48cbb-dd1b-466b-a72f-5e2913554a5b-kube-api-access-prdxk\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-g6fp7\" (UID: \"90e48cbb-dd1b-466b-a72f-5e2913554a5b\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-g6fp7" Nov 25 12:15:43 crc kubenswrapper[4706]: 
I1125 12:15:43.138441 4706 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-g6fp7" Nov 25 12:15:43 crc kubenswrapper[4706]: I1125 12:15:43.726132 4706 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-g6fp7"] Nov 25 12:15:44 crc kubenswrapper[4706]: I1125 12:15:44.694732 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-g6fp7" event={"ID":"90e48cbb-dd1b-466b-a72f-5e2913554a5b","Type":"ContainerStarted","Data":"98a76c1285688df6cea159f13ce5ebf4ca4ba08f0159f5bc97440e0e5c9053b5"} Nov 25 12:15:44 crc kubenswrapper[4706]: I1125 12:15:44.694806 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-g6fp7" event={"ID":"90e48cbb-dd1b-466b-a72f-5e2913554a5b","Type":"ContainerStarted","Data":"54c77604ed1507c6caba8f48343f0a0ad5c1b55cf349e40ce6f3248a35da01aa"} Nov 25 12:15:44 crc kubenswrapper[4706]: I1125 12:15:44.720107 4706 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-g6fp7" podStartSLOduration=2.2896934939999998 podStartE2EDuration="2.720083601s" podCreationTimestamp="2025-11-25 12:15:42 +0000 UTC" firstStartedPulling="2025-11-25 12:15:43.735332255 +0000 UTC m=+2352.649889636" lastFinishedPulling="2025-11-25 12:15:44.165722362 +0000 UTC m=+2353.080279743" observedRunningTime="2025-11-25 12:15:44.711575635 +0000 UTC m=+2353.626133026" watchObservedRunningTime="2025-11-25 12:15:44.720083601 +0000 UTC m=+2353.634641002" Nov 25 12:15:52 crc kubenswrapper[4706]: I1125 12:15:52.922217 4706 scope.go:117] "RemoveContainer" containerID="02f070302d64fff80ca8166389e9c6c4cebd1119d10a5d1848c1ade4b03a9e54" Nov 25 12:15:52 crc kubenswrapper[4706]: E1125 12:15:52.922959 4706 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dhfpm_openshift-machine-config-operator(0930887a-320c-4506-8c9c-f94d6d64516a)\"" pod="openshift-machine-config-operator/machine-config-daemon-dhfpm" podUID="0930887a-320c-4506-8c9c-f94d6d64516a" Nov 25 12:16:05 crc kubenswrapper[4706]: I1125 12:16:05.922185 4706 scope.go:117] "RemoveContainer" containerID="02f070302d64fff80ca8166389e9c6c4cebd1119d10a5d1848c1ade4b03a9e54" Nov 25 12:16:05 crc kubenswrapper[4706]: E1125 12:16:05.923253 4706 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dhfpm_openshift-machine-config-operator(0930887a-320c-4506-8c9c-f94d6d64516a)\"" pod="openshift-machine-config-operator/machine-config-daemon-dhfpm" podUID="0930887a-320c-4506-8c9c-f94d6d64516a" Nov 25 12:16:16 crc kubenswrapper[4706]: I1125 12:16:16.923294 4706 scope.go:117] "RemoveContainer" containerID="02f070302d64fff80ca8166389e9c6c4cebd1119d10a5d1848c1ade4b03a9e54" Nov 25 12:16:16 crc kubenswrapper[4706]: E1125 12:16:16.925446 4706 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dhfpm_openshift-machine-config-operator(0930887a-320c-4506-8c9c-f94d6d64516a)\"" pod="openshift-machine-config-operator/machine-config-daemon-dhfpm" podUID="0930887a-320c-4506-8c9c-f94d6d64516a" Nov 25 12:16:31 crc kubenswrapper[4706]: I1125 12:16:31.939692 4706 scope.go:117] "RemoveContainer" containerID="02f070302d64fff80ca8166389e9c6c4cebd1119d10a5d1848c1ade4b03a9e54" Nov 25 12:16:31 crc kubenswrapper[4706]: E1125 12:16:31.941817 4706 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dhfpm_openshift-machine-config-operator(0930887a-320c-4506-8c9c-f94d6d64516a)\"" pod="openshift-machine-config-operator/machine-config-daemon-dhfpm" podUID="0930887a-320c-4506-8c9c-f94d6d64516a" Nov 25 12:16:46 crc kubenswrapper[4706]: I1125 12:16:46.922899 4706 scope.go:117] "RemoveContainer" containerID="02f070302d64fff80ca8166389e9c6c4cebd1119d10a5d1848c1ade4b03a9e54" Nov 25 12:16:46 crc kubenswrapper[4706]: E1125 12:16:46.923753 4706 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dhfpm_openshift-machine-config-operator(0930887a-320c-4506-8c9c-f94d6d64516a)\"" pod="openshift-machine-config-operator/machine-config-daemon-dhfpm" podUID="0930887a-320c-4506-8c9c-f94d6d64516a" Nov 25 12:17:01 crc kubenswrapper[4706]: I1125 12:17:01.927913 4706 scope.go:117] "RemoveContainer" containerID="02f070302d64fff80ca8166389e9c6c4cebd1119d10a5d1848c1ade4b03a9e54" Nov 25 12:17:01 crc kubenswrapper[4706]: E1125 12:17:01.928665 4706 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dhfpm_openshift-machine-config-operator(0930887a-320c-4506-8c9c-f94d6d64516a)\"" pod="openshift-machine-config-operator/machine-config-daemon-dhfpm" podUID="0930887a-320c-4506-8c9c-f94d6d64516a" Nov 25 12:17:13 crc kubenswrapper[4706]: I1125 12:17:13.922643 4706 scope.go:117] "RemoveContainer" containerID="02f070302d64fff80ca8166389e9c6c4cebd1119d10a5d1848c1ade4b03a9e54" Nov 25 12:17:13 crc kubenswrapper[4706]: E1125 12:17:13.923848 4706 pod_workers.go:1301] 
"Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dhfpm_openshift-machine-config-operator(0930887a-320c-4506-8c9c-f94d6d64516a)\"" pod="openshift-machine-config-operator/machine-config-daemon-dhfpm" podUID="0930887a-320c-4506-8c9c-f94d6d64516a" Nov 25 12:17:27 crc kubenswrapper[4706]: I1125 12:17:27.922873 4706 scope.go:117] "RemoveContainer" containerID="02f070302d64fff80ca8166389e9c6c4cebd1119d10a5d1848c1ade4b03a9e54" Nov 25 12:17:27 crc kubenswrapper[4706]: E1125 12:17:27.923930 4706 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dhfpm_openshift-machine-config-operator(0930887a-320c-4506-8c9c-f94d6d64516a)\"" pod="openshift-machine-config-operator/machine-config-daemon-dhfpm" podUID="0930887a-320c-4506-8c9c-f94d6d64516a" Nov 25 12:17:40 crc kubenswrapper[4706]: I1125 12:17:40.923238 4706 scope.go:117] "RemoveContainer" containerID="02f070302d64fff80ca8166389e9c6c4cebd1119d10a5d1848c1ade4b03a9e54" Nov 25 12:17:40 crc kubenswrapper[4706]: E1125 12:17:40.924082 4706 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dhfpm_openshift-machine-config-operator(0930887a-320c-4506-8c9c-f94d6d64516a)\"" pod="openshift-machine-config-operator/machine-config-daemon-dhfpm" podUID="0930887a-320c-4506-8c9c-f94d6d64516a" Nov 25 12:17:51 crc kubenswrapper[4706]: I1125 12:17:51.930804 4706 scope.go:117] "RemoveContainer" containerID="02f070302d64fff80ca8166389e9c6c4cebd1119d10a5d1848c1ade4b03a9e54" Nov 25 12:17:51 crc kubenswrapper[4706]: E1125 12:17:51.933609 4706 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dhfpm_openshift-machine-config-operator(0930887a-320c-4506-8c9c-f94d6d64516a)\"" pod="openshift-machine-config-operator/machine-config-daemon-dhfpm" podUID="0930887a-320c-4506-8c9c-f94d6d64516a" Nov 25 12:18:02 crc kubenswrapper[4706]: I1125 12:18:02.922651 4706 scope.go:117] "RemoveContainer" containerID="02f070302d64fff80ca8166389e9c6c4cebd1119d10a5d1848c1ade4b03a9e54" Nov 25 12:18:02 crc kubenswrapper[4706]: E1125 12:18:02.923424 4706 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dhfpm_openshift-machine-config-operator(0930887a-320c-4506-8c9c-f94d6d64516a)\"" pod="openshift-machine-config-operator/machine-config-daemon-dhfpm" podUID="0930887a-320c-4506-8c9c-f94d6d64516a" Nov 25 12:18:14 crc kubenswrapper[4706]: I1125 12:18:14.923896 4706 scope.go:117] "RemoveContainer" containerID="02f070302d64fff80ca8166389e9c6c4cebd1119d10a5d1848c1ade4b03a9e54" Nov 25 12:18:14 crc kubenswrapper[4706]: E1125 12:18:14.925153 4706 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dhfpm_openshift-machine-config-operator(0930887a-320c-4506-8c9c-f94d6d64516a)\"" pod="openshift-machine-config-operator/machine-config-daemon-dhfpm" podUID="0930887a-320c-4506-8c9c-f94d6d64516a" Nov 25 12:18:28 crc kubenswrapper[4706]: I1125 12:18:28.922350 4706 scope.go:117] "RemoveContainer" containerID="02f070302d64fff80ca8166389e9c6c4cebd1119d10a5d1848c1ade4b03a9e54" Nov 25 12:18:28 crc kubenswrapper[4706]: E1125 
12:18:28.924182 4706 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dhfpm_openshift-machine-config-operator(0930887a-320c-4506-8c9c-f94d6d64516a)\"" pod="openshift-machine-config-operator/machine-config-daemon-dhfpm" podUID="0930887a-320c-4506-8c9c-f94d6d64516a" Nov 25 12:18:43 crc kubenswrapper[4706]: I1125 12:18:43.922468 4706 scope.go:117] "RemoveContainer" containerID="02f070302d64fff80ca8166389e9c6c4cebd1119d10a5d1848c1ade4b03a9e54" Nov 25 12:18:43 crc kubenswrapper[4706]: E1125 12:18:43.923259 4706 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dhfpm_openshift-machine-config-operator(0930887a-320c-4506-8c9c-f94d6d64516a)\"" pod="openshift-machine-config-operator/machine-config-daemon-dhfpm" podUID="0930887a-320c-4506-8c9c-f94d6d64516a" Nov 25 12:18:58 crc kubenswrapper[4706]: I1125 12:18:58.922212 4706 scope.go:117] "RemoveContainer" containerID="02f070302d64fff80ca8166389e9c6c4cebd1119d10a5d1848c1ade4b03a9e54" Nov 25 12:18:58 crc kubenswrapper[4706]: E1125 12:18:58.923050 4706 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dhfpm_openshift-machine-config-operator(0930887a-320c-4506-8c9c-f94d6d64516a)\"" pod="openshift-machine-config-operator/machine-config-daemon-dhfpm" podUID="0930887a-320c-4506-8c9c-f94d6d64516a" Nov 25 12:19:10 crc kubenswrapper[4706]: I1125 12:19:10.922704 4706 scope.go:117] "RemoveContainer" containerID="02f070302d64fff80ca8166389e9c6c4cebd1119d10a5d1848c1ade4b03a9e54" Nov 25 12:19:10 crc 
kubenswrapper[4706]: E1125 12:19:10.923527 4706 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dhfpm_openshift-machine-config-operator(0930887a-320c-4506-8c9c-f94d6d64516a)\"" pod="openshift-machine-config-operator/machine-config-daemon-dhfpm" podUID="0930887a-320c-4506-8c9c-f94d6d64516a" Nov 25 12:19:24 crc kubenswrapper[4706]: I1125 12:19:24.923215 4706 scope.go:117] "RemoveContainer" containerID="02f070302d64fff80ca8166389e9c6c4cebd1119d10a5d1848c1ade4b03a9e54" Nov 25 12:19:24 crc kubenswrapper[4706]: E1125 12:19:24.924199 4706 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dhfpm_openshift-machine-config-operator(0930887a-320c-4506-8c9c-f94d6d64516a)\"" pod="openshift-machine-config-operator/machine-config-daemon-dhfpm" podUID="0930887a-320c-4506-8c9c-f94d6d64516a" Nov 25 12:19:37 crc kubenswrapper[4706]: I1125 12:19:37.923349 4706 scope.go:117] "RemoveContainer" containerID="02f070302d64fff80ca8166389e9c6c4cebd1119d10a5d1848c1ade4b03a9e54" Nov 25 12:19:39 crc kubenswrapper[4706]: I1125 12:19:39.018115 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-dhfpm" event={"ID":"0930887a-320c-4506-8c9c-f94d6d64516a","Type":"ContainerStarted","Data":"b5c4a9b732ca8a1700914c594210046762bf19e4ab2732427a28f41c5179d529"} Nov 25 12:20:11 crc kubenswrapper[4706]: I1125 12:20:11.320411 4706 generic.go:334] "Generic (PLEG): container finished" podID="90e48cbb-dd1b-466b-a72f-5e2913554a5b" containerID="98a76c1285688df6cea159f13ce5ebf4ca4ba08f0159f5bc97440e0e5c9053b5" exitCode=0 Nov 25 12:20:11 crc kubenswrapper[4706]: I1125 12:20:11.320642 4706 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-g6fp7" event={"ID":"90e48cbb-dd1b-466b-a72f-5e2913554a5b","Type":"ContainerDied","Data":"98a76c1285688df6cea159f13ce5ebf4ca4ba08f0159f5bc97440e0e5c9053b5"} Nov 25 12:20:12 crc kubenswrapper[4706]: I1125 12:20:12.783091 4706 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-g6fp7" Nov 25 12:20:12 crc kubenswrapper[4706]: I1125 12:20:12.888567 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/90e48cbb-dd1b-466b-a72f-5e2913554a5b-libvirt-secret-0\") pod \"90e48cbb-dd1b-466b-a72f-5e2913554a5b\" (UID: \"90e48cbb-dd1b-466b-a72f-5e2913554a5b\") " Nov 25 12:20:12 crc kubenswrapper[4706]: I1125 12:20:12.888935 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/90e48cbb-dd1b-466b-a72f-5e2913554a5b-inventory\") pod \"90e48cbb-dd1b-466b-a72f-5e2913554a5b\" (UID: \"90e48cbb-dd1b-466b-a72f-5e2913554a5b\") " Nov 25 12:20:12 crc kubenswrapper[4706]: I1125 12:20:12.889106 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/90e48cbb-dd1b-466b-a72f-5e2913554a5b-libvirt-combined-ca-bundle\") pod \"90e48cbb-dd1b-466b-a72f-5e2913554a5b\" (UID: \"90e48cbb-dd1b-466b-a72f-5e2913554a5b\") " Nov 25 12:20:12 crc kubenswrapper[4706]: I1125 12:20:12.889238 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/90e48cbb-dd1b-466b-a72f-5e2913554a5b-ssh-key\") pod \"90e48cbb-dd1b-466b-a72f-5e2913554a5b\" (UID: \"90e48cbb-dd1b-466b-a72f-5e2913554a5b\") " Nov 25 12:20:12 crc kubenswrapper[4706]: I1125 12:20:12.889356 4706 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access-prdxk\" (UniqueName: \"kubernetes.io/projected/90e48cbb-dd1b-466b-a72f-5e2913554a5b-kube-api-access-prdxk\") pod \"90e48cbb-dd1b-466b-a72f-5e2913554a5b\" (UID: \"90e48cbb-dd1b-466b-a72f-5e2913554a5b\") " Nov 25 12:20:12 crc kubenswrapper[4706]: I1125 12:20:12.895497 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/90e48cbb-dd1b-466b-a72f-5e2913554a5b-kube-api-access-prdxk" (OuterVolumeSpecName: "kube-api-access-prdxk") pod "90e48cbb-dd1b-466b-a72f-5e2913554a5b" (UID: "90e48cbb-dd1b-466b-a72f-5e2913554a5b"). InnerVolumeSpecName "kube-api-access-prdxk". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 12:20:12 crc kubenswrapper[4706]: I1125 12:20:12.897733 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/90e48cbb-dd1b-466b-a72f-5e2913554a5b-libvirt-combined-ca-bundle" (OuterVolumeSpecName: "libvirt-combined-ca-bundle") pod "90e48cbb-dd1b-466b-a72f-5e2913554a5b" (UID: "90e48cbb-dd1b-466b-a72f-5e2913554a5b"). InnerVolumeSpecName "libvirt-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 12:20:12 crc kubenswrapper[4706]: I1125 12:20:12.919160 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/90e48cbb-dd1b-466b-a72f-5e2913554a5b-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "90e48cbb-dd1b-466b-a72f-5e2913554a5b" (UID: "90e48cbb-dd1b-466b-a72f-5e2913554a5b"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 12:20:12 crc kubenswrapper[4706]: I1125 12:20:12.923279 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/90e48cbb-dd1b-466b-a72f-5e2913554a5b-inventory" (OuterVolumeSpecName: "inventory") pod "90e48cbb-dd1b-466b-a72f-5e2913554a5b" (UID: "90e48cbb-dd1b-466b-a72f-5e2913554a5b"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 12:20:12 crc kubenswrapper[4706]: I1125 12:20:12.923626 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/90e48cbb-dd1b-466b-a72f-5e2913554a5b-libvirt-secret-0" (OuterVolumeSpecName: "libvirt-secret-0") pod "90e48cbb-dd1b-466b-a72f-5e2913554a5b" (UID: "90e48cbb-dd1b-466b-a72f-5e2913554a5b"). InnerVolumeSpecName "libvirt-secret-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 12:20:12 crc kubenswrapper[4706]: I1125 12:20:12.991730 4706 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/90e48cbb-dd1b-466b-a72f-5e2913554a5b-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 25 12:20:12 crc kubenswrapper[4706]: I1125 12:20:12.992028 4706 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-prdxk\" (UniqueName: \"kubernetes.io/projected/90e48cbb-dd1b-466b-a72f-5e2913554a5b-kube-api-access-prdxk\") on node \"crc\" DevicePath \"\"" Nov 25 12:20:12 crc kubenswrapper[4706]: I1125 12:20:12.992237 4706 reconciler_common.go:293] "Volume detached for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/90e48cbb-dd1b-466b-a72f-5e2913554a5b-libvirt-secret-0\") on node \"crc\" DevicePath \"\"" Nov 25 12:20:12 crc kubenswrapper[4706]: I1125 12:20:12.992274 4706 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/90e48cbb-dd1b-466b-a72f-5e2913554a5b-inventory\") on node \"crc\" DevicePath \"\"" Nov 25 12:20:12 crc kubenswrapper[4706]: I1125 12:20:12.992283 4706 reconciler_common.go:293] "Volume detached for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/90e48cbb-dd1b-466b-a72f-5e2913554a5b-libvirt-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 12:20:13 crc kubenswrapper[4706]: I1125 12:20:13.340585 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-g6fp7" event={"ID":"90e48cbb-dd1b-466b-a72f-5e2913554a5b","Type":"ContainerDied","Data":"54c77604ed1507c6caba8f48343f0a0ad5c1b55cf349e40ce6f3248a35da01aa"} Nov 25 12:20:13 crc kubenswrapper[4706]: I1125 12:20:13.340892 4706 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="54c77604ed1507c6caba8f48343f0a0ad5c1b55cf349e40ce6f3248a35da01aa" Nov 25 12:20:13 crc kubenswrapper[4706]: I1125 12:20:13.340612 4706 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-g6fp7" Nov 25 12:20:13 crc kubenswrapper[4706]: I1125 12:20:13.434109 4706 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-67xt7"] Nov 25 12:20:13 crc kubenswrapper[4706]: E1125 12:20:13.434678 4706 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="90e48cbb-dd1b-466b-a72f-5e2913554a5b" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Nov 25 12:20:13 crc kubenswrapper[4706]: I1125 12:20:13.434704 4706 state_mem.go:107] "Deleted CPUSet assignment" podUID="90e48cbb-dd1b-466b-a72f-5e2913554a5b" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Nov 25 12:20:13 crc kubenswrapper[4706]: I1125 12:20:13.434948 4706 memory_manager.go:354] "RemoveStaleState removing state" podUID="90e48cbb-dd1b-466b-a72f-5e2913554a5b" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Nov 25 12:20:13 crc kubenswrapper[4706]: I1125 12:20:13.436284 4706 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-67xt7" Nov 25 12:20:13 crc kubenswrapper[4706]: I1125 12:20:13.440380 4706 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 25 12:20:13 crc kubenswrapper[4706]: I1125 12:20:13.440508 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 25 12:20:13 crc kubenswrapper[4706]: I1125 12:20:13.440508 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-compute-config" Nov 25 12:20:13 crc kubenswrapper[4706]: I1125 12:20:13.441453 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-migration-ssh-key" Nov 25 12:20:13 crc kubenswrapper[4706]: I1125 12:20:13.446679 4706 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-67xt7"] Nov 25 12:20:13 crc kubenswrapper[4706]: I1125 12:20:13.451772 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-r8qqp" Nov 25 12:20:13 crc kubenswrapper[4706]: I1125 12:20:13.452169 4706 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"nova-extra-config" Nov 25 12:20:13 crc kubenswrapper[4706]: I1125 12:20:13.452409 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 25 12:20:13 crc kubenswrapper[4706]: I1125 12:20:13.603947 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f74a1106-ae1e-464c-a761-dc47c54c361c-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-67xt7\" (UID: \"f74a1106-ae1e-464c-a761-dc47c54c361c\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-67xt7" Nov 25 12:20:13 crc kubenswrapper[4706]: I1125 12:20:13.604043 4706 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/f74a1106-ae1e-464c-a761-dc47c54c361c-ssh-key\") pod \"nova-edpm-deployment-openstack-edpm-ipam-67xt7\" (UID: \"f74a1106-ae1e-464c-a761-dc47c54c361c\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-67xt7" Nov 25 12:20:13 crc kubenswrapper[4706]: I1125 12:20:13.604062 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/f74a1106-ae1e-464c-a761-dc47c54c361c-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-67xt7\" (UID: \"f74a1106-ae1e-464c-a761-dc47c54c361c\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-67xt7" Nov 25 12:20:13 crc kubenswrapper[4706]: I1125 12:20:13.604120 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/f74a1106-ae1e-464c-a761-dc47c54c361c-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-67xt7\" (UID: \"f74a1106-ae1e-464c-a761-dc47c54c361c\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-67xt7" Nov 25 12:20:13 crc kubenswrapper[4706]: I1125 12:20:13.604167 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-plzkt\" (UniqueName: \"kubernetes.io/projected/f74a1106-ae1e-464c-a761-dc47c54c361c-kube-api-access-plzkt\") pod \"nova-edpm-deployment-openstack-edpm-ipam-67xt7\" (UID: \"f74a1106-ae1e-464c-a761-dc47c54c361c\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-67xt7" Nov 25 12:20:13 crc kubenswrapper[4706]: I1125 12:20:13.604210 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-extra-config-0\" (UniqueName: 
\"kubernetes.io/configmap/f74a1106-ae1e-464c-a761-dc47c54c361c-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-67xt7\" (UID: \"f74a1106-ae1e-464c-a761-dc47c54c361c\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-67xt7" Nov 25 12:20:13 crc kubenswrapper[4706]: I1125 12:20:13.604261 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/f74a1106-ae1e-464c-a761-dc47c54c361c-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-67xt7\" (UID: \"f74a1106-ae1e-464c-a761-dc47c54c361c\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-67xt7" Nov 25 12:20:13 crc kubenswrapper[4706]: I1125 12:20:13.604347 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f74a1106-ae1e-464c-a761-dc47c54c361c-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-67xt7\" (UID: \"f74a1106-ae1e-464c-a761-dc47c54c361c\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-67xt7" Nov 25 12:20:13 crc kubenswrapper[4706]: I1125 12:20:13.604372 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/f74a1106-ae1e-464c-a761-dc47c54c361c-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-67xt7\" (UID: \"f74a1106-ae1e-464c-a761-dc47c54c361c\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-67xt7" Nov 25 12:20:13 crc kubenswrapper[4706]: I1125 12:20:13.705892 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/f74a1106-ae1e-464c-a761-dc47c54c361c-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-67xt7\" (UID: 
\"f74a1106-ae1e-464c-a761-dc47c54c361c\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-67xt7" Nov 25 12:20:13 crc kubenswrapper[4706]: I1125 12:20:13.705949 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-plzkt\" (UniqueName: \"kubernetes.io/projected/f74a1106-ae1e-464c-a761-dc47c54c361c-kube-api-access-plzkt\") pod \"nova-edpm-deployment-openstack-edpm-ipam-67xt7\" (UID: \"f74a1106-ae1e-464c-a761-dc47c54c361c\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-67xt7" Nov 25 12:20:13 crc kubenswrapper[4706]: I1125 12:20:13.705998 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/f74a1106-ae1e-464c-a761-dc47c54c361c-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-67xt7\" (UID: \"f74a1106-ae1e-464c-a761-dc47c54c361c\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-67xt7" Nov 25 12:20:13 crc kubenswrapper[4706]: I1125 12:20:13.706036 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/f74a1106-ae1e-464c-a761-dc47c54c361c-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-67xt7\" (UID: \"f74a1106-ae1e-464c-a761-dc47c54c361c\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-67xt7" Nov 25 12:20:13 crc kubenswrapper[4706]: I1125 12:20:13.706095 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f74a1106-ae1e-464c-a761-dc47c54c361c-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-67xt7\" (UID: \"f74a1106-ae1e-464c-a761-dc47c54c361c\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-67xt7" Nov 25 12:20:13 crc kubenswrapper[4706]: I1125 12:20:13.706115 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/f74a1106-ae1e-464c-a761-dc47c54c361c-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-67xt7\" (UID: \"f74a1106-ae1e-464c-a761-dc47c54c361c\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-67xt7" Nov 25 12:20:13 crc kubenswrapper[4706]: I1125 12:20:13.706146 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f74a1106-ae1e-464c-a761-dc47c54c361c-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-67xt7\" (UID: \"f74a1106-ae1e-464c-a761-dc47c54c361c\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-67xt7" Nov 25 12:20:13 crc kubenswrapper[4706]: I1125 12:20:13.706183 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/f74a1106-ae1e-464c-a761-dc47c54c361c-ssh-key\") pod \"nova-edpm-deployment-openstack-edpm-ipam-67xt7\" (UID: \"f74a1106-ae1e-464c-a761-dc47c54c361c\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-67xt7" Nov 25 12:20:13 crc kubenswrapper[4706]: I1125 12:20:13.706201 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/f74a1106-ae1e-464c-a761-dc47c54c361c-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-67xt7\" (UID: \"f74a1106-ae1e-464c-a761-dc47c54c361c\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-67xt7" Nov 25 12:20:13 crc kubenswrapper[4706]: I1125 12:20:13.707813 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/f74a1106-ae1e-464c-a761-dc47c54c361c-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-67xt7\" (UID: \"f74a1106-ae1e-464c-a761-dc47c54c361c\") " 
pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-67xt7" Nov 25 12:20:13 crc kubenswrapper[4706]: I1125 12:20:13.710422 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/f74a1106-ae1e-464c-a761-dc47c54c361c-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-67xt7\" (UID: \"f74a1106-ae1e-464c-a761-dc47c54c361c\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-67xt7" Nov 25 12:20:13 crc kubenswrapper[4706]: I1125 12:20:13.710493 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/f74a1106-ae1e-464c-a761-dc47c54c361c-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-67xt7\" (UID: \"f74a1106-ae1e-464c-a761-dc47c54c361c\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-67xt7" Nov 25 12:20:13 crc kubenswrapper[4706]: I1125 12:20:13.711601 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/f74a1106-ae1e-464c-a761-dc47c54c361c-ssh-key\") pod \"nova-edpm-deployment-openstack-edpm-ipam-67xt7\" (UID: \"f74a1106-ae1e-464c-a761-dc47c54c361c\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-67xt7" Nov 25 12:20:13 crc kubenswrapper[4706]: I1125 12:20:13.711829 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f74a1106-ae1e-464c-a761-dc47c54c361c-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-67xt7\" (UID: \"f74a1106-ae1e-464c-a761-dc47c54c361c\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-67xt7" Nov 25 12:20:13 crc kubenswrapper[4706]: I1125 12:20:13.712320 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-1\" (UniqueName: 
\"kubernetes.io/secret/f74a1106-ae1e-464c-a761-dc47c54c361c-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-67xt7\" (UID: \"f74a1106-ae1e-464c-a761-dc47c54c361c\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-67xt7" Nov 25 12:20:13 crc kubenswrapper[4706]: I1125 12:20:13.713392 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f74a1106-ae1e-464c-a761-dc47c54c361c-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-67xt7\" (UID: \"f74a1106-ae1e-464c-a761-dc47c54c361c\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-67xt7" Nov 25 12:20:13 crc kubenswrapper[4706]: I1125 12:20:13.714730 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/f74a1106-ae1e-464c-a761-dc47c54c361c-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-67xt7\" (UID: \"f74a1106-ae1e-464c-a761-dc47c54c361c\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-67xt7" Nov 25 12:20:13 crc kubenswrapper[4706]: I1125 12:20:13.726194 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-plzkt\" (UniqueName: \"kubernetes.io/projected/f74a1106-ae1e-464c-a761-dc47c54c361c-kube-api-access-plzkt\") pod \"nova-edpm-deployment-openstack-edpm-ipam-67xt7\" (UID: \"f74a1106-ae1e-464c-a761-dc47c54c361c\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-67xt7" Nov 25 12:20:13 crc kubenswrapper[4706]: I1125 12:20:13.765812 4706 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-67xt7" Nov 25 12:20:14 crc kubenswrapper[4706]: I1125 12:20:14.275001 4706 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-67xt7"] Nov 25 12:20:14 crc kubenswrapper[4706]: I1125 12:20:14.286442 4706 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 25 12:20:14 crc kubenswrapper[4706]: I1125 12:20:14.354713 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-67xt7" event={"ID":"f74a1106-ae1e-464c-a761-dc47c54c361c","Type":"ContainerStarted","Data":"23c90f1eb3ff17ee5c2f0162090790d9cbe7c633609226d90a2d9c274e5dc7b5"} Nov 25 12:20:16 crc kubenswrapper[4706]: I1125 12:20:16.374185 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-67xt7" event={"ID":"f74a1106-ae1e-464c-a761-dc47c54c361c","Type":"ContainerStarted","Data":"bb57adf16826ceafe48bf1dfdd4fd5c754adbab001fb047f0ff386c9014ed1f1"} Nov 25 12:20:16 crc kubenswrapper[4706]: I1125 12:20:16.399380 4706 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-67xt7" podStartSLOduration=1.7647141689999999 podStartE2EDuration="3.399362705s" podCreationTimestamp="2025-11-25 12:20:13 +0000 UTC" firstStartedPulling="2025-11-25 12:20:14.286152856 +0000 UTC m=+2623.200710237" lastFinishedPulling="2025-11-25 12:20:15.920801392 +0000 UTC m=+2624.835358773" observedRunningTime="2025-11-25 12:20:16.396995295 +0000 UTC m=+2625.311552676" watchObservedRunningTime="2025-11-25 12:20:16.399362705 +0000 UTC m=+2625.313920076" Nov 25 12:22:01 crc kubenswrapper[4706]: I1125 12:22:01.124970 4706 patch_prober.go:28] interesting pod/machine-config-daemon-dhfpm container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure 
output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 25 12:22:01 crc kubenswrapper[4706]: I1125 12:22:01.125595 4706 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dhfpm" podUID="0930887a-320c-4506-8c9c-f94d6d64516a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 25 12:22:31 crc kubenswrapper[4706]: I1125 12:22:31.125600 4706 patch_prober.go:28] interesting pod/machine-config-daemon-dhfpm container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 25 12:22:31 crc kubenswrapper[4706]: I1125 12:22:31.126362 4706 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dhfpm" podUID="0930887a-320c-4506-8c9c-f94d6d64516a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 25 12:23:01 crc kubenswrapper[4706]: I1125 12:23:01.124736 4706 patch_prober.go:28] interesting pod/machine-config-daemon-dhfpm container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 25 12:23:01 crc kubenswrapper[4706]: I1125 12:23:01.125267 4706 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dhfpm" podUID="0930887a-320c-4506-8c9c-f94d6d64516a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection 
refused" Nov 25 12:23:01 crc kubenswrapper[4706]: I1125 12:23:01.125339 4706 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-dhfpm" Nov 25 12:23:01 crc kubenswrapper[4706]: I1125 12:23:01.126167 4706 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"b5c4a9b732ca8a1700914c594210046762bf19e4ab2732427a28f41c5179d529"} pod="openshift-machine-config-operator/machine-config-daemon-dhfpm" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 25 12:23:01 crc kubenswrapper[4706]: I1125 12:23:01.126242 4706 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-dhfpm" podUID="0930887a-320c-4506-8c9c-f94d6d64516a" containerName="machine-config-daemon" containerID="cri-o://b5c4a9b732ca8a1700914c594210046762bf19e4ab2732427a28f41c5179d529" gracePeriod=600 Nov 25 12:23:01 crc kubenswrapper[4706]: I1125 12:23:01.913654 4706 generic.go:334] "Generic (PLEG): container finished" podID="0930887a-320c-4506-8c9c-f94d6d64516a" containerID="b5c4a9b732ca8a1700914c594210046762bf19e4ab2732427a28f41c5179d529" exitCode=0 Nov 25 12:23:01 crc kubenswrapper[4706]: I1125 12:23:01.913755 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-dhfpm" event={"ID":"0930887a-320c-4506-8c9c-f94d6d64516a","Type":"ContainerDied","Data":"b5c4a9b732ca8a1700914c594210046762bf19e4ab2732427a28f41c5179d529"} Nov 25 12:23:01 crc kubenswrapper[4706]: I1125 12:23:01.914605 4706 scope.go:117] "RemoveContainer" containerID="02f070302d64fff80ca8166389e9c6c4cebd1119d10a5d1848c1ade4b03a9e54" Nov 25 12:23:02 crc kubenswrapper[4706]: I1125 12:23:02.927341 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-dhfpm" 
event={"ID":"0930887a-320c-4506-8c9c-f94d6d64516a","Type":"ContainerStarted","Data":"d3fc72500aae4cf4d62aeac19c69abc79c3346f9d07b751d825f4be172d122de"} Nov 25 12:23:05 crc kubenswrapper[4706]: I1125 12:23:05.973730 4706 generic.go:334] "Generic (PLEG): container finished" podID="f74a1106-ae1e-464c-a761-dc47c54c361c" containerID="bb57adf16826ceafe48bf1dfdd4fd5c754adbab001fb047f0ff386c9014ed1f1" exitCode=0 Nov 25 12:23:05 crc kubenswrapper[4706]: I1125 12:23:05.973776 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-67xt7" event={"ID":"f74a1106-ae1e-464c-a761-dc47c54c361c","Type":"ContainerDied","Data":"bb57adf16826ceafe48bf1dfdd4fd5c754adbab001fb047f0ff386c9014ed1f1"} Nov 25 12:23:07 crc kubenswrapper[4706]: I1125 12:23:07.386549 4706 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-67xt7" Nov 25 12:23:07 crc kubenswrapper[4706]: I1125 12:23:07.539482 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/f74a1106-ae1e-464c-a761-dc47c54c361c-nova-cell1-compute-config-0\") pod \"f74a1106-ae1e-464c-a761-dc47c54c361c\" (UID: \"f74a1106-ae1e-464c-a761-dc47c54c361c\") " Nov 25 12:23:07 crc kubenswrapper[4706]: I1125 12:23:07.539525 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-plzkt\" (UniqueName: \"kubernetes.io/projected/f74a1106-ae1e-464c-a761-dc47c54c361c-kube-api-access-plzkt\") pod \"f74a1106-ae1e-464c-a761-dc47c54c361c\" (UID: \"f74a1106-ae1e-464c-a761-dc47c54c361c\") " Nov 25 12:23:07 crc kubenswrapper[4706]: I1125 12:23:07.539553 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/f74a1106-ae1e-464c-a761-dc47c54c361c-nova-migration-ssh-key-0\") pod 
\"f74a1106-ae1e-464c-a761-dc47c54c361c\" (UID: \"f74a1106-ae1e-464c-a761-dc47c54c361c\") " Nov 25 12:23:07 crc kubenswrapper[4706]: I1125 12:23:07.539574 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f74a1106-ae1e-464c-a761-dc47c54c361c-nova-combined-ca-bundle\") pod \"f74a1106-ae1e-464c-a761-dc47c54c361c\" (UID: \"f74a1106-ae1e-464c-a761-dc47c54c361c\") " Nov 25 12:23:07 crc kubenswrapper[4706]: I1125 12:23:07.539596 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/f74a1106-ae1e-464c-a761-dc47c54c361c-ssh-key\") pod \"f74a1106-ae1e-464c-a761-dc47c54c361c\" (UID: \"f74a1106-ae1e-464c-a761-dc47c54c361c\") " Nov 25 12:23:07 crc kubenswrapper[4706]: I1125 12:23:07.539627 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/f74a1106-ae1e-464c-a761-dc47c54c361c-nova-migration-ssh-key-1\") pod \"f74a1106-ae1e-464c-a761-dc47c54c361c\" (UID: \"f74a1106-ae1e-464c-a761-dc47c54c361c\") " Nov 25 12:23:07 crc kubenswrapper[4706]: I1125 12:23:07.539658 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/f74a1106-ae1e-464c-a761-dc47c54c361c-nova-extra-config-0\") pod \"f74a1106-ae1e-464c-a761-dc47c54c361c\" (UID: \"f74a1106-ae1e-464c-a761-dc47c54c361c\") " Nov 25 12:23:07 crc kubenswrapper[4706]: I1125 12:23:07.539676 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f74a1106-ae1e-464c-a761-dc47c54c361c-inventory\") pod \"f74a1106-ae1e-464c-a761-dc47c54c361c\" (UID: \"f74a1106-ae1e-464c-a761-dc47c54c361c\") " Nov 25 12:23:07 crc kubenswrapper[4706]: I1125 12:23:07.539750 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/f74a1106-ae1e-464c-a761-dc47c54c361c-nova-cell1-compute-config-1\") pod \"f74a1106-ae1e-464c-a761-dc47c54c361c\" (UID: \"f74a1106-ae1e-464c-a761-dc47c54c361c\") " Nov 25 12:23:07 crc kubenswrapper[4706]: I1125 12:23:07.551759 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f74a1106-ae1e-464c-a761-dc47c54c361c-kube-api-access-plzkt" (OuterVolumeSpecName: "kube-api-access-plzkt") pod "f74a1106-ae1e-464c-a761-dc47c54c361c" (UID: "f74a1106-ae1e-464c-a761-dc47c54c361c"). InnerVolumeSpecName "kube-api-access-plzkt". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 12:23:07 crc kubenswrapper[4706]: I1125 12:23:07.552359 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f74a1106-ae1e-464c-a761-dc47c54c361c-nova-combined-ca-bundle" (OuterVolumeSpecName: "nova-combined-ca-bundle") pod "f74a1106-ae1e-464c-a761-dc47c54c361c" (UID: "f74a1106-ae1e-464c-a761-dc47c54c361c"). InnerVolumeSpecName "nova-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 12:23:07 crc kubenswrapper[4706]: I1125 12:23:07.569949 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f74a1106-ae1e-464c-a761-dc47c54c361c-nova-cell1-compute-config-1" (OuterVolumeSpecName: "nova-cell1-compute-config-1") pod "f74a1106-ae1e-464c-a761-dc47c54c361c" (UID: "f74a1106-ae1e-464c-a761-dc47c54c361c"). InnerVolumeSpecName "nova-cell1-compute-config-1". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 12:23:07 crc kubenswrapper[4706]: I1125 12:23:07.572241 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f74a1106-ae1e-464c-a761-dc47c54c361c-nova-migration-ssh-key-1" (OuterVolumeSpecName: "nova-migration-ssh-key-1") pod "f74a1106-ae1e-464c-a761-dc47c54c361c" (UID: "f74a1106-ae1e-464c-a761-dc47c54c361c"). InnerVolumeSpecName "nova-migration-ssh-key-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 12:23:07 crc kubenswrapper[4706]: I1125 12:23:07.572350 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f74a1106-ae1e-464c-a761-dc47c54c361c-nova-extra-config-0" (OuterVolumeSpecName: "nova-extra-config-0") pod "f74a1106-ae1e-464c-a761-dc47c54c361c" (UID: "f74a1106-ae1e-464c-a761-dc47c54c361c"). InnerVolumeSpecName "nova-extra-config-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 12:23:07 crc kubenswrapper[4706]: I1125 12:23:07.573993 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f74a1106-ae1e-464c-a761-dc47c54c361c-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "f74a1106-ae1e-464c-a761-dc47c54c361c" (UID: "f74a1106-ae1e-464c-a761-dc47c54c361c"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 12:23:07 crc kubenswrapper[4706]: I1125 12:23:07.587508 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f74a1106-ae1e-464c-a761-dc47c54c361c-inventory" (OuterVolumeSpecName: "inventory") pod "f74a1106-ae1e-464c-a761-dc47c54c361c" (UID: "f74a1106-ae1e-464c-a761-dc47c54c361c"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 12:23:07 crc kubenswrapper[4706]: I1125 12:23:07.588737 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f74a1106-ae1e-464c-a761-dc47c54c361c-nova-cell1-compute-config-0" (OuterVolumeSpecName: "nova-cell1-compute-config-0") pod "f74a1106-ae1e-464c-a761-dc47c54c361c" (UID: "f74a1106-ae1e-464c-a761-dc47c54c361c"). InnerVolumeSpecName "nova-cell1-compute-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 12:23:07 crc kubenswrapper[4706]: I1125 12:23:07.611068 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f74a1106-ae1e-464c-a761-dc47c54c361c-nova-migration-ssh-key-0" (OuterVolumeSpecName: "nova-migration-ssh-key-0") pod "f74a1106-ae1e-464c-a761-dc47c54c361c" (UID: "f74a1106-ae1e-464c-a761-dc47c54c361c"). InnerVolumeSpecName "nova-migration-ssh-key-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 12:23:07 crc kubenswrapper[4706]: I1125 12:23:07.641941 4706 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/f74a1106-ae1e-464c-a761-dc47c54c361c-nova-migration-ssh-key-1\") on node \"crc\" DevicePath \"\"" Nov 25 12:23:07 crc kubenswrapper[4706]: I1125 12:23:07.641988 4706 reconciler_common.go:293] "Volume detached for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/f74a1106-ae1e-464c-a761-dc47c54c361c-nova-extra-config-0\") on node \"crc\" DevicePath \"\"" Nov 25 12:23:07 crc kubenswrapper[4706]: I1125 12:23:07.642002 4706 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f74a1106-ae1e-464c-a761-dc47c54c361c-inventory\") on node \"crc\" DevicePath \"\"" Nov 25 12:23:07 crc kubenswrapper[4706]: I1125 12:23:07.642015 4706 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-1\" (UniqueName: 
\"kubernetes.io/secret/f74a1106-ae1e-464c-a761-dc47c54c361c-nova-cell1-compute-config-1\") on node \"crc\" DevicePath \"\"" Nov 25 12:23:07 crc kubenswrapper[4706]: I1125 12:23:07.642028 4706 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/f74a1106-ae1e-464c-a761-dc47c54c361c-nova-cell1-compute-config-0\") on node \"crc\" DevicePath \"\"" Nov 25 12:23:07 crc kubenswrapper[4706]: I1125 12:23:07.642039 4706 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-plzkt\" (UniqueName: \"kubernetes.io/projected/f74a1106-ae1e-464c-a761-dc47c54c361c-kube-api-access-plzkt\") on node \"crc\" DevicePath \"\"" Nov 25 12:23:07 crc kubenswrapper[4706]: I1125 12:23:07.642052 4706 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/f74a1106-ae1e-464c-a761-dc47c54c361c-nova-migration-ssh-key-0\") on node \"crc\" DevicePath \"\"" Nov 25 12:23:07 crc kubenswrapper[4706]: I1125 12:23:07.642064 4706 reconciler_common.go:293] "Volume detached for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f74a1106-ae1e-464c-a761-dc47c54c361c-nova-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 12:23:07 crc kubenswrapper[4706]: I1125 12:23:07.642077 4706 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/f74a1106-ae1e-464c-a761-dc47c54c361c-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 25 12:23:07 crc kubenswrapper[4706]: I1125 12:23:07.993048 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-67xt7" event={"ID":"f74a1106-ae1e-464c-a761-dc47c54c361c","Type":"ContainerDied","Data":"23c90f1eb3ff17ee5c2f0162090790d9cbe7c633609226d90a2d9c274e5dc7b5"} Nov 25 12:23:07 crc kubenswrapper[4706]: I1125 12:23:07.993096 4706 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="23c90f1eb3ff17ee5c2f0162090790d9cbe7c633609226d90a2d9c274e5dc7b5" Nov 25 12:23:07 crc kubenswrapper[4706]: I1125 12:23:07.993102 4706 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-67xt7" Nov 25 12:23:08 crc kubenswrapper[4706]: I1125 12:23:08.168286 4706 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-rtmfj"] Nov 25 12:23:08 crc kubenswrapper[4706]: E1125 12:23:08.168753 4706 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f74a1106-ae1e-464c-a761-dc47c54c361c" containerName="nova-edpm-deployment-openstack-edpm-ipam" Nov 25 12:23:08 crc kubenswrapper[4706]: I1125 12:23:08.168778 4706 state_mem.go:107] "Deleted CPUSet assignment" podUID="f74a1106-ae1e-464c-a761-dc47c54c361c" containerName="nova-edpm-deployment-openstack-edpm-ipam" Nov 25 12:23:08 crc kubenswrapper[4706]: I1125 12:23:08.169020 4706 memory_manager.go:354] "RemoveStaleState removing state" podUID="f74a1106-ae1e-464c-a761-dc47c54c361c" containerName="nova-edpm-deployment-openstack-edpm-ipam" Nov 25 12:23:08 crc kubenswrapper[4706]: I1125 12:23:08.169650 4706 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-rtmfj" Nov 25 12:23:08 crc kubenswrapper[4706]: I1125 12:23:08.174406 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-compute-config-data" Nov 25 12:23:08 crc kubenswrapper[4706]: I1125 12:23:08.174421 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 25 12:23:08 crc kubenswrapper[4706]: I1125 12:23:08.175004 4706 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 25 12:23:08 crc kubenswrapper[4706]: I1125 12:23:08.175149 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 25 12:23:08 crc kubenswrapper[4706]: I1125 12:23:08.175279 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-r8qqp" Nov 25 12:23:08 crc kubenswrapper[4706]: I1125 12:23:08.180763 4706 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-rtmfj"] Nov 25 12:23:08 crc kubenswrapper[4706]: I1125 12:23:08.257806 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/10becdf1-f704-46ec-aee6-b4ef4fdbed09-ssh-key\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-rtmfj\" (UID: \"10becdf1-f704-46ec-aee6-b4ef4fdbed09\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-rtmfj" Nov 25 12:23:08 crc kubenswrapper[4706]: I1125 12:23:08.257924 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/10becdf1-f704-46ec-aee6-b4ef4fdbed09-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-rtmfj\" (UID: 
\"10becdf1-f704-46ec-aee6-b4ef4fdbed09\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-rtmfj" Nov 25 12:23:08 crc kubenswrapper[4706]: I1125 12:23:08.257972 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/10becdf1-f704-46ec-aee6-b4ef4fdbed09-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-rtmfj\" (UID: \"10becdf1-f704-46ec-aee6-b4ef4fdbed09\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-rtmfj" Nov 25 12:23:08 crc kubenswrapper[4706]: I1125 12:23:08.258001 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/10becdf1-f704-46ec-aee6-b4ef4fdbed09-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-rtmfj\" (UID: \"10becdf1-f704-46ec-aee6-b4ef4fdbed09\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-rtmfj" Nov 25 12:23:08 crc kubenswrapper[4706]: I1125 12:23:08.258049 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/10becdf1-f704-46ec-aee6-b4ef4fdbed09-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-rtmfj\" (UID: \"10becdf1-f704-46ec-aee6-b4ef4fdbed09\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-rtmfj" Nov 25 12:23:08 crc kubenswrapper[4706]: I1125 12:23:08.258081 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/10becdf1-f704-46ec-aee6-b4ef4fdbed09-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-rtmfj\" (UID: \"10becdf1-f704-46ec-aee6-b4ef4fdbed09\") " 
pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-rtmfj" Nov 25 12:23:08 crc kubenswrapper[4706]: I1125 12:23:08.258164 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-68ghg\" (UniqueName: \"kubernetes.io/projected/10becdf1-f704-46ec-aee6-b4ef4fdbed09-kube-api-access-68ghg\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-rtmfj\" (UID: \"10becdf1-f704-46ec-aee6-b4ef4fdbed09\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-rtmfj" Nov 25 12:23:08 crc kubenswrapper[4706]: I1125 12:23:08.359513 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-68ghg\" (UniqueName: \"kubernetes.io/projected/10becdf1-f704-46ec-aee6-b4ef4fdbed09-kube-api-access-68ghg\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-rtmfj\" (UID: \"10becdf1-f704-46ec-aee6-b4ef4fdbed09\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-rtmfj" Nov 25 12:23:08 crc kubenswrapper[4706]: I1125 12:23:08.359619 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/10becdf1-f704-46ec-aee6-b4ef4fdbed09-ssh-key\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-rtmfj\" (UID: \"10becdf1-f704-46ec-aee6-b4ef4fdbed09\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-rtmfj" Nov 25 12:23:08 crc kubenswrapper[4706]: I1125 12:23:08.359664 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/10becdf1-f704-46ec-aee6-b4ef4fdbed09-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-rtmfj\" (UID: \"10becdf1-f704-46ec-aee6-b4ef4fdbed09\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-rtmfj" Nov 25 12:23:08 crc kubenswrapper[4706]: I1125 12:23:08.359702 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"inventory\" (UniqueName: \"kubernetes.io/secret/10becdf1-f704-46ec-aee6-b4ef4fdbed09-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-rtmfj\" (UID: \"10becdf1-f704-46ec-aee6-b4ef4fdbed09\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-rtmfj" Nov 25 12:23:08 crc kubenswrapper[4706]: I1125 12:23:08.359733 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/10becdf1-f704-46ec-aee6-b4ef4fdbed09-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-rtmfj\" (UID: \"10becdf1-f704-46ec-aee6-b4ef4fdbed09\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-rtmfj" Nov 25 12:23:08 crc kubenswrapper[4706]: I1125 12:23:08.359772 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/10becdf1-f704-46ec-aee6-b4ef4fdbed09-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-rtmfj\" (UID: \"10becdf1-f704-46ec-aee6-b4ef4fdbed09\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-rtmfj" Nov 25 12:23:08 crc kubenswrapper[4706]: I1125 12:23:08.360580 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/10becdf1-f704-46ec-aee6-b4ef4fdbed09-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-rtmfj\" (UID: \"10becdf1-f704-46ec-aee6-b4ef4fdbed09\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-rtmfj" Nov 25 12:23:08 crc kubenswrapper[4706]: I1125 12:23:08.365251 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/10becdf1-f704-46ec-aee6-b4ef4fdbed09-ceilometer-compute-config-data-1\") pod 
\"telemetry-edpm-deployment-openstack-edpm-ipam-rtmfj\" (UID: \"10becdf1-f704-46ec-aee6-b4ef4fdbed09\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-rtmfj" Nov 25 12:23:08 crc kubenswrapper[4706]: I1125 12:23:08.365547 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/10becdf1-f704-46ec-aee6-b4ef4fdbed09-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-rtmfj\" (UID: \"10becdf1-f704-46ec-aee6-b4ef4fdbed09\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-rtmfj" Nov 25 12:23:08 crc kubenswrapper[4706]: I1125 12:23:08.365774 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/10becdf1-f704-46ec-aee6-b4ef4fdbed09-ssh-key\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-rtmfj\" (UID: \"10becdf1-f704-46ec-aee6-b4ef4fdbed09\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-rtmfj" Nov 25 12:23:08 crc kubenswrapper[4706]: I1125 12:23:08.366738 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/10becdf1-f704-46ec-aee6-b4ef4fdbed09-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-rtmfj\" (UID: \"10becdf1-f704-46ec-aee6-b4ef4fdbed09\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-rtmfj" Nov 25 12:23:08 crc kubenswrapper[4706]: I1125 12:23:08.368564 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/10becdf1-f704-46ec-aee6-b4ef4fdbed09-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-rtmfj\" (UID: \"10becdf1-f704-46ec-aee6-b4ef4fdbed09\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-rtmfj" Nov 25 12:23:08 crc kubenswrapper[4706]: I1125 12:23:08.370366 4706 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/10becdf1-f704-46ec-aee6-b4ef4fdbed09-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-rtmfj\" (UID: \"10becdf1-f704-46ec-aee6-b4ef4fdbed09\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-rtmfj" Nov 25 12:23:08 crc kubenswrapper[4706]: I1125 12:23:08.381473 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-68ghg\" (UniqueName: \"kubernetes.io/projected/10becdf1-f704-46ec-aee6-b4ef4fdbed09-kube-api-access-68ghg\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-rtmfj\" (UID: \"10becdf1-f704-46ec-aee6-b4ef4fdbed09\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-rtmfj" Nov 25 12:23:08 crc kubenswrapper[4706]: I1125 12:23:08.486351 4706 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-rtmfj" Nov 25 12:23:09 crc kubenswrapper[4706]: I1125 12:23:09.010858 4706 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-rtmfj"] Nov 25 12:23:09 crc kubenswrapper[4706]: W1125 12:23:09.012998 4706 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod10becdf1_f704_46ec_aee6_b4ef4fdbed09.slice/crio-3f54e438b90243e8d8c38d67d671c2f56b363d154095db4fe981bf547dcf35ff WatchSource:0}: Error finding container 3f54e438b90243e8d8c38d67d671c2f56b363d154095db4fe981bf547dcf35ff: Status 404 returned error can't find the container with id 3f54e438b90243e8d8c38d67d671c2f56b363d154095db4fe981bf547dcf35ff Nov 25 12:23:10 crc kubenswrapper[4706]: I1125 12:23:10.010522 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-rtmfj" 
event={"ID":"10becdf1-f704-46ec-aee6-b4ef4fdbed09","Type":"ContainerStarted","Data":"3f54e438b90243e8d8c38d67d671c2f56b363d154095db4fe981bf547dcf35ff"} Nov 25 12:23:11 crc kubenswrapper[4706]: I1125 12:23:11.018895 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-rtmfj" event={"ID":"10becdf1-f704-46ec-aee6-b4ef4fdbed09","Type":"ContainerStarted","Data":"b38f21b0605925748d0094bd561c43de984b44af1b7cf5ed6c8578e3086b9652"} Nov 25 12:23:11 crc kubenswrapper[4706]: I1125 12:23:11.056650 4706 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-rtmfj" podStartSLOduration=1.6492055909999999 podStartE2EDuration="3.056614292s" podCreationTimestamp="2025-11-25 12:23:08 +0000 UTC" firstStartedPulling="2025-11-25 12:23:09.015478108 +0000 UTC m=+2797.930035489" lastFinishedPulling="2025-11-25 12:23:10.422886809 +0000 UTC m=+2799.337444190" observedRunningTime="2025-11-25 12:23:11.034224692 +0000 UTC m=+2799.948782093" watchObservedRunningTime="2025-11-25 12:23:11.056614292 +0000 UTC m=+2799.971171683" Nov 25 12:23:37 crc kubenswrapper[4706]: I1125 12:23:37.852591 4706 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-8x7nf"] Nov 25 12:23:37 crc kubenswrapper[4706]: I1125 12:23:37.859271 4706 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8x7nf" Nov 25 12:23:37 crc kubenswrapper[4706]: I1125 12:23:37.870266 4706 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-8x7nf"] Nov 25 12:23:38 crc kubenswrapper[4706]: I1125 12:23:38.033658 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/741537bb-cbbf-4d60-a6ac-7dc9462c0f18-utilities\") pod \"redhat-marketplace-8x7nf\" (UID: \"741537bb-cbbf-4d60-a6ac-7dc9462c0f18\") " pod="openshift-marketplace/redhat-marketplace-8x7nf" Nov 25 12:23:38 crc kubenswrapper[4706]: I1125 12:23:38.033703 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/741537bb-cbbf-4d60-a6ac-7dc9462c0f18-catalog-content\") pod \"redhat-marketplace-8x7nf\" (UID: \"741537bb-cbbf-4d60-a6ac-7dc9462c0f18\") " pod="openshift-marketplace/redhat-marketplace-8x7nf" Nov 25 12:23:38 crc kubenswrapper[4706]: I1125 12:23:38.033771 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cj2xk\" (UniqueName: \"kubernetes.io/projected/741537bb-cbbf-4d60-a6ac-7dc9462c0f18-kube-api-access-cj2xk\") pod \"redhat-marketplace-8x7nf\" (UID: \"741537bb-cbbf-4d60-a6ac-7dc9462c0f18\") " pod="openshift-marketplace/redhat-marketplace-8x7nf" Nov 25 12:23:38 crc kubenswrapper[4706]: I1125 12:23:38.135286 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/741537bb-cbbf-4d60-a6ac-7dc9462c0f18-utilities\") pod \"redhat-marketplace-8x7nf\" (UID: \"741537bb-cbbf-4d60-a6ac-7dc9462c0f18\") " pod="openshift-marketplace/redhat-marketplace-8x7nf" Nov 25 12:23:38 crc kubenswrapper[4706]: I1125 12:23:38.135372 4706 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/741537bb-cbbf-4d60-a6ac-7dc9462c0f18-catalog-content\") pod \"redhat-marketplace-8x7nf\" (UID: \"741537bb-cbbf-4d60-a6ac-7dc9462c0f18\") " pod="openshift-marketplace/redhat-marketplace-8x7nf" Nov 25 12:23:38 crc kubenswrapper[4706]: I1125 12:23:38.135421 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cj2xk\" (UniqueName: \"kubernetes.io/projected/741537bb-cbbf-4d60-a6ac-7dc9462c0f18-kube-api-access-cj2xk\") pod \"redhat-marketplace-8x7nf\" (UID: \"741537bb-cbbf-4d60-a6ac-7dc9462c0f18\") " pod="openshift-marketplace/redhat-marketplace-8x7nf" Nov 25 12:23:38 crc kubenswrapper[4706]: I1125 12:23:38.135974 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/741537bb-cbbf-4d60-a6ac-7dc9462c0f18-utilities\") pod \"redhat-marketplace-8x7nf\" (UID: \"741537bb-cbbf-4d60-a6ac-7dc9462c0f18\") " pod="openshift-marketplace/redhat-marketplace-8x7nf" Nov 25 12:23:38 crc kubenswrapper[4706]: I1125 12:23:38.136056 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/741537bb-cbbf-4d60-a6ac-7dc9462c0f18-catalog-content\") pod \"redhat-marketplace-8x7nf\" (UID: \"741537bb-cbbf-4d60-a6ac-7dc9462c0f18\") " pod="openshift-marketplace/redhat-marketplace-8x7nf" Nov 25 12:23:38 crc kubenswrapper[4706]: I1125 12:23:38.156499 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cj2xk\" (UniqueName: \"kubernetes.io/projected/741537bb-cbbf-4d60-a6ac-7dc9462c0f18-kube-api-access-cj2xk\") pod \"redhat-marketplace-8x7nf\" (UID: \"741537bb-cbbf-4d60-a6ac-7dc9462c0f18\") " pod="openshift-marketplace/redhat-marketplace-8x7nf" Nov 25 12:23:38 crc kubenswrapper[4706]: I1125 12:23:38.186519 4706 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8x7nf" Nov 25 12:23:38 crc kubenswrapper[4706]: I1125 12:23:38.653894 4706 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-8x7nf"] Nov 25 12:23:39 crc kubenswrapper[4706]: I1125 12:23:39.259696 4706 generic.go:334] "Generic (PLEG): container finished" podID="741537bb-cbbf-4d60-a6ac-7dc9462c0f18" containerID="990212c569d243de54af8ec0337efb23cc8e1c08814537015c273e0cdf680aed" exitCode=0 Nov 25 12:23:39 crc kubenswrapper[4706]: I1125 12:23:39.259850 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8x7nf" event={"ID":"741537bb-cbbf-4d60-a6ac-7dc9462c0f18","Type":"ContainerDied","Data":"990212c569d243de54af8ec0337efb23cc8e1c08814537015c273e0cdf680aed"} Nov 25 12:23:39 crc kubenswrapper[4706]: I1125 12:23:39.262212 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8x7nf" event={"ID":"741537bb-cbbf-4d60-a6ac-7dc9462c0f18","Type":"ContainerStarted","Data":"ef0cc55f4e84d5bccc8ec15dbbcff16db2086a781ac35166cd7586cf2fdac619"} Nov 25 12:23:40 crc kubenswrapper[4706]: I1125 12:23:40.274619 4706 generic.go:334] "Generic (PLEG): container finished" podID="741537bb-cbbf-4d60-a6ac-7dc9462c0f18" containerID="134fbe8abc88728ea6d0b32e6d03750a6498197772c476a45210e5f260ecbb86" exitCode=0 Nov 25 12:23:40 crc kubenswrapper[4706]: I1125 12:23:40.274942 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8x7nf" event={"ID":"741537bb-cbbf-4d60-a6ac-7dc9462c0f18","Type":"ContainerDied","Data":"134fbe8abc88728ea6d0b32e6d03750a6498197772c476a45210e5f260ecbb86"} Nov 25 12:23:41 crc kubenswrapper[4706]: I1125 12:23:41.285241 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8x7nf" 
event={"ID":"741537bb-cbbf-4d60-a6ac-7dc9462c0f18","Type":"ContainerStarted","Data":"17013fbdac6bf17291aed25dc6716cbfd2343a17c71ae22b4f684da8829dc485"} Nov 25 12:23:48 crc kubenswrapper[4706]: I1125 12:23:48.186936 4706 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-8x7nf" Nov 25 12:23:48 crc kubenswrapper[4706]: I1125 12:23:48.187476 4706 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-8x7nf" Nov 25 12:23:48 crc kubenswrapper[4706]: I1125 12:23:48.234629 4706 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-8x7nf" Nov 25 12:23:48 crc kubenswrapper[4706]: I1125 12:23:48.252229 4706 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-8x7nf" podStartSLOduration=9.841185651 podStartE2EDuration="11.252210143s" podCreationTimestamp="2025-11-25 12:23:37 +0000 UTC" firstStartedPulling="2025-11-25 12:23:39.261044393 +0000 UTC m=+2828.175601774" lastFinishedPulling="2025-11-25 12:23:40.672068875 +0000 UTC m=+2829.586626266" observedRunningTime="2025-11-25 12:23:41.309889063 +0000 UTC m=+2830.224446444" watchObservedRunningTime="2025-11-25 12:23:48.252210143 +0000 UTC m=+2837.166767534" Nov 25 12:23:48 crc kubenswrapper[4706]: I1125 12:23:48.390692 4706 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-8x7nf" Nov 25 12:23:48 crc kubenswrapper[4706]: I1125 12:23:48.468147 4706 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-8x7nf"] Nov 25 12:23:50 crc kubenswrapper[4706]: I1125 12:23:50.368888 4706 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-8x7nf" podUID="741537bb-cbbf-4d60-a6ac-7dc9462c0f18" containerName="registry-server" 
containerID="cri-o://17013fbdac6bf17291aed25dc6716cbfd2343a17c71ae22b4f684da8829dc485" gracePeriod=2 Nov 25 12:23:50 crc kubenswrapper[4706]: I1125 12:23:50.793829 4706 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8x7nf" Nov 25 12:23:50 crc kubenswrapper[4706]: I1125 12:23:50.895391 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cj2xk\" (UniqueName: \"kubernetes.io/projected/741537bb-cbbf-4d60-a6ac-7dc9462c0f18-kube-api-access-cj2xk\") pod \"741537bb-cbbf-4d60-a6ac-7dc9462c0f18\" (UID: \"741537bb-cbbf-4d60-a6ac-7dc9462c0f18\") " Nov 25 12:23:50 crc kubenswrapper[4706]: I1125 12:23:50.895610 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/741537bb-cbbf-4d60-a6ac-7dc9462c0f18-catalog-content\") pod \"741537bb-cbbf-4d60-a6ac-7dc9462c0f18\" (UID: \"741537bb-cbbf-4d60-a6ac-7dc9462c0f18\") " Nov 25 12:23:50 crc kubenswrapper[4706]: I1125 12:23:50.895653 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/741537bb-cbbf-4d60-a6ac-7dc9462c0f18-utilities\") pod \"741537bb-cbbf-4d60-a6ac-7dc9462c0f18\" (UID: \"741537bb-cbbf-4d60-a6ac-7dc9462c0f18\") " Nov 25 12:23:50 crc kubenswrapper[4706]: I1125 12:23:50.897212 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/741537bb-cbbf-4d60-a6ac-7dc9462c0f18-utilities" (OuterVolumeSpecName: "utilities") pod "741537bb-cbbf-4d60-a6ac-7dc9462c0f18" (UID: "741537bb-cbbf-4d60-a6ac-7dc9462c0f18"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 12:23:50 crc kubenswrapper[4706]: I1125 12:23:50.903857 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/741537bb-cbbf-4d60-a6ac-7dc9462c0f18-kube-api-access-cj2xk" (OuterVolumeSpecName: "kube-api-access-cj2xk") pod "741537bb-cbbf-4d60-a6ac-7dc9462c0f18" (UID: "741537bb-cbbf-4d60-a6ac-7dc9462c0f18"). InnerVolumeSpecName "kube-api-access-cj2xk". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 12:23:50 crc kubenswrapper[4706]: I1125 12:23:50.916672 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/741537bb-cbbf-4d60-a6ac-7dc9462c0f18-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "741537bb-cbbf-4d60-a6ac-7dc9462c0f18" (UID: "741537bb-cbbf-4d60-a6ac-7dc9462c0f18"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 12:23:50 crc kubenswrapper[4706]: I1125 12:23:50.998127 4706 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cj2xk\" (UniqueName: \"kubernetes.io/projected/741537bb-cbbf-4d60-a6ac-7dc9462c0f18-kube-api-access-cj2xk\") on node \"crc\" DevicePath \"\"" Nov 25 12:23:50 crc kubenswrapper[4706]: I1125 12:23:50.998167 4706 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/741537bb-cbbf-4d60-a6ac-7dc9462c0f18-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 25 12:23:50 crc kubenswrapper[4706]: I1125 12:23:50.998178 4706 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/741537bb-cbbf-4d60-a6ac-7dc9462c0f18-utilities\") on node \"crc\" DevicePath \"\"" Nov 25 12:23:51 crc kubenswrapper[4706]: I1125 12:23:51.383144 4706 generic.go:334] "Generic (PLEG): container finished" podID="741537bb-cbbf-4d60-a6ac-7dc9462c0f18" 
containerID="17013fbdac6bf17291aed25dc6716cbfd2343a17c71ae22b4f684da8829dc485" exitCode=0 Nov 25 12:23:51 crc kubenswrapper[4706]: I1125 12:23:51.383216 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8x7nf" event={"ID":"741537bb-cbbf-4d60-a6ac-7dc9462c0f18","Type":"ContainerDied","Data":"17013fbdac6bf17291aed25dc6716cbfd2343a17c71ae22b4f684da8829dc485"} Nov 25 12:23:51 crc kubenswrapper[4706]: I1125 12:23:51.383258 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8x7nf" event={"ID":"741537bb-cbbf-4d60-a6ac-7dc9462c0f18","Type":"ContainerDied","Data":"ef0cc55f4e84d5bccc8ec15dbbcff16db2086a781ac35166cd7586cf2fdac619"} Nov 25 12:23:51 crc kubenswrapper[4706]: I1125 12:23:51.383289 4706 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8x7nf" Nov 25 12:23:51 crc kubenswrapper[4706]: I1125 12:23:51.383298 4706 scope.go:117] "RemoveContainer" containerID="17013fbdac6bf17291aed25dc6716cbfd2343a17c71ae22b4f684da8829dc485" Nov 25 12:23:51 crc kubenswrapper[4706]: I1125 12:23:51.405637 4706 scope.go:117] "RemoveContainer" containerID="134fbe8abc88728ea6d0b32e6d03750a6498197772c476a45210e5f260ecbb86" Nov 25 12:23:51 crc kubenswrapper[4706]: I1125 12:23:51.420826 4706 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-8x7nf"] Nov 25 12:23:51 crc kubenswrapper[4706]: I1125 12:23:51.428872 4706 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-8x7nf"] Nov 25 12:23:51 crc kubenswrapper[4706]: I1125 12:23:51.440744 4706 scope.go:117] "RemoveContainer" containerID="990212c569d243de54af8ec0337efb23cc8e1c08814537015c273e0cdf680aed" Nov 25 12:23:51 crc kubenswrapper[4706]: I1125 12:23:51.495476 4706 scope.go:117] "RemoveContainer" containerID="17013fbdac6bf17291aed25dc6716cbfd2343a17c71ae22b4f684da8829dc485" Nov 25 
12:23:51 crc kubenswrapper[4706]: E1125 12:23:51.504741 4706 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"17013fbdac6bf17291aed25dc6716cbfd2343a17c71ae22b4f684da8829dc485\": container with ID starting with 17013fbdac6bf17291aed25dc6716cbfd2343a17c71ae22b4f684da8829dc485 not found: ID does not exist" containerID="17013fbdac6bf17291aed25dc6716cbfd2343a17c71ae22b4f684da8829dc485" Nov 25 12:23:51 crc kubenswrapper[4706]: I1125 12:23:51.504821 4706 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"17013fbdac6bf17291aed25dc6716cbfd2343a17c71ae22b4f684da8829dc485"} err="failed to get container status \"17013fbdac6bf17291aed25dc6716cbfd2343a17c71ae22b4f684da8829dc485\": rpc error: code = NotFound desc = could not find container \"17013fbdac6bf17291aed25dc6716cbfd2343a17c71ae22b4f684da8829dc485\": container with ID starting with 17013fbdac6bf17291aed25dc6716cbfd2343a17c71ae22b4f684da8829dc485 not found: ID does not exist" Nov 25 12:23:51 crc kubenswrapper[4706]: I1125 12:23:51.504859 4706 scope.go:117] "RemoveContainer" containerID="134fbe8abc88728ea6d0b32e6d03750a6498197772c476a45210e5f260ecbb86" Nov 25 12:23:51 crc kubenswrapper[4706]: E1125 12:23:51.506174 4706 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"134fbe8abc88728ea6d0b32e6d03750a6498197772c476a45210e5f260ecbb86\": container with ID starting with 134fbe8abc88728ea6d0b32e6d03750a6498197772c476a45210e5f260ecbb86 not found: ID does not exist" containerID="134fbe8abc88728ea6d0b32e6d03750a6498197772c476a45210e5f260ecbb86" Nov 25 12:23:51 crc kubenswrapper[4706]: I1125 12:23:51.506233 4706 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"134fbe8abc88728ea6d0b32e6d03750a6498197772c476a45210e5f260ecbb86"} err="failed to get container status 
\"134fbe8abc88728ea6d0b32e6d03750a6498197772c476a45210e5f260ecbb86\": rpc error: code = NotFound desc = could not find container \"134fbe8abc88728ea6d0b32e6d03750a6498197772c476a45210e5f260ecbb86\": container with ID starting with 134fbe8abc88728ea6d0b32e6d03750a6498197772c476a45210e5f260ecbb86 not found: ID does not exist" Nov 25 12:23:51 crc kubenswrapper[4706]: I1125 12:23:51.506272 4706 scope.go:117] "RemoveContainer" containerID="990212c569d243de54af8ec0337efb23cc8e1c08814537015c273e0cdf680aed" Nov 25 12:23:51 crc kubenswrapper[4706]: E1125 12:23:51.509596 4706 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"990212c569d243de54af8ec0337efb23cc8e1c08814537015c273e0cdf680aed\": container with ID starting with 990212c569d243de54af8ec0337efb23cc8e1c08814537015c273e0cdf680aed not found: ID does not exist" containerID="990212c569d243de54af8ec0337efb23cc8e1c08814537015c273e0cdf680aed" Nov 25 12:23:51 crc kubenswrapper[4706]: I1125 12:23:51.509657 4706 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"990212c569d243de54af8ec0337efb23cc8e1c08814537015c273e0cdf680aed"} err="failed to get container status \"990212c569d243de54af8ec0337efb23cc8e1c08814537015c273e0cdf680aed\": rpc error: code = NotFound desc = could not find container \"990212c569d243de54af8ec0337efb23cc8e1c08814537015c273e0cdf680aed\": container with ID starting with 990212c569d243de54af8ec0337efb23cc8e1c08814537015c273e0cdf680aed not found: ID does not exist" Nov 25 12:23:51 crc kubenswrapper[4706]: I1125 12:23:51.935618 4706 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="741537bb-cbbf-4d60-a6ac-7dc9462c0f18" path="/var/lib/kubelet/pods/741537bb-cbbf-4d60-a6ac-7dc9462c0f18/volumes" Nov 25 12:23:58 crc kubenswrapper[4706]: I1125 12:23:58.788290 4706 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-d5vz9"] Nov 25 12:23:58 
crc kubenswrapper[4706]: E1125 12:23:58.790386 4706 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="741537bb-cbbf-4d60-a6ac-7dc9462c0f18" containerName="extract-utilities" Nov 25 12:23:58 crc kubenswrapper[4706]: I1125 12:23:58.790494 4706 state_mem.go:107] "Deleted CPUSet assignment" podUID="741537bb-cbbf-4d60-a6ac-7dc9462c0f18" containerName="extract-utilities" Nov 25 12:23:58 crc kubenswrapper[4706]: E1125 12:23:58.790590 4706 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="741537bb-cbbf-4d60-a6ac-7dc9462c0f18" containerName="registry-server" Nov 25 12:23:58 crc kubenswrapper[4706]: I1125 12:23:58.790668 4706 state_mem.go:107] "Deleted CPUSet assignment" podUID="741537bb-cbbf-4d60-a6ac-7dc9462c0f18" containerName="registry-server" Nov 25 12:23:58 crc kubenswrapper[4706]: E1125 12:23:58.790814 4706 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="741537bb-cbbf-4d60-a6ac-7dc9462c0f18" containerName="extract-content" Nov 25 12:23:58 crc kubenswrapper[4706]: I1125 12:23:58.790884 4706 state_mem.go:107] "Deleted CPUSet assignment" podUID="741537bb-cbbf-4d60-a6ac-7dc9462c0f18" containerName="extract-content" Nov 25 12:23:58 crc kubenswrapper[4706]: I1125 12:23:58.793093 4706 memory_manager.go:354] "RemoveStaleState removing state" podUID="741537bb-cbbf-4d60-a6ac-7dc9462c0f18" containerName="registry-server" Nov 25 12:23:58 crc kubenswrapper[4706]: I1125 12:23:58.795018 4706 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-d5vz9" Nov 25 12:23:58 crc kubenswrapper[4706]: I1125 12:23:58.801244 4706 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-d5vz9"] Nov 25 12:23:58 crc kubenswrapper[4706]: I1125 12:23:58.862604 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/197ee821-5613-4857-b9d5-6d180e630564-catalog-content\") pod \"certified-operators-d5vz9\" (UID: \"197ee821-5613-4857-b9d5-6d180e630564\") " pod="openshift-marketplace/certified-operators-d5vz9" Nov 25 12:23:58 crc kubenswrapper[4706]: I1125 12:23:58.863475 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zg6nt\" (UniqueName: \"kubernetes.io/projected/197ee821-5613-4857-b9d5-6d180e630564-kube-api-access-zg6nt\") pod \"certified-operators-d5vz9\" (UID: \"197ee821-5613-4857-b9d5-6d180e630564\") " pod="openshift-marketplace/certified-operators-d5vz9" Nov 25 12:23:58 crc kubenswrapper[4706]: I1125 12:23:58.863539 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/197ee821-5613-4857-b9d5-6d180e630564-utilities\") pod \"certified-operators-d5vz9\" (UID: \"197ee821-5613-4857-b9d5-6d180e630564\") " pod="openshift-marketplace/certified-operators-d5vz9" Nov 25 12:23:58 crc kubenswrapper[4706]: I1125 12:23:58.965480 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/197ee821-5613-4857-b9d5-6d180e630564-catalog-content\") pod \"certified-operators-d5vz9\" (UID: \"197ee821-5613-4857-b9d5-6d180e630564\") " pod="openshift-marketplace/certified-operators-d5vz9" Nov 25 12:23:58 crc kubenswrapper[4706]: I1125 12:23:58.966322 4706 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/197ee821-5613-4857-b9d5-6d180e630564-catalog-content\") pod \"certified-operators-d5vz9\" (UID: \"197ee821-5613-4857-b9d5-6d180e630564\") " pod="openshift-marketplace/certified-operators-d5vz9" Nov 25 12:23:58 crc kubenswrapper[4706]: I1125 12:23:58.967910 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zg6nt\" (UniqueName: \"kubernetes.io/projected/197ee821-5613-4857-b9d5-6d180e630564-kube-api-access-zg6nt\") pod \"certified-operators-d5vz9\" (UID: \"197ee821-5613-4857-b9d5-6d180e630564\") " pod="openshift-marketplace/certified-operators-d5vz9" Nov 25 12:23:58 crc kubenswrapper[4706]: I1125 12:23:58.968517 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/197ee821-5613-4857-b9d5-6d180e630564-utilities\") pod \"certified-operators-d5vz9\" (UID: \"197ee821-5613-4857-b9d5-6d180e630564\") " pod="openshift-marketplace/certified-operators-d5vz9" Nov 25 12:23:58 crc kubenswrapper[4706]: I1125 12:23:58.969178 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/197ee821-5613-4857-b9d5-6d180e630564-utilities\") pod \"certified-operators-d5vz9\" (UID: \"197ee821-5613-4857-b9d5-6d180e630564\") " pod="openshift-marketplace/certified-operators-d5vz9" Nov 25 12:23:58 crc kubenswrapper[4706]: I1125 12:23:58.986698 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zg6nt\" (UniqueName: \"kubernetes.io/projected/197ee821-5613-4857-b9d5-6d180e630564-kube-api-access-zg6nt\") pod \"certified-operators-d5vz9\" (UID: \"197ee821-5613-4857-b9d5-6d180e630564\") " pod="openshift-marketplace/certified-operators-d5vz9" Nov 25 12:23:59 crc kubenswrapper[4706]: I1125 12:23:59.114246 4706 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-d5vz9" Nov 25 12:23:59 crc kubenswrapper[4706]: I1125 12:23:59.453474 4706 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-d5vz9"] Nov 25 12:23:59 crc kubenswrapper[4706]: I1125 12:23:59.485612 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-d5vz9" event={"ID":"197ee821-5613-4857-b9d5-6d180e630564","Type":"ContainerStarted","Data":"acc3529fb896e15988a1baca75fce4223d955eb926c482d432b99f6d398b4df8"} Nov 25 12:24:00 crc kubenswrapper[4706]: I1125 12:24:00.498638 4706 generic.go:334] "Generic (PLEG): container finished" podID="197ee821-5613-4857-b9d5-6d180e630564" containerID="6dd442e2b0000863451436a7f66620899c6ce66ad714f93ecbbde4588ed1df08" exitCode=0 Nov 25 12:24:00 crc kubenswrapper[4706]: I1125 12:24:00.498743 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-d5vz9" event={"ID":"197ee821-5613-4857-b9d5-6d180e630564","Type":"ContainerDied","Data":"6dd442e2b0000863451436a7f66620899c6ce66ad714f93ecbbde4588ed1df08"} Nov 25 12:24:02 crc kubenswrapper[4706]: I1125 12:24:02.519284 4706 generic.go:334] "Generic (PLEG): container finished" podID="197ee821-5613-4857-b9d5-6d180e630564" containerID="692bce0071c74d74d55849b1651377cb75835d1975f62564d1e57b44ce19f06e" exitCode=0 Nov 25 12:24:02 crc kubenswrapper[4706]: I1125 12:24:02.519361 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-d5vz9" event={"ID":"197ee821-5613-4857-b9d5-6d180e630564","Type":"ContainerDied","Data":"692bce0071c74d74d55849b1651377cb75835d1975f62564d1e57b44ce19f06e"} Nov 25 12:24:04 crc kubenswrapper[4706]: I1125 12:24:04.540333 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-d5vz9" 
event={"ID":"197ee821-5613-4857-b9d5-6d180e630564","Type":"ContainerStarted","Data":"fdb61aab7293906a1b5b2acfb417826035bc3b850cd8f5bcb2efef2f1c5f255c"} Nov 25 12:24:04 crc kubenswrapper[4706]: I1125 12:24:04.571240 4706 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-d5vz9" podStartSLOduration=3.784864465 podStartE2EDuration="6.571216231s" podCreationTimestamp="2025-11-25 12:23:58 +0000 UTC" firstStartedPulling="2025-11-25 12:24:00.500707682 +0000 UTC m=+2849.415265063" lastFinishedPulling="2025-11-25 12:24:03.287059448 +0000 UTC m=+2852.201616829" observedRunningTime="2025-11-25 12:24:04.557341878 +0000 UTC m=+2853.471899269" watchObservedRunningTime="2025-11-25 12:24:04.571216231 +0000 UTC m=+2853.485773612" Nov 25 12:24:09 crc kubenswrapper[4706]: I1125 12:24:09.060884 4706 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-pfr69"] Nov 25 12:24:09 crc kubenswrapper[4706]: I1125 12:24:09.063894 4706 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-pfr69" Nov 25 12:24:09 crc kubenswrapper[4706]: I1125 12:24:09.106469 4706 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-pfr69"] Nov 25 12:24:09 crc kubenswrapper[4706]: I1125 12:24:09.115352 4706 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-d5vz9" Nov 25 12:24:09 crc kubenswrapper[4706]: I1125 12:24:09.116184 4706 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-d5vz9" Nov 25 12:24:09 crc kubenswrapper[4706]: I1125 12:24:09.168698 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k8dkh\" (UniqueName: \"kubernetes.io/projected/de34b84a-9787-40ed-b4d6-ba2803bb62bb-kube-api-access-k8dkh\") pod \"community-operators-pfr69\" (UID: \"de34b84a-9787-40ed-b4d6-ba2803bb62bb\") " pod="openshift-marketplace/community-operators-pfr69" Nov 25 12:24:09 crc kubenswrapper[4706]: I1125 12:24:09.168760 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/de34b84a-9787-40ed-b4d6-ba2803bb62bb-catalog-content\") pod \"community-operators-pfr69\" (UID: \"de34b84a-9787-40ed-b4d6-ba2803bb62bb\") " pod="openshift-marketplace/community-operators-pfr69" Nov 25 12:24:09 crc kubenswrapper[4706]: I1125 12:24:09.168792 4706 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-d5vz9" Nov 25 12:24:09 crc kubenswrapper[4706]: I1125 12:24:09.168915 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/de34b84a-9787-40ed-b4d6-ba2803bb62bb-utilities\") pod \"community-operators-pfr69\" (UID: 
\"de34b84a-9787-40ed-b4d6-ba2803bb62bb\") " pod="openshift-marketplace/community-operators-pfr69" Nov 25 12:24:09 crc kubenswrapper[4706]: I1125 12:24:09.270962 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/de34b84a-9787-40ed-b4d6-ba2803bb62bb-catalog-content\") pod \"community-operators-pfr69\" (UID: \"de34b84a-9787-40ed-b4d6-ba2803bb62bb\") " pod="openshift-marketplace/community-operators-pfr69" Nov 25 12:24:09 crc kubenswrapper[4706]: I1125 12:24:09.271106 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/de34b84a-9787-40ed-b4d6-ba2803bb62bb-utilities\") pod \"community-operators-pfr69\" (UID: \"de34b84a-9787-40ed-b4d6-ba2803bb62bb\") " pod="openshift-marketplace/community-operators-pfr69" Nov 25 12:24:09 crc kubenswrapper[4706]: I1125 12:24:09.271206 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k8dkh\" (UniqueName: \"kubernetes.io/projected/de34b84a-9787-40ed-b4d6-ba2803bb62bb-kube-api-access-k8dkh\") pod \"community-operators-pfr69\" (UID: \"de34b84a-9787-40ed-b4d6-ba2803bb62bb\") " pod="openshift-marketplace/community-operators-pfr69" Nov 25 12:24:09 crc kubenswrapper[4706]: I1125 12:24:09.272043 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/de34b84a-9787-40ed-b4d6-ba2803bb62bb-catalog-content\") pod \"community-operators-pfr69\" (UID: \"de34b84a-9787-40ed-b4d6-ba2803bb62bb\") " pod="openshift-marketplace/community-operators-pfr69" Nov 25 12:24:09 crc kubenswrapper[4706]: I1125 12:24:09.272269 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/de34b84a-9787-40ed-b4d6-ba2803bb62bb-utilities\") pod \"community-operators-pfr69\" (UID: \"de34b84a-9787-40ed-b4d6-ba2803bb62bb\") 
" pod="openshift-marketplace/community-operators-pfr69" Nov 25 12:24:09 crc kubenswrapper[4706]: I1125 12:24:09.293430 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k8dkh\" (UniqueName: \"kubernetes.io/projected/de34b84a-9787-40ed-b4d6-ba2803bb62bb-kube-api-access-k8dkh\") pod \"community-operators-pfr69\" (UID: \"de34b84a-9787-40ed-b4d6-ba2803bb62bb\") " pod="openshift-marketplace/community-operators-pfr69" Nov 25 12:24:09 crc kubenswrapper[4706]: I1125 12:24:09.397677 4706 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-pfr69" Nov 25 12:24:09 crc kubenswrapper[4706]: I1125 12:24:09.668715 4706 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-d5vz9" Nov 25 12:24:10 crc kubenswrapper[4706]: I1125 12:24:10.011058 4706 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-pfr69"] Nov 25 12:24:10 crc kubenswrapper[4706]: I1125 12:24:10.607872 4706 generic.go:334] "Generic (PLEG): container finished" podID="de34b84a-9787-40ed-b4d6-ba2803bb62bb" containerID="92b5550a0f77568e9075f7ea0ac5857235869f6f8cb590192bb2142d3d807aa7" exitCode=0 Nov 25 12:24:10 crc kubenswrapper[4706]: I1125 12:24:10.608560 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pfr69" event={"ID":"de34b84a-9787-40ed-b4d6-ba2803bb62bb","Type":"ContainerDied","Data":"92b5550a0f77568e9075f7ea0ac5857235869f6f8cb590192bb2142d3d807aa7"} Nov 25 12:24:10 crc kubenswrapper[4706]: I1125 12:24:10.608657 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pfr69" event={"ID":"de34b84a-9787-40ed-b4d6-ba2803bb62bb","Type":"ContainerStarted","Data":"036f3f49071f980d595ef5bab64adbe8a0c643ace364316dd23622ceb3d995c3"} Nov 25 12:24:11 crc kubenswrapper[4706]: I1125 12:24:11.434425 4706 kubelet.go:2437] 
"SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-d5vz9"] Nov 25 12:24:11 crc kubenswrapper[4706]: I1125 12:24:11.620190 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pfr69" event={"ID":"de34b84a-9787-40ed-b4d6-ba2803bb62bb","Type":"ContainerStarted","Data":"a40a55308085320132d8d0b34d2a63de62fb8f2338932b8e3ab00f4a2cb666c3"} Nov 25 12:24:12 crc kubenswrapper[4706]: I1125 12:24:12.634790 4706 generic.go:334] "Generic (PLEG): container finished" podID="de34b84a-9787-40ed-b4d6-ba2803bb62bb" containerID="a40a55308085320132d8d0b34d2a63de62fb8f2338932b8e3ab00f4a2cb666c3" exitCode=0 Nov 25 12:24:12 crc kubenswrapper[4706]: I1125 12:24:12.635254 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pfr69" event={"ID":"de34b84a-9787-40ed-b4d6-ba2803bb62bb","Type":"ContainerDied","Data":"a40a55308085320132d8d0b34d2a63de62fb8f2338932b8e3ab00f4a2cb666c3"} Nov 25 12:24:12 crc kubenswrapper[4706]: I1125 12:24:12.635345 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pfr69" event={"ID":"de34b84a-9787-40ed-b4d6-ba2803bb62bb","Type":"ContainerStarted","Data":"4a59f346abf393a71e85e7f1fb279a5e05529621b3f66517ab6281de6737da4c"} Nov 25 12:24:12 crc kubenswrapper[4706]: I1125 12:24:12.635569 4706 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-d5vz9" podUID="197ee821-5613-4857-b9d5-6d180e630564" containerName="registry-server" containerID="cri-o://fdb61aab7293906a1b5b2acfb417826035bc3b850cd8f5bcb2efef2f1c5f255c" gracePeriod=2 Nov 25 12:24:13 crc kubenswrapper[4706]: I1125 12:24:13.122667 4706 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-d5vz9" Nov 25 12:24:13 crc kubenswrapper[4706]: I1125 12:24:13.144364 4706 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-pfr69" podStartSLOduration=2.739281428 podStartE2EDuration="4.144343659s" podCreationTimestamp="2025-11-25 12:24:09 +0000 UTC" firstStartedPulling="2025-11-25 12:24:10.610193543 +0000 UTC m=+2859.524750924" lastFinishedPulling="2025-11-25 12:24:12.015255774 +0000 UTC m=+2860.929813155" observedRunningTime="2025-11-25 12:24:12.658440349 +0000 UTC m=+2861.572997750" watchObservedRunningTime="2025-11-25 12:24:13.144343659 +0000 UTC m=+2862.058901040" Nov 25 12:24:13 crc kubenswrapper[4706]: I1125 12:24:13.249203 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/197ee821-5613-4857-b9d5-6d180e630564-utilities\") pod \"197ee821-5613-4857-b9d5-6d180e630564\" (UID: \"197ee821-5613-4857-b9d5-6d180e630564\") " Nov 25 12:24:13 crc kubenswrapper[4706]: I1125 12:24:13.249315 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zg6nt\" (UniqueName: \"kubernetes.io/projected/197ee821-5613-4857-b9d5-6d180e630564-kube-api-access-zg6nt\") pod \"197ee821-5613-4857-b9d5-6d180e630564\" (UID: \"197ee821-5613-4857-b9d5-6d180e630564\") " Nov 25 12:24:13 crc kubenswrapper[4706]: I1125 12:24:13.249406 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/197ee821-5613-4857-b9d5-6d180e630564-catalog-content\") pod \"197ee821-5613-4857-b9d5-6d180e630564\" (UID: \"197ee821-5613-4857-b9d5-6d180e630564\") " Nov 25 12:24:13 crc kubenswrapper[4706]: I1125 12:24:13.249910 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/197ee821-5613-4857-b9d5-6d180e630564-utilities" 
(OuterVolumeSpecName: "utilities") pod "197ee821-5613-4857-b9d5-6d180e630564" (UID: "197ee821-5613-4857-b9d5-6d180e630564"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 12:24:13 crc kubenswrapper[4706]: I1125 12:24:13.256185 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/197ee821-5613-4857-b9d5-6d180e630564-kube-api-access-zg6nt" (OuterVolumeSpecName: "kube-api-access-zg6nt") pod "197ee821-5613-4857-b9d5-6d180e630564" (UID: "197ee821-5613-4857-b9d5-6d180e630564"). InnerVolumeSpecName "kube-api-access-zg6nt". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 12:24:13 crc kubenswrapper[4706]: I1125 12:24:13.295244 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/197ee821-5613-4857-b9d5-6d180e630564-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "197ee821-5613-4857-b9d5-6d180e630564" (UID: "197ee821-5613-4857-b9d5-6d180e630564"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 12:24:13 crc kubenswrapper[4706]: I1125 12:24:13.351949 4706 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/197ee821-5613-4857-b9d5-6d180e630564-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 25 12:24:13 crc kubenswrapper[4706]: I1125 12:24:13.351988 4706 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/197ee821-5613-4857-b9d5-6d180e630564-utilities\") on node \"crc\" DevicePath \"\"" Nov 25 12:24:13 crc kubenswrapper[4706]: I1125 12:24:13.351998 4706 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zg6nt\" (UniqueName: \"kubernetes.io/projected/197ee821-5613-4857-b9d5-6d180e630564-kube-api-access-zg6nt\") on node \"crc\" DevicePath \"\"" Nov 25 12:24:13 crc kubenswrapper[4706]: I1125 12:24:13.645921 4706 generic.go:334] "Generic (PLEG): container finished" podID="197ee821-5613-4857-b9d5-6d180e630564" containerID="fdb61aab7293906a1b5b2acfb417826035bc3b850cd8f5bcb2efef2f1c5f255c" exitCode=0 Nov 25 12:24:13 crc kubenswrapper[4706]: I1125 12:24:13.646960 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-d5vz9" event={"ID":"197ee821-5613-4857-b9d5-6d180e630564","Type":"ContainerDied","Data":"fdb61aab7293906a1b5b2acfb417826035bc3b850cd8f5bcb2efef2f1c5f255c"} Nov 25 12:24:13 crc kubenswrapper[4706]: I1125 12:24:13.647001 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-d5vz9" event={"ID":"197ee821-5613-4857-b9d5-6d180e630564","Type":"ContainerDied","Data":"acc3529fb896e15988a1baca75fce4223d955eb926c482d432b99f6d398b4df8"} Nov 25 12:24:13 crc kubenswrapper[4706]: I1125 12:24:13.647023 4706 scope.go:117] "RemoveContainer" containerID="fdb61aab7293906a1b5b2acfb417826035bc3b850cd8f5bcb2efef2f1c5f255c" Nov 25 12:24:13 crc kubenswrapper[4706]: I1125 
12:24:13.647063 4706 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-d5vz9" Nov 25 12:24:13 crc kubenswrapper[4706]: I1125 12:24:13.673339 4706 scope.go:117] "RemoveContainer" containerID="692bce0071c74d74d55849b1651377cb75835d1975f62564d1e57b44ce19f06e" Nov 25 12:24:13 crc kubenswrapper[4706]: I1125 12:24:13.694459 4706 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-d5vz9"] Nov 25 12:24:13 crc kubenswrapper[4706]: I1125 12:24:13.702576 4706 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-d5vz9"] Nov 25 12:24:13 crc kubenswrapper[4706]: I1125 12:24:13.715399 4706 scope.go:117] "RemoveContainer" containerID="6dd442e2b0000863451436a7f66620899c6ce66ad714f93ecbbde4588ed1df08" Nov 25 12:24:13 crc kubenswrapper[4706]: I1125 12:24:13.743004 4706 scope.go:117] "RemoveContainer" containerID="fdb61aab7293906a1b5b2acfb417826035bc3b850cd8f5bcb2efef2f1c5f255c" Nov 25 12:24:13 crc kubenswrapper[4706]: E1125 12:24:13.743546 4706 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fdb61aab7293906a1b5b2acfb417826035bc3b850cd8f5bcb2efef2f1c5f255c\": container with ID starting with fdb61aab7293906a1b5b2acfb417826035bc3b850cd8f5bcb2efef2f1c5f255c not found: ID does not exist" containerID="fdb61aab7293906a1b5b2acfb417826035bc3b850cd8f5bcb2efef2f1c5f255c" Nov 25 12:24:13 crc kubenswrapper[4706]: I1125 12:24:13.743590 4706 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fdb61aab7293906a1b5b2acfb417826035bc3b850cd8f5bcb2efef2f1c5f255c"} err="failed to get container status \"fdb61aab7293906a1b5b2acfb417826035bc3b850cd8f5bcb2efef2f1c5f255c\": rpc error: code = NotFound desc = could not find container \"fdb61aab7293906a1b5b2acfb417826035bc3b850cd8f5bcb2efef2f1c5f255c\": container with ID starting with 
fdb61aab7293906a1b5b2acfb417826035bc3b850cd8f5bcb2efef2f1c5f255c not found: ID does not exist" Nov 25 12:24:13 crc kubenswrapper[4706]: I1125 12:24:13.743623 4706 scope.go:117] "RemoveContainer" containerID="692bce0071c74d74d55849b1651377cb75835d1975f62564d1e57b44ce19f06e" Nov 25 12:24:13 crc kubenswrapper[4706]: E1125 12:24:13.743892 4706 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"692bce0071c74d74d55849b1651377cb75835d1975f62564d1e57b44ce19f06e\": container with ID starting with 692bce0071c74d74d55849b1651377cb75835d1975f62564d1e57b44ce19f06e not found: ID does not exist" containerID="692bce0071c74d74d55849b1651377cb75835d1975f62564d1e57b44ce19f06e" Nov 25 12:24:13 crc kubenswrapper[4706]: I1125 12:24:13.743924 4706 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"692bce0071c74d74d55849b1651377cb75835d1975f62564d1e57b44ce19f06e"} err="failed to get container status \"692bce0071c74d74d55849b1651377cb75835d1975f62564d1e57b44ce19f06e\": rpc error: code = NotFound desc = could not find container \"692bce0071c74d74d55849b1651377cb75835d1975f62564d1e57b44ce19f06e\": container with ID starting with 692bce0071c74d74d55849b1651377cb75835d1975f62564d1e57b44ce19f06e not found: ID does not exist" Nov 25 12:24:13 crc kubenswrapper[4706]: I1125 12:24:13.743945 4706 scope.go:117] "RemoveContainer" containerID="6dd442e2b0000863451436a7f66620899c6ce66ad714f93ecbbde4588ed1df08" Nov 25 12:24:13 crc kubenswrapper[4706]: E1125 12:24:13.745597 4706 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6dd442e2b0000863451436a7f66620899c6ce66ad714f93ecbbde4588ed1df08\": container with ID starting with 6dd442e2b0000863451436a7f66620899c6ce66ad714f93ecbbde4588ed1df08 not found: ID does not exist" containerID="6dd442e2b0000863451436a7f66620899c6ce66ad714f93ecbbde4588ed1df08" Nov 25 12:24:13 crc 
kubenswrapper[4706]: I1125 12:24:13.745632 4706 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6dd442e2b0000863451436a7f66620899c6ce66ad714f93ecbbde4588ed1df08"} err="failed to get container status \"6dd442e2b0000863451436a7f66620899c6ce66ad714f93ecbbde4588ed1df08\": rpc error: code = NotFound desc = could not find container \"6dd442e2b0000863451436a7f66620899c6ce66ad714f93ecbbde4588ed1df08\": container with ID starting with 6dd442e2b0000863451436a7f66620899c6ce66ad714f93ecbbde4588ed1df08 not found: ID does not exist" Nov 25 12:24:13 crc kubenswrapper[4706]: I1125 12:24:13.932691 4706 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="197ee821-5613-4857-b9d5-6d180e630564" path="/var/lib/kubelet/pods/197ee821-5613-4857-b9d5-6d180e630564/volumes" Nov 25 12:24:19 crc kubenswrapper[4706]: I1125 12:24:19.398104 4706 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-pfr69" Nov 25 12:24:19 crc kubenswrapper[4706]: I1125 12:24:19.398878 4706 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-pfr69" Nov 25 12:24:19 crc kubenswrapper[4706]: I1125 12:24:19.445114 4706 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-pfr69" Nov 25 12:24:19 crc kubenswrapper[4706]: I1125 12:24:19.740294 4706 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-pfr69" Nov 25 12:24:19 crc kubenswrapper[4706]: I1125 12:24:19.791647 4706 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-pfr69"] Nov 25 12:24:21 crc kubenswrapper[4706]: I1125 12:24:21.710813 4706 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-pfr69" podUID="de34b84a-9787-40ed-b4d6-ba2803bb62bb" 
containerName="registry-server" containerID="cri-o://4a59f346abf393a71e85e7f1fb279a5e05529621b3f66517ab6281de6737da4c" gracePeriod=2 Nov 25 12:24:22 crc kubenswrapper[4706]: I1125 12:24:22.726384 4706 generic.go:334] "Generic (PLEG): container finished" podID="de34b84a-9787-40ed-b4d6-ba2803bb62bb" containerID="4a59f346abf393a71e85e7f1fb279a5e05529621b3f66517ab6281de6737da4c" exitCode=0 Nov 25 12:24:22 crc kubenswrapper[4706]: I1125 12:24:22.726444 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pfr69" event={"ID":"de34b84a-9787-40ed-b4d6-ba2803bb62bb","Type":"ContainerDied","Data":"4a59f346abf393a71e85e7f1fb279a5e05529621b3f66517ab6281de6737da4c"} Nov 25 12:24:22 crc kubenswrapper[4706]: I1125 12:24:22.726866 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pfr69" event={"ID":"de34b84a-9787-40ed-b4d6-ba2803bb62bb","Type":"ContainerDied","Data":"036f3f49071f980d595ef5bab64adbe8a0c643ace364316dd23622ceb3d995c3"} Nov 25 12:24:22 crc kubenswrapper[4706]: I1125 12:24:22.726881 4706 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="036f3f49071f980d595ef5bab64adbe8a0c643ace364316dd23622ceb3d995c3" Nov 25 12:24:22 crc kubenswrapper[4706]: I1125 12:24:22.735131 4706 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-pfr69" Nov 25 12:24:22 crc kubenswrapper[4706]: I1125 12:24:22.837908 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/de34b84a-9787-40ed-b4d6-ba2803bb62bb-utilities\") pod \"de34b84a-9787-40ed-b4d6-ba2803bb62bb\" (UID: \"de34b84a-9787-40ed-b4d6-ba2803bb62bb\") " Nov 25 12:24:22 crc kubenswrapper[4706]: I1125 12:24:22.838199 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/de34b84a-9787-40ed-b4d6-ba2803bb62bb-catalog-content\") pod \"de34b84a-9787-40ed-b4d6-ba2803bb62bb\" (UID: \"de34b84a-9787-40ed-b4d6-ba2803bb62bb\") " Nov 25 12:24:22 crc kubenswrapper[4706]: I1125 12:24:22.838333 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k8dkh\" (UniqueName: \"kubernetes.io/projected/de34b84a-9787-40ed-b4d6-ba2803bb62bb-kube-api-access-k8dkh\") pod \"de34b84a-9787-40ed-b4d6-ba2803bb62bb\" (UID: \"de34b84a-9787-40ed-b4d6-ba2803bb62bb\") " Nov 25 12:24:22 crc kubenswrapper[4706]: I1125 12:24:22.838956 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/de34b84a-9787-40ed-b4d6-ba2803bb62bb-utilities" (OuterVolumeSpecName: "utilities") pod "de34b84a-9787-40ed-b4d6-ba2803bb62bb" (UID: "de34b84a-9787-40ed-b4d6-ba2803bb62bb"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 12:24:22 crc kubenswrapper[4706]: I1125 12:24:22.843834 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/de34b84a-9787-40ed-b4d6-ba2803bb62bb-kube-api-access-k8dkh" (OuterVolumeSpecName: "kube-api-access-k8dkh") pod "de34b84a-9787-40ed-b4d6-ba2803bb62bb" (UID: "de34b84a-9787-40ed-b4d6-ba2803bb62bb"). InnerVolumeSpecName "kube-api-access-k8dkh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 12:24:22 crc kubenswrapper[4706]: I1125 12:24:22.890543 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/de34b84a-9787-40ed-b4d6-ba2803bb62bb-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "de34b84a-9787-40ed-b4d6-ba2803bb62bb" (UID: "de34b84a-9787-40ed-b4d6-ba2803bb62bb"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 12:24:22 crc kubenswrapper[4706]: I1125 12:24:22.940894 4706 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/de34b84a-9787-40ed-b4d6-ba2803bb62bb-utilities\") on node \"crc\" DevicePath \"\"" Nov 25 12:24:22 crc kubenswrapper[4706]: I1125 12:24:22.940943 4706 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/de34b84a-9787-40ed-b4d6-ba2803bb62bb-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 25 12:24:22 crc kubenswrapper[4706]: I1125 12:24:22.940957 4706 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k8dkh\" (UniqueName: \"kubernetes.io/projected/de34b84a-9787-40ed-b4d6-ba2803bb62bb-kube-api-access-k8dkh\") on node \"crc\" DevicePath \"\"" Nov 25 12:24:23 crc kubenswrapper[4706]: I1125 12:24:23.734390 4706 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-pfr69" Nov 25 12:24:23 crc kubenswrapper[4706]: I1125 12:24:23.774802 4706 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-pfr69"] Nov 25 12:24:23 crc kubenswrapper[4706]: I1125 12:24:23.785632 4706 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-pfr69"] Nov 25 12:24:23 crc kubenswrapper[4706]: I1125 12:24:23.932078 4706 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="de34b84a-9787-40ed-b4d6-ba2803bb62bb" path="/var/lib/kubelet/pods/de34b84a-9787-40ed-b4d6-ba2803bb62bb/volumes" Nov 25 12:25:26 crc kubenswrapper[4706]: I1125 12:25:26.320705 4706 generic.go:334] "Generic (PLEG): container finished" podID="10becdf1-f704-46ec-aee6-b4ef4fdbed09" containerID="b38f21b0605925748d0094bd561c43de984b44af1b7cf5ed6c8578e3086b9652" exitCode=0 Nov 25 12:25:26 crc kubenswrapper[4706]: I1125 12:25:26.320796 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-rtmfj" event={"ID":"10becdf1-f704-46ec-aee6-b4ef4fdbed09","Type":"ContainerDied","Data":"b38f21b0605925748d0094bd561c43de984b44af1b7cf5ed6c8578e3086b9652"} Nov 25 12:25:27 crc kubenswrapper[4706]: I1125 12:25:27.714707 4706 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-rtmfj" Nov 25 12:25:27 crc kubenswrapper[4706]: I1125 12:25:27.878087 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/10becdf1-f704-46ec-aee6-b4ef4fdbed09-ssh-key\") pod \"10becdf1-f704-46ec-aee6-b4ef4fdbed09\" (UID: \"10becdf1-f704-46ec-aee6-b4ef4fdbed09\") " Nov 25 12:25:27 crc kubenswrapper[4706]: I1125 12:25:27.878218 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/10becdf1-f704-46ec-aee6-b4ef4fdbed09-inventory\") pod \"10becdf1-f704-46ec-aee6-b4ef4fdbed09\" (UID: \"10becdf1-f704-46ec-aee6-b4ef4fdbed09\") " Nov 25 12:25:27 crc kubenswrapper[4706]: I1125 12:25:27.878283 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/10becdf1-f704-46ec-aee6-b4ef4fdbed09-ceilometer-compute-config-data-2\") pod \"10becdf1-f704-46ec-aee6-b4ef4fdbed09\" (UID: \"10becdf1-f704-46ec-aee6-b4ef4fdbed09\") " Nov 25 12:25:27 crc kubenswrapper[4706]: I1125 12:25:27.878413 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/10becdf1-f704-46ec-aee6-b4ef4fdbed09-telemetry-combined-ca-bundle\") pod \"10becdf1-f704-46ec-aee6-b4ef4fdbed09\" (UID: \"10becdf1-f704-46ec-aee6-b4ef4fdbed09\") " Nov 25 12:25:27 crc kubenswrapper[4706]: I1125 12:25:27.878449 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-68ghg\" (UniqueName: \"kubernetes.io/projected/10becdf1-f704-46ec-aee6-b4ef4fdbed09-kube-api-access-68ghg\") pod \"10becdf1-f704-46ec-aee6-b4ef4fdbed09\" (UID: \"10becdf1-f704-46ec-aee6-b4ef4fdbed09\") " Nov 25 12:25:27 crc kubenswrapper[4706]: I1125 12:25:27.878539 4706 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/10becdf1-f704-46ec-aee6-b4ef4fdbed09-ceilometer-compute-config-data-0\") pod \"10becdf1-f704-46ec-aee6-b4ef4fdbed09\" (UID: \"10becdf1-f704-46ec-aee6-b4ef4fdbed09\") " Nov 25 12:25:27 crc kubenswrapper[4706]: I1125 12:25:27.878636 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/10becdf1-f704-46ec-aee6-b4ef4fdbed09-ceilometer-compute-config-data-1\") pod \"10becdf1-f704-46ec-aee6-b4ef4fdbed09\" (UID: \"10becdf1-f704-46ec-aee6-b4ef4fdbed09\") " Nov 25 12:25:27 crc kubenswrapper[4706]: I1125 12:25:27.883976 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/10becdf1-f704-46ec-aee6-b4ef4fdbed09-kube-api-access-68ghg" (OuterVolumeSpecName: "kube-api-access-68ghg") pod "10becdf1-f704-46ec-aee6-b4ef4fdbed09" (UID: "10becdf1-f704-46ec-aee6-b4ef4fdbed09"). InnerVolumeSpecName "kube-api-access-68ghg". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 12:25:27 crc kubenswrapper[4706]: I1125 12:25:27.887452 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/10becdf1-f704-46ec-aee6-b4ef4fdbed09-telemetry-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-combined-ca-bundle") pod "10becdf1-f704-46ec-aee6-b4ef4fdbed09" (UID: "10becdf1-f704-46ec-aee6-b4ef4fdbed09"). InnerVolumeSpecName "telemetry-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 12:25:27 crc kubenswrapper[4706]: I1125 12:25:27.906440 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/10becdf1-f704-46ec-aee6-b4ef4fdbed09-ceilometer-compute-config-data-0" (OuterVolumeSpecName: "ceilometer-compute-config-data-0") pod "10becdf1-f704-46ec-aee6-b4ef4fdbed09" (UID: "10becdf1-f704-46ec-aee6-b4ef4fdbed09"). InnerVolumeSpecName "ceilometer-compute-config-data-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 12:25:27 crc kubenswrapper[4706]: I1125 12:25:27.907936 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/10becdf1-f704-46ec-aee6-b4ef4fdbed09-ceilometer-compute-config-data-2" (OuterVolumeSpecName: "ceilometer-compute-config-data-2") pod "10becdf1-f704-46ec-aee6-b4ef4fdbed09" (UID: "10becdf1-f704-46ec-aee6-b4ef4fdbed09"). InnerVolumeSpecName "ceilometer-compute-config-data-2". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 12:25:27 crc kubenswrapper[4706]: I1125 12:25:27.911015 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/10becdf1-f704-46ec-aee6-b4ef4fdbed09-ceilometer-compute-config-data-1" (OuterVolumeSpecName: "ceilometer-compute-config-data-1") pod "10becdf1-f704-46ec-aee6-b4ef4fdbed09" (UID: "10becdf1-f704-46ec-aee6-b4ef4fdbed09"). InnerVolumeSpecName "ceilometer-compute-config-data-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 12:25:27 crc kubenswrapper[4706]: I1125 12:25:27.914490 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/10becdf1-f704-46ec-aee6-b4ef4fdbed09-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "10becdf1-f704-46ec-aee6-b4ef4fdbed09" (UID: "10becdf1-f704-46ec-aee6-b4ef4fdbed09"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 12:25:27 crc kubenswrapper[4706]: I1125 12:25:27.925459 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/10becdf1-f704-46ec-aee6-b4ef4fdbed09-inventory" (OuterVolumeSpecName: "inventory") pod "10becdf1-f704-46ec-aee6-b4ef4fdbed09" (UID: "10becdf1-f704-46ec-aee6-b4ef4fdbed09"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 12:25:27 crc kubenswrapper[4706]: I1125 12:25:27.981673 4706 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/10becdf1-f704-46ec-aee6-b4ef4fdbed09-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 25 12:25:27 crc kubenswrapper[4706]: I1125 12:25:27.981912 4706 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/10becdf1-f704-46ec-aee6-b4ef4fdbed09-inventory\") on node \"crc\" DevicePath \"\"" Nov 25 12:25:27 crc kubenswrapper[4706]: I1125 12:25:27.981987 4706 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/10becdf1-f704-46ec-aee6-b4ef4fdbed09-ceilometer-compute-config-data-2\") on node \"crc\" DevicePath \"\"" Nov 25 12:25:27 crc kubenswrapper[4706]: I1125 12:25:27.982045 4706 reconciler_common.go:293] "Volume detached for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/10becdf1-f704-46ec-aee6-b4ef4fdbed09-telemetry-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 12:25:27 crc kubenswrapper[4706]: I1125 12:25:27.982100 4706 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-68ghg\" (UniqueName: \"kubernetes.io/projected/10becdf1-f704-46ec-aee6-b4ef4fdbed09-kube-api-access-68ghg\") on node \"crc\" DevicePath \"\"" Nov 25 12:25:27 crc kubenswrapper[4706]: I1125 12:25:27.982184 4706 reconciler_common.go:293] "Volume detached for volume 
\"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/10becdf1-f704-46ec-aee6-b4ef4fdbed09-ceilometer-compute-config-data-0\") on node \"crc\" DevicePath \"\"" Nov 25 12:25:27 crc kubenswrapper[4706]: I1125 12:25:27.982253 4706 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/10becdf1-f704-46ec-aee6-b4ef4fdbed09-ceilometer-compute-config-data-1\") on node \"crc\" DevicePath \"\"" Nov 25 12:25:28 crc kubenswrapper[4706]: I1125 12:25:28.339229 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-rtmfj" event={"ID":"10becdf1-f704-46ec-aee6-b4ef4fdbed09","Type":"ContainerDied","Data":"3f54e438b90243e8d8c38d67d671c2f56b363d154095db4fe981bf547dcf35ff"} Nov 25 12:25:28 crc kubenswrapper[4706]: I1125 12:25:28.339342 4706 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3f54e438b90243e8d8c38d67d671c2f56b363d154095db4fe981bf547dcf35ff" Nov 25 12:25:28 crc kubenswrapper[4706]: I1125 12:25:28.339361 4706 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-rtmfj" Nov 25 12:25:31 crc kubenswrapper[4706]: I1125 12:25:31.125908 4706 patch_prober.go:28] interesting pod/machine-config-daemon-dhfpm container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 25 12:25:31 crc kubenswrapper[4706]: I1125 12:25:31.126515 4706 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dhfpm" podUID="0930887a-320c-4506-8c9c-f94d6d64516a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 25 12:26:01 crc kubenswrapper[4706]: I1125 12:26:01.124781 4706 patch_prober.go:28] interesting pod/machine-config-daemon-dhfpm container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 25 12:26:01 crc kubenswrapper[4706]: I1125 12:26:01.125363 4706 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dhfpm" podUID="0930887a-320c-4506-8c9c-f94d6d64516a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 25 12:26:13 crc kubenswrapper[4706]: I1125 12:26:13.569152 4706 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/tempest-tests-tempest"] Nov 25 12:26:13 crc kubenswrapper[4706]: E1125 12:26:13.570313 4706 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="197ee821-5613-4857-b9d5-6d180e630564" containerName="extract-content" Nov 25 12:26:13 crc kubenswrapper[4706]: I1125 
12:26:13.570331 4706 state_mem.go:107] "Deleted CPUSet assignment" podUID="197ee821-5613-4857-b9d5-6d180e630564" containerName="extract-content" Nov 25 12:26:13 crc kubenswrapper[4706]: E1125 12:26:13.570379 4706 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="197ee821-5613-4857-b9d5-6d180e630564" containerName="registry-server" Nov 25 12:26:13 crc kubenswrapper[4706]: I1125 12:26:13.570390 4706 state_mem.go:107] "Deleted CPUSet assignment" podUID="197ee821-5613-4857-b9d5-6d180e630564" containerName="registry-server" Nov 25 12:26:13 crc kubenswrapper[4706]: E1125 12:26:13.570416 4706 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="de34b84a-9787-40ed-b4d6-ba2803bb62bb" containerName="registry-server" Nov 25 12:26:13 crc kubenswrapper[4706]: I1125 12:26:13.570422 4706 state_mem.go:107] "Deleted CPUSet assignment" podUID="de34b84a-9787-40ed-b4d6-ba2803bb62bb" containerName="registry-server" Nov 25 12:26:13 crc kubenswrapper[4706]: E1125 12:26:13.570447 4706 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="197ee821-5613-4857-b9d5-6d180e630564" containerName="extract-utilities" Nov 25 12:26:13 crc kubenswrapper[4706]: I1125 12:26:13.570455 4706 state_mem.go:107] "Deleted CPUSet assignment" podUID="197ee821-5613-4857-b9d5-6d180e630564" containerName="extract-utilities" Nov 25 12:26:13 crc kubenswrapper[4706]: E1125 12:26:13.570493 4706 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="de34b84a-9787-40ed-b4d6-ba2803bb62bb" containerName="extract-utilities" Nov 25 12:26:13 crc kubenswrapper[4706]: I1125 12:26:13.570500 4706 state_mem.go:107] "Deleted CPUSet assignment" podUID="de34b84a-9787-40ed-b4d6-ba2803bb62bb" containerName="extract-utilities" Nov 25 12:26:13 crc kubenswrapper[4706]: E1125 12:26:13.570518 4706 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="10becdf1-f704-46ec-aee6-b4ef4fdbed09" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Nov 25 12:26:13 crc 
kubenswrapper[4706]: I1125 12:26:13.570525 4706 state_mem.go:107] "Deleted CPUSet assignment" podUID="10becdf1-f704-46ec-aee6-b4ef4fdbed09" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Nov 25 12:26:13 crc kubenswrapper[4706]: E1125 12:26:13.570555 4706 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="de34b84a-9787-40ed-b4d6-ba2803bb62bb" containerName="extract-content" Nov 25 12:26:13 crc kubenswrapper[4706]: I1125 12:26:13.570563 4706 state_mem.go:107] "Deleted CPUSet assignment" podUID="de34b84a-9787-40ed-b4d6-ba2803bb62bb" containerName="extract-content" Nov 25 12:26:13 crc kubenswrapper[4706]: I1125 12:26:13.570993 4706 memory_manager.go:354] "RemoveStaleState removing state" podUID="10becdf1-f704-46ec-aee6-b4ef4fdbed09" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Nov 25 12:26:13 crc kubenswrapper[4706]: I1125 12:26:13.571010 4706 memory_manager.go:354] "RemoveStaleState removing state" podUID="de34b84a-9787-40ed-b4d6-ba2803bb62bb" containerName="registry-server" Nov 25 12:26:13 crc kubenswrapper[4706]: I1125 12:26:13.571042 4706 memory_manager.go:354] "RemoveStaleState removing state" podUID="197ee821-5613-4857-b9d5-6d180e630564" containerName="registry-server" Nov 25 12:26:13 crc kubenswrapper[4706]: I1125 12:26:13.571931 4706 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/tempest-tests-tempest" Nov 25 12:26:13 crc kubenswrapper[4706]: I1125 12:26:13.587103 4706 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-custom-data-s0" Nov 25 12:26:13 crc kubenswrapper[4706]: I1125 12:26:13.587147 4706 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-env-vars-s0" Nov 25 12:26:13 crc kubenswrapper[4706]: I1125 12:26:13.587173 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-rlp4g" Nov 25 12:26:13 crc kubenswrapper[4706]: I1125 12:26:13.588207 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"test-operator-controller-priv-key" Nov 25 12:26:13 crc kubenswrapper[4706]: I1125 12:26:13.595952 4706 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tempest-tests-tempest"] Nov 25 12:26:13 crc kubenswrapper[4706]: I1125 12:26:13.682239 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/a3e38444-7907-4d48-bc07-b6b7dc4854a8-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"a3e38444-7907-4d48-bc07-b6b7dc4854a8\") " pod="openstack/tempest-tests-tempest" Nov 25 12:26:13 crc kubenswrapper[4706]: I1125 12:26:13.682294 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/a3e38444-7907-4d48-bc07-b6b7dc4854a8-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"a3e38444-7907-4d48-bc07-b6b7dc4854a8\") " pod="openstack/tempest-tests-tempest" Nov 25 12:26:13 crc kubenswrapper[4706]: I1125 12:26:13.682358 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/a3e38444-7907-4d48-bc07-b6b7dc4854a8-ssh-key\") 
pod \"tempest-tests-tempest\" (UID: \"a3e38444-7907-4d48-bc07-b6b7dc4854a8\") " pod="openstack/tempest-tests-tempest" Nov 25 12:26:13 crc kubenswrapper[4706]: I1125 12:26:13.682373 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/a3e38444-7907-4d48-bc07-b6b7dc4854a8-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"a3e38444-7907-4d48-bc07-b6b7dc4854a8\") " pod="openstack/tempest-tests-tempest" Nov 25 12:26:13 crc kubenswrapper[4706]: I1125 12:26:13.682396 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"tempest-tests-tempest\" (UID: \"a3e38444-7907-4d48-bc07-b6b7dc4854a8\") " pod="openstack/tempest-tests-tempest" Nov 25 12:26:13 crc kubenswrapper[4706]: I1125 12:26:13.682439 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/a3e38444-7907-4d48-bc07-b6b7dc4854a8-config-data\") pod \"tempest-tests-tempest\" (UID: \"a3e38444-7907-4d48-bc07-b6b7dc4854a8\") " pod="openstack/tempest-tests-tempest" Nov 25 12:26:13 crc kubenswrapper[4706]: I1125 12:26:13.682468 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/a3e38444-7907-4d48-bc07-b6b7dc4854a8-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"a3e38444-7907-4d48-bc07-b6b7dc4854a8\") " pod="openstack/tempest-tests-tempest" Nov 25 12:26:13 crc kubenswrapper[4706]: I1125 12:26:13.682502 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/a3e38444-7907-4d48-bc07-b6b7dc4854a8-test-operator-ephemeral-temporary\") 
pod \"tempest-tests-tempest\" (UID: \"a3e38444-7907-4d48-bc07-b6b7dc4854a8\") " pod="openstack/tempest-tests-tempest" Nov 25 12:26:13 crc kubenswrapper[4706]: I1125 12:26:13.682532 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mbxr8\" (UniqueName: \"kubernetes.io/projected/a3e38444-7907-4d48-bc07-b6b7dc4854a8-kube-api-access-mbxr8\") pod \"tempest-tests-tempest\" (UID: \"a3e38444-7907-4d48-bc07-b6b7dc4854a8\") " pod="openstack/tempest-tests-tempest" Nov 25 12:26:13 crc kubenswrapper[4706]: I1125 12:26:13.784290 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/a3e38444-7907-4d48-bc07-b6b7dc4854a8-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"a3e38444-7907-4d48-bc07-b6b7dc4854a8\") " pod="openstack/tempest-tests-tempest" Nov 25 12:26:13 crc kubenswrapper[4706]: I1125 12:26:13.784371 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/a3e38444-7907-4d48-bc07-b6b7dc4854a8-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"a3e38444-7907-4d48-bc07-b6b7dc4854a8\") " pod="openstack/tempest-tests-tempest" Nov 25 12:26:13 crc kubenswrapper[4706]: I1125 12:26:13.784403 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"tempest-tests-tempest\" (UID: \"a3e38444-7907-4d48-bc07-b6b7dc4854a8\") " pod="openstack/tempest-tests-tempest" Nov 25 12:26:13 crc kubenswrapper[4706]: I1125 12:26:13.784449 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/a3e38444-7907-4d48-bc07-b6b7dc4854a8-config-data\") pod \"tempest-tests-tempest\" (UID: \"a3e38444-7907-4d48-bc07-b6b7dc4854a8\") " pod="openstack/tempest-tests-tempest" Nov 
25 12:26:13 crc kubenswrapper[4706]: I1125 12:26:13.784482 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/a3e38444-7907-4d48-bc07-b6b7dc4854a8-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"a3e38444-7907-4d48-bc07-b6b7dc4854a8\") " pod="openstack/tempest-tests-tempest" Nov 25 12:26:13 crc kubenswrapper[4706]: I1125 12:26:13.784517 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/a3e38444-7907-4d48-bc07-b6b7dc4854a8-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"a3e38444-7907-4d48-bc07-b6b7dc4854a8\") " pod="openstack/tempest-tests-tempest" Nov 25 12:26:13 crc kubenswrapper[4706]: I1125 12:26:13.784544 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mbxr8\" (UniqueName: \"kubernetes.io/projected/a3e38444-7907-4d48-bc07-b6b7dc4854a8-kube-api-access-mbxr8\") pod \"tempest-tests-tempest\" (UID: \"a3e38444-7907-4d48-bc07-b6b7dc4854a8\") " pod="openstack/tempest-tests-tempest" Nov 25 12:26:13 crc kubenswrapper[4706]: I1125 12:26:13.784589 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/a3e38444-7907-4d48-bc07-b6b7dc4854a8-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"a3e38444-7907-4d48-bc07-b6b7dc4854a8\") " pod="openstack/tempest-tests-tempest" Nov 25 12:26:13 crc kubenswrapper[4706]: I1125 12:26:13.784616 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/a3e38444-7907-4d48-bc07-b6b7dc4854a8-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"a3e38444-7907-4d48-bc07-b6b7dc4854a8\") " pod="openstack/tempest-tests-tempest" Nov 25 12:26:13 crc kubenswrapper[4706]: I1125 
12:26:13.785472 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/a3e38444-7907-4d48-bc07-b6b7dc4854a8-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"a3e38444-7907-4d48-bc07-b6b7dc4854a8\") " pod="openstack/tempest-tests-tempest" Nov 25 12:26:13 crc kubenswrapper[4706]: I1125 12:26:13.785493 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/a3e38444-7907-4d48-bc07-b6b7dc4854a8-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"a3e38444-7907-4d48-bc07-b6b7dc4854a8\") " pod="openstack/tempest-tests-tempest" Nov 25 12:26:13 crc kubenswrapper[4706]: I1125 12:26:13.786119 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/a3e38444-7907-4d48-bc07-b6b7dc4854a8-config-data\") pod \"tempest-tests-tempest\" (UID: \"a3e38444-7907-4d48-bc07-b6b7dc4854a8\") " pod="openstack/tempest-tests-tempest" Nov 25 12:26:13 crc kubenswrapper[4706]: I1125 12:26:13.786291 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/a3e38444-7907-4d48-bc07-b6b7dc4854a8-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"a3e38444-7907-4d48-bc07-b6b7dc4854a8\") " pod="openstack/tempest-tests-tempest" Nov 25 12:26:13 crc kubenswrapper[4706]: I1125 12:26:13.786654 4706 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"tempest-tests-tempest\" (UID: \"a3e38444-7907-4d48-bc07-b6b7dc4854a8\") device mount path \"/mnt/openstack/pv04\"" pod="openstack/tempest-tests-tempest" Nov 25 12:26:13 crc kubenswrapper[4706]: I1125 12:26:13.791022 4706 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/a3e38444-7907-4d48-bc07-b6b7dc4854a8-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"a3e38444-7907-4d48-bc07-b6b7dc4854a8\") " pod="openstack/tempest-tests-tempest" Nov 25 12:26:13 crc kubenswrapper[4706]: I1125 12:26:13.791240 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/a3e38444-7907-4d48-bc07-b6b7dc4854a8-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"a3e38444-7907-4d48-bc07-b6b7dc4854a8\") " pod="openstack/tempest-tests-tempest" Nov 25 12:26:13 crc kubenswrapper[4706]: I1125 12:26:13.800892 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/a3e38444-7907-4d48-bc07-b6b7dc4854a8-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"a3e38444-7907-4d48-bc07-b6b7dc4854a8\") " pod="openstack/tempest-tests-tempest" Nov 25 12:26:13 crc kubenswrapper[4706]: I1125 12:26:13.802053 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mbxr8\" (UniqueName: \"kubernetes.io/projected/a3e38444-7907-4d48-bc07-b6b7dc4854a8-kube-api-access-mbxr8\") pod \"tempest-tests-tempest\" (UID: \"a3e38444-7907-4d48-bc07-b6b7dc4854a8\") " pod="openstack/tempest-tests-tempest" Nov 25 12:26:13 crc kubenswrapper[4706]: I1125 12:26:13.814575 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"tempest-tests-tempest\" (UID: \"a3e38444-7907-4d48-bc07-b6b7dc4854a8\") " pod="openstack/tempest-tests-tempest" Nov 25 12:26:13 crc kubenswrapper[4706]: I1125 12:26:13.904138 4706 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/tempest-tests-tempest" Nov 25 12:26:14 crc kubenswrapper[4706]: I1125 12:26:14.375434 4706 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tempest-tests-tempest"] Nov 25 12:26:14 crc kubenswrapper[4706]: I1125 12:26:14.386194 4706 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 25 12:26:14 crc kubenswrapper[4706]: I1125 12:26:14.778702 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"a3e38444-7907-4d48-bc07-b6b7dc4854a8","Type":"ContainerStarted","Data":"304e049e15451afb6e4e76e9ee3fb232009c9bc52de57c9ce026badf7b3ad4b0"} Nov 25 12:26:31 crc kubenswrapper[4706]: I1125 12:26:31.125250 4706 patch_prober.go:28] interesting pod/machine-config-daemon-dhfpm container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 25 12:26:31 crc kubenswrapper[4706]: I1125 12:26:31.126041 4706 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dhfpm" podUID="0930887a-320c-4506-8c9c-f94d6d64516a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 25 12:26:31 crc kubenswrapper[4706]: I1125 12:26:31.126094 4706 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-dhfpm" Nov 25 12:26:31 crc kubenswrapper[4706]: I1125 12:26:31.126799 4706 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"d3fc72500aae4cf4d62aeac19c69abc79c3346f9d07b751d825f4be172d122de"} pod="openshift-machine-config-operator/machine-config-daemon-dhfpm" 
containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 25 12:26:31 crc kubenswrapper[4706]: I1125 12:26:31.126866 4706 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-dhfpm" podUID="0930887a-320c-4506-8c9c-f94d6d64516a" containerName="machine-config-daemon" containerID="cri-o://d3fc72500aae4cf4d62aeac19c69abc79c3346f9d07b751d825f4be172d122de" gracePeriod=600 Nov 25 12:26:32 crc kubenswrapper[4706]: I1125 12:26:32.947480 4706 generic.go:334] "Generic (PLEG): container finished" podID="0930887a-320c-4506-8c9c-f94d6d64516a" containerID="d3fc72500aae4cf4d62aeac19c69abc79c3346f9d07b751d825f4be172d122de" exitCode=0 Nov 25 12:26:32 crc kubenswrapper[4706]: I1125 12:26:32.947967 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-dhfpm" event={"ID":"0930887a-320c-4506-8c9c-f94d6d64516a","Type":"ContainerDied","Data":"d3fc72500aae4cf4d62aeac19c69abc79c3346f9d07b751d825f4be172d122de"} Nov 25 12:26:32 crc kubenswrapper[4706]: I1125 12:26:32.948002 4706 scope.go:117] "RemoveContainer" containerID="b5c4a9b732ca8a1700914c594210046762bf19e4ab2732427a28f41c5179d529" Nov 25 12:26:44 crc kubenswrapper[4706]: E1125 12:26:44.152731 4706 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dhfpm_openshift-machine-config-operator(0930887a-320c-4506-8c9c-f94d6d64516a)\"" pod="openshift-machine-config-operator/machine-config-daemon-dhfpm" podUID="0930887a-320c-4506-8c9c-f94d6d64516a" Nov 25 12:26:45 crc kubenswrapper[4706]: I1125 12:26:45.066031 4706 scope.go:117] "RemoveContainer" containerID="d3fc72500aae4cf4d62aeac19c69abc79c3346f9d07b751d825f4be172d122de" Nov 25 12:26:45 crc kubenswrapper[4706]: E1125 12:26:45.066376 4706 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dhfpm_openshift-machine-config-operator(0930887a-320c-4506-8c9c-f94d6d64516a)\"" pod="openshift-machine-config-operator/machine-config-daemon-dhfpm" podUID="0930887a-320c-4506-8c9c-f94d6d64516a" Nov 25 12:26:45 crc kubenswrapper[4706]: E1125 12:26:45.190108 4706 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified" Nov 25 12:26:45 crc kubenswrapper[4706]: E1125 12:26:45.191236 4706 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:tempest-tests-tempest-tests-runner,Image:quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:test-operator-ephemeral-workdir,ReadOnly:false,MountPath:/var/lib/tempest,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-ephemeral-temporary,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:false,MountPath:/etc/test_operator,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-logs,ReadOnly:false,MountPath:/var/lib/tempest/external_files,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config,ReadOnly:true,MountPath:/etc/openstack/clouds.yaml,SubPath:clouds.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config,ReadOnly:true,MountPath:/var/lib/tempest
/.config/openstack/clouds.yaml,SubPath:clouds.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config-secret,ReadOnly:false,MountPath:/etc/openstack/secure.yaml,SubPath:secure.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ca-certs,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ssh-key,ReadOnly:false,MountPath:/var/lib/tempest/id_ecdsa,SubPath:ssh_key,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-mbxr8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42480,RunAsNonRoot:*false,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:*true,RunAsGroup:*42480,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{EnvFromSource{Prefix:,ConfigMapRef:&ConfigMapEnvSource{LocalObjectReference:LocalObjectReference{Name:tempest-tests-tempest-custom-data-s0,},Optional:nil,},SecretRef:nil,},EnvFromSource{Prefix:,ConfigMapRef:&ConfigMapEnvSource{LocalObjectReference:LocalObjectReference{Name:tempest-tests-tempest-env-vars-s0,},Optional:nil,},SecretRef:nil,},},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod tempest-tests-tempest_openstack(a3e38444-7907-4d48-bc07-b6b7dc4854a8): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" 
logger="UnhandledError" Nov 25 12:26:45 crc kubenswrapper[4706]: E1125 12:26:45.192616 4706 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tempest-tests-tempest-tests-runner\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/tempest-tests-tempest" podUID="a3e38444-7907-4d48-bc07-b6b7dc4854a8" Nov 25 12:26:46 crc kubenswrapper[4706]: E1125 12:26:46.091601 4706 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tempest-tests-tempest-tests-runner\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified\\\"\"" pod="openstack/tempest-tests-tempest" podUID="a3e38444-7907-4d48-bc07-b6b7dc4854a8" Nov 25 12:26:58 crc kubenswrapper[4706]: I1125 12:26:58.402118 4706 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-env-vars-s0" Nov 25 12:26:59 crc kubenswrapper[4706]: I1125 12:26:59.922674 4706 scope.go:117] "RemoveContainer" containerID="d3fc72500aae4cf4d62aeac19c69abc79c3346f9d07b751d825f4be172d122de" Nov 25 12:26:59 crc kubenswrapper[4706]: E1125 12:26:59.923901 4706 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dhfpm_openshift-machine-config-operator(0930887a-320c-4506-8c9c-f94d6d64516a)\"" pod="openshift-machine-config-operator/machine-config-daemon-dhfpm" podUID="0930887a-320c-4506-8c9c-f94d6d64516a" Nov 25 12:27:00 crc kubenswrapper[4706]: I1125 12:27:00.222504 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"a3e38444-7907-4d48-bc07-b6b7dc4854a8","Type":"ContainerStarted","Data":"9116ffc360d20280abeb440476cbe11e03ad085af75254bc3df275ddd601ea7f"} Nov 25 12:27:00 crc 
kubenswrapper[4706]: I1125 12:27:00.249919 4706 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/tempest-tests-tempest" podStartSLOduration=4.237731141 podStartE2EDuration="48.24989614s" podCreationTimestamp="2025-11-25 12:26:12 +0000 UTC" firstStartedPulling="2025-11-25 12:26:14.385989741 +0000 UTC m=+2983.300547122" lastFinishedPulling="2025-11-25 12:26:58.39815474 +0000 UTC m=+3027.312712121" observedRunningTime="2025-11-25 12:27:00.241927749 +0000 UTC m=+3029.156485130" watchObservedRunningTime="2025-11-25 12:27:00.24989614 +0000 UTC m=+3029.164453521" Nov 25 12:27:10 crc kubenswrapper[4706]: I1125 12:27:10.923500 4706 scope.go:117] "RemoveContainer" containerID="d3fc72500aae4cf4d62aeac19c69abc79c3346f9d07b751d825f4be172d122de" Nov 25 12:27:10 crc kubenswrapper[4706]: E1125 12:27:10.924693 4706 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dhfpm_openshift-machine-config-operator(0930887a-320c-4506-8c9c-f94d6d64516a)\"" pod="openshift-machine-config-operator/machine-config-daemon-dhfpm" podUID="0930887a-320c-4506-8c9c-f94d6d64516a" Nov 25 12:27:17 crc kubenswrapper[4706]: I1125 12:27:17.164327 4706 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-l5t2q"] Nov 25 12:27:17 crc kubenswrapper[4706]: I1125 12:27:17.166815 4706 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-l5t2q" Nov 25 12:27:17 crc kubenswrapper[4706]: I1125 12:27:17.178166 4706 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-l5t2q"] Nov 25 12:27:17 crc kubenswrapper[4706]: I1125 12:27:17.338034 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/46102c54-6972-4ac9-88da-4d33fdba2e91-catalog-content\") pod \"redhat-operators-l5t2q\" (UID: \"46102c54-6972-4ac9-88da-4d33fdba2e91\") " pod="openshift-marketplace/redhat-operators-l5t2q" Nov 25 12:27:17 crc kubenswrapper[4706]: I1125 12:27:17.338094 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/46102c54-6972-4ac9-88da-4d33fdba2e91-utilities\") pod \"redhat-operators-l5t2q\" (UID: \"46102c54-6972-4ac9-88da-4d33fdba2e91\") " pod="openshift-marketplace/redhat-operators-l5t2q" Nov 25 12:27:17 crc kubenswrapper[4706]: I1125 12:27:17.338412 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c8hr5\" (UniqueName: \"kubernetes.io/projected/46102c54-6972-4ac9-88da-4d33fdba2e91-kube-api-access-c8hr5\") pod \"redhat-operators-l5t2q\" (UID: \"46102c54-6972-4ac9-88da-4d33fdba2e91\") " pod="openshift-marketplace/redhat-operators-l5t2q" Nov 25 12:27:17 crc kubenswrapper[4706]: I1125 12:27:17.440763 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c8hr5\" (UniqueName: \"kubernetes.io/projected/46102c54-6972-4ac9-88da-4d33fdba2e91-kube-api-access-c8hr5\") pod \"redhat-operators-l5t2q\" (UID: \"46102c54-6972-4ac9-88da-4d33fdba2e91\") " pod="openshift-marketplace/redhat-operators-l5t2q" Nov 25 12:27:17 crc kubenswrapper[4706]: I1125 12:27:17.440998 4706 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/46102c54-6972-4ac9-88da-4d33fdba2e91-catalog-content\") pod \"redhat-operators-l5t2q\" (UID: \"46102c54-6972-4ac9-88da-4d33fdba2e91\") " pod="openshift-marketplace/redhat-operators-l5t2q" Nov 25 12:27:17 crc kubenswrapper[4706]: I1125 12:27:17.441063 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/46102c54-6972-4ac9-88da-4d33fdba2e91-utilities\") pod \"redhat-operators-l5t2q\" (UID: \"46102c54-6972-4ac9-88da-4d33fdba2e91\") " pod="openshift-marketplace/redhat-operators-l5t2q" Nov 25 12:27:17 crc kubenswrapper[4706]: I1125 12:27:17.441586 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/46102c54-6972-4ac9-88da-4d33fdba2e91-catalog-content\") pod \"redhat-operators-l5t2q\" (UID: \"46102c54-6972-4ac9-88da-4d33fdba2e91\") " pod="openshift-marketplace/redhat-operators-l5t2q" Nov 25 12:27:17 crc kubenswrapper[4706]: I1125 12:27:17.441613 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/46102c54-6972-4ac9-88da-4d33fdba2e91-utilities\") pod \"redhat-operators-l5t2q\" (UID: \"46102c54-6972-4ac9-88da-4d33fdba2e91\") " pod="openshift-marketplace/redhat-operators-l5t2q" Nov 25 12:27:17 crc kubenswrapper[4706]: I1125 12:27:17.466012 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c8hr5\" (UniqueName: \"kubernetes.io/projected/46102c54-6972-4ac9-88da-4d33fdba2e91-kube-api-access-c8hr5\") pod \"redhat-operators-l5t2q\" (UID: \"46102c54-6972-4ac9-88da-4d33fdba2e91\") " pod="openshift-marketplace/redhat-operators-l5t2q" Nov 25 12:27:17 crc kubenswrapper[4706]: I1125 12:27:17.484028 4706 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-l5t2q" Nov 25 12:27:18 crc kubenswrapper[4706]: I1125 12:27:18.056939 4706 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-l5t2q"] Nov 25 12:27:18 crc kubenswrapper[4706]: I1125 12:27:18.388537 4706 generic.go:334] "Generic (PLEG): container finished" podID="46102c54-6972-4ac9-88da-4d33fdba2e91" containerID="701ec90ac8c9affdf394bf962a79afdc5efb3b8e902957320d6c7a3b183f8c08" exitCode=0 Nov 25 12:27:18 crc kubenswrapper[4706]: I1125 12:27:18.388621 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-l5t2q" event={"ID":"46102c54-6972-4ac9-88da-4d33fdba2e91","Type":"ContainerDied","Data":"701ec90ac8c9affdf394bf962a79afdc5efb3b8e902957320d6c7a3b183f8c08"} Nov 25 12:27:18 crc kubenswrapper[4706]: I1125 12:27:18.388935 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-l5t2q" event={"ID":"46102c54-6972-4ac9-88da-4d33fdba2e91","Type":"ContainerStarted","Data":"59bf8372bae1a25b7e3ca0564b3d125a5265938f2dd1b69f679d90e3ebaf582e"} Nov 25 12:27:19 crc kubenswrapper[4706]: I1125 12:27:19.408610 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-l5t2q" event={"ID":"46102c54-6972-4ac9-88da-4d33fdba2e91","Type":"ContainerStarted","Data":"52b31b8938614a39752db7f65448c7e275ca0d20d912b260ace00749b6d6c371"} Nov 25 12:27:21 crc kubenswrapper[4706]: I1125 12:27:21.930936 4706 scope.go:117] "RemoveContainer" containerID="d3fc72500aae4cf4d62aeac19c69abc79c3346f9d07b751d825f4be172d122de" Nov 25 12:27:21 crc kubenswrapper[4706]: E1125 12:27:21.934661 4706 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-dhfpm_openshift-machine-config-operator(0930887a-320c-4506-8c9c-f94d6d64516a)\"" pod="openshift-machine-config-operator/machine-config-daemon-dhfpm" podUID="0930887a-320c-4506-8c9c-f94d6d64516a" Nov 25 12:27:24 crc kubenswrapper[4706]: I1125 12:27:24.455577 4706 generic.go:334] "Generic (PLEG): container finished" podID="46102c54-6972-4ac9-88da-4d33fdba2e91" containerID="52b31b8938614a39752db7f65448c7e275ca0d20d912b260ace00749b6d6c371" exitCode=0 Nov 25 12:27:24 crc kubenswrapper[4706]: I1125 12:27:24.455665 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-l5t2q" event={"ID":"46102c54-6972-4ac9-88da-4d33fdba2e91","Type":"ContainerDied","Data":"52b31b8938614a39752db7f65448c7e275ca0d20d912b260ace00749b6d6c371"} Nov 25 12:27:25 crc kubenswrapper[4706]: I1125 12:27:25.468049 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-l5t2q" event={"ID":"46102c54-6972-4ac9-88da-4d33fdba2e91","Type":"ContainerStarted","Data":"6915d278b6ea6ba07d0fe3baab980e1122681352eb6142ad2a8f0a409de4a10a"} Nov 25 12:27:25 crc kubenswrapper[4706]: I1125 12:27:25.493744 4706 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-l5t2q" podStartSLOduration=2.037137569 podStartE2EDuration="8.493719287s" podCreationTimestamp="2025-11-25 12:27:17 +0000 UTC" firstStartedPulling="2025-11-25 12:27:18.390475218 +0000 UTC m=+3047.305032609" lastFinishedPulling="2025-11-25 12:27:24.847056946 +0000 UTC m=+3053.761614327" observedRunningTime="2025-11-25 12:27:25.484896505 +0000 UTC m=+3054.399453896" watchObservedRunningTime="2025-11-25 12:27:25.493719287 +0000 UTC m=+3054.408276668" Nov 25 12:27:27 crc kubenswrapper[4706]: I1125 12:27:27.486172 4706 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-l5t2q" Nov 25 12:27:27 crc kubenswrapper[4706]: I1125 12:27:27.486796 4706 
kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-l5t2q" Nov 25 12:27:28 crc kubenswrapper[4706]: I1125 12:27:28.548540 4706 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-l5t2q" podUID="46102c54-6972-4ac9-88da-4d33fdba2e91" containerName="registry-server" probeResult="failure" output=< Nov 25 12:27:28 crc kubenswrapper[4706]: timeout: failed to connect service ":50051" within 1s Nov 25 12:27:28 crc kubenswrapper[4706]: > Nov 25 12:27:33 crc kubenswrapper[4706]: I1125 12:27:33.922850 4706 scope.go:117] "RemoveContainer" containerID="d3fc72500aae4cf4d62aeac19c69abc79c3346f9d07b751d825f4be172d122de" Nov 25 12:27:33 crc kubenswrapper[4706]: E1125 12:27:33.923915 4706 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dhfpm_openshift-machine-config-operator(0930887a-320c-4506-8c9c-f94d6d64516a)\"" pod="openshift-machine-config-operator/machine-config-daemon-dhfpm" podUID="0930887a-320c-4506-8c9c-f94d6d64516a" Nov 25 12:27:37 crc kubenswrapper[4706]: I1125 12:27:37.539053 4706 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-l5t2q" Nov 25 12:27:37 crc kubenswrapper[4706]: I1125 12:27:37.594778 4706 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-l5t2q" Nov 25 12:27:37 crc kubenswrapper[4706]: I1125 12:27:37.790024 4706 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-l5t2q"] Nov 25 12:27:38 crc kubenswrapper[4706]: I1125 12:27:38.597347 4706 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-l5t2q" podUID="46102c54-6972-4ac9-88da-4d33fdba2e91" 
containerName="registry-server" containerID="cri-o://6915d278b6ea6ba07d0fe3baab980e1122681352eb6142ad2a8f0a409de4a10a" gracePeriod=2 Nov 25 12:27:39 crc kubenswrapper[4706]: I1125 12:27:39.183627 4706 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-l5t2q" Nov 25 12:27:39 crc kubenswrapper[4706]: I1125 12:27:39.252260 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/46102c54-6972-4ac9-88da-4d33fdba2e91-utilities\") pod \"46102c54-6972-4ac9-88da-4d33fdba2e91\" (UID: \"46102c54-6972-4ac9-88da-4d33fdba2e91\") " Nov 25 12:27:39 crc kubenswrapper[4706]: I1125 12:27:39.252407 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/46102c54-6972-4ac9-88da-4d33fdba2e91-catalog-content\") pod \"46102c54-6972-4ac9-88da-4d33fdba2e91\" (UID: \"46102c54-6972-4ac9-88da-4d33fdba2e91\") " Nov 25 12:27:39 crc kubenswrapper[4706]: I1125 12:27:39.252657 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c8hr5\" (UniqueName: \"kubernetes.io/projected/46102c54-6972-4ac9-88da-4d33fdba2e91-kube-api-access-c8hr5\") pod \"46102c54-6972-4ac9-88da-4d33fdba2e91\" (UID: \"46102c54-6972-4ac9-88da-4d33fdba2e91\") " Nov 25 12:27:39 crc kubenswrapper[4706]: I1125 12:27:39.254915 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/46102c54-6972-4ac9-88da-4d33fdba2e91-utilities" (OuterVolumeSpecName: "utilities") pod "46102c54-6972-4ac9-88da-4d33fdba2e91" (UID: "46102c54-6972-4ac9-88da-4d33fdba2e91"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 12:27:39 crc kubenswrapper[4706]: I1125 12:27:39.259312 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/46102c54-6972-4ac9-88da-4d33fdba2e91-kube-api-access-c8hr5" (OuterVolumeSpecName: "kube-api-access-c8hr5") pod "46102c54-6972-4ac9-88da-4d33fdba2e91" (UID: "46102c54-6972-4ac9-88da-4d33fdba2e91"). InnerVolumeSpecName "kube-api-access-c8hr5". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 12:27:39 crc kubenswrapper[4706]: I1125 12:27:39.347024 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/46102c54-6972-4ac9-88da-4d33fdba2e91-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "46102c54-6972-4ac9-88da-4d33fdba2e91" (UID: "46102c54-6972-4ac9-88da-4d33fdba2e91"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 12:27:39 crc kubenswrapper[4706]: I1125 12:27:39.354496 4706 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c8hr5\" (UniqueName: \"kubernetes.io/projected/46102c54-6972-4ac9-88da-4d33fdba2e91-kube-api-access-c8hr5\") on node \"crc\" DevicePath \"\"" Nov 25 12:27:39 crc kubenswrapper[4706]: I1125 12:27:39.354521 4706 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/46102c54-6972-4ac9-88da-4d33fdba2e91-utilities\") on node \"crc\" DevicePath \"\"" Nov 25 12:27:39 crc kubenswrapper[4706]: I1125 12:27:39.354531 4706 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/46102c54-6972-4ac9-88da-4d33fdba2e91-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 25 12:27:39 crc kubenswrapper[4706]: I1125 12:27:39.607822 4706 generic.go:334] "Generic (PLEG): container finished" podID="46102c54-6972-4ac9-88da-4d33fdba2e91" 
containerID="6915d278b6ea6ba07d0fe3baab980e1122681352eb6142ad2a8f0a409de4a10a" exitCode=0 Nov 25 12:27:39 crc kubenswrapper[4706]: I1125 12:27:39.607863 4706 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-l5t2q" Nov 25 12:27:39 crc kubenswrapper[4706]: I1125 12:27:39.607896 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-l5t2q" event={"ID":"46102c54-6972-4ac9-88da-4d33fdba2e91","Type":"ContainerDied","Data":"6915d278b6ea6ba07d0fe3baab980e1122681352eb6142ad2a8f0a409de4a10a"} Nov 25 12:27:39 crc kubenswrapper[4706]: I1125 12:27:39.607979 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-l5t2q" event={"ID":"46102c54-6972-4ac9-88da-4d33fdba2e91","Type":"ContainerDied","Data":"59bf8372bae1a25b7e3ca0564b3d125a5265938f2dd1b69f679d90e3ebaf582e"} Nov 25 12:27:39 crc kubenswrapper[4706]: I1125 12:27:39.608002 4706 scope.go:117] "RemoveContainer" containerID="6915d278b6ea6ba07d0fe3baab980e1122681352eb6142ad2a8f0a409de4a10a" Nov 25 12:27:39 crc kubenswrapper[4706]: I1125 12:27:39.638345 4706 scope.go:117] "RemoveContainer" containerID="52b31b8938614a39752db7f65448c7e275ca0d20d912b260ace00749b6d6c371" Nov 25 12:27:39 crc kubenswrapper[4706]: I1125 12:27:39.648765 4706 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-l5t2q"] Nov 25 12:27:39 crc kubenswrapper[4706]: I1125 12:27:39.665396 4706 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-l5t2q"] Nov 25 12:27:39 crc kubenswrapper[4706]: I1125 12:27:39.683185 4706 scope.go:117] "RemoveContainer" containerID="701ec90ac8c9affdf394bf962a79afdc5efb3b8e902957320d6c7a3b183f8c08" Nov 25 12:27:39 crc kubenswrapper[4706]: I1125 12:27:39.704537 4706 scope.go:117] "RemoveContainer" containerID="6915d278b6ea6ba07d0fe3baab980e1122681352eb6142ad2a8f0a409de4a10a" Nov 25 12:27:39 crc 
kubenswrapper[4706]: E1125 12:27:39.705085 4706 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6915d278b6ea6ba07d0fe3baab980e1122681352eb6142ad2a8f0a409de4a10a\": container with ID starting with 6915d278b6ea6ba07d0fe3baab980e1122681352eb6142ad2a8f0a409de4a10a not found: ID does not exist" containerID="6915d278b6ea6ba07d0fe3baab980e1122681352eb6142ad2a8f0a409de4a10a" Nov 25 12:27:39 crc kubenswrapper[4706]: I1125 12:27:39.705138 4706 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6915d278b6ea6ba07d0fe3baab980e1122681352eb6142ad2a8f0a409de4a10a"} err="failed to get container status \"6915d278b6ea6ba07d0fe3baab980e1122681352eb6142ad2a8f0a409de4a10a\": rpc error: code = NotFound desc = could not find container \"6915d278b6ea6ba07d0fe3baab980e1122681352eb6142ad2a8f0a409de4a10a\": container with ID starting with 6915d278b6ea6ba07d0fe3baab980e1122681352eb6142ad2a8f0a409de4a10a not found: ID does not exist" Nov 25 12:27:39 crc kubenswrapper[4706]: I1125 12:27:39.705170 4706 scope.go:117] "RemoveContainer" containerID="52b31b8938614a39752db7f65448c7e275ca0d20d912b260ace00749b6d6c371" Nov 25 12:27:39 crc kubenswrapper[4706]: E1125 12:27:39.705557 4706 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"52b31b8938614a39752db7f65448c7e275ca0d20d912b260ace00749b6d6c371\": container with ID starting with 52b31b8938614a39752db7f65448c7e275ca0d20d912b260ace00749b6d6c371 not found: ID does not exist" containerID="52b31b8938614a39752db7f65448c7e275ca0d20d912b260ace00749b6d6c371" Nov 25 12:27:39 crc kubenswrapper[4706]: I1125 12:27:39.705593 4706 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"52b31b8938614a39752db7f65448c7e275ca0d20d912b260ace00749b6d6c371"} err="failed to get container status 
\"52b31b8938614a39752db7f65448c7e275ca0d20d912b260ace00749b6d6c371\": rpc error: code = NotFound desc = could not find container \"52b31b8938614a39752db7f65448c7e275ca0d20d912b260ace00749b6d6c371\": container with ID starting with 52b31b8938614a39752db7f65448c7e275ca0d20d912b260ace00749b6d6c371 not found: ID does not exist" Nov 25 12:27:39 crc kubenswrapper[4706]: I1125 12:27:39.705610 4706 scope.go:117] "RemoveContainer" containerID="701ec90ac8c9affdf394bf962a79afdc5efb3b8e902957320d6c7a3b183f8c08" Nov 25 12:27:39 crc kubenswrapper[4706]: E1125 12:27:39.705920 4706 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"701ec90ac8c9affdf394bf962a79afdc5efb3b8e902957320d6c7a3b183f8c08\": container with ID starting with 701ec90ac8c9affdf394bf962a79afdc5efb3b8e902957320d6c7a3b183f8c08 not found: ID does not exist" containerID="701ec90ac8c9affdf394bf962a79afdc5efb3b8e902957320d6c7a3b183f8c08" Nov 25 12:27:39 crc kubenswrapper[4706]: I1125 12:27:39.705957 4706 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"701ec90ac8c9affdf394bf962a79afdc5efb3b8e902957320d6c7a3b183f8c08"} err="failed to get container status \"701ec90ac8c9affdf394bf962a79afdc5efb3b8e902957320d6c7a3b183f8c08\": rpc error: code = NotFound desc = could not find container \"701ec90ac8c9affdf394bf962a79afdc5efb3b8e902957320d6c7a3b183f8c08\": container with ID starting with 701ec90ac8c9affdf394bf962a79afdc5efb3b8e902957320d6c7a3b183f8c08 not found: ID does not exist" Nov 25 12:27:39 crc kubenswrapper[4706]: I1125 12:27:39.936990 4706 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="46102c54-6972-4ac9-88da-4d33fdba2e91" path="/var/lib/kubelet/pods/46102c54-6972-4ac9-88da-4d33fdba2e91/volumes" Nov 25 12:27:47 crc kubenswrapper[4706]: I1125 12:27:47.922429 4706 scope.go:117] "RemoveContainer" containerID="d3fc72500aae4cf4d62aeac19c69abc79c3346f9d07b751d825f4be172d122de" Nov 25 
12:27:47 crc kubenswrapper[4706]: E1125 12:27:47.923401 4706 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dhfpm_openshift-machine-config-operator(0930887a-320c-4506-8c9c-f94d6d64516a)\"" pod="openshift-machine-config-operator/machine-config-daemon-dhfpm" podUID="0930887a-320c-4506-8c9c-f94d6d64516a" Nov 25 12:28:02 crc kubenswrapper[4706]: I1125 12:28:02.922859 4706 scope.go:117] "RemoveContainer" containerID="d3fc72500aae4cf4d62aeac19c69abc79c3346f9d07b751d825f4be172d122de" Nov 25 12:28:02 crc kubenswrapper[4706]: E1125 12:28:02.923798 4706 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dhfpm_openshift-machine-config-operator(0930887a-320c-4506-8c9c-f94d6d64516a)\"" pod="openshift-machine-config-operator/machine-config-daemon-dhfpm" podUID="0930887a-320c-4506-8c9c-f94d6d64516a" Nov 25 12:28:14 crc kubenswrapper[4706]: I1125 12:28:14.922536 4706 scope.go:117] "RemoveContainer" containerID="d3fc72500aae4cf4d62aeac19c69abc79c3346f9d07b751d825f4be172d122de" Nov 25 12:28:14 crc kubenswrapper[4706]: E1125 12:28:14.923582 4706 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dhfpm_openshift-machine-config-operator(0930887a-320c-4506-8c9c-f94d6d64516a)\"" pod="openshift-machine-config-operator/machine-config-daemon-dhfpm" podUID="0930887a-320c-4506-8c9c-f94d6d64516a" Nov 25 12:28:27 crc kubenswrapper[4706]: I1125 12:28:27.922283 4706 scope.go:117] "RemoveContainer" 
containerID="d3fc72500aae4cf4d62aeac19c69abc79c3346f9d07b751d825f4be172d122de" Nov 25 12:28:27 crc kubenswrapper[4706]: E1125 12:28:27.923468 4706 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dhfpm_openshift-machine-config-operator(0930887a-320c-4506-8c9c-f94d6d64516a)\"" pod="openshift-machine-config-operator/machine-config-daemon-dhfpm" podUID="0930887a-320c-4506-8c9c-f94d6d64516a" Nov 25 12:28:42 crc kubenswrapper[4706]: I1125 12:28:42.922327 4706 scope.go:117] "RemoveContainer" containerID="d3fc72500aae4cf4d62aeac19c69abc79c3346f9d07b751d825f4be172d122de" Nov 25 12:28:42 crc kubenswrapper[4706]: E1125 12:28:42.923006 4706 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dhfpm_openshift-machine-config-operator(0930887a-320c-4506-8c9c-f94d6d64516a)\"" pod="openshift-machine-config-operator/machine-config-daemon-dhfpm" podUID="0930887a-320c-4506-8c9c-f94d6d64516a" Nov 25 12:28:54 crc kubenswrapper[4706]: I1125 12:28:54.922627 4706 scope.go:117] "RemoveContainer" containerID="d3fc72500aae4cf4d62aeac19c69abc79c3346f9d07b751d825f4be172d122de" Nov 25 12:28:54 crc kubenswrapper[4706]: E1125 12:28:54.923314 4706 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dhfpm_openshift-machine-config-operator(0930887a-320c-4506-8c9c-f94d6d64516a)\"" pod="openshift-machine-config-operator/machine-config-daemon-dhfpm" podUID="0930887a-320c-4506-8c9c-f94d6d64516a" Nov 25 12:29:05 crc kubenswrapper[4706]: I1125 12:29:05.922325 4706 scope.go:117] 
"RemoveContainer" containerID="d3fc72500aae4cf4d62aeac19c69abc79c3346f9d07b751d825f4be172d122de" Nov 25 12:29:05 crc kubenswrapper[4706]: E1125 12:29:05.923073 4706 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dhfpm_openshift-machine-config-operator(0930887a-320c-4506-8c9c-f94d6d64516a)\"" pod="openshift-machine-config-operator/machine-config-daemon-dhfpm" podUID="0930887a-320c-4506-8c9c-f94d6d64516a" Nov 25 12:29:18 crc kubenswrapper[4706]: I1125 12:29:18.922424 4706 scope.go:117] "RemoveContainer" containerID="d3fc72500aae4cf4d62aeac19c69abc79c3346f9d07b751d825f4be172d122de" Nov 25 12:29:18 crc kubenswrapper[4706]: E1125 12:29:18.923400 4706 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dhfpm_openshift-machine-config-operator(0930887a-320c-4506-8c9c-f94d6d64516a)\"" pod="openshift-machine-config-operator/machine-config-daemon-dhfpm" podUID="0930887a-320c-4506-8c9c-f94d6d64516a" Nov 25 12:29:32 crc kubenswrapper[4706]: I1125 12:29:32.923098 4706 scope.go:117] "RemoveContainer" containerID="d3fc72500aae4cf4d62aeac19c69abc79c3346f9d07b751d825f4be172d122de" Nov 25 12:29:32 crc kubenswrapper[4706]: E1125 12:29:32.924243 4706 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dhfpm_openshift-machine-config-operator(0930887a-320c-4506-8c9c-f94d6d64516a)\"" pod="openshift-machine-config-operator/machine-config-daemon-dhfpm" podUID="0930887a-320c-4506-8c9c-f94d6d64516a" Nov 25 12:29:47 crc kubenswrapper[4706]: I1125 12:29:47.922579 
4706 scope.go:117] "RemoveContainer" containerID="d3fc72500aae4cf4d62aeac19c69abc79c3346f9d07b751d825f4be172d122de" Nov 25 12:29:47 crc kubenswrapper[4706]: E1125 12:29:47.923228 4706 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dhfpm_openshift-machine-config-operator(0930887a-320c-4506-8c9c-f94d6d64516a)\"" pod="openshift-machine-config-operator/machine-config-daemon-dhfpm" podUID="0930887a-320c-4506-8c9c-f94d6d64516a" Nov 25 12:30:00 crc kubenswrapper[4706]: I1125 12:30:00.141056 4706 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29401230-hczjz"] Nov 25 12:30:00 crc kubenswrapper[4706]: E1125 12:30:00.142150 4706 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="46102c54-6972-4ac9-88da-4d33fdba2e91" containerName="extract-utilities" Nov 25 12:30:00 crc kubenswrapper[4706]: I1125 12:30:00.142260 4706 state_mem.go:107] "Deleted CPUSet assignment" podUID="46102c54-6972-4ac9-88da-4d33fdba2e91" containerName="extract-utilities" Nov 25 12:30:00 crc kubenswrapper[4706]: E1125 12:30:00.142286 4706 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="46102c54-6972-4ac9-88da-4d33fdba2e91" containerName="registry-server" Nov 25 12:30:00 crc kubenswrapper[4706]: I1125 12:30:00.142311 4706 state_mem.go:107] "Deleted CPUSet assignment" podUID="46102c54-6972-4ac9-88da-4d33fdba2e91" containerName="registry-server" Nov 25 12:30:00 crc kubenswrapper[4706]: E1125 12:30:00.142328 4706 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="46102c54-6972-4ac9-88da-4d33fdba2e91" containerName="extract-content" Nov 25 12:30:00 crc kubenswrapper[4706]: I1125 12:30:00.142336 4706 state_mem.go:107] "Deleted CPUSet assignment" podUID="46102c54-6972-4ac9-88da-4d33fdba2e91" containerName="extract-content" Nov 
25 12:30:00 crc kubenswrapper[4706]: I1125 12:30:00.142613 4706 memory_manager.go:354] "RemoveStaleState removing state" podUID="46102c54-6972-4ac9-88da-4d33fdba2e91" containerName="registry-server" Nov 25 12:30:00 crc kubenswrapper[4706]: I1125 12:30:00.143385 4706 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29401230-hczjz" Nov 25 12:30:00 crc kubenswrapper[4706]: I1125 12:30:00.146368 4706 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Nov 25 12:30:00 crc kubenswrapper[4706]: I1125 12:30:00.146484 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Nov 25 12:30:00 crc kubenswrapper[4706]: I1125 12:30:00.153810 4706 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29401230-hczjz"] Nov 25 12:30:00 crc kubenswrapper[4706]: I1125 12:30:00.277704 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/23a907bc-b1ab-4bb8-bd8a-9d812e70152d-secret-volume\") pod \"collect-profiles-29401230-hczjz\" (UID: \"23a907bc-b1ab-4bb8-bd8a-9d812e70152d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29401230-hczjz" Nov 25 12:30:00 crc kubenswrapper[4706]: I1125 12:30:00.278264 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/23a907bc-b1ab-4bb8-bd8a-9d812e70152d-config-volume\") pod \"collect-profiles-29401230-hczjz\" (UID: \"23a907bc-b1ab-4bb8-bd8a-9d812e70152d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29401230-hczjz" Nov 25 12:30:00 crc kubenswrapper[4706]: I1125 12:30:00.278512 4706 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m57g8\" (UniqueName: \"kubernetes.io/projected/23a907bc-b1ab-4bb8-bd8a-9d812e70152d-kube-api-access-m57g8\") pod \"collect-profiles-29401230-hczjz\" (UID: \"23a907bc-b1ab-4bb8-bd8a-9d812e70152d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29401230-hczjz" Nov 25 12:30:00 crc kubenswrapper[4706]: I1125 12:30:00.379864 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/23a907bc-b1ab-4bb8-bd8a-9d812e70152d-config-volume\") pod \"collect-profiles-29401230-hczjz\" (UID: \"23a907bc-b1ab-4bb8-bd8a-9d812e70152d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29401230-hczjz" Nov 25 12:30:00 crc kubenswrapper[4706]: I1125 12:30:00.380113 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m57g8\" (UniqueName: \"kubernetes.io/projected/23a907bc-b1ab-4bb8-bd8a-9d812e70152d-kube-api-access-m57g8\") pod \"collect-profiles-29401230-hczjz\" (UID: \"23a907bc-b1ab-4bb8-bd8a-9d812e70152d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29401230-hczjz" Nov 25 12:30:00 crc kubenswrapper[4706]: I1125 12:30:00.380197 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/23a907bc-b1ab-4bb8-bd8a-9d812e70152d-secret-volume\") pod \"collect-profiles-29401230-hczjz\" (UID: \"23a907bc-b1ab-4bb8-bd8a-9d812e70152d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29401230-hczjz" Nov 25 12:30:00 crc kubenswrapper[4706]: I1125 12:30:00.381002 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/23a907bc-b1ab-4bb8-bd8a-9d812e70152d-config-volume\") pod \"collect-profiles-29401230-hczjz\" (UID: \"23a907bc-b1ab-4bb8-bd8a-9d812e70152d\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29401230-hczjz" Nov 25 12:30:00 crc kubenswrapper[4706]: I1125 12:30:00.386023 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/23a907bc-b1ab-4bb8-bd8a-9d812e70152d-secret-volume\") pod \"collect-profiles-29401230-hczjz\" (UID: \"23a907bc-b1ab-4bb8-bd8a-9d812e70152d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29401230-hczjz" Nov 25 12:30:00 crc kubenswrapper[4706]: I1125 12:30:00.398250 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m57g8\" (UniqueName: \"kubernetes.io/projected/23a907bc-b1ab-4bb8-bd8a-9d812e70152d-kube-api-access-m57g8\") pod \"collect-profiles-29401230-hczjz\" (UID: \"23a907bc-b1ab-4bb8-bd8a-9d812e70152d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29401230-hczjz" Nov 25 12:30:00 crc kubenswrapper[4706]: I1125 12:30:00.473795 4706 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29401230-hczjz" Nov 25 12:30:01 crc kubenswrapper[4706]: I1125 12:30:01.305812 4706 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29401230-hczjz"] Nov 25 12:30:01 crc kubenswrapper[4706]: I1125 12:30:01.380513 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29401230-hczjz" event={"ID":"23a907bc-b1ab-4bb8-bd8a-9d812e70152d","Type":"ContainerStarted","Data":"2a33e7cc3d6574d6d423426429f6c90f43e78588c6dd1d0dcfd101d0d8ea7f9e"} Nov 25 12:30:02 crc kubenswrapper[4706]: I1125 12:30:02.391073 4706 generic.go:334] "Generic (PLEG): container finished" podID="23a907bc-b1ab-4bb8-bd8a-9d812e70152d" containerID="7caa06be56227e4da37a0eee45b0f9987d4b462f24543d14f16a406d6dda3aa1" exitCode=0 Nov 25 12:30:02 crc kubenswrapper[4706]: I1125 12:30:02.391134 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29401230-hczjz" event={"ID":"23a907bc-b1ab-4bb8-bd8a-9d812e70152d","Type":"ContainerDied","Data":"7caa06be56227e4da37a0eee45b0f9987d4b462f24543d14f16a406d6dda3aa1"} Nov 25 12:30:02 crc kubenswrapper[4706]: I1125 12:30:02.922695 4706 scope.go:117] "RemoveContainer" containerID="d3fc72500aae4cf4d62aeac19c69abc79c3346f9d07b751d825f4be172d122de" Nov 25 12:30:02 crc kubenswrapper[4706]: E1125 12:30:02.923030 4706 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dhfpm_openshift-machine-config-operator(0930887a-320c-4506-8c9c-f94d6d64516a)\"" pod="openshift-machine-config-operator/machine-config-daemon-dhfpm" podUID="0930887a-320c-4506-8c9c-f94d6d64516a" Nov 25 12:30:03 crc kubenswrapper[4706]: I1125 12:30:03.779605 4706 util.go:48] "No ready 
sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29401230-hczjz" Nov 25 12:30:03 crc kubenswrapper[4706]: I1125 12:30:03.957692 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/23a907bc-b1ab-4bb8-bd8a-9d812e70152d-config-volume\") pod \"23a907bc-b1ab-4bb8-bd8a-9d812e70152d\" (UID: \"23a907bc-b1ab-4bb8-bd8a-9d812e70152d\") " Nov 25 12:30:03 crc kubenswrapper[4706]: I1125 12:30:03.957870 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m57g8\" (UniqueName: \"kubernetes.io/projected/23a907bc-b1ab-4bb8-bd8a-9d812e70152d-kube-api-access-m57g8\") pod \"23a907bc-b1ab-4bb8-bd8a-9d812e70152d\" (UID: \"23a907bc-b1ab-4bb8-bd8a-9d812e70152d\") " Nov 25 12:30:03 crc kubenswrapper[4706]: I1125 12:30:03.958148 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/23a907bc-b1ab-4bb8-bd8a-9d812e70152d-secret-volume\") pod \"23a907bc-b1ab-4bb8-bd8a-9d812e70152d\" (UID: \"23a907bc-b1ab-4bb8-bd8a-9d812e70152d\") " Nov 25 12:30:03 crc kubenswrapper[4706]: I1125 12:30:03.959386 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/23a907bc-b1ab-4bb8-bd8a-9d812e70152d-config-volume" (OuterVolumeSpecName: "config-volume") pod "23a907bc-b1ab-4bb8-bd8a-9d812e70152d" (UID: "23a907bc-b1ab-4bb8-bd8a-9d812e70152d"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 12:30:03 crc kubenswrapper[4706]: I1125 12:30:03.965395 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/23a907bc-b1ab-4bb8-bd8a-9d812e70152d-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "23a907bc-b1ab-4bb8-bd8a-9d812e70152d" (UID: "23a907bc-b1ab-4bb8-bd8a-9d812e70152d"). 
InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 12:30:03 crc kubenswrapper[4706]: I1125 12:30:03.965430 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/23a907bc-b1ab-4bb8-bd8a-9d812e70152d-kube-api-access-m57g8" (OuterVolumeSpecName: "kube-api-access-m57g8") pod "23a907bc-b1ab-4bb8-bd8a-9d812e70152d" (UID: "23a907bc-b1ab-4bb8-bd8a-9d812e70152d"). InnerVolumeSpecName "kube-api-access-m57g8". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 12:30:04 crc kubenswrapper[4706]: I1125 12:30:04.060009 4706 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/23a907bc-b1ab-4bb8-bd8a-9d812e70152d-config-volume\") on node \"crc\" DevicePath \"\"" Nov 25 12:30:04 crc kubenswrapper[4706]: I1125 12:30:04.060039 4706 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m57g8\" (UniqueName: \"kubernetes.io/projected/23a907bc-b1ab-4bb8-bd8a-9d812e70152d-kube-api-access-m57g8\") on node \"crc\" DevicePath \"\"" Nov 25 12:30:04 crc kubenswrapper[4706]: I1125 12:30:04.060049 4706 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/23a907bc-b1ab-4bb8-bd8a-9d812e70152d-secret-volume\") on node \"crc\" DevicePath \"\"" Nov 25 12:30:04 crc kubenswrapper[4706]: I1125 12:30:04.417024 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29401230-hczjz" event={"ID":"23a907bc-b1ab-4bb8-bd8a-9d812e70152d","Type":"ContainerDied","Data":"2a33e7cc3d6574d6d423426429f6c90f43e78588c6dd1d0dcfd101d0d8ea7f9e"} Nov 25 12:30:04 crc kubenswrapper[4706]: I1125 12:30:04.417270 4706 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2a33e7cc3d6574d6d423426429f6c90f43e78588c6dd1d0dcfd101d0d8ea7f9e" Nov 25 12:30:04 crc kubenswrapper[4706]: I1125 12:30:04.417083 4706 
util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29401230-hczjz" Nov 25 12:30:04 crc kubenswrapper[4706]: I1125 12:30:04.861216 4706 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29401185-2mzsm"] Nov 25 12:30:04 crc kubenswrapper[4706]: I1125 12:30:04.870827 4706 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29401185-2mzsm"] Nov 25 12:30:05 crc kubenswrapper[4706]: I1125 12:30:05.978974 4706 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44769f3f-2fd2-4cfa-8837-e723aabd08b4" path="/var/lib/kubelet/pods/44769f3f-2fd2-4cfa-8837-e723aabd08b4/volumes" Nov 25 12:30:14 crc kubenswrapper[4706]: I1125 12:30:14.922192 4706 scope.go:117] "RemoveContainer" containerID="d3fc72500aae4cf4d62aeac19c69abc79c3346f9d07b751d825f4be172d122de" Nov 25 12:30:14 crc kubenswrapper[4706]: E1125 12:30:14.923757 4706 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dhfpm_openshift-machine-config-operator(0930887a-320c-4506-8c9c-f94d6d64516a)\"" pod="openshift-machine-config-operator/machine-config-daemon-dhfpm" podUID="0930887a-320c-4506-8c9c-f94d6d64516a" Nov 25 12:30:27 crc kubenswrapper[4706]: I1125 12:30:27.921880 4706 scope.go:117] "RemoveContainer" containerID="d3fc72500aae4cf4d62aeac19c69abc79c3346f9d07b751d825f4be172d122de" Nov 25 12:30:27 crc kubenswrapper[4706]: E1125 12:30:27.922731 4706 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-dhfpm_openshift-machine-config-operator(0930887a-320c-4506-8c9c-f94d6d64516a)\"" pod="openshift-machine-config-operator/machine-config-daemon-dhfpm" podUID="0930887a-320c-4506-8c9c-f94d6d64516a" Nov 25 12:30:37 crc kubenswrapper[4706]: I1125 12:30:37.404118 4706 scope.go:117] "RemoveContainer" containerID="a40a55308085320132d8d0b34d2a63de62fb8f2338932b8e3ab00f4a2cb666c3" Nov 25 12:30:37 crc kubenswrapper[4706]: I1125 12:30:37.441795 4706 scope.go:117] "RemoveContainer" containerID="4a59f346abf393a71e85e7f1fb279a5e05529621b3f66517ab6281de6737da4c" Nov 25 12:30:37 crc kubenswrapper[4706]: I1125 12:30:37.475693 4706 scope.go:117] "RemoveContainer" containerID="05f50853f28e786210d1b81136d591816b6ac6d1ac0d687a23933c18ce35e154" Nov 25 12:30:37 crc kubenswrapper[4706]: I1125 12:30:37.499831 4706 scope.go:117] "RemoveContainer" containerID="92b5550a0f77568e9075f7ea0ac5857235869f6f8cb590192bb2142d3d807aa7" Nov 25 12:30:40 crc kubenswrapper[4706]: I1125 12:30:40.922703 4706 scope.go:117] "RemoveContainer" containerID="d3fc72500aae4cf4d62aeac19c69abc79c3346f9d07b751d825f4be172d122de" Nov 25 12:30:40 crc kubenswrapper[4706]: E1125 12:30:40.923510 4706 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dhfpm_openshift-machine-config-operator(0930887a-320c-4506-8c9c-f94d6d64516a)\"" pod="openshift-machine-config-operator/machine-config-daemon-dhfpm" podUID="0930887a-320c-4506-8c9c-f94d6d64516a" Nov 25 12:30:54 crc kubenswrapper[4706]: I1125 12:30:54.922410 4706 scope.go:117] "RemoveContainer" containerID="d3fc72500aae4cf4d62aeac19c69abc79c3346f9d07b751d825f4be172d122de" Nov 25 12:30:54 crc kubenswrapper[4706]: E1125 12:30:54.923218 4706 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: 
\"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dhfpm_openshift-machine-config-operator(0930887a-320c-4506-8c9c-f94d6d64516a)\"" pod="openshift-machine-config-operator/machine-config-daemon-dhfpm" podUID="0930887a-320c-4506-8c9c-f94d6d64516a" Nov 25 12:31:05 crc kubenswrapper[4706]: I1125 12:31:05.922319 4706 scope.go:117] "RemoveContainer" containerID="d3fc72500aae4cf4d62aeac19c69abc79c3346f9d07b751d825f4be172d122de" Nov 25 12:31:05 crc kubenswrapper[4706]: E1125 12:31:05.923261 4706 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dhfpm_openshift-machine-config-operator(0930887a-320c-4506-8c9c-f94d6d64516a)\"" pod="openshift-machine-config-operator/machine-config-daemon-dhfpm" podUID="0930887a-320c-4506-8c9c-f94d6d64516a" Nov 25 12:31:20 crc kubenswrapper[4706]: I1125 12:31:20.923255 4706 scope.go:117] "RemoveContainer" containerID="d3fc72500aae4cf4d62aeac19c69abc79c3346f9d07b751d825f4be172d122de" Nov 25 12:31:20 crc kubenswrapper[4706]: E1125 12:31:20.924031 4706 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dhfpm_openshift-machine-config-operator(0930887a-320c-4506-8c9c-f94d6d64516a)\"" pod="openshift-machine-config-operator/machine-config-daemon-dhfpm" podUID="0930887a-320c-4506-8c9c-f94d6d64516a" Nov 25 12:31:34 crc kubenswrapper[4706]: I1125 12:31:34.923073 4706 scope.go:117] "RemoveContainer" containerID="d3fc72500aae4cf4d62aeac19c69abc79c3346f9d07b751d825f4be172d122de" Nov 25 12:31:35 crc kubenswrapper[4706]: I1125 12:31:35.687717 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-dhfpm" 
event={"ID":"0930887a-320c-4506-8c9c-f94d6d64516a","Type":"ContainerStarted","Data":"553914c0ba5726f4f1443ff74207fc011fc7a9c86c44d28b4aafc3ea2f6ab11b"} Nov 25 12:34:01 crc kubenswrapper[4706]: I1125 12:34:01.124813 4706 patch_prober.go:28] interesting pod/machine-config-daemon-dhfpm container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 25 12:34:01 crc kubenswrapper[4706]: I1125 12:34:01.125390 4706 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dhfpm" podUID="0930887a-320c-4506-8c9c-f94d6d64516a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 25 12:34:08 crc kubenswrapper[4706]: I1125 12:34:08.044494 4706 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-hvvn7"] Nov 25 12:34:08 crc kubenswrapper[4706]: E1125 12:34:08.045815 4706 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="23a907bc-b1ab-4bb8-bd8a-9d812e70152d" containerName="collect-profiles" Nov 25 12:34:08 crc kubenswrapper[4706]: I1125 12:34:08.045834 4706 state_mem.go:107] "Deleted CPUSet assignment" podUID="23a907bc-b1ab-4bb8-bd8a-9d812e70152d" containerName="collect-profiles" Nov 25 12:34:08 crc kubenswrapper[4706]: I1125 12:34:08.046080 4706 memory_manager.go:354] "RemoveStaleState removing state" podUID="23a907bc-b1ab-4bb8-bd8a-9d812e70152d" containerName="collect-profiles" Nov 25 12:34:08 crc kubenswrapper[4706]: I1125 12:34:08.047842 4706 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-hvvn7" Nov 25 12:34:08 crc kubenswrapper[4706]: I1125 12:34:08.064652 4706 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-hvvn7"] Nov 25 12:34:08 crc kubenswrapper[4706]: I1125 12:34:08.235819 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0f4a5481-a402-4ffa-9619-9649c9264659-utilities\") pod \"redhat-marketplace-hvvn7\" (UID: \"0f4a5481-a402-4ffa-9619-9649c9264659\") " pod="openshift-marketplace/redhat-marketplace-hvvn7" Nov 25 12:34:08 crc kubenswrapper[4706]: I1125 12:34:08.235917 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0f4a5481-a402-4ffa-9619-9649c9264659-catalog-content\") pod \"redhat-marketplace-hvvn7\" (UID: \"0f4a5481-a402-4ffa-9619-9649c9264659\") " pod="openshift-marketplace/redhat-marketplace-hvvn7" Nov 25 12:34:08 crc kubenswrapper[4706]: I1125 12:34:08.236044 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jfwsq\" (UniqueName: \"kubernetes.io/projected/0f4a5481-a402-4ffa-9619-9649c9264659-kube-api-access-jfwsq\") pod \"redhat-marketplace-hvvn7\" (UID: \"0f4a5481-a402-4ffa-9619-9649c9264659\") " pod="openshift-marketplace/redhat-marketplace-hvvn7" Nov 25 12:34:08 crc kubenswrapper[4706]: I1125 12:34:08.337769 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0f4a5481-a402-4ffa-9619-9649c9264659-catalog-content\") pod \"redhat-marketplace-hvvn7\" (UID: \"0f4a5481-a402-4ffa-9619-9649c9264659\") " pod="openshift-marketplace/redhat-marketplace-hvvn7" Nov 25 12:34:08 crc kubenswrapper[4706]: I1125 12:34:08.337865 4706 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-jfwsq\" (UniqueName: \"kubernetes.io/projected/0f4a5481-a402-4ffa-9619-9649c9264659-kube-api-access-jfwsq\") pod \"redhat-marketplace-hvvn7\" (UID: \"0f4a5481-a402-4ffa-9619-9649c9264659\") " pod="openshift-marketplace/redhat-marketplace-hvvn7" Nov 25 12:34:08 crc kubenswrapper[4706]: I1125 12:34:08.338006 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0f4a5481-a402-4ffa-9619-9649c9264659-utilities\") pod \"redhat-marketplace-hvvn7\" (UID: \"0f4a5481-a402-4ffa-9619-9649c9264659\") " pod="openshift-marketplace/redhat-marketplace-hvvn7" Nov 25 12:34:08 crc kubenswrapper[4706]: I1125 12:34:08.338548 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0f4a5481-a402-4ffa-9619-9649c9264659-catalog-content\") pod \"redhat-marketplace-hvvn7\" (UID: \"0f4a5481-a402-4ffa-9619-9649c9264659\") " pod="openshift-marketplace/redhat-marketplace-hvvn7" Nov 25 12:34:08 crc kubenswrapper[4706]: I1125 12:34:08.338870 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0f4a5481-a402-4ffa-9619-9649c9264659-utilities\") pod \"redhat-marketplace-hvvn7\" (UID: \"0f4a5481-a402-4ffa-9619-9649c9264659\") " pod="openshift-marketplace/redhat-marketplace-hvvn7" Nov 25 12:34:08 crc kubenswrapper[4706]: I1125 12:34:08.360267 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jfwsq\" (UniqueName: \"kubernetes.io/projected/0f4a5481-a402-4ffa-9619-9649c9264659-kube-api-access-jfwsq\") pod \"redhat-marketplace-hvvn7\" (UID: \"0f4a5481-a402-4ffa-9619-9649c9264659\") " pod="openshift-marketplace/redhat-marketplace-hvvn7" Nov 25 12:34:08 crc kubenswrapper[4706]: I1125 12:34:08.371004 4706 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-hvvn7" Nov 25 12:34:08 crc kubenswrapper[4706]: I1125 12:34:08.903024 4706 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-hvvn7"] Nov 25 12:34:09 crc kubenswrapper[4706]: I1125 12:34:09.120823 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hvvn7" event={"ID":"0f4a5481-a402-4ffa-9619-9649c9264659","Type":"ContainerStarted","Data":"1d2ef66d3c9c71c31817b7c42c7197d551e4ed9cc1168450657b0cbc99dd2f66"} Nov 25 12:34:09 crc kubenswrapper[4706]: I1125 12:34:09.120889 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hvvn7" event={"ID":"0f4a5481-a402-4ffa-9619-9649c9264659","Type":"ContainerStarted","Data":"81f8d435b15f75fd8f2439263124a66a4e7c6c3ee22141b796f2fc1d9a6f5bce"} Nov 25 12:34:10 crc kubenswrapper[4706]: I1125 12:34:10.134700 4706 generic.go:334] "Generic (PLEG): container finished" podID="0f4a5481-a402-4ffa-9619-9649c9264659" containerID="1d2ef66d3c9c71c31817b7c42c7197d551e4ed9cc1168450657b0cbc99dd2f66" exitCode=0 Nov 25 12:34:10 crc kubenswrapper[4706]: I1125 12:34:10.134750 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hvvn7" event={"ID":"0f4a5481-a402-4ffa-9619-9649c9264659","Type":"ContainerDied","Data":"1d2ef66d3c9c71c31817b7c42c7197d551e4ed9cc1168450657b0cbc99dd2f66"} Nov 25 12:34:10 crc kubenswrapper[4706]: I1125 12:34:10.136865 4706 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 25 12:34:11 crc kubenswrapper[4706]: I1125 12:34:11.150598 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hvvn7" event={"ID":"0f4a5481-a402-4ffa-9619-9649c9264659","Type":"ContainerStarted","Data":"9be78ca98934c539d23dd3054a32c87d7e4ae4e2fb61a4c12eccd0422e3cb4e5"} Nov 25 12:34:12 crc kubenswrapper[4706]: 
I1125 12:34:12.163431 4706 generic.go:334] "Generic (PLEG): container finished" podID="0f4a5481-a402-4ffa-9619-9649c9264659" containerID="9be78ca98934c539d23dd3054a32c87d7e4ae4e2fb61a4c12eccd0422e3cb4e5" exitCode=0
Nov 25 12:34:12 crc kubenswrapper[4706]: I1125 12:34:12.163766 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hvvn7" event={"ID":"0f4a5481-a402-4ffa-9619-9649c9264659","Type":"ContainerDied","Data":"9be78ca98934c539d23dd3054a32c87d7e4ae4e2fb61a4c12eccd0422e3cb4e5"}
Nov 25 12:34:12 crc kubenswrapper[4706]: I1125 12:34:12.163793 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hvvn7" event={"ID":"0f4a5481-a402-4ffa-9619-9649c9264659","Type":"ContainerStarted","Data":"b2382d3b17d7879f1a8dc7428d72b4aa32a8d33c2e48dd7a61c3c42c2b7d79a1"}
Nov 25 12:34:12 crc kubenswrapper[4706]: I1125 12:34:12.188552 4706 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-hvvn7" podStartSLOduration=2.70277817 podStartE2EDuration="4.188519078s" podCreationTimestamp="2025-11-25 12:34:08 +0000 UTC" firstStartedPulling="2025-11-25 12:34:10.136462068 +0000 UTC m=+3459.051019449" lastFinishedPulling="2025-11-25 12:34:11.622202966 +0000 UTC m=+3460.536760357" observedRunningTime="2025-11-25 12:34:12.178323371 +0000 UTC m=+3461.092880752" watchObservedRunningTime="2025-11-25 12:34:12.188519078 +0000 UTC m=+3461.103076459"
Nov 25 12:34:18 crc kubenswrapper[4706]: I1125 12:34:18.371441 4706 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-hvvn7"
Nov 25 12:34:18 crc kubenswrapper[4706]: I1125 12:34:18.372419 4706 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-hvvn7"
Nov 25 12:34:18 crc kubenswrapper[4706]: I1125 12:34:18.420536 4706 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-hvvn7"
Nov 25 12:34:19 crc kubenswrapper[4706]: I1125 12:34:19.300157 4706 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-hvvn7"
Nov 25 12:34:19 crc kubenswrapper[4706]: I1125 12:34:19.353247 4706 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-hvvn7"]
Nov 25 12:34:21 crc kubenswrapper[4706]: I1125 12:34:21.260007 4706 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-hvvn7" podUID="0f4a5481-a402-4ffa-9619-9649c9264659" containerName="registry-server" containerID="cri-o://b2382d3b17d7879f1a8dc7428d72b4aa32a8d33c2e48dd7a61c3c42c2b7d79a1" gracePeriod=2
Nov 25 12:34:21 crc kubenswrapper[4706]: I1125 12:34:21.790357 4706 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-hvvn7"
Nov 25 12:34:21 crc kubenswrapper[4706]: I1125 12:34:21.814421 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jfwsq\" (UniqueName: \"kubernetes.io/projected/0f4a5481-a402-4ffa-9619-9649c9264659-kube-api-access-jfwsq\") pod \"0f4a5481-a402-4ffa-9619-9649c9264659\" (UID: \"0f4a5481-a402-4ffa-9619-9649c9264659\") "
Nov 25 12:34:21 crc kubenswrapper[4706]: I1125 12:34:21.814498 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0f4a5481-a402-4ffa-9619-9649c9264659-utilities\") pod \"0f4a5481-a402-4ffa-9619-9649c9264659\" (UID: \"0f4a5481-a402-4ffa-9619-9649c9264659\") "
Nov 25 12:34:21 crc kubenswrapper[4706]: I1125 12:34:21.815808 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0f4a5481-a402-4ffa-9619-9649c9264659-utilities" (OuterVolumeSpecName: "utilities") pod "0f4a5481-a402-4ffa-9619-9649c9264659" (UID: "0f4a5481-a402-4ffa-9619-9649c9264659"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 25 12:34:21 crc kubenswrapper[4706]: I1125 12:34:21.822276 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0f4a5481-a402-4ffa-9619-9649c9264659-kube-api-access-jfwsq" (OuterVolumeSpecName: "kube-api-access-jfwsq") pod "0f4a5481-a402-4ffa-9619-9649c9264659" (UID: "0f4a5481-a402-4ffa-9619-9649c9264659"). InnerVolumeSpecName "kube-api-access-jfwsq". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 25 12:34:21 crc kubenswrapper[4706]: I1125 12:34:21.915728 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0f4a5481-a402-4ffa-9619-9649c9264659-catalog-content\") pod \"0f4a5481-a402-4ffa-9619-9649c9264659\" (UID: \"0f4a5481-a402-4ffa-9619-9649c9264659\") "
Nov 25 12:34:21 crc kubenswrapper[4706]: I1125 12:34:21.916128 4706 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jfwsq\" (UniqueName: \"kubernetes.io/projected/0f4a5481-a402-4ffa-9619-9649c9264659-kube-api-access-jfwsq\") on node \"crc\" DevicePath \"\""
Nov 25 12:34:21 crc kubenswrapper[4706]: I1125 12:34:21.916141 4706 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0f4a5481-a402-4ffa-9619-9649c9264659-utilities\") on node \"crc\" DevicePath \"\""
Nov 25 12:34:21 crc kubenswrapper[4706]: I1125 12:34:21.936020 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0f4a5481-a402-4ffa-9619-9649c9264659-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "0f4a5481-a402-4ffa-9619-9649c9264659" (UID: "0f4a5481-a402-4ffa-9619-9649c9264659"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 25 12:34:22 crc kubenswrapper[4706]: I1125 12:34:22.018594 4706 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0f4a5481-a402-4ffa-9619-9649c9264659-catalog-content\") on node \"crc\" DevicePath \"\""
Nov 25 12:34:22 crc kubenswrapper[4706]: I1125 12:34:22.271560 4706 generic.go:334] "Generic (PLEG): container finished" podID="0f4a5481-a402-4ffa-9619-9649c9264659" containerID="b2382d3b17d7879f1a8dc7428d72b4aa32a8d33c2e48dd7a61c3c42c2b7d79a1" exitCode=0
Nov 25 12:34:22 crc kubenswrapper[4706]: I1125 12:34:22.271613 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hvvn7" event={"ID":"0f4a5481-a402-4ffa-9619-9649c9264659","Type":"ContainerDied","Data":"b2382d3b17d7879f1a8dc7428d72b4aa32a8d33c2e48dd7a61c3c42c2b7d79a1"}
Nov 25 12:34:22 crc kubenswrapper[4706]: I1125 12:34:22.271642 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hvvn7" event={"ID":"0f4a5481-a402-4ffa-9619-9649c9264659","Type":"ContainerDied","Data":"81f8d435b15f75fd8f2439263124a66a4e7c6c3ee22141b796f2fc1d9a6f5bce"}
Nov 25 12:34:22 crc kubenswrapper[4706]: I1125 12:34:22.271639 4706 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-hvvn7"
Nov 25 12:34:22 crc kubenswrapper[4706]: I1125 12:34:22.271709 4706 scope.go:117] "RemoveContainer" containerID="b2382d3b17d7879f1a8dc7428d72b4aa32a8d33c2e48dd7a61c3c42c2b7d79a1"
Nov 25 12:34:22 crc kubenswrapper[4706]: I1125 12:34:22.293749 4706 scope.go:117] "RemoveContainer" containerID="9be78ca98934c539d23dd3054a32c87d7e4ae4e2fb61a4c12eccd0422e3cb4e5"
Nov 25 12:34:22 crc kubenswrapper[4706]: I1125 12:34:22.313845 4706 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-hvvn7"]
Nov 25 12:34:22 crc kubenswrapper[4706]: I1125 12:34:22.326010 4706 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-hvvn7"]
Nov 25 12:34:22 crc kubenswrapper[4706]: I1125 12:34:22.334426 4706 scope.go:117] "RemoveContainer" containerID="1d2ef66d3c9c71c31817b7c42c7197d551e4ed9cc1168450657b0cbc99dd2f66"
Nov 25 12:34:22 crc kubenswrapper[4706]: I1125 12:34:22.378223 4706 scope.go:117] "RemoveContainer" containerID="b2382d3b17d7879f1a8dc7428d72b4aa32a8d33c2e48dd7a61c3c42c2b7d79a1"
Nov 25 12:34:22 crc kubenswrapper[4706]: E1125 12:34:22.378833 4706 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b2382d3b17d7879f1a8dc7428d72b4aa32a8d33c2e48dd7a61c3c42c2b7d79a1\": container with ID starting with b2382d3b17d7879f1a8dc7428d72b4aa32a8d33c2e48dd7a61c3c42c2b7d79a1 not found: ID does not exist" containerID="b2382d3b17d7879f1a8dc7428d72b4aa32a8d33c2e48dd7a61c3c42c2b7d79a1"
Nov 25 12:34:22 crc kubenswrapper[4706]: I1125 12:34:22.378884 4706 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b2382d3b17d7879f1a8dc7428d72b4aa32a8d33c2e48dd7a61c3c42c2b7d79a1"} err="failed to get container status \"b2382d3b17d7879f1a8dc7428d72b4aa32a8d33c2e48dd7a61c3c42c2b7d79a1\": rpc error: code = NotFound desc = could not find container \"b2382d3b17d7879f1a8dc7428d72b4aa32a8d33c2e48dd7a61c3c42c2b7d79a1\": container with ID starting with b2382d3b17d7879f1a8dc7428d72b4aa32a8d33c2e48dd7a61c3c42c2b7d79a1 not found: ID does not exist"
Nov 25 12:34:22 crc kubenswrapper[4706]: I1125 12:34:22.378911 4706 scope.go:117] "RemoveContainer" containerID="9be78ca98934c539d23dd3054a32c87d7e4ae4e2fb61a4c12eccd0422e3cb4e5"
Nov 25 12:34:22 crc kubenswrapper[4706]: E1125 12:34:22.379384 4706 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9be78ca98934c539d23dd3054a32c87d7e4ae4e2fb61a4c12eccd0422e3cb4e5\": container with ID starting with 9be78ca98934c539d23dd3054a32c87d7e4ae4e2fb61a4c12eccd0422e3cb4e5 not found: ID does not exist" containerID="9be78ca98934c539d23dd3054a32c87d7e4ae4e2fb61a4c12eccd0422e3cb4e5"
Nov 25 12:34:22 crc kubenswrapper[4706]: I1125 12:34:22.379439 4706 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9be78ca98934c539d23dd3054a32c87d7e4ae4e2fb61a4c12eccd0422e3cb4e5"} err="failed to get container status \"9be78ca98934c539d23dd3054a32c87d7e4ae4e2fb61a4c12eccd0422e3cb4e5\": rpc error: code = NotFound desc = could not find container \"9be78ca98934c539d23dd3054a32c87d7e4ae4e2fb61a4c12eccd0422e3cb4e5\": container with ID starting with 9be78ca98934c539d23dd3054a32c87d7e4ae4e2fb61a4c12eccd0422e3cb4e5 not found: ID does not exist"
Nov 25 12:34:22 crc kubenswrapper[4706]: I1125 12:34:22.379471 4706 scope.go:117] "RemoveContainer" containerID="1d2ef66d3c9c71c31817b7c42c7197d551e4ed9cc1168450657b0cbc99dd2f66"
Nov 25 12:34:22 crc kubenswrapper[4706]: E1125 12:34:22.379883 4706 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1d2ef66d3c9c71c31817b7c42c7197d551e4ed9cc1168450657b0cbc99dd2f66\": container with ID starting with 1d2ef66d3c9c71c31817b7c42c7197d551e4ed9cc1168450657b0cbc99dd2f66 not found: ID does not exist" containerID="1d2ef66d3c9c71c31817b7c42c7197d551e4ed9cc1168450657b0cbc99dd2f66"
Nov 25 12:34:22 crc kubenswrapper[4706]: I1125 12:34:22.379920 4706 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1d2ef66d3c9c71c31817b7c42c7197d551e4ed9cc1168450657b0cbc99dd2f66"} err="failed to get container status \"1d2ef66d3c9c71c31817b7c42c7197d551e4ed9cc1168450657b0cbc99dd2f66\": rpc error: code = NotFound desc = could not find container \"1d2ef66d3c9c71c31817b7c42c7197d551e4ed9cc1168450657b0cbc99dd2f66\": container with ID starting with 1d2ef66d3c9c71c31817b7c42c7197d551e4ed9cc1168450657b0cbc99dd2f66 not found: ID does not exist"
Nov 25 12:34:23 crc kubenswrapper[4706]: I1125 12:34:23.933447 4706 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0f4a5481-a402-4ffa-9619-9649c9264659" path="/var/lib/kubelet/pods/0f4a5481-a402-4ffa-9619-9649c9264659/volumes"
Nov 25 12:34:31 crc kubenswrapper[4706]: I1125 12:34:31.125421 4706 patch_prober.go:28] interesting pod/machine-config-daemon-dhfpm container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Nov 25 12:34:31 crc kubenswrapper[4706]: I1125 12:34:31.126691 4706 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dhfpm" podUID="0930887a-320c-4506-8c9c-f94d6d64516a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Nov 25 12:34:37 crc kubenswrapper[4706]: I1125 12:34:37.781698 4706 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-79wlr"]
Nov 25 12:34:37 crc kubenswrapper[4706]: E1125 12:34:37.782683 4706 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0f4a5481-a402-4ffa-9619-9649c9264659" containerName="extract-content"
Nov 25 12:34:37 crc kubenswrapper[4706]: I1125 12:34:37.782699 4706 state_mem.go:107] "Deleted CPUSet assignment" podUID="0f4a5481-a402-4ffa-9619-9649c9264659" containerName="extract-content"
Nov 25 12:34:37 crc kubenswrapper[4706]: E1125 12:34:37.782725 4706 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0f4a5481-a402-4ffa-9619-9649c9264659" containerName="registry-server"
Nov 25 12:34:37 crc kubenswrapper[4706]: I1125 12:34:37.782734 4706 state_mem.go:107] "Deleted CPUSet assignment" podUID="0f4a5481-a402-4ffa-9619-9649c9264659" containerName="registry-server"
Nov 25 12:34:37 crc kubenswrapper[4706]: E1125 12:34:37.782748 4706 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0f4a5481-a402-4ffa-9619-9649c9264659" containerName="extract-utilities"
Nov 25 12:34:37 crc kubenswrapper[4706]: I1125 12:34:37.782755 4706 state_mem.go:107] "Deleted CPUSet assignment" podUID="0f4a5481-a402-4ffa-9619-9649c9264659" containerName="extract-utilities"
Nov 25 12:34:37 crc kubenswrapper[4706]: I1125 12:34:37.782978 4706 memory_manager.go:354] "RemoveStaleState removing state" podUID="0f4a5481-a402-4ffa-9619-9649c9264659" containerName="registry-server"
Nov 25 12:34:37 crc kubenswrapper[4706]: I1125 12:34:37.784747 4706 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-79wlr"
Nov 25 12:34:37 crc kubenswrapper[4706]: I1125 12:34:37.797081 4706 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-79wlr"]
Nov 25 12:34:37 crc kubenswrapper[4706]: I1125 12:34:37.933444 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/41a512c3-74da-41d7-b63f-a03d9b505da6-catalog-content\") pod \"community-operators-79wlr\" (UID: \"41a512c3-74da-41d7-b63f-a03d9b505da6\") " pod="openshift-marketplace/community-operators-79wlr"
Nov 25 12:34:37 crc kubenswrapper[4706]: I1125 12:34:37.933878 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/41a512c3-74da-41d7-b63f-a03d9b505da6-utilities\") pod \"community-operators-79wlr\" (UID: \"41a512c3-74da-41d7-b63f-a03d9b505da6\") " pod="openshift-marketplace/community-operators-79wlr"
Nov 25 12:34:37 crc kubenswrapper[4706]: I1125 12:34:37.934056 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r2kqv\" (UniqueName: \"kubernetes.io/projected/41a512c3-74da-41d7-b63f-a03d9b505da6-kube-api-access-r2kqv\") pod \"community-operators-79wlr\" (UID: \"41a512c3-74da-41d7-b63f-a03d9b505da6\") " pod="openshift-marketplace/community-operators-79wlr"
Nov 25 12:34:38 crc kubenswrapper[4706]: I1125 12:34:38.035908 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r2kqv\" (UniqueName: \"kubernetes.io/projected/41a512c3-74da-41d7-b63f-a03d9b505da6-kube-api-access-r2kqv\") pod \"community-operators-79wlr\" (UID: \"41a512c3-74da-41d7-b63f-a03d9b505da6\") " pod="openshift-marketplace/community-operators-79wlr"
Nov 25 12:34:38 crc kubenswrapper[4706]: I1125 12:34:38.035994 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/41a512c3-74da-41d7-b63f-a03d9b505da6-catalog-content\") pod \"community-operators-79wlr\" (UID: \"41a512c3-74da-41d7-b63f-a03d9b505da6\") " pod="openshift-marketplace/community-operators-79wlr"
Nov 25 12:34:38 crc kubenswrapper[4706]: I1125 12:34:38.036063 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/41a512c3-74da-41d7-b63f-a03d9b505da6-utilities\") pod \"community-operators-79wlr\" (UID: \"41a512c3-74da-41d7-b63f-a03d9b505da6\") " pod="openshift-marketplace/community-operators-79wlr"
Nov 25 12:34:38 crc kubenswrapper[4706]: I1125 12:34:38.036746 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/41a512c3-74da-41d7-b63f-a03d9b505da6-catalog-content\") pod \"community-operators-79wlr\" (UID: \"41a512c3-74da-41d7-b63f-a03d9b505da6\") " pod="openshift-marketplace/community-operators-79wlr"
Nov 25 12:34:38 crc kubenswrapper[4706]: I1125 12:34:38.036821 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/41a512c3-74da-41d7-b63f-a03d9b505da6-utilities\") pod \"community-operators-79wlr\" (UID: \"41a512c3-74da-41d7-b63f-a03d9b505da6\") " pod="openshift-marketplace/community-operators-79wlr"
Nov 25 12:34:38 crc kubenswrapper[4706]: I1125 12:34:38.062861 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r2kqv\" (UniqueName: \"kubernetes.io/projected/41a512c3-74da-41d7-b63f-a03d9b505da6-kube-api-access-r2kqv\") pod \"community-operators-79wlr\" (UID: \"41a512c3-74da-41d7-b63f-a03d9b505da6\") " pod="openshift-marketplace/community-operators-79wlr"
Nov 25 12:34:38 crc kubenswrapper[4706]: I1125 12:34:38.120202 4706 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-79wlr"
Nov 25 12:34:38 crc kubenswrapper[4706]: I1125 12:34:38.699280 4706 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-79wlr"]
Nov 25 12:34:39 crc kubenswrapper[4706]: I1125 12:34:39.445027 4706 generic.go:334] "Generic (PLEG): container finished" podID="41a512c3-74da-41d7-b63f-a03d9b505da6" containerID="6542b561e4a8f403c45f48c9e5a4d38fe616c91d8facec41009a2faf9afcbd4e" exitCode=0
Nov 25 12:34:39 crc kubenswrapper[4706]: I1125 12:34:39.445124 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-79wlr" event={"ID":"41a512c3-74da-41d7-b63f-a03d9b505da6","Type":"ContainerDied","Data":"6542b561e4a8f403c45f48c9e5a4d38fe616c91d8facec41009a2faf9afcbd4e"}
Nov 25 12:34:39 crc kubenswrapper[4706]: I1125 12:34:39.445335 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-79wlr" event={"ID":"41a512c3-74da-41d7-b63f-a03d9b505da6","Type":"ContainerStarted","Data":"e5a9e6cfc461f10c342e0f56ad4eb3ba069d676516cea803874e2b070ce81429"}
Nov 25 12:34:42 crc kubenswrapper[4706]: I1125 12:34:42.473857 4706 generic.go:334] "Generic (PLEG): container finished" podID="41a512c3-74da-41d7-b63f-a03d9b505da6" containerID="b59286a438f319a76023e653add9900e8e4a459aa6687d51e987be5aa4ec542c" exitCode=0
Nov 25 12:34:42 crc kubenswrapper[4706]: I1125 12:34:42.473972 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-79wlr" event={"ID":"41a512c3-74da-41d7-b63f-a03d9b505da6","Type":"ContainerDied","Data":"b59286a438f319a76023e653add9900e8e4a459aa6687d51e987be5aa4ec542c"}
Nov 25 12:34:45 crc kubenswrapper[4706]: I1125 12:34:45.515758 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-79wlr" event={"ID":"41a512c3-74da-41d7-b63f-a03d9b505da6","Type":"ContainerStarted","Data":"6d266ee1d56cd924adc8e1461f2639d1f144583612ee7c9ef0ed43facaefc9e7"}
Nov 25 12:34:45 crc kubenswrapper[4706]: I1125 12:34:45.535266 4706 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-79wlr" podStartSLOduration=3.153126225 podStartE2EDuration="8.535247433s" podCreationTimestamp="2025-11-25 12:34:37 +0000 UTC" firstStartedPulling="2025-11-25 12:34:39.446825066 +0000 UTC m=+3488.361382437" lastFinishedPulling="2025-11-25 12:34:44.828946264 +0000 UTC m=+3493.743503645" observedRunningTime="2025-11-25 12:34:45.53079196 +0000 UTC m=+3494.445349331" watchObservedRunningTime="2025-11-25 12:34:45.535247433 +0000 UTC m=+3494.449804814"
Nov 25 12:34:48 crc kubenswrapper[4706]: I1125 12:34:48.121127 4706 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-79wlr"
Nov 25 12:34:48 crc kubenswrapper[4706]: I1125 12:34:48.122601 4706 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-79wlr"
Nov 25 12:34:48 crc kubenswrapper[4706]: I1125 12:34:48.167388 4706 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-79wlr"
Nov 25 12:34:58 crc kubenswrapper[4706]: I1125 12:34:58.169625 4706 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-79wlr"
Nov 25 12:34:58 crc kubenswrapper[4706]: I1125 12:34:58.236179 4706 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-79wlr"]
Nov 25 12:34:58 crc kubenswrapper[4706]: I1125 12:34:58.647665 4706 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-79wlr" podUID="41a512c3-74da-41d7-b63f-a03d9b505da6" containerName="registry-server" containerID="cri-o://6d266ee1d56cd924adc8e1461f2639d1f144583612ee7c9ef0ed43facaefc9e7" gracePeriod=2
Nov 25 12:34:59 crc kubenswrapper[4706]: I1125 12:34:59.213438 4706 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-79wlr"
Nov 25 12:34:59 crc kubenswrapper[4706]: I1125 12:34:59.260863 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r2kqv\" (UniqueName: \"kubernetes.io/projected/41a512c3-74da-41d7-b63f-a03d9b505da6-kube-api-access-r2kqv\") pod \"41a512c3-74da-41d7-b63f-a03d9b505da6\" (UID: \"41a512c3-74da-41d7-b63f-a03d9b505da6\") "
Nov 25 12:34:59 crc kubenswrapper[4706]: I1125 12:34:59.261037 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/41a512c3-74da-41d7-b63f-a03d9b505da6-catalog-content\") pod \"41a512c3-74da-41d7-b63f-a03d9b505da6\" (UID: \"41a512c3-74da-41d7-b63f-a03d9b505da6\") "
Nov 25 12:34:59 crc kubenswrapper[4706]: I1125 12:34:59.261103 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/41a512c3-74da-41d7-b63f-a03d9b505da6-utilities\") pod \"41a512c3-74da-41d7-b63f-a03d9b505da6\" (UID: \"41a512c3-74da-41d7-b63f-a03d9b505da6\") "
Nov 25 12:34:59 crc kubenswrapper[4706]: I1125 12:34:59.262249 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/41a512c3-74da-41d7-b63f-a03d9b505da6-utilities" (OuterVolumeSpecName: "utilities") pod "41a512c3-74da-41d7-b63f-a03d9b505da6" (UID: "41a512c3-74da-41d7-b63f-a03d9b505da6"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 25 12:34:59 crc kubenswrapper[4706]: I1125 12:34:59.272079 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/41a512c3-74da-41d7-b63f-a03d9b505da6-kube-api-access-r2kqv" (OuterVolumeSpecName: "kube-api-access-r2kqv") pod "41a512c3-74da-41d7-b63f-a03d9b505da6" (UID: "41a512c3-74da-41d7-b63f-a03d9b505da6"). InnerVolumeSpecName "kube-api-access-r2kqv". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 25 12:34:59 crc kubenswrapper[4706]: I1125 12:34:59.312994 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/41a512c3-74da-41d7-b63f-a03d9b505da6-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "41a512c3-74da-41d7-b63f-a03d9b505da6" (UID: "41a512c3-74da-41d7-b63f-a03d9b505da6"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 25 12:34:59 crc kubenswrapper[4706]: I1125 12:34:59.363635 4706 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r2kqv\" (UniqueName: \"kubernetes.io/projected/41a512c3-74da-41d7-b63f-a03d9b505da6-kube-api-access-r2kqv\") on node \"crc\" DevicePath \"\""
Nov 25 12:34:59 crc kubenswrapper[4706]: I1125 12:34:59.363679 4706 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/41a512c3-74da-41d7-b63f-a03d9b505da6-catalog-content\") on node \"crc\" DevicePath \"\""
Nov 25 12:34:59 crc kubenswrapper[4706]: I1125 12:34:59.363692 4706 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/41a512c3-74da-41d7-b63f-a03d9b505da6-utilities\") on node \"crc\" DevicePath \"\""
Nov 25 12:34:59 crc kubenswrapper[4706]: I1125 12:34:59.656779 4706 generic.go:334] "Generic (PLEG): container finished" podID="41a512c3-74da-41d7-b63f-a03d9b505da6" containerID="6d266ee1d56cd924adc8e1461f2639d1f144583612ee7c9ef0ed43facaefc9e7" exitCode=0
Nov 25 12:34:59 crc kubenswrapper[4706]: I1125 12:34:59.656840 4706 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-79wlr"
Nov 25 12:34:59 crc kubenswrapper[4706]: I1125 12:34:59.656837 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-79wlr" event={"ID":"41a512c3-74da-41d7-b63f-a03d9b505da6","Type":"ContainerDied","Data":"6d266ee1d56cd924adc8e1461f2639d1f144583612ee7c9ef0ed43facaefc9e7"}
Nov 25 12:34:59 crc kubenswrapper[4706]: I1125 12:34:59.656978 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-79wlr" event={"ID":"41a512c3-74da-41d7-b63f-a03d9b505da6","Type":"ContainerDied","Data":"e5a9e6cfc461f10c342e0f56ad4eb3ba069d676516cea803874e2b070ce81429"}
Nov 25 12:34:59 crc kubenswrapper[4706]: I1125 12:34:59.657001 4706 scope.go:117] "RemoveContainer" containerID="6d266ee1d56cd924adc8e1461f2639d1f144583612ee7c9ef0ed43facaefc9e7"
Nov 25 12:34:59 crc kubenswrapper[4706]: I1125 12:34:59.691832 4706 scope.go:117] "RemoveContainer" containerID="b59286a438f319a76023e653add9900e8e4a459aa6687d51e987be5aa4ec542c"
Nov 25 12:34:59 crc kubenswrapper[4706]: I1125 12:34:59.695420 4706 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-79wlr"]
Nov 25 12:34:59 crc kubenswrapper[4706]: I1125 12:34:59.705319 4706 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-79wlr"]
Nov 25 12:34:59 crc kubenswrapper[4706]: I1125 12:34:59.713675 4706 scope.go:117] "RemoveContainer" containerID="6542b561e4a8f403c45f48c9e5a4d38fe616c91d8facec41009a2faf9afcbd4e"
Nov 25 12:34:59 crc kubenswrapper[4706]: I1125 12:34:59.752892 4706 scope.go:117] "RemoveContainer" containerID="6d266ee1d56cd924adc8e1461f2639d1f144583612ee7c9ef0ed43facaefc9e7"
Nov 25 12:34:59 crc kubenswrapper[4706]: E1125 12:34:59.753240 4706 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6d266ee1d56cd924adc8e1461f2639d1f144583612ee7c9ef0ed43facaefc9e7\": container with ID starting with 6d266ee1d56cd924adc8e1461f2639d1f144583612ee7c9ef0ed43facaefc9e7 not found: ID does not exist" containerID="6d266ee1d56cd924adc8e1461f2639d1f144583612ee7c9ef0ed43facaefc9e7"
Nov 25 12:34:59 crc kubenswrapper[4706]: I1125 12:34:59.753277 4706 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6d266ee1d56cd924adc8e1461f2639d1f144583612ee7c9ef0ed43facaefc9e7"} err="failed to get container status \"6d266ee1d56cd924adc8e1461f2639d1f144583612ee7c9ef0ed43facaefc9e7\": rpc error: code = NotFound desc = could not find container \"6d266ee1d56cd924adc8e1461f2639d1f144583612ee7c9ef0ed43facaefc9e7\": container with ID starting with 6d266ee1d56cd924adc8e1461f2639d1f144583612ee7c9ef0ed43facaefc9e7 not found: ID does not exist"
Nov 25 12:34:59 crc kubenswrapper[4706]: I1125 12:34:59.753329 4706 scope.go:117] "RemoveContainer" containerID="b59286a438f319a76023e653add9900e8e4a459aa6687d51e987be5aa4ec542c"
Nov 25 12:34:59 crc kubenswrapper[4706]: E1125 12:34:59.753642 4706 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b59286a438f319a76023e653add9900e8e4a459aa6687d51e987be5aa4ec542c\": container with ID starting with b59286a438f319a76023e653add9900e8e4a459aa6687d51e987be5aa4ec542c not found: ID does not exist" containerID="b59286a438f319a76023e653add9900e8e4a459aa6687d51e987be5aa4ec542c"
Nov 25 12:34:59 crc kubenswrapper[4706]: I1125 12:34:59.753689 4706 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b59286a438f319a76023e653add9900e8e4a459aa6687d51e987be5aa4ec542c"} err="failed to get container status \"b59286a438f319a76023e653add9900e8e4a459aa6687d51e987be5aa4ec542c\": rpc error: code = NotFound desc = could not find container \"b59286a438f319a76023e653add9900e8e4a459aa6687d51e987be5aa4ec542c\": container with ID starting with b59286a438f319a76023e653add9900e8e4a459aa6687d51e987be5aa4ec542c not found: ID does not exist"
Nov 25 12:34:59 crc kubenswrapper[4706]: I1125 12:34:59.753715 4706 scope.go:117] "RemoveContainer" containerID="6542b561e4a8f403c45f48c9e5a4d38fe616c91d8facec41009a2faf9afcbd4e"
Nov 25 12:34:59 crc kubenswrapper[4706]: E1125 12:34:59.754276 4706 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6542b561e4a8f403c45f48c9e5a4d38fe616c91d8facec41009a2faf9afcbd4e\": container with ID starting with 6542b561e4a8f403c45f48c9e5a4d38fe616c91d8facec41009a2faf9afcbd4e not found: ID does not exist" containerID="6542b561e4a8f403c45f48c9e5a4d38fe616c91d8facec41009a2faf9afcbd4e"
Nov 25 12:34:59 crc kubenswrapper[4706]: I1125 12:34:59.754350 4706 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6542b561e4a8f403c45f48c9e5a4d38fe616c91d8facec41009a2faf9afcbd4e"} err="failed to get container status \"6542b561e4a8f403c45f48c9e5a4d38fe616c91d8facec41009a2faf9afcbd4e\": rpc error: code = NotFound desc = could not find container \"6542b561e4a8f403c45f48c9e5a4d38fe616c91d8facec41009a2faf9afcbd4e\": container with ID starting with 6542b561e4a8f403c45f48c9e5a4d38fe616c91d8facec41009a2faf9afcbd4e not found: ID does not exist"
Nov 25 12:34:59 crc kubenswrapper[4706]: I1125 12:34:59.936525 4706 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="41a512c3-74da-41d7-b63f-a03d9b505da6" path="/var/lib/kubelet/pods/41a512c3-74da-41d7-b63f-a03d9b505da6/volumes"
Nov 25 12:35:01 crc kubenswrapper[4706]: I1125 12:35:01.125366 4706 patch_prober.go:28] interesting pod/machine-config-daemon-dhfpm container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Nov 25 12:35:01 crc kubenswrapper[4706]: I1125 12:35:01.125665 4706 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dhfpm" podUID="0930887a-320c-4506-8c9c-f94d6d64516a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Nov 25 12:35:01 crc kubenswrapper[4706]: I1125 12:35:01.125711 4706 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-dhfpm"
Nov 25 12:35:01 crc kubenswrapper[4706]: I1125 12:35:01.126424 4706 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"553914c0ba5726f4f1443ff74207fc011fc7a9c86c44d28b4aafc3ea2f6ab11b"} pod="openshift-machine-config-operator/machine-config-daemon-dhfpm" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Nov 25 12:35:01 crc kubenswrapper[4706]: I1125 12:35:01.126504 4706 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-dhfpm" podUID="0930887a-320c-4506-8c9c-f94d6d64516a" containerName="machine-config-daemon" containerID="cri-o://553914c0ba5726f4f1443ff74207fc011fc7a9c86c44d28b4aafc3ea2f6ab11b" gracePeriod=600
Nov 25 12:35:01 crc kubenswrapper[4706]: I1125 12:35:01.680598 4706 generic.go:334] "Generic (PLEG): container finished" podID="0930887a-320c-4506-8c9c-f94d6d64516a" containerID="553914c0ba5726f4f1443ff74207fc011fc7a9c86c44d28b4aafc3ea2f6ab11b" exitCode=0
Nov 25 12:35:01 crc kubenswrapper[4706]: I1125 12:35:01.680694 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-dhfpm" event={"ID":"0930887a-320c-4506-8c9c-f94d6d64516a","Type":"ContainerDied","Data":"553914c0ba5726f4f1443ff74207fc011fc7a9c86c44d28b4aafc3ea2f6ab11b"}
Nov 25 12:35:01 crc kubenswrapper[4706]: I1125 12:35:01.680936 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-dhfpm" event={"ID":"0930887a-320c-4506-8c9c-f94d6d64516a","Type":"ContainerStarted","Data":"f7d4f2bc57b2d7499bb910a36c7d647ec55fac45e9295616e11685165a93deff"}
Nov 25 12:35:01 crc kubenswrapper[4706]: I1125 12:35:01.680957 4706 scope.go:117] "RemoveContainer" containerID="d3fc72500aae4cf4d62aeac19c69abc79c3346f9d07b751d825f4be172d122de"
Nov 25 12:35:26 crc kubenswrapper[4706]: I1125 12:35:26.011819 4706 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-4rkr8"]
Nov 25 12:35:26 crc kubenswrapper[4706]: E1125 12:35:26.012719 4706 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="41a512c3-74da-41d7-b63f-a03d9b505da6" containerName="registry-server"
Nov 25 12:35:26 crc kubenswrapper[4706]: I1125 12:35:26.012735 4706 state_mem.go:107] "Deleted CPUSet assignment" podUID="41a512c3-74da-41d7-b63f-a03d9b505da6" containerName="registry-server"
Nov 25 12:35:26 crc kubenswrapper[4706]: E1125 12:35:26.012752 4706 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="41a512c3-74da-41d7-b63f-a03d9b505da6" containerName="extract-content"
Nov 25 12:35:26 crc kubenswrapper[4706]: I1125 12:35:26.012759 4706 state_mem.go:107] "Deleted CPUSet assignment" podUID="41a512c3-74da-41d7-b63f-a03d9b505da6" containerName="extract-content"
Nov 25 12:35:26 crc kubenswrapper[4706]: E1125 12:35:26.012781 4706 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="41a512c3-74da-41d7-b63f-a03d9b505da6" containerName="extract-utilities"
Nov 25 12:35:26 crc kubenswrapper[4706]: I1125 12:35:26.012790 4706 state_mem.go:107] "Deleted CPUSet assignment" podUID="41a512c3-74da-41d7-b63f-a03d9b505da6" containerName="extract-utilities"
Nov 25 12:35:26 crc kubenswrapper[4706]: I1125 12:35:26.013024 4706 memory_manager.go:354] "RemoveStaleState removing state" podUID="41a512c3-74da-41d7-b63f-a03d9b505da6" containerName="registry-server"
Nov 25 12:35:26 crc kubenswrapper[4706]: I1125 12:35:26.014601 4706 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-4rkr8"
Nov 25 12:35:26 crc kubenswrapper[4706]: I1125 12:35:26.034527 4706 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-4rkr8"]
Nov 25 12:35:26 crc kubenswrapper[4706]: I1125 12:35:26.071202 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fshl8\" (UniqueName: \"kubernetes.io/projected/b371911d-1b0e-4db4-9991-41bc71216956-kube-api-access-fshl8\") pod \"certified-operators-4rkr8\" (UID: \"b371911d-1b0e-4db4-9991-41bc71216956\") " pod="openshift-marketplace/certified-operators-4rkr8"
Nov 25 12:35:26 crc kubenswrapper[4706]: I1125 12:35:26.071599 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b371911d-1b0e-4db4-9991-41bc71216956-catalog-content\") pod \"certified-operators-4rkr8\" (UID: \"b371911d-1b0e-4db4-9991-41bc71216956\") " pod="openshift-marketplace/certified-operators-4rkr8"
Nov 25 12:35:26 crc kubenswrapper[4706]: I1125 12:35:26.071738 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b371911d-1b0e-4db4-9991-41bc71216956-utilities\") pod \"certified-operators-4rkr8\" (UID: \"b371911d-1b0e-4db4-9991-41bc71216956\") " pod="openshift-marketplace/certified-operators-4rkr8"
Nov 25 12:35:26 crc kubenswrapper[4706]: I1125 12:35:26.174093 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b371911d-1b0e-4db4-9991-41bc71216956-catalog-content\") pod \"certified-operators-4rkr8\" (UID: \"b371911d-1b0e-4db4-9991-41bc71216956\") " pod="openshift-marketplace/certified-operators-4rkr8"
Nov 25 12:35:26 crc kubenswrapper[4706]: I1125 12:35:26.174413 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b371911d-1b0e-4db4-9991-41bc71216956-utilities\") pod \"certified-operators-4rkr8\" (UID: \"b371911d-1b0e-4db4-9991-41bc71216956\") " pod="openshift-marketplace/certified-operators-4rkr8"
Nov 25 12:35:26 crc kubenswrapper[4706]: I1125 12:35:26.174726 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b371911d-1b0e-4db4-9991-41bc71216956-catalog-content\") pod \"certified-operators-4rkr8\" (UID: \"b371911d-1b0e-4db4-9991-41bc71216956\") " pod="openshift-marketplace/certified-operators-4rkr8"
Nov 25 12:35:26 crc kubenswrapper[4706]: I1125 12:35:26.174731 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fshl8\" (UniqueName: \"kubernetes.io/projected/b371911d-1b0e-4db4-9991-41bc71216956-kube-api-access-fshl8\") pod \"certified-operators-4rkr8\" (UID: \"b371911d-1b0e-4db4-9991-41bc71216956\") " pod="openshift-marketplace/certified-operators-4rkr8"
Nov 25 12:35:26 crc kubenswrapper[4706]: I1125 12:35:26.174877 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b371911d-1b0e-4db4-9991-41bc71216956-utilities\") pod \"certified-operators-4rkr8\" (UID: \"b371911d-1b0e-4db4-9991-41bc71216956\") " pod="openshift-marketplace/certified-operators-4rkr8"
Nov 25 12:35:26 crc kubenswrapper[4706]: I1125 12:35:26.199208 4706 operation_generator.go:637]
"MountVolume.SetUp succeeded for volume \"kube-api-access-fshl8\" (UniqueName: \"kubernetes.io/projected/b371911d-1b0e-4db4-9991-41bc71216956-kube-api-access-fshl8\") pod \"certified-operators-4rkr8\" (UID: \"b371911d-1b0e-4db4-9991-41bc71216956\") " pod="openshift-marketplace/certified-operators-4rkr8" Nov 25 12:35:26 crc kubenswrapper[4706]: I1125 12:35:26.333045 4706 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-4rkr8" Nov 25 12:35:26 crc kubenswrapper[4706]: I1125 12:35:26.939859 4706 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-4rkr8"] Nov 25 12:35:27 crc kubenswrapper[4706]: I1125 12:35:27.913493 4706 generic.go:334] "Generic (PLEG): container finished" podID="b371911d-1b0e-4db4-9991-41bc71216956" containerID="368d0c6a21e6dd5588d4e9d1637ef2b082768da3908d95c6d2920a2dfe27b900" exitCode=0 Nov 25 12:35:27 crc kubenswrapper[4706]: I1125 12:35:27.913792 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4rkr8" event={"ID":"b371911d-1b0e-4db4-9991-41bc71216956","Type":"ContainerDied","Data":"368d0c6a21e6dd5588d4e9d1637ef2b082768da3908d95c6d2920a2dfe27b900"} Nov 25 12:35:27 crc kubenswrapper[4706]: I1125 12:35:27.913824 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4rkr8" event={"ID":"b371911d-1b0e-4db4-9991-41bc71216956","Type":"ContainerStarted","Data":"9b0ad4e5def3c8fc241b90f5c2361cfbe925dd83bdd3ee88c18a56049980ef1e"} Nov 25 12:35:28 crc kubenswrapper[4706]: I1125 12:35:28.935644 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4rkr8" event={"ID":"b371911d-1b0e-4db4-9991-41bc71216956","Type":"ContainerStarted","Data":"d4ff4652171bddf9879bfde50098481c197068db6b3b4be565ba024ef836644a"} Nov 25 12:35:29 crc kubenswrapper[4706]: I1125 12:35:29.946545 4706 generic.go:334] "Generic (PLEG): 
container finished" podID="b371911d-1b0e-4db4-9991-41bc71216956" containerID="d4ff4652171bddf9879bfde50098481c197068db6b3b4be565ba024ef836644a" exitCode=0 Nov 25 12:35:29 crc kubenswrapper[4706]: I1125 12:35:29.946622 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4rkr8" event={"ID":"b371911d-1b0e-4db4-9991-41bc71216956","Type":"ContainerDied","Data":"d4ff4652171bddf9879bfde50098481c197068db6b3b4be565ba024ef836644a"} Nov 25 12:35:30 crc kubenswrapper[4706]: I1125 12:35:30.957208 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4rkr8" event={"ID":"b371911d-1b0e-4db4-9991-41bc71216956","Type":"ContainerStarted","Data":"5fe5bf9034dc0194fe1b558cb3ff88014c142b830458ee332e0ed7735a3a6be6"} Nov 25 12:35:30 crc kubenswrapper[4706]: I1125 12:35:30.986699 4706 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-4rkr8" podStartSLOduration=3.414114263 podStartE2EDuration="5.986680552s" podCreationTimestamp="2025-11-25 12:35:25 +0000 UTC" firstStartedPulling="2025-11-25 12:35:27.915710441 +0000 UTC m=+3536.830267822" lastFinishedPulling="2025-11-25 12:35:30.48827673 +0000 UTC m=+3539.402834111" observedRunningTime="2025-11-25 12:35:30.976840775 +0000 UTC m=+3539.891398176" watchObservedRunningTime="2025-11-25 12:35:30.986680552 +0000 UTC m=+3539.901237933" Nov 25 12:35:36 crc kubenswrapper[4706]: I1125 12:35:36.333382 4706 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-4rkr8" Nov 25 12:35:36 crc kubenswrapper[4706]: I1125 12:35:36.333772 4706 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-4rkr8" Nov 25 12:35:36 crc kubenswrapper[4706]: I1125 12:35:36.383657 4706 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openshift-marketplace/certified-operators-4rkr8" Nov 25 12:35:37 crc kubenswrapper[4706]: I1125 12:35:37.060009 4706 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-4rkr8" Nov 25 12:35:37 crc kubenswrapper[4706]: I1125 12:35:37.107267 4706 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-4rkr8"] Nov 25 12:35:39 crc kubenswrapper[4706]: I1125 12:35:39.027788 4706 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-4rkr8" podUID="b371911d-1b0e-4db4-9991-41bc71216956" containerName="registry-server" containerID="cri-o://5fe5bf9034dc0194fe1b558cb3ff88014c142b830458ee332e0ed7735a3a6be6" gracePeriod=2 Nov 25 12:35:39 crc kubenswrapper[4706]: I1125 12:35:39.556132 4706 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-4rkr8" Nov 25 12:35:39 crc kubenswrapper[4706]: I1125 12:35:39.693127 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b371911d-1b0e-4db4-9991-41bc71216956-utilities\") pod \"b371911d-1b0e-4db4-9991-41bc71216956\" (UID: \"b371911d-1b0e-4db4-9991-41bc71216956\") " Nov 25 12:35:39 crc kubenswrapper[4706]: I1125 12:35:39.693228 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fshl8\" (UniqueName: \"kubernetes.io/projected/b371911d-1b0e-4db4-9991-41bc71216956-kube-api-access-fshl8\") pod \"b371911d-1b0e-4db4-9991-41bc71216956\" (UID: \"b371911d-1b0e-4db4-9991-41bc71216956\") " Nov 25 12:35:39 crc kubenswrapper[4706]: I1125 12:35:39.693273 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b371911d-1b0e-4db4-9991-41bc71216956-catalog-content\") pod 
\"b371911d-1b0e-4db4-9991-41bc71216956\" (UID: \"b371911d-1b0e-4db4-9991-41bc71216956\") " Nov 25 12:35:39 crc kubenswrapper[4706]: I1125 12:35:39.694542 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b371911d-1b0e-4db4-9991-41bc71216956-utilities" (OuterVolumeSpecName: "utilities") pod "b371911d-1b0e-4db4-9991-41bc71216956" (UID: "b371911d-1b0e-4db4-9991-41bc71216956"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 12:35:39 crc kubenswrapper[4706]: I1125 12:35:39.703579 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b371911d-1b0e-4db4-9991-41bc71216956-kube-api-access-fshl8" (OuterVolumeSpecName: "kube-api-access-fshl8") pod "b371911d-1b0e-4db4-9991-41bc71216956" (UID: "b371911d-1b0e-4db4-9991-41bc71216956"). InnerVolumeSpecName "kube-api-access-fshl8". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 12:35:39 crc kubenswrapper[4706]: I1125 12:35:39.796046 4706 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fshl8\" (UniqueName: \"kubernetes.io/projected/b371911d-1b0e-4db4-9991-41bc71216956-kube-api-access-fshl8\") on node \"crc\" DevicePath \"\"" Nov 25 12:35:39 crc kubenswrapper[4706]: I1125 12:35:39.796075 4706 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b371911d-1b0e-4db4-9991-41bc71216956-utilities\") on node \"crc\" DevicePath \"\"" Nov 25 12:35:40 crc kubenswrapper[4706]: I1125 12:35:40.042517 4706 generic.go:334] "Generic (PLEG): container finished" podID="b371911d-1b0e-4db4-9991-41bc71216956" containerID="5fe5bf9034dc0194fe1b558cb3ff88014c142b830458ee332e0ed7735a3a6be6" exitCode=0 Nov 25 12:35:40 crc kubenswrapper[4706]: I1125 12:35:40.042600 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4rkr8" 
event={"ID":"b371911d-1b0e-4db4-9991-41bc71216956","Type":"ContainerDied","Data":"5fe5bf9034dc0194fe1b558cb3ff88014c142b830458ee332e0ed7735a3a6be6"} Nov 25 12:35:40 crc kubenswrapper[4706]: I1125 12:35:40.042639 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4rkr8" event={"ID":"b371911d-1b0e-4db4-9991-41bc71216956","Type":"ContainerDied","Data":"9b0ad4e5def3c8fc241b90f5c2361cfbe925dd83bdd3ee88c18a56049980ef1e"} Nov 25 12:35:40 crc kubenswrapper[4706]: I1125 12:35:40.042636 4706 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-4rkr8" Nov 25 12:35:40 crc kubenswrapper[4706]: I1125 12:35:40.042660 4706 scope.go:117] "RemoveContainer" containerID="5fe5bf9034dc0194fe1b558cb3ff88014c142b830458ee332e0ed7735a3a6be6" Nov 25 12:35:40 crc kubenswrapper[4706]: I1125 12:35:40.069681 4706 scope.go:117] "RemoveContainer" containerID="d4ff4652171bddf9879bfde50098481c197068db6b3b4be565ba024ef836644a" Nov 25 12:35:40 crc kubenswrapper[4706]: I1125 12:35:40.091915 4706 scope.go:117] "RemoveContainer" containerID="368d0c6a21e6dd5588d4e9d1637ef2b082768da3908d95c6d2920a2dfe27b900" Nov 25 12:35:40 crc kubenswrapper[4706]: I1125 12:35:40.151687 4706 scope.go:117] "RemoveContainer" containerID="5fe5bf9034dc0194fe1b558cb3ff88014c142b830458ee332e0ed7735a3a6be6" Nov 25 12:35:40 crc kubenswrapper[4706]: E1125 12:35:40.152548 4706 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5fe5bf9034dc0194fe1b558cb3ff88014c142b830458ee332e0ed7735a3a6be6\": container with ID starting with 5fe5bf9034dc0194fe1b558cb3ff88014c142b830458ee332e0ed7735a3a6be6 not found: ID does not exist" containerID="5fe5bf9034dc0194fe1b558cb3ff88014c142b830458ee332e0ed7735a3a6be6" Nov 25 12:35:40 crc kubenswrapper[4706]: I1125 12:35:40.152598 4706 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"5fe5bf9034dc0194fe1b558cb3ff88014c142b830458ee332e0ed7735a3a6be6"} err="failed to get container status \"5fe5bf9034dc0194fe1b558cb3ff88014c142b830458ee332e0ed7735a3a6be6\": rpc error: code = NotFound desc = could not find container \"5fe5bf9034dc0194fe1b558cb3ff88014c142b830458ee332e0ed7735a3a6be6\": container with ID starting with 5fe5bf9034dc0194fe1b558cb3ff88014c142b830458ee332e0ed7735a3a6be6 not found: ID does not exist" Nov 25 12:35:40 crc kubenswrapper[4706]: I1125 12:35:40.152629 4706 scope.go:117] "RemoveContainer" containerID="d4ff4652171bddf9879bfde50098481c197068db6b3b4be565ba024ef836644a" Nov 25 12:35:40 crc kubenswrapper[4706]: E1125 12:35:40.153107 4706 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d4ff4652171bddf9879bfde50098481c197068db6b3b4be565ba024ef836644a\": container with ID starting with d4ff4652171bddf9879bfde50098481c197068db6b3b4be565ba024ef836644a not found: ID does not exist" containerID="d4ff4652171bddf9879bfde50098481c197068db6b3b4be565ba024ef836644a" Nov 25 12:35:40 crc kubenswrapper[4706]: I1125 12:35:40.153167 4706 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d4ff4652171bddf9879bfde50098481c197068db6b3b4be565ba024ef836644a"} err="failed to get container status \"d4ff4652171bddf9879bfde50098481c197068db6b3b4be565ba024ef836644a\": rpc error: code = NotFound desc = could not find container \"d4ff4652171bddf9879bfde50098481c197068db6b3b4be565ba024ef836644a\": container with ID starting with d4ff4652171bddf9879bfde50098481c197068db6b3b4be565ba024ef836644a not found: ID does not exist" Nov 25 12:35:40 crc kubenswrapper[4706]: I1125 12:35:40.153216 4706 scope.go:117] "RemoveContainer" containerID="368d0c6a21e6dd5588d4e9d1637ef2b082768da3908d95c6d2920a2dfe27b900" Nov 25 12:35:40 crc kubenswrapper[4706]: E1125 12:35:40.153712 4706 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"368d0c6a21e6dd5588d4e9d1637ef2b082768da3908d95c6d2920a2dfe27b900\": container with ID starting with 368d0c6a21e6dd5588d4e9d1637ef2b082768da3908d95c6d2920a2dfe27b900 not found: ID does not exist" containerID="368d0c6a21e6dd5588d4e9d1637ef2b082768da3908d95c6d2920a2dfe27b900" Nov 25 12:35:40 crc kubenswrapper[4706]: I1125 12:35:40.153747 4706 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"368d0c6a21e6dd5588d4e9d1637ef2b082768da3908d95c6d2920a2dfe27b900"} err="failed to get container status \"368d0c6a21e6dd5588d4e9d1637ef2b082768da3908d95c6d2920a2dfe27b900\": rpc error: code = NotFound desc = could not find container \"368d0c6a21e6dd5588d4e9d1637ef2b082768da3908d95c6d2920a2dfe27b900\": container with ID starting with 368d0c6a21e6dd5588d4e9d1637ef2b082768da3908d95c6d2920a2dfe27b900 not found: ID does not exist" Nov 25 12:35:40 crc kubenswrapper[4706]: I1125 12:35:40.552166 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b371911d-1b0e-4db4-9991-41bc71216956-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b371911d-1b0e-4db4-9991-41bc71216956" (UID: "b371911d-1b0e-4db4-9991-41bc71216956"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 12:35:40 crc kubenswrapper[4706]: I1125 12:35:40.611397 4706 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b371911d-1b0e-4db4-9991-41bc71216956-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 25 12:35:40 crc kubenswrapper[4706]: I1125 12:35:40.680558 4706 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-4rkr8"] Nov 25 12:35:40 crc kubenswrapper[4706]: I1125 12:35:40.690251 4706 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-4rkr8"] Nov 25 12:35:41 crc kubenswrapper[4706]: I1125 12:35:41.947781 4706 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b371911d-1b0e-4db4-9991-41bc71216956" path="/var/lib/kubelet/pods/b371911d-1b0e-4db4-9991-41bc71216956/volumes" Nov 25 12:37:01 crc kubenswrapper[4706]: I1125 12:37:01.125458 4706 patch_prober.go:28] interesting pod/machine-config-daemon-dhfpm container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 25 12:37:01 crc kubenswrapper[4706]: I1125 12:37:01.126017 4706 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dhfpm" podUID="0930887a-320c-4506-8c9c-f94d6d64516a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 25 12:37:25 crc kubenswrapper[4706]: I1125 12:37:25.721947 4706 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-pr8ng"] Nov 25 12:37:25 crc kubenswrapper[4706]: E1125 12:37:25.722811 4706 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="b371911d-1b0e-4db4-9991-41bc71216956" containerName="registry-server" Nov 25 12:37:25 crc kubenswrapper[4706]: I1125 12:37:25.722823 4706 state_mem.go:107] "Deleted CPUSet assignment" podUID="b371911d-1b0e-4db4-9991-41bc71216956" containerName="registry-server" Nov 25 12:37:25 crc kubenswrapper[4706]: E1125 12:37:25.722832 4706 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b371911d-1b0e-4db4-9991-41bc71216956" containerName="extract-content" Nov 25 12:37:25 crc kubenswrapper[4706]: I1125 12:37:25.722838 4706 state_mem.go:107] "Deleted CPUSet assignment" podUID="b371911d-1b0e-4db4-9991-41bc71216956" containerName="extract-content" Nov 25 12:37:25 crc kubenswrapper[4706]: E1125 12:37:25.722855 4706 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b371911d-1b0e-4db4-9991-41bc71216956" containerName="extract-utilities" Nov 25 12:37:25 crc kubenswrapper[4706]: I1125 12:37:25.722861 4706 state_mem.go:107] "Deleted CPUSet assignment" podUID="b371911d-1b0e-4db4-9991-41bc71216956" containerName="extract-utilities" Nov 25 12:37:25 crc kubenswrapper[4706]: I1125 12:37:25.723077 4706 memory_manager.go:354] "RemoveStaleState removing state" podUID="b371911d-1b0e-4db4-9991-41bc71216956" containerName="registry-server" Nov 25 12:37:25 crc kubenswrapper[4706]: I1125 12:37:25.743874 4706 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-pr8ng" Nov 25 12:37:25 crc kubenswrapper[4706]: I1125 12:37:25.766624 4706 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-pr8ng"] Nov 25 12:37:25 crc kubenswrapper[4706]: I1125 12:37:25.942624 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gxl54\" (UniqueName: \"kubernetes.io/projected/b7729b92-c527-4193-a53a-9e99e161ffc8-kube-api-access-gxl54\") pod \"redhat-operators-pr8ng\" (UID: \"b7729b92-c527-4193-a53a-9e99e161ffc8\") " pod="openshift-marketplace/redhat-operators-pr8ng" Nov 25 12:37:25 crc kubenswrapper[4706]: I1125 12:37:25.943116 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b7729b92-c527-4193-a53a-9e99e161ffc8-catalog-content\") pod \"redhat-operators-pr8ng\" (UID: \"b7729b92-c527-4193-a53a-9e99e161ffc8\") " pod="openshift-marketplace/redhat-operators-pr8ng" Nov 25 12:37:25 crc kubenswrapper[4706]: I1125 12:37:25.943394 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b7729b92-c527-4193-a53a-9e99e161ffc8-utilities\") pod \"redhat-operators-pr8ng\" (UID: \"b7729b92-c527-4193-a53a-9e99e161ffc8\") " pod="openshift-marketplace/redhat-operators-pr8ng" Nov 25 12:37:26 crc kubenswrapper[4706]: I1125 12:37:26.045678 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b7729b92-c527-4193-a53a-9e99e161ffc8-utilities\") pod \"redhat-operators-pr8ng\" (UID: \"b7729b92-c527-4193-a53a-9e99e161ffc8\") " pod="openshift-marketplace/redhat-operators-pr8ng" Nov 25 12:37:26 crc kubenswrapper[4706]: I1125 12:37:26.045848 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-gxl54\" (UniqueName: \"kubernetes.io/projected/b7729b92-c527-4193-a53a-9e99e161ffc8-kube-api-access-gxl54\") pod \"redhat-operators-pr8ng\" (UID: \"b7729b92-c527-4193-a53a-9e99e161ffc8\") " pod="openshift-marketplace/redhat-operators-pr8ng" Nov 25 12:37:26 crc kubenswrapper[4706]: I1125 12:37:26.045873 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b7729b92-c527-4193-a53a-9e99e161ffc8-catalog-content\") pod \"redhat-operators-pr8ng\" (UID: \"b7729b92-c527-4193-a53a-9e99e161ffc8\") " pod="openshift-marketplace/redhat-operators-pr8ng" Nov 25 12:37:26 crc kubenswrapper[4706]: I1125 12:37:26.046387 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b7729b92-c527-4193-a53a-9e99e161ffc8-catalog-content\") pod \"redhat-operators-pr8ng\" (UID: \"b7729b92-c527-4193-a53a-9e99e161ffc8\") " pod="openshift-marketplace/redhat-operators-pr8ng" Nov 25 12:37:26 crc kubenswrapper[4706]: I1125 12:37:26.046658 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b7729b92-c527-4193-a53a-9e99e161ffc8-utilities\") pod \"redhat-operators-pr8ng\" (UID: \"b7729b92-c527-4193-a53a-9e99e161ffc8\") " pod="openshift-marketplace/redhat-operators-pr8ng" Nov 25 12:37:26 crc kubenswrapper[4706]: I1125 12:37:26.066466 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gxl54\" (UniqueName: \"kubernetes.io/projected/b7729b92-c527-4193-a53a-9e99e161ffc8-kube-api-access-gxl54\") pod \"redhat-operators-pr8ng\" (UID: \"b7729b92-c527-4193-a53a-9e99e161ffc8\") " pod="openshift-marketplace/redhat-operators-pr8ng" Nov 25 12:37:26 crc kubenswrapper[4706]: I1125 12:37:26.079834 4706 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-pr8ng" Nov 25 12:37:26 crc kubenswrapper[4706]: I1125 12:37:26.564219 4706 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-pr8ng"] Nov 25 12:37:27 crc kubenswrapper[4706]: I1125 12:37:27.386703 4706 generic.go:334] "Generic (PLEG): container finished" podID="b7729b92-c527-4193-a53a-9e99e161ffc8" containerID="d60c6d37b49393df65d48ae822cca958f00b7f3bd0e509384cf40657eb64e4b1" exitCode=0 Nov 25 12:37:27 crc kubenswrapper[4706]: I1125 12:37:27.386778 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pr8ng" event={"ID":"b7729b92-c527-4193-a53a-9e99e161ffc8","Type":"ContainerDied","Data":"d60c6d37b49393df65d48ae822cca958f00b7f3bd0e509384cf40657eb64e4b1"} Nov 25 12:37:27 crc kubenswrapper[4706]: I1125 12:37:27.387002 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pr8ng" event={"ID":"b7729b92-c527-4193-a53a-9e99e161ffc8","Type":"ContainerStarted","Data":"14548d236014f6074c55470a658e3caa4996a7224ad86c15128a3a1a507dd135"} Nov 25 12:37:29 crc kubenswrapper[4706]: I1125 12:37:29.409142 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pr8ng" event={"ID":"b7729b92-c527-4193-a53a-9e99e161ffc8","Type":"ContainerStarted","Data":"75138948679cdfeb89875b2b576664a4a13ca0f05a89478ea5f98777ea2a42f6"} Nov 25 12:37:31 crc kubenswrapper[4706]: I1125 12:37:31.125479 4706 patch_prober.go:28] interesting pod/machine-config-daemon-dhfpm container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 25 12:37:31 crc kubenswrapper[4706]: I1125 12:37:31.125556 4706 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openshift-machine-config-operator/machine-config-daemon-dhfpm" podUID="0930887a-320c-4506-8c9c-f94d6d64516a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 25 12:37:35 crc kubenswrapper[4706]: I1125 12:37:35.464692 4706 generic.go:334] "Generic (PLEG): container finished" podID="b7729b92-c527-4193-a53a-9e99e161ffc8" containerID="75138948679cdfeb89875b2b576664a4a13ca0f05a89478ea5f98777ea2a42f6" exitCode=0 Nov 25 12:37:35 crc kubenswrapper[4706]: I1125 12:37:35.464792 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pr8ng" event={"ID":"b7729b92-c527-4193-a53a-9e99e161ffc8","Type":"ContainerDied","Data":"75138948679cdfeb89875b2b576664a4a13ca0f05a89478ea5f98777ea2a42f6"} Nov 25 12:37:36 crc kubenswrapper[4706]: I1125 12:37:36.487689 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pr8ng" event={"ID":"b7729b92-c527-4193-a53a-9e99e161ffc8","Type":"ContainerStarted","Data":"6f74dda2c2f0164de514d03e65375e1b2b9ef816620eca175ad662faaf36ff72"} Nov 25 12:37:36 crc kubenswrapper[4706]: I1125 12:37:36.515444 4706 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-pr8ng" podStartSLOduration=2.8128686099999998 podStartE2EDuration="11.515422663s" podCreationTimestamp="2025-11-25 12:37:25 +0000 UTC" firstStartedPulling="2025-11-25 12:37:27.388721358 +0000 UTC m=+3656.303278739" lastFinishedPulling="2025-11-25 12:37:36.091275411 +0000 UTC m=+3665.005832792" observedRunningTime="2025-11-25 12:37:36.509426952 +0000 UTC m=+3665.423984333" watchObservedRunningTime="2025-11-25 12:37:36.515422663 +0000 UTC m=+3665.429980044" Nov 25 12:37:46 crc kubenswrapper[4706]: I1125 12:37:46.080761 4706 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-pr8ng" Nov 25 
12:37:46 crc kubenswrapper[4706]: I1125 12:37:46.081326 4706 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-pr8ng" Nov 25 12:37:47 crc kubenswrapper[4706]: I1125 12:37:47.127640 4706 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-pr8ng" podUID="b7729b92-c527-4193-a53a-9e99e161ffc8" containerName="registry-server" probeResult="failure" output=< Nov 25 12:37:47 crc kubenswrapper[4706]: timeout: failed to connect service ":50051" within 1s Nov 25 12:37:47 crc kubenswrapper[4706]: > Nov 25 12:37:57 crc kubenswrapper[4706]: I1125 12:37:57.136892 4706 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-pr8ng" podUID="b7729b92-c527-4193-a53a-9e99e161ffc8" containerName="registry-server" probeResult="failure" output=< Nov 25 12:37:57 crc kubenswrapper[4706]: timeout: failed to connect service ":50051" within 1s Nov 25 12:37:57 crc kubenswrapper[4706]: > Nov 25 12:38:01 crc kubenswrapper[4706]: I1125 12:38:01.125576 4706 patch_prober.go:28] interesting pod/machine-config-daemon-dhfpm container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 25 12:38:01 crc kubenswrapper[4706]: I1125 12:38:01.126100 4706 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dhfpm" podUID="0930887a-320c-4506-8c9c-f94d6d64516a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 25 12:38:01 crc kubenswrapper[4706]: I1125 12:38:01.126157 4706 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-dhfpm" Nov 25 12:38:01 crc 
kubenswrapper[4706]: I1125 12:38:01.127061 4706 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"f7d4f2bc57b2d7499bb910a36c7d647ec55fac45e9295616e11685165a93deff"} pod="openshift-machine-config-operator/machine-config-daemon-dhfpm" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 25 12:38:01 crc kubenswrapper[4706]: I1125 12:38:01.127116 4706 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-dhfpm" podUID="0930887a-320c-4506-8c9c-f94d6d64516a" containerName="machine-config-daemon" containerID="cri-o://f7d4f2bc57b2d7499bb910a36c7d647ec55fac45e9295616e11685165a93deff" gracePeriod=600 Nov 25 12:38:01 crc kubenswrapper[4706]: E1125 12:38:01.255774 4706 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dhfpm_openshift-machine-config-operator(0930887a-320c-4506-8c9c-f94d6d64516a)\"" pod="openshift-machine-config-operator/machine-config-daemon-dhfpm" podUID="0930887a-320c-4506-8c9c-f94d6d64516a" Nov 25 12:38:01 crc kubenswrapper[4706]: I1125 12:38:01.717613 4706 generic.go:334] "Generic (PLEG): container finished" podID="0930887a-320c-4506-8c9c-f94d6d64516a" containerID="f7d4f2bc57b2d7499bb910a36c7d647ec55fac45e9295616e11685165a93deff" exitCode=0 Nov 25 12:38:01 crc kubenswrapper[4706]: I1125 12:38:01.717887 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-dhfpm" event={"ID":"0930887a-320c-4506-8c9c-f94d6d64516a","Type":"ContainerDied","Data":"f7d4f2bc57b2d7499bb910a36c7d647ec55fac45e9295616e11685165a93deff"} Nov 25 12:38:01 crc kubenswrapper[4706]: I1125 12:38:01.717920 4706 scope.go:117] "RemoveContainer" 
containerID="553914c0ba5726f4f1443ff74207fc011fc7a9c86c44d28b4aafc3ea2f6ab11b" Nov 25 12:38:01 crc kubenswrapper[4706]: I1125 12:38:01.718514 4706 scope.go:117] "RemoveContainer" containerID="f7d4f2bc57b2d7499bb910a36c7d647ec55fac45e9295616e11685165a93deff" Nov 25 12:38:01 crc kubenswrapper[4706]: E1125 12:38:01.718800 4706 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dhfpm_openshift-machine-config-operator(0930887a-320c-4506-8c9c-f94d6d64516a)\"" pod="openshift-machine-config-operator/machine-config-daemon-dhfpm" podUID="0930887a-320c-4506-8c9c-f94d6d64516a" Nov 25 12:38:06 crc kubenswrapper[4706]: I1125 12:38:06.135024 4706 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-pr8ng" Nov 25 12:38:06 crc kubenswrapper[4706]: I1125 12:38:06.199942 4706 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-pr8ng" Nov 25 12:38:06 crc kubenswrapper[4706]: I1125 12:38:06.385636 4706 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-pr8ng"] Nov 25 12:38:07 crc kubenswrapper[4706]: I1125 12:38:07.780771 4706 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-pr8ng" podUID="b7729b92-c527-4193-a53a-9e99e161ffc8" containerName="registry-server" containerID="cri-o://6f74dda2c2f0164de514d03e65375e1b2b9ef816620eca175ad662faaf36ff72" gracePeriod=2 Nov 25 12:38:08 crc kubenswrapper[4706]: I1125 12:38:08.292154 4706 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-pr8ng" Nov 25 12:38:08 crc kubenswrapper[4706]: I1125 12:38:08.372013 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b7729b92-c527-4193-a53a-9e99e161ffc8-utilities\") pod \"b7729b92-c527-4193-a53a-9e99e161ffc8\" (UID: \"b7729b92-c527-4193-a53a-9e99e161ffc8\") " Nov 25 12:38:08 crc kubenswrapper[4706]: I1125 12:38:08.372209 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b7729b92-c527-4193-a53a-9e99e161ffc8-catalog-content\") pod \"b7729b92-c527-4193-a53a-9e99e161ffc8\" (UID: \"b7729b92-c527-4193-a53a-9e99e161ffc8\") " Nov 25 12:38:08 crc kubenswrapper[4706]: I1125 12:38:08.372259 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gxl54\" (UniqueName: \"kubernetes.io/projected/b7729b92-c527-4193-a53a-9e99e161ffc8-kube-api-access-gxl54\") pod \"b7729b92-c527-4193-a53a-9e99e161ffc8\" (UID: \"b7729b92-c527-4193-a53a-9e99e161ffc8\") " Nov 25 12:38:08 crc kubenswrapper[4706]: I1125 12:38:08.373247 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b7729b92-c527-4193-a53a-9e99e161ffc8-utilities" (OuterVolumeSpecName: "utilities") pod "b7729b92-c527-4193-a53a-9e99e161ffc8" (UID: "b7729b92-c527-4193-a53a-9e99e161ffc8"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 12:38:08 crc kubenswrapper[4706]: I1125 12:38:08.379851 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b7729b92-c527-4193-a53a-9e99e161ffc8-kube-api-access-gxl54" (OuterVolumeSpecName: "kube-api-access-gxl54") pod "b7729b92-c527-4193-a53a-9e99e161ffc8" (UID: "b7729b92-c527-4193-a53a-9e99e161ffc8"). InnerVolumeSpecName "kube-api-access-gxl54". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 12:38:08 crc kubenswrapper[4706]: I1125 12:38:08.472670 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b7729b92-c527-4193-a53a-9e99e161ffc8-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b7729b92-c527-4193-a53a-9e99e161ffc8" (UID: "b7729b92-c527-4193-a53a-9e99e161ffc8"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 12:38:08 crc kubenswrapper[4706]: I1125 12:38:08.474962 4706 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b7729b92-c527-4193-a53a-9e99e161ffc8-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 25 12:38:08 crc kubenswrapper[4706]: I1125 12:38:08.475028 4706 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gxl54\" (UniqueName: \"kubernetes.io/projected/b7729b92-c527-4193-a53a-9e99e161ffc8-kube-api-access-gxl54\") on node \"crc\" DevicePath \"\"" Nov 25 12:38:08 crc kubenswrapper[4706]: I1125 12:38:08.475046 4706 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b7729b92-c527-4193-a53a-9e99e161ffc8-utilities\") on node \"crc\" DevicePath \"\"" Nov 25 12:38:08 crc kubenswrapper[4706]: I1125 12:38:08.791126 4706 generic.go:334] "Generic (PLEG): container finished" podID="b7729b92-c527-4193-a53a-9e99e161ffc8" containerID="6f74dda2c2f0164de514d03e65375e1b2b9ef816620eca175ad662faaf36ff72" exitCode=0 Nov 25 12:38:08 crc kubenswrapper[4706]: I1125 12:38:08.791185 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pr8ng" event={"ID":"b7729b92-c527-4193-a53a-9e99e161ffc8","Type":"ContainerDied","Data":"6f74dda2c2f0164de514d03e65375e1b2b9ef816620eca175ad662faaf36ff72"} Nov 25 12:38:08 crc kubenswrapper[4706]: I1125 12:38:08.791215 4706 util.go:48] "No ready sandbox for pod can be 
found. Need to start a new one" pod="openshift-marketplace/redhat-operators-pr8ng" Nov 25 12:38:08 crc kubenswrapper[4706]: I1125 12:38:08.791234 4706 scope.go:117] "RemoveContainer" containerID="6f74dda2c2f0164de514d03e65375e1b2b9ef816620eca175ad662faaf36ff72" Nov 25 12:38:08 crc kubenswrapper[4706]: I1125 12:38:08.791222 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pr8ng" event={"ID":"b7729b92-c527-4193-a53a-9e99e161ffc8","Type":"ContainerDied","Data":"14548d236014f6074c55470a658e3caa4996a7224ad86c15128a3a1a507dd135"} Nov 25 12:38:08 crc kubenswrapper[4706]: I1125 12:38:08.816191 4706 scope.go:117] "RemoveContainer" containerID="75138948679cdfeb89875b2b576664a4a13ca0f05a89478ea5f98777ea2a42f6" Nov 25 12:38:08 crc kubenswrapper[4706]: I1125 12:38:08.826022 4706 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-pr8ng"] Nov 25 12:38:08 crc kubenswrapper[4706]: I1125 12:38:08.835690 4706 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-pr8ng"] Nov 25 12:38:08 crc kubenswrapper[4706]: I1125 12:38:08.842350 4706 scope.go:117] "RemoveContainer" containerID="d60c6d37b49393df65d48ae822cca958f00b7f3bd0e509384cf40657eb64e4b1" Nov 25 12:38:08 crc kubenswrapper[4706]: I1125 12:38:08.893606 4706 scope.go:117] "RemoveContainer" containerID="6f74dda2c2f0164de514d03e65375e1b2b9ef816620eca175ad662faaf36ff72" Nov 25 12:38:08 crc kubenswrapper[4706]: E1125 12:38:08.894046 4706 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6f74dda2c2f0164de514d03e65375e1b2b9ef816620eca175ad662faaf36ff72\": container with ID starting with 6f74dda2c2f0164de514d03e65375e1b2b9ef816620eca175ad662faaf36ff72 not found: ID does not exist" containerID="6f74dda2c2f0164de514d03e65375e1b2b9ef816620eca175ad662faaf36ff72" Nov 25 12:38:08 crc kubenswrapper[4706]: I1125 12:38:08.894105 4706 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6f74dda2c2f0164de514d03e65375e1b2b9ef816620eca175ad662faaf36ff72"} err="failed to get container status \"6f74dda2c2f0164de514d03e65375e1b2b9ef816620eca175ad662faaf36ff72\": rpc error: code = NotFound desc = could not find container \"6f74dda2c2f0164de514d03e65375e1b2b9ef816620eca175ad662faaf36ff72\": container with ID starting with 6f74dda2c2f0164de514d03e65375e1b2b9ef816620eca175ad662faaf36ff72 not found: ID does not exist" Nov 25 12:38:08 crc kubenswrapper[4706]: I1125 12:38:08.894142 4706 scope.go:117] "RemoveContainer" containerID="75138948679cdfeb89875b2b576664a4a13ca0f05a89478ea5f98777ea2a42f6" Nov 25 12:38:08 crc kubenswrapper[4706]: E1125 12:38:08.894567 4706 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"75138948679cdfeb89875b2b576664a4a13ca0f05a89478ea5f98777ea2a42f6\": container with ID starting with 75138948679cdfeb89875b2b576664a4a13ca0f05a89478ea5f98777ea2a42f6 not found: ID does not exist" containerID="75138948679cdfeb89875b2b576664a4a13ca0f05a89478ea5f98777ea2a42f6" Nov 25 12:38:08 crc kubenswrapper[4706]: I1125 12:38:08.894601 4706 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"75138948679cdfeb89875b2b576664a4a13ca0f05a89478ea5f98777ea2a42f6"} err="failed to get container status \"75138948679cdfeb89875b2b576664a4a13ca0f05a89478ea5f98777ea2a42f6\": rpc error: code = NotFound desc = could not find container \"75138948679cdfeb89875b2b576664a4a13ca0f05a89478ea5f98777ea2a42f6\": container with ID starting with 75138948679cdfeb89875b2b576664a4a13ca0f05a89478ea5f98777ea2a42f6 not found: ID does not exist" Nov 25 12:38:08 crc kubenswrapper[4706]: I1125 12:38:08.894624 4706 scope.go:117] "RemoveContainer" containerID="d60c6d37b49393df65d48ae822cca958f00b7f3bd0e509384cf40657eb64e4b1" Nov 25 12:38:08 crc kubenswrapper[4706]: E1125 
12:38:08.895067 4706 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d60c6d37b49393df65d48ae822cca958f00b7f3bd0e509384cf40657eb64e4b1\": container with ID starting with d60c6d37b49393df65d48ae822cca958f00b7f3bd0e509384cf40657eb64e4b1 not found: ID does not exist" containerID="d60c6d37b49393df65d48ae822cca958f00b7f3bd0e509384cf40657eb64e4b1" Nov 25 12:38:08 crc kubenswrapper[4706]: I1125 12:38:08.895151 4706 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d60c6d37b49393df65d48ae822cca958f00b7f3bd0e509384cf40657eb64e4b1"} err="failed to get container status \"d60c6d37b49393df65d48ae822cca958f00b7f3bd0e509384cf40657eb64e4b1\": rpc error: code = NotFound desc = could not find container \"d60c6d37b49393df65d48ae822cca958f00b7f3bd0e509384cf40657eb64e4b1\": container with ID starting with d60c6d37b49393df65d48ae822cca958f00b7f3bd0e509384cf40657eb64e4b1 not found: ID does not exist" Nov 25 12:38:09 crc kubenswrapper[4706]: I1125 12:38:09.934939 4706 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b7729b92-c527-4193-a53a-9e99e161ffc8" path="/var/lib/kubelet/pods/b7729b92-c527-4193-a53a-9e99e161ffc8/volumes" Nov 25 12:38:12 crc kubenswrapper[4706]: I1125 12:38:12.923353 4706 scope.go:117] "RemoveContainer" containerID="f7d4f2bc57b2d7499bb910a36c7d647ec55fac45e9295616e11685165a93deff" Nov 25 12:38:12 crc kubenswrapper[4706]: E1125 12:38:12.924382 4706 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dhfpm_openshift-machine-config-operator(0930887a-320c-4506-8c9c-f94d6d64516a)\"" pod="openshift-machine-config-operator/machine-config-daemon-dhfpm" podUID="0930887a-320c-4506-8c9c-f94d6d64516a" Nov 25 12:38:23 crc kubenswrapper[4706]: I1125 12:38:23.921944 
4706 scope.go:117] "RemoveContainer" containerID="f7d4f2bc57b2d7499bb910a36c7d647ec55fac45e9295616e11685165a93deff" Nov 25 12:38:23 crc kubenswrapper[4706]: E1125 12:38:23.922785 4706 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dhfpm_openshift-machine-config-operator(0930887a-320c-4506-8c9c-f94d6d64516a)\"" pod="openshift-machine-config-operator/machine-config-daemon-dhfpm" podUID="0930887a-320c-4506-8c9c-f94d6d64516a" Nov 25 12:38:35 crc kubenswrapper[4706]: I1125 12:38:35.923125 4706 scope.go:117] "RemoveContainer" containerID="f7d4f2bc57b2d7499bb910a36c7d647ec55fac45e9295616e11685165a93deff" Nov 25 12:38:35 crc kubenswrapper[4706]: E1125 12:38:35.924026 4706 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dhfpm_openshift-machine-config-operator(0930887a-320c-4506-8c9c-f94d6d64516a)\"" pod="openshift-machine-config-operator/machine-config-daemon-dhfpm" podUID="0930887a-320c-4506-8c9c-f94d6d64516a" Nov 25 12:38:49 crc kubenswrapper[4706]: I1125 12:38:49.923373 4706 scope.go:117] "RemoveContainer" containerID="f7d4f2bc57b2d7499bb910a36c7d647ec55fac45e9295616e11685165a93deff" Nov 25 12:38:49 crc kubenswrapper[4706]: E1125 12:38:49.924367 4706 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dhfpm_openshift-machine-config-operator(0930887a-320c-4506-8c9c-f94d6d64516a)\"" pod="openshift-machine-config-operator/machine-config-daemon-dhfpm" podUID="0930887a-320c-4506-8c9c-f94d6d64516a" Nov 25 12:39:02 crc kubenswrapper[4706]: I1125 
12:39:02.922242 4706 scope.go:117] "RemoveContainer" containerID="f7d4f2bc57b2d7499bb910a36c7d647ec55fac45e9295616e11685165a93deff" Nov 25 12:39:02 crc kubenswrapper[4706]: E1125 12:39:02.923057 4706 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dhfpm_openshift-machine-config-operator(0930887a-320c-4506-8c9c-f94d6d64516a)\"" pod="openshift-machine-config-operator/machine-config-daemon-dhfpm" podUID="0930887a-320c-4506-8c9c-f94d6d64516a" Nov 25 12:39:10 crc kubenswrapper[4706]: I1125 12:39:10.567696 4706 generic.go:334] "Generic (PLEG): container finished" podID="a3e38444-7907-4d48-bc07-b6b7dc4854a8" containerID="9116ffc360d20280abeb440476cbe11e03ad085af75254bc3df275ddd601ea7f" exitCode=0 Nov 25 12:39:10 crc kubenswrapper[4706]: I1125 12:39:10.567819 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"a3e38444-7907-4d48-bc07-b6b7dc4854a8","Type":"ContainerDied","Data":"9116ffc360d20280abeb440476cbe11e03ad085af75254bc3df275ddd601ea7f"} Nov 25 12:39:12 crc kubenswrapper[4706]: I1125 12:39:11.939039 4706 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/tempest-tests-tempest" Nov 25 12:39:12 crc kubenswrapper[4706]: I1125 12:39:12.111378 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/a3e38444-7907-4d48-bc07-b6b7dc4854a8-ssh-key\") pod \"a3e38444-7907-4d48-bc07-b6b7dc4854a8\" (UID: \"a3e38444-7907-4d48-bc07-b6b7dc4854a8\") " Nov 25 12:39:12 crc kubenswrapper[4706]: I1125 12:39:12.111428 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mbxr8\" (UniqueName: \"kubernetes.io/projected/a3e38444-7907-4d48-bc07-b6b7dc4854a8-kube-api-access-mbxr8\") pod \"a3e38444-7907-4d48-bc07-b6b7dc4854a8\" (UID: \"a3e38444-7907-4d48-bc07-b6b7dc4854a8\") " Nov 25 12:39:12 crc kubenswrapper[4706]: I1125 12:39:12.111500 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/a3e38444-7907-4d48-bc07-b6b7dc4854a8-openstack-config-secret\") pod \"a3e38444-7907-4d48-bc07-b6b7dc4854a8\" (UID: \"a3e38444-7907-4d48-bc07-b6b7dc4854a8\") " Nov 25 12:39:12 crc kubenswrapper[4706]: I1125 12:39:12.111544 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-logs\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"a3e38444-7907-4d48-bc07-b6b7dc4854a8\" (UID: \"a3e38444-7907-4d48-bc07-b6b7dc4854a8\") " Nov 25 12:39:12 crc kubenswrapper[4706]: I1125 12:39:12.111572 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/a3e38444-7907-4d48-bc07-b6b7dc4854a8-openstack-config\") pod \"a3e38444-7907-4d48-bc07-b6b7dc4854a8\" (UID: \"a3e38444-7907-4d48-bc07-b6b7dc4854a8\") " Nov 25 12:39:12 crc kubenswrapper[4706]: I1125 12:39:12.111599 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/a3e38444-7907-4d48-bc07-b6b7dc4854a8-test-operator-ephemeral-workdir\") pod \"a3e38444-7907-4d48-bc07-b6b7dc4854a8\" (UID: \"a3e38444-7907-4d48-bc07-b6b7dc4854a8\") " Nov 25 12:39:12 crc kubenswrapper[4706]: I1125 12:39:12.111657 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/a3e38444-7907-4d48-bc07-b6b7dc4854a8-config-data\") pod \"a3e38444-7907-4d48-bc07-b6b7dc4854a8\" (UID: \"a3e38444-7907-4d48-bc07-b6b7dc4854a8\") " Nov 25 12:39:12 crc kubenswrapper[4706]: I1125 12:39:12.111673 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/a3e38444-7907-4d48-bc07-b6b7dc4854a8-test-operator-ephemeral-temporary\") pod \"a3e38444-7907-4d48-bc07-b6b7dc4854a8\" (UID: \"a3e38444-7907-4d48-bc07-b6b7dc4854a8\") " Nov 25 12:39:12 crc kubenswrapper[4706]: I1125 12:39:12.111799 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/a3e38444-7907-4d48-bc07-b6b7dc4854a8-ca-certs\") pod \"a3e38444-7907-4d48-bc07-b6b7dc4854a8\" (UID: \"a3e38444-7907-4d48-bc07-b6b7dc4854a8\") " Nov 25 12:39:12 crc kubenswrapper[4706]: I1125 12:39:12.112359 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a3e38444-7907-4d48-bc07-b6b7dc4854a8-test-operator-ephemeral-temporary" (OuterVolumeSpecName: "test-operator-ephemeral-temporary") pod "a3e38444-7907-4d48-bc07-b6b7dc4854a8" (UID: "a3e38444-7907-4d48-bc07-b6b7dc4854a8"). InnerVolumeSpecName "test-operator-ephemeral-temporary". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 12:39:12 crc kubenswrapper[4706]: I1125 12:39:12.112588 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a3e38444-7907-4d48-bc07-b6b7dc4854a8-config-data" (OuterVolumeSpecName: "config-data") pod "a3e38444-7907-4d48-bc07-b6b7dc4854a8" (UID: "a3e38444-7907-4d48-bc07-b6b7dc4854a8"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 12:39:12 crc kubenswrapper[4706]: I1125 12:39:12.113520 4706 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/a3e38444-7907-4d48-bc07-b6b7dc4854a8-config-data\") on node \"crc\" DevicePath \"\"" Nov 25 12:39:12 crc kubenswrapper[4706]: I1125 12:39:12.113541 4706 reconciler_common.go:293] "Volume detached for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/a3e38444-7907-4d48-bc07-b6b7dc4854a8-test-operator-ephemeral-temporary\") on node \"crc\" DevicePath \"\"" Nov 25 12:39:12 crc kubenswrapper[4706]: I1125 12:39:12.115192 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a3e38444-7907-4d48-bc07-b6b7dc4854a8-test-operator-ephemeral-workdir" (OuterVolumeSpecName: "test-operator-ephemeral-workdir") pod "a3e38444-7907-4d48-bc07-b6b7dc4854a8" (UID: "a3e38444-7907-4d48-bc07-b6b7dc4854a8"). InnerVolumeSpecName "test-operator-ephemeral-workdir". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 12:39:12 crc kubenswrapper[4706]: I1125 12:39:12.117360 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a3e38444-7907-4d48-bc07-b6b7dc4854a8-kube-api-access-mbxr8" (OuterVolumeSpecName: "kube-api-access-mbxr8") pod "a3e38444-7907-4d48-bc07-b6b7dc4854a8" (UID: "a3e38444-7907-4d48-bc07-b6b7dc4854a8"). InnerVolumeSpecName "kube-api-access-mbxr8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 12:39:12 crc kubenswrapper[4706]: I1125 12:39:12.117575 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage04-crc" (OuterVolumeSpecName: "test-operator-logs") pod "a3e38444-7907-4d48-bc07-b6b7dc4854a8" (UID: "a3e38444-7907-4d48-bc07-b6b7dc4854a8"). InnerVolumeSpecName "local-storage04-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Nov 25 12:39:12 crc kubenswrapper[4706]: I1125 12:39:12.145160 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a3e38444-7907-4d48-bc07-b6b7dc4854a8-ca-certs" (OuterVolumeSpecName: "ca-certs") pod "a3e38444-7907-4d48-bc07-b6b7dc4854a8" (UID: "a3e38444-7907-4d48-bc07-b6b7dc4854a8"). InnerVolumeSpecName "ca-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 12:39:12 crc kubenswrapper[4706]: I1125 12:39:12.146409 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a3e38444-7907-4d48-bc07-b6b7dc4854a8-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "a3e38444-7907-4d48-bc07-b6b7dc4854a8" (UID: "a3e38444-7907-4d48-bc07-b6b7dc4854a8"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 12:39:12 crc kubenswrapper[4706]: I1125 12:39:12.147792 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a3e38444-7907-4d48-bc07-b6b7dc4854a8-openstack-config-secret" (OuterVolumeSpecName: "openstack-config-secret") pod "a3e38444-7907-4d48-bc07-b6b7dc4854a8" (UID: "a3e38444-7907-4d48-bc07-b6b7dc4854a8"). InnerVolumeSpecName "openstack-config-secret". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 12:39:12 crc kubenswrapper[4706]: I1125 12:39:12.162933 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a3e38444-7907-4d48-bc07-b6b7dc4854a8-openstack-config" (OuterVolumeSpecName: "openstack-config") pod "a3e38444-7907-4d48-bc07-b6b7dc4854a8" (UID: "a3e38444-7907-4d48-bc07-b6b7dc4854a8"). InnerVolumeSpecName "openstack-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 12:39:12 crc kubenswrapper[4706]: I1125 12:39:12.218583 4706 reconciler_common.go:293] "Volume detached for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/a3e38444-7907-4d48-bc07-b6b7dc4854a8-ca-certs\") on node \"crc\" DevicePath \"\"" Nov 25 12:39:12 crc kubenswrapper[4706]: I1125 12:39:12.218608 4706 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/a3e38444-7907-4d48-bc07-b6b7dc4854a8-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 25 12:39:12 crc kubenswrapper[4706]: I1125 12:39:12.218618 4706 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mbxr8\" (UniqueName: \"kubernetes.io/projected/a3e38444-7907-4d48-bc07-b6b7dc4854a8-kube-api-access-mbxr8\") on node \"crc\" DevicePath \"\"" Nov 25 12:39:12 crc kubenswrapper[4706]: I1125 12:39:12.218630 4706 reconciler_common.go:293] "Volume detached for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/a3e38444-7907-4d48-bc07-b6b7dc4854a8-openstack-config-secret\") on node \"crc\" DevicePath \"\"" Nov 25 12:39:12 crc kubenswrapper[4706]: I1125 12:39:12.218662 4706 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") on node \"crc\" " Nov 25 12:39:12 crc kubenswrapper[4706]: I1125 12:39:12.218672 4706 reconciler_common.go:293] "Volume detached for volume \"openstack-config\" (UniqueName: 
\"kubernetes.io/configmap/a3e38444-7907-4d48-bc07-b6b7dc4854a8-openstack-config\") on node \"crc\" DevicePath \"\"" Nov 25 12:39:12 crc kubenswrapper[4706]: I1125 12:39:12.218682 4706 reconciler_common.go:293] "Volume detached for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/a3e38444-7907-4d48-bc07-b6b7dc4854a8-test-operator-ephemeral-workdir\") on node \"crc\" DevicePath \"\"" Nov 25 12:39:12 crc kubenswrapper[4706]: I1125 12:39:12.238165 4706 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage04-crc" (UniqueName: "kubernetes.io/local-volume/local-storage04-crc") on node "crc" Nov 25 12:39:12 crc kubenswrapper[4706]: I1125 12:39:12.321535 4706 reconciler_common.go:293] "Volume detached for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") on node \"crc\" DevicePath \"\"" Nov 25 12:39:12 crc kubenswrapper[4706]: I1125 12:39:12.584423 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"a3e38444-7907-4d48-bc07-b6b7dc4854a8","Type":"ContainerDied","Data":"304e049e15451afb6e4e76e9ee3fb232009c9bc52de57c9ce026badf7b3ad4b0"} Nov 25 12:39:12 crc kubenswrapper[4706]: I1125 12:39:12.584824 4706 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="304e049e15451afb6e4e76e9ee3fb232009c9bc52de57c9ce026badf7b3ad4b0" Nov 25 12:39:12 crc kubenswrapper[4706]: I1125 12:39:12.584561 4706 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/tempest-tests-tempest" Nov 25 12:39:15 crc kubenswrapper[4706]: I1125 12:39:15.923245 4706 scope.go:117] "RemoveContainer" containerID="f7d4f2bc57b2d7499bb910a36c7d647ec55fac45e9295616e11685165a93deff" Nov 25 12:39:15 crc kubenswrapper[4706]: E1125 12:39:15.924213 4706 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dhfpm_openshift-machine-config-operator(0930887a-320c-4506-8c9c-f94d6d64516a)\"" pod="openshift-machine-config-operator/machine-config-daemon-dhfpm" podUID="0930887a-320c-4506-8c9c-f94d6d64516a" Nov 25 12:39:23 crc kubenswrapper[4706]: I1125 12:39:23.319429 4706 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Nov 25 12:39:23 crc kubenswrapper[4706]: E1125 12:39:23.320562 4706 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b7729b92-c527-4193-a53a-9e99e161ffc8" containerName="registry-server" Nov 25 12:39:23 crc kubenswrapper[4706]: I1125 12:39:23.320579 4706 state_mem.go:107] "Deleted CPUSet assignment" podUID="b7729b92-c527-4193-a53a-9e99e161ffc8" containerName="registry-server" Nov 25 12:39:23 crc kubenswrapper[4706]: E1125 12:39:23.320596 4706 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b7729b92-c527-4193-a53a-9e99e161ffc8" containerName="extract-content" Nov 25 12:39:23 crc kubenswrapper[4706]: I1125 12:39:23.320604 4706 state_mem.go:107] "Deleted CPUSet assignment" podUID="b7729b92-c527-4193-a53a-9e99e161ffc8" containerName="extract-content" Nov 25 12:39:23 crc kubenswrapper[4706]: E1125 12:39:23.320623 4706 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a3e38444-7907-4d48-bc07-b6b7dc4854a8" containerName="tempest-tests-tempest-tests-runner" Nov 25 12:39:23 crc kubenswrapper[4706]: I1125 
12:39:23.320632 4706 state_mem.go:107] "Deleted CPUSet assignment" podUID="a3e38444-7907-4d48-bc07-b6b7dc4854a8" containerName="tempest-tests-tempest-tests-runner" Nov 25 12:39:23 crc kubenswrapper[4706]: E1125 12:39:23.320651 4706 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b7729b92-c527-4193-a53a-9e99e161ffc8" containerName="extract-utilities" Nov 25 12:39:23 crc kubenswrapper[4706]: I1125 12:39:23.320661 4706 state_mem.go:107] "Deleted CPUSet assignment" podUID="b7729b92-c527-4193-a53a-9e99e161ffc8" containerName="extract-utilities" Nov 25 12:39:23 crc kubenswrapper[4706]: I1125 12:39:23.320924 4706 memory_manager.go:354] "RemoveStaleState removing state" podUID="a3e38444-7907-4d48-bc07-b6b7dc4854a8" containerName="tempest-tests-tempest-tests-runner" Nov 25 12:39:23 crc kubenswrapper[4706]: I1125 12:39:23.320954 4706 memory_manager.go:354] "RemoveStaleState removing state" podUID="b7729b92-c527-4193-a53a-9e99e161ffc8" containerName="registry-server" Nov 25 12:39:23 crc kubenswrapper[4706]: I1125 12:39:23.321812 4706 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Nov 25 12:39:23 crc kubenswrapper[4706]: I1125 12:39:23.323901 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-rlp4g" Nov 25 12:39:23 crc kubenswrapper[4706]: I1125 12:39:23.331240 4706 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Nov 25 12:39:23 crc kubenswrapper[4706]: I1125 12:39:23.445082 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sb58t\" (UniqueName: \"kubernetes.io/projected/586b9083-1af0-4687-886b-bdaf4041ba31-kube-api-access-sb58t\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"586b9083-1af0-4687-886b-bdaf4041ba31\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Nov 25 12:39:23 crc kubenswrapper[4706]: I1125 12:39:23.445428 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"586b9083-1af0-4687-886b-bdaf4041ba31\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Nov 25 12:39:23 crc kubenswrapper[4706]: I1125 12:39:23.547494 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sb58t\" (UniqueName: \"kubernetes.io/projected/586b9083-1af0-4687-886b-bdaf4041ba31-kube-api-access-sb58t\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"586b9083-1af0-4687-886b-bdaf4041ba31\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Nov 25 12:39:23 crc kubenswrapper[4706]: I1125 12:39:23.547832 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage04-crc\" (UniqueName: 
\"kubernetes.io/local-volume/local-storage04-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"586b9083-1af0-4687-886b-bdaf4041ba31\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Nov 25 12:39:23 crc kubenswrapper[4706]: I1125 12:39:23.548384 4706 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"586b9083-1af0-4687-886b-bdaf4041ba31\") device mount path \"/mnt/openstack/pv04\"" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Nov 25 12:39:23 crc kubenswrapper[4706]: I1125 12:39:23.571622 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sb58t\" (UniqueName: \"kubernetes.io/projected/586b9083-1af0-4687-886b-bdaf4041ba31-kube-api-access-sb58t\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"586b9083-1af0-4687-886b-bdaf4041ba31\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Nov 25 12:39:23 crc kubenswrapper[4706]: I1125 12:39:23.573421 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"586b9083-1af0-4687-886b-bdaf4041ba31\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Nov 25 12:39:23 crc kubenswrapper[4706]: I1125 12:39:23.651944 4706 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Nov 25 12:39:24 crc kubenswrapper[4706]: I1125 12:39:24.109177 4706 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Nov 25 12:39:24 crc kubenswrapper[4706]: I1125 12:39:24.116603 4706 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 25 12:39:24 crc kubenswrapper[4706]: I1125 12:39:24.707490 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" event={"ID":"586b9083-1af0-4687-886b-bdaf4041ba31","Type":"ContainerStarted","Data":"030d0b7163ef95b704aa82afb5b0f256bb83773b78165e6040e089f96f4b3082"} Nov 25 12:39:26 crc kubenswrapper[4706]: I1125 12:39:26.726624 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" event={"ID":"586b9083-1af0-4687-886b-bdaf4041ba31","Type":"ContainerStarted","Data":"fa8b2d3e7daa5be102214ecf401d6fb2f299d1b60214d723f2f60fc17bb206c9"} Nov 25 12:39:26 crc kubenswrapper[4706]: I1125 12:39:26.746615 4706 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" podStartSLOduration=1.962237418 podStartE2EDuration="3.746597846s" podCreationTimestamp="2025-11-25 12:39:23 +0000 UTC" firstStartedPulling="2025-11-25 12:39:24.116380584 +0000 UTC m=+3773.030937965" lastFinishedPulling="2025-11-25 12:39:25.900741012 +0000 UTC m=+3774.815298393" observedRunningTime="2025-11-25 12:39:26.738967363 +0000 UTC m=+3775.653524754" watchObservedRunningTime="2025-11-25 12:39:26.746597846 +0000 UTC m=+3775.661155227" Nov 25 12:39:27 crc kubenswrapper[4706]: I1125 12:39:27.923834 4706 scope.go:117] "RemoveContainer" containerID="f7d4f2bc57b2d7499bb910a36c7d647ec55fac45e9295616e11685165a93deff" Nov 25 12:39:27 crc kubenswrapper[4706]: E1125 
12:39:27.924705 4706 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dhfpm_openshift-machine-config-operator(0930887a-320c-4506-8c9c-f94d6d64516a)\"" pod="openshift-machine-config-operator/machine-config-daemon-dhfpm" podUID="0930887a-320c-4506-8c9c-f94d6d64516a" Nov 25 12:39:38 crc kubenswrapper[4706]: I1125 12:39:38.922051 4706 scope.go:117] "RemoveContainer" containerID="f7d4f2bc57b2d7499bb910a36c7d647ec55fac45e9295616e11685165a93deff" Nov 25 12:39:38 crc kubenswrapper[4706]: E1125 12:39:38.923019 4706 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dhfpm_openshift-machine-config-operator(0930887a-320c-4506-8c9c-f94d6d64516a)\"" pod="openshift-machine-config-operator/machine-config-daemon-dhfpm" podUID="0930887a-320c-4506-8c9c-f94d6d64516a" Nov 25 12:39:48 crc kubenswrapper[4706]: I1125 12:39:48.341292 4706 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-z9k48/must-gather-rvs9t"] Nov 25 12:39:48 crc kubenswrapper[4706]: I1125 12:39:48.344082 4706 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-z9k48/must-gather-rvs9t" Nov 25 12:39:48 crc kubenswrapper[4706]: I1125 12:39:48.348066 4706 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-z9k48"/"openshift-service-ca.crt" Nov 25 12:39:48 crc kubenswrapper[4706]: I1125 12:39:48.352838 4706 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-z9k48"/"kube-root-ca.crt" Nov 25 12:39:48 crc kubenswrapper[4706]: I1125 12:39:48.360789 4706 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-z9k48/must-gather-rvs9t"] Nov 25 12:39:48 crc kubenswrapper[4706]: I1125 12:39:48.461237 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fgqxc\" (UniqueName: \"kubernetes.io/projected/b5c81809-b0fb-48c6-b164-eef64ca8a7b1-kube-api-access-fgqxc\") pod \"must-gather-rvs9t\" (UID: \"b5c81809-b0fb-48c6-b164-eef64ca8a7b1\") " pod="openshift-must-gather-z9k48/must-gather-rvs9t" Nov 25 12:39:48 crc kubenswrapper[4706]: I1125 12:39:48.461442 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/b5c81809-b0fb-48c6-b164-eef64ca8a7b1-must-gather-output\") pod \"must-gather-rvs9t\" (UID: \"b5c81809-b0fb-48c6-b164-eef64ca8a7b1\") " pod="openshift-must-gather-z9k48/must-gather-rvs9t" Nov 25 12:39:48 crc kubenswrapper[4706]: I1125 12:39:48.563250 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fgqxc\" (UniqueName: \"kubernetes.io/projected/b5c81809-b0fb-48c6-b164-eef64ca8a7b1-kube-api-access-fgqxc\") pod \"must-gather-rvs9t\" (UID: \"b5c81809-b0fb-48c6-b164-eef64ca8a7b1\") " pod="openshift-must-gather-z9k48/must-gather-rvs9t" Nov 25 12:39:48 crc kubenswrapper[4706]: I1125 12:39:48.563385 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/b5c81809-b0fb-48c6-b164-eef64ca8a7b1-must-gather-output\") pod \"must-gather-rvs9t\" (UID: \"b5c81809-b0fb-48c6-b164-eef64ca8a7b1\") " pod="openshift-must-gather-z9k48/must-gather-rvs9t" Nov 25 12:39:48 crc kubenswrapper[4706]: I1125 12:39:48.563895 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/b5c81809-b0fb-48c6-b164-eef64ca8a7b1-must-gather-output\") pod \"must-gather-rvs9t\" (UID: \"b5c81809-b0fb-48c6-b164-eef64ca8a7b1\") " pod="openshift-must-gather-z9k48/must-gather-rvs9t" Nov 25 12:39:48 crc kubenswrapper[4706]: I1125 12:39:48.584446 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fgqxc\" (UniqueName: \"kubernetes.io/projected/b5c81809-b0fb-48c6-b164-eef64ca8a7b1-kube-api-access-fgqxc\") pod \"must-gather-rvs9t\" (UID: \"b5c81809-b0fb-48c6-b164-eef64ca8a7b1\") " pod="openshift-must-gather-z9k48/must-gather-rvs9t" Nov 25 12:39:48 crc kubenswrapper[4706]: I1125 12:39:48.674283 4706 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-z9k48/must-gather-rvs9t" Nov 25 12:39:49 crc kubenswrapper[4706]: I1125 12:39:49.159199 4706 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-z9k48/must-gather-rvs9t"] Nov 25 12:39:49 crc kubenswrapper[4706]: I1125 12:39:49.946866 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-z9k48/must-gather-rvs9t" event={"ID":"b5c81809-b0fb-48c6-b164-eef64ca8a7b1","Type":"ContainerStarted","Data":"e2e8dab122a316bc6432345628e5dfd074a47decb76df9e6f27eb5624cf80ffb"} Nov 25 12:39:53 crc kubenswrapper[4706]: I1125 12:39:53.922667 4706 scope.go:117] "RemoveContainer" containerID="f7d4f2bc57b2d7499bb910a36c7d647ec55fac45e9295616e11685165a93deff" Nov 25 12:39:53 crc kubenswrapper[4706]: E1125 12:39:53.923512 4706 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dhfpm_openshift-machine-config-operator(0930887a-320c-4506-8c9c-f94d6d64516a)\"" pod="openshift-machine-config-operator/machine-config-daemon-dhfpm" podUID="0930887a-320c-4506-8c9c-f94d6d64516a" Nov 25 12:39:54 crc kubenswrapper[4706]: I1125 12:39:54.001915 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-z9k48/must-gather-rvs9t" event={"ID":"b5c81809-b0fb-48c6-b164-eef64ca8a7b1","Type":"ContainerStarted","Data":"3df742aae4e36caeb7bde5876e3042c1fe842013760b2ebba2416c6122fa6096"} Nov 25 12:39:54 crc kubenswrapper[4706]: I1125 12:39:54.001969 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-z9k48/must-gather-rvs9t" event={"ID":"b5c81809-b0fb-48c6-b164-eef64ca8a7b1","Type":"ContainerStarted","Data":"a977f7e11abbfd54b6a17fddc36076506bd9c968961f6004264f3c30943cf7ab"} Nov 25 12:39:54 crc kubenswrapper[4706]: I1125 12:39:54.026338 4706 pod_startup_latency_tracker.go:104] "Observed 
pod startup duration" pod="openshift-must-gather-z9k48/must-gather-rvs9t" podStartSLOduration=2.197383582 podStartE2EDuration="6.026319503s" podCreationTimestamp="2025-11-25 12:39:48 +0000 UTC" firstStartedPulling="2025-11-25 12:39:49.17448499 +0000 UTC m=+3798.089042361" lastFinishedPulling="2025-11-25 12:39:53.003420901 +0000 UTC m=+3801.917978282" observedRunningTime="2025-11-25 12:39:54.016440664 +0000 UTC m=+3802.930998045" watchObservedRunningTime="2025-11-25 12:39:54.026319503 +0000 UTC m=+3802.940876884" Nov 25 12:39:56 crc kubenswrapper[4706]: I1125 12:39:56.761004 4706 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-z9k48/crc-debug-jwwl5"] Nov 25 12:39:56 crc kubenswrapper[4706]: I1125 12:39:56.762764 4706 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-z9k48/crc-debug-jwwl5" Nov 25 12:39:56 crc kubenswrapper[4706]: I1125 12:39:56.766561 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-z9k48"/"default-dockercfg-hbb7b" Nov 25 12:39:56 crc kubenswrapper[4706]: I1125 12:39:56.846105 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/d626873f-aa13-44fb-a288-f80078c8d62e-host\") pod \"crc-debug-jwwl5\" (UID: \"d626873f-aa13-44fb-a288-f80078c8d62e\") " pod="openshift-must-gather-z9k48/crc-debug-jwwl5" Nov 25 12:39:56 crc kubenswrapper[4706]: I1125 12:39:56.846326 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xvcpr\" (UniqueName: \"kubernetes.io/projected/d626873f-aa13-44fb-a288-f80078c8d62e-kube-api-access-xvcpr\") pod \"crc-debug-jwwl5\" (UID: \"d626873f-aa13-44fb-a288-f80078c8d62e\") " pod="openshift-must-gather-z9k48/crc-debug-jwwl5" Nov 25 12:39:56 crc kubenswrapper[4706]: I1125 12:39:56.948544 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"host\" (UniqueName: \"kubernetes.io/host-path/d626873f-aa13-44fb-a288-f80078c8d62e-host\") pod \"crc-debug-jwwl5\" (UID: \"d626873f-aa13-44fb-a288-f80078c8d62e\") " pod="openshift-must-gather-z9k48/crc-debug-jwwl5" Nov 25 12:39:56 crc kubenswrapper[4706]: I1125 12:39:56.948691 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/d626873f-aa13-44fb-a288-f80078c8d62e-host\") pod \"crc-debug-jwwl5\" (UID: \"d626873f-aa13-44fb-a288-f80078c8d62e\") " pod="openshift-must-gather-z9k48/crc-debug-jwwl5" Nov 25 12:39:56 crc kubenswrapper[4706]: I1125 12:39:56.948743 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xvcpr\" (UniqueName: \"kubernetes.io/projected/d626873f-aa13-44fb-a288-f80078c8d62e-kube-api-access-xvcpr\") pod \"crc-debug-jwwl5\" (UID: \"d626873f-aa13-44fb-a288-f80078c8d62e\") " pod="openshift-must-gather-z9k48/crc-debug-jwwl5" Nov 25 12:39:56 crc kubenswrapper[4706]: I1125 12:39:56.981402 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xvcpr\" (UniqueName: \"kubernetes.io/projected/d626873f-aa13-44fb-a288-f80078c8d62e-kube-api-access-xvcpr\") pod \"crc-debug-jwwl5\" (UID: \"d626873f-aa13-44fb-a288-f80078c8d62e\") " pod="openshift-must-gather-z9k48/crc-debug-jwwl5" Nov 25 12:39:57 crc kubenswrapper[4706]: I1125 12:39:57.085208 4706 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-z9k48/crc-debug-jwwl5" Nov 25 12:39:57 crc kubenswrapper[4706]: W1125 12:39:57.134113 4706 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd626873f_aa13_44fb_a288_f80078c8d62e.slice/crio-55d0ec8878949a4ca1949d3d697d98d50ef1905bc9d56df5d5cd1fab718decc2 WatchSource:0}: Error finding container 55d0ec8878949a4ca1949d3d697d98d50ef1905bc9d56df5d5cd1fab718decc2: Status 404 returned error can't find the container with id 55d0ec8878949a4ca1949d3d697d98d50ef1905bc9d56df5d5cd1fab718decc2 Nov 25 12:39:58 crc kubenswrapper[4706]: I1125 12:39:58.044161 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-z9k48/crc-debug-jwwl5" event={"ID":"d626873f-aa13-44fb-a288-f80078c8d62e","Type":"ContainerStarted","Data":"55d0ec8878949a4ca1949d3d697d98d50ef1905bc9d56df5d5cd1fab718decc2"} Nov 25 12:40:06 crc kubenswrapper[4706]: I1125 12:40:06.922710 4706 scope.go:117] "RemoveContainer" containerID="f7d4f2bc57b2d7499bb910a36c7d647ec55fac45e9295616e11685165a93deff" Nov 25 12:40:06 crc kubenswrapper[4706]: E1125 12:40:06.923482 4706 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dhfpm_openshift-machine-config-operator(0930887a-320c-4506-8c9c-f94d6d64516a)\"" pod="openshift-machine-config-operator/machine-config-daemon-dhfpm" podUID="0930887a-320c-4506-8c9c-f94d6d64516a" Nov 25 12:40:09 crc kubenswrapper[4706]: I1125 12:40:09.159655 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-z9k48/crc-debug-jwwl5" event={"ID":"d626873f-aa13-44fb-a288-f80078c8d62e","Type":"ContainerStarted","Data":"e16db9719d70d55aac82c7513a408c87758a76d65978ffc0aa189c0949c4c38e"} Nov 25 12:40:09 crc kubenswrapper[4706]: I1125 12:40:09.177793 4706 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-z9k48/crc-debug-jwwl5" podStartSLOduration=1.560512831 podStartE2EDuration="13.17777388s" podCreationTimestamp="2025-11-25 12:39:56 +0000 UTC" firstStartedPulling="2025-11-25 12:39:57.141712244 +0000 UTC m=+3806.056269625" lastFinishedPulling="2025-11-25 12:40:08.758973283 +0000 UTC m=+3817.673530674" observedRunningTime="2025-11-25 12:40:09.172075236 +0000 UTC m=+3818.086632617" watchObservedRunningTime="2025-11-25 12:40:09.17777388 +0000 UTC m=+3818.092331261" Nov 25 12:40:20 crc kubenswrapper[4706]: I1125 12:40:20.922488 4706 scope.go:117] "RemoveContainer" containerID="f7d4f2bc57b2d7499bb910a36c7d647ec55fac45e9295616e11685165a93deff" Nov 25 12:40:20 crc kubenswrapper[4706]: E1125 12:40:20.924091 4706 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dhfpm_openshift-machine-config-operator(0930887a-320c-4506-8c9c-f94d6d64516a)\"" pod="openshift-machine-config-operator/machine-config-daemon-dhfpm" podUID="0930887a-320c-4506-8c9c-f94d6d64516a" Nov 25 12:40:31 crc kubenswrapper[4706]: I1125 12:40:31.931683 4706 scope.go:117] "RemoveContainer" containerID="f7d4f2bc57b2d7499bb910a36c7d647ec55fac45e9295616e11685165a93deff" Nov 25 12:40:31 crc kubenswrapper[4706]: E1125 12:40:31.932526 4706 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dhfpm_openshift-machine-config-operator(0930887a-320c-4506-8c9c-f94d6d64516a)\"" pod="openshift-machine-config-operator/machine-config-daemon-dhfpm" podUID="0930887a-320c-4506-8c9c-f94d6d64516a" Nov 25 12:40:44 crc kubenswrapper[4706]: I1125 12:40:44.922575 4706 scope.go:117] 
"RemoveContainer" containerID="f7d4f2bc57b2d7499bb910a36c7d647ec55fac45e9295616e11685165a93deff" Nov 25 12:40:44 crc kubenswrapper[4706]: E1125 12:40:44.923179 4706 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dhfpm_openshift-machine-config-operator(0930887a-320c-4506-8c9c-f94d6d64516a)\"" pod="openshift-machine-config-operator/machine-config-daemon-dhfpm" podUID="0930887a-320c-4506-8c9c-f94d6d64516a" Nov 25 12:40:57 crc kubenswrapper[4706]: I1125 12:40:57.623147 4706 generic.go:334] "Generic (PLEG): container finished" podID="d626873f-aa13-44fb-a288-f80078c8d62e" containerID="e16db9719d70d55aac82c7513a408c87758a76d65978ffc0aa189c0949c4c38e" exitCode=0 Nov 25 12:40:57 crc kubenswrapper[4706]: I1125 12:40:57.623257 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-z9k48/crc-debug-jwwl5" event={"ID":"d626873f-aa13-44fb-a288-f80078c8d62e","Type":"ContainerDied","Data":"e16db9719d70d55aac82c7513a408c87758a76d65978ffc0aa189c0949c4c38e"} Nov 25 12:40:58 crc kubenswrapper[4706]: I1125 12:40:58.776410 4706 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-z9k48/crc-debug-jwwl5" Nov 25 12:40:58 crc kubenswrapper[4706]: I1125 12:40:58.817883 4706 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-z9k48/crc-debug-jwwl5"] Nov 25 12:40:58 crc kubenswrapper[4706]: I1125 12:40:58.832154 4706 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-z9k48/crc-debug-jwwl5"] Nov 25 12:40:58 crc kubenswrapper[4706]: I1125 12:40:58.871545 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/d626873f-aa13-44fb-a288-f80078c8d62e-host\") pod \"d626873f-aa13-44fb-a288-f80078c8d62e\" (UID: \"d626873f-aa13-44fb-a288-f80078c8d62e\") " Nov 25 12:40:58 crc kubenswrapper[4706]: I1125 12:40:58.871834 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xvcpr\" (UniqueName: \"kubernetes.io/projected/d626873f-aa13-44fb-a288-f80078c8d62e-kube-api-access-xvcpr\") pod \"d626873f-aa13-44fb-a288-f80078c8d62e\" (UID: \"d626873f-aa13-44fb-a288-f80078c8d62e\") " Nov 25 12:40:58 crc kubenswrapper[4706]: I1125 12:40:58.871708 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d626873f-aa13-44fb-a288-f80078c8d62e-host" (OuterVolumeSpecName: "host") pod "d626873f-aa13-44fb-a288-f80078c8d62e" (UID: "d626873f-aa13-44fb-a288-f80078c8d62e"). InnerVolumeSpecName "host". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 25 12:40:58 crc kubenswrapper[4706]: I1125 12:40:58.872681 4706 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/d626873f-aa13-44fb-a288-f80078c8d62e-host\") on node \"crc\" DevicePath \"\"" Nov 25 12:40:58 crc kubenswrapper[4706]: I1125 12:40:58.882516 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d626873f-aa13-44fb-a288-f80078c8d62e-kube-api-access-xvcpr" (OuterVolumeSpecName: "kube-api-access-xvcpr") pod "d626873f-aa13-44fb-a288-f80078c8d62e" (UID: "d626873f-aa13-44fb-a288-f80078c8d62e"). InnerVolumeSpecName "kube-api-access-xvcpr". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 12:40:58 crc kubenswrapper[4706]: I1125 12:40:58.974425 4706 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xvcpr\" (UniqueName: \"kubernetes.io/projected/d626873f-aa13-44fb-a288-f80078c8d62e-kube-api-access-xvcpr\") on node \"crc\" DevicePath \"\"" Nov 25 12:41:00 crc kubenswrapper[4706]: I1125 12:41:00.172327 4706 scope.go:117] "RemoveContainer" containerID="f7d4f2bc57b2d7499bb910a36c7d647ec55fac45e9295616e11685165a93deff" Nov 25 12:41:00 crc kubenswrapper[4706]: E1125 12:41:00.173037 4706 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dhfpm_openshift-machine-config-operator(0930887a-320c-4506-8c9c-f94d6d64516a)\"" pod="openshift-machine-config-operator/machine-config-daemon-dhfpm" podUID="0930887a-320c-4506-8c9c-f94d6d64516a" Nov 25 12:41:00 crc kubenswrapper[4706]: I1125 12:41:00.187219 4706 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-z9k48/crc-debug-jwwl5" Nov 25 12:41:00 crc kubenswrapper[4706]: I1125 12:41:00.187389 4706 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d626873f-aa13-44fb-a288-f80078c8d62e" path="/var/lib/kubelet/pods/d626873f-aa13-44fb-a288-f80078c8d62e/volumes" Nov 25 12:41:00 crc kubenswrapper[4706]: I1125 12:41:00.188075 4706 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-z9k48/crc-debug-4m9db"] Nov 25 12:41:00 crc kubenswrapper[4706]: E1125 12:41:00.188449 4706 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d626873f-aa13-44fb-a288-f80078c8d62e" containerName="container-00" Nov 25 12:41:00 crc kubenswrapper[4706]: I1125 12:41:00.188469 4706 state_mem.go:107] "Deleted CPUSet assignment" podUID="d626873f-aa13-44fb-a288-f80078c8d62e" containerName="container-00" Nov 25 12:41:00 crc kubenswrapper[4706]: I1125 12:41:00.188708 4706 memory_manager.go:354] "RemoveStaleState removing state" podUID="d626873f-aa13-44fb-a288-f80078c8d62e" containerName="container-00" Nov 25 12:41:00 crc kubenswrapper[4706]: I1125 12:41:00.189079 4706 scope.go:117] "RemoveContainer" containerID="e16db9719d70d55aac82c7513a408c87758a76d65978ffc0aa189c0949c4c38e" Nov 25 12:41:00 crc kubenswrapper[4706]: I1125 12:41:00.189681 4706 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-z9k48/crc-debug-4m9db" Nov 25 12:41:00 crc kubenswrapper[4706]: I1125 12:41:00.191520 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-z9k48"/"default-dockercfg-hbb7b" Nov 25 12:41:00 crc kubenswrapper[4706]: I1125 12:41:00.350804 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/c81449cc-5ba7-40b3-947c-62acece2e924-host\") pod \"crc-debug-4m9db\" (UID: \"c81449cc-5ba7-40b3-947c-62acece2e924\") " pod="openshift-must-gather-z9k48/crc-debug-4m9db" Nov 25 12:41:00 crc kubenswrapper[4706]: I1125 12:41:00.351123 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kshbn\" (UniqueName: \"kubernetes.io/projected/c81449cc-5ba7-40b3-947c-62acece2e924-kube-api-access-kshbn\") pod \"crc-debug-4m9db\" (UID: \"c81449cc-5ba7-40b3-947c-62acece2e924\") " pod="openshift-must-gather-z9k48/crc-debug-4m9db" Nov 25 12:41:00 crc kubenswrapper[4706]: I1125 12:41:00.452596 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/c81449cc-5ba7-40b3-947c-62acece2e924-host\") pod \"crc-debug-4m9db\" (UID: \"c81449cc-5ba7-40b3-947c-62acece2e924\") " pod="openshift-must-gather-z9k48/crc-debug-4m9db" Nov 25 12:41:00 crc kubenswrapper[4706]: I1125 12:41:00.452671 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kshbn\" (UniqueName: \"kubernetes.io/projected/c81449cc-5ba7-40b3-947c-62acece2e924-kube-api-access-kshbn\") pod \"crc-debug-4m9db\" (UID: \"c81449cc-5ba7-40b3-947c-62acece2e924\") " pod="openshift-must-gather-z9k48/crc-debug-4m9db" Nov 25 12:41:00 crc kubenswrapper[4706]: I1125 12:41:00.452879 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: 
\"kubernetes.io/host-path/c81449cc-5ba7-40b3-947c-62acece2e924-host\") pod \"crc-debug-4m9db\" (UID: \"c81449cc-5ba7-40b3-947c-62acece2e924\") " pod="openshift-must-gather-z9k48/crc-debug-4m9db" Nov 25 12:41:00 crc kubenswrapper[4706]: I1125 12:41:00.471046 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kshbn\" (UniqueName: \"kubernetes.io/projected/c81449cc-5ba7-40b3-947c-62acece2e924-kube-api-access-kshbn\") pod \"crc-debug-4m9db\" (UID: \"c81449cc-5ba7-40b3-947c-62acece2e924\") " pod="openshift-must-gather-z9k48/crc-debug-4m9db" Nov 25 12:41:00 crc kubenswrapper[4706]: I1125 12:41:00.558851 4706 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-z9k48/crc-debug-4m9db" Nov 25 12:41:01 crc kubenswrapper[4706]: I1125 12:41:01.198575 4706 generic.go:334] "Generic (PLEG): container finished" podID="c81449cc-5ba7-40b3-947c-62acece2e924" containerID="734ec4d142fea5e55c7b64e5812f8991bcc3a47bd2bbe427f675fa3e2615fa68" exitCode=0 Nov 25 12:41:01 crc kubenswrapper[4706]: I1125 12:41:01.198648 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-z9k48/crc-debug-4m9db" event={"ID":"c81449cc-5ba7-40b3-947c-62acece2e924","Type":"ContainerDied","Data":"734ec4d142fea5e55c7b64e5812f8991bcc3a47bd2bbe427f675fa3e2615fa68"} Nov 25 12:41:01 crc kubenswrapper[4706]: I1125 12:41:01.198980 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-z9k48/crc-debug-4m9db" event={"ID":"c81449cc-5ba7-40b3-947c-62acece2e924","Type":"ContainerStarted","Data":"96d19e6b77ba4ac344b3c8e7cc3a79904ba0c7128f7e36a0bc3a287d487f3df8"} Nov 25 12:41:01 crc kubenswrapper[4706]: I1125 12:41:01.744522 4706 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-z9k48/crc-debug-4m9db"] Nov 25 12:41:01 crc kubenswrapper[4706]: I1125 12:41:01.753201 4706 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openshift-must-gather-z9k48/crc-debug-4m9db"] Nov 25 12:41:02 crc kubenswrapper[4706]: I1125 12:41:02.324559 4706 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-z9k48/crc-debug-4m9db" Nov 25 12:41:02 crc kubenswrapper[4706]: I1125 12:41:02.490223 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/c81449cc-5ba7-40b3-947c-62acece2e924-host\") pod \"c81449cc-5ba7-40b3-947c-62acece2e924\" (UID: \"c81449cc-5ba7-40b3-947c-62acece2e924\") " Nov 25 12:41:02 crc kubenswrapper[4706]: I1125 12:41:02.490442 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c81449cc-5ba7-40b3-947c-62acece2e924-host" (OuterVolumeSpecName: "host") pod "c81449cc-5ba7-40b3-947c-62acece2e924" (UID: "c81449cc-5ba7-40b3-947c-62acece2e924"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 25 12:41:02 crc kubenswrapper[4706]: I1125 12:41:02.490487 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kshbn\" (UniqueName: \"kubernetes.io/projected/c81449cc-5ba7-40b3-947c-62acece2e924-kube-api-access-kshbn\") pod \"c81449cc-5ba7-40b3-947c-62acece2e924\" (UID: \"c81449cc-5ba7-40b3-947c-62acece2e924\") " Nov 25 12:41:02 crc kubenswrapper[4706]: I1125 12:41:02.490988 4706 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/c81449cc-5ba7-40b3-947c-62acece2e924-host\") on node \"crc\" DevicePath \"\"" Nov 25 12:41:02 crc kubenswrapper[4706]: I1125 12:41:02.498723 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c81449cc-5ba7-40b3-947c-62acece2e924-kube-api-access-kshbn" (OuterVolumeSpecName: "kube-api-access-kshbn") pod "c81449cc-5ba7-40b3-947c-62acece2e924" (UID: "c81449cc-5ba7-40b3-947c-62acece2e924"). 
InnerVolumeSpecName "kube-api-access-kshbn". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 12:41:02 crc kubenswrapper[4706]: I1125 12:41:02.593023 4706 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kshbn\" (UniqueName: \"kubernetes.io/projected/c81449cc-5ba7-40b3-947c-62acece2e924-kube-api-access-kshbn\") on node \"crc\" DevicePath \"\"" Nov 25 12:41:02 crc kubenswrapper[4706]: I1125 12:41:02.942511 4706 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-z9k48/crc-debug-ldvgj"] Nov 25 12:41:02 crc kubenswrapper[4706]: E1125 12:41:02.943275 4706 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c81449cc-5ba7-40b3-947c-62acece2e924" containerName="container-00" Nov 25 12:41:02 crc kubenswrapper[4706]: I1125 12:41:02.943292 4706 state_mem.go:107] "Deleted CPUSet assignment" podUID="c81449cc-5ba7-40b3-947c-62acece2e924" containerName="container-00" Nov 25 12:41:02 crc kubenswrapper[4706]: I1125 12:41:02.943546 4706 memory_manager.go:354] "RemoveStaleState removing state" podUID="c81449cc-5ba7-40b3-947c-62acece2e924" containerName="container-00" Nov 25 12:41:02 crc kubenswrapper[4706]: I1125 12:41:02.944205 4706 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-z9k48/crc-debug-ldvgj" Nov 25 12:41:03 crc kubenswrapper[4706]: I1125 12:41:03.105237 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/6d066d0e-0894-40a4-94df-d503e2b2cbf2-host\") pod \"crc-debug-ldvgj\" (UID: \"6d066d0e-0894-40a4-94df-d503e2b2cbf2\") " pod="openshift-must-gather-z9k48/crc-debug-ldvgj" Nov 25 12:41:03 crc kubenswrapper[4706]: I1125 12:41:03.105607 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4z2fq\" (UniqueName: \"kubernetes.io/projected/6d066d0e-0894-40a4-94df-d503e2b2cbf2-kube-api-access-4z2fq\") pod \"crc-debug-ldvgj\" (UID: \"6d066d0e-0894-40a4-94df-d503e2b2cbf2\") " pod="openshift-must-gather-z9k48/crc-debug-ldvgj" Nov 25 12:41:03 crc kubenswrapper[4706]: I1125 12:41:03.207093 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4z2fq\" (UniqueName: \"kubernetes.io/projected/6d066d0e-0894-40a4-94df-d503e2b2cbf2-kube-api-access-4z2fq\") pod \"crc-debug-ldvgj\" (UID: \"6d066d0e-0894-40a4-94df-d503e2b2cbf2\") " pod="openshift-must-gather-z9k48/crc-debug-ldvgj" Nov 25 12:41:03 crc kubenswrapper[4706]: I1125 12:41:03.207225 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/6d066d0e-0894-40a4-94df-d503e2b2cbf2-host\") pod \"crc-debug-ldvgj\" (UID: \"6d066d0e-0894-40a4-94df-d503e2b2cbf2\") " pod="openshift-must-gather-z9k48/crc-debug-ldvgj" Nov 25 12:41:03 crc kubenswrapper[4706]: I1125 12:41:03.207470 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/6d066d0e-0894-40a4-94df-d503e2b2cbf2-host\") pod \"crc-debug-ldvgj\" (UID: \"6d066d0e-0894-40a4-94df-d503e2b2cbf2\") " pod="openshift-must-gather-z9k48/crc-debug-ldvgj" Nov 25 12:41:03 crc 
kubenswrapper[4706]: I1125 12:41:03.223954 4706 scope.go:117] "RemoveContainer" containerID="734ec4d142fea5e55c7b64e5812f8991bcc3a47bd2bbe427f675fa3e2615fa68" Nov 25 12:41:03 crc kubenswrapper[4706]: I1125 12:41:03.223979 4706 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-z9k48/crc-debug-4m9db" Nov 25 12:41:03 crc kubenswrapper[4706]: I1125 12:41:03.230977 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4z2fq\" (UniqueName: \"kubernetes.io/projected/6d066d0e-0894-40a4-94df-d503e2b2cbf2-kube-api-access-4z2fq\") pod \"crc-debug-ldvgj\" (UID: \"6d066d0e-0894-40a4-94df-d503e2b2cbf2\") " pod="openshift-must-gather-z9k48/crc-debug-ldvgj" Nov 25 12:41:03 crc kubenswrapper[4706]: I1125 12:41:03.272067 4706 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-z9k48/crc-debug-ldvgj" Nov 25 12:41:03 crc kubenswrapper[4706]: I1125 12:41:03.934601 4706 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c81449cc-5ba7-40b3-947c-62acece2e924" path="/var/lib/kubelet/pods/c81449cc-5ba7-40b3-947c-62acece2e924/volumes" Nov 25 12:41:04 crc kubenswrapper[4706]: I1125 12:41:04.237475 4706 generic.go:334] "Generic (PLEG): container finished" podID="6d066d0e-0894-40a4-94df-d503e2b2cbf2" containerID="740288fa829a6495ab46ebeaebf6d8c0556e1beaa3970253f51e844c738e610c" exitCode=0 Nov 25 12:41:04 crc kubenswrapper[4706]: I1125 12:41:04.237554 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-z9k48/crc-debug-ldvgj" event={"ID":"6d066d0e-0894-40a4-94df-d503e2b2cbf2","Type":"ContainerDied","Data":"740288fa829a6495ab46ebeaebf6d8c0556e1beaa3970253f51e844c738e610c"} Nov 25 12:41:04 crc kubenswrapper[4706]: I1125 12:41:04.237859 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-z9k48/crc-debug-ldvgj" 
event={"ID":"6d066d0e-0894-40a4-94df-d503e2b2cbf2","Type":"ContainerStarted","Data":"efae652837c6604bd897f05f4fc551b25a4adce8926a49020dfa04fea2a2b460"} Nov 25 12:41:04 crc kubenswrapper[4706]: I1125 12:41:04.277361 4706 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-z9k48/crc-debug-ldvgj"] Nov 25 12:41:04 crc kubenswrapper[4706]: I1125 12:41:04.287116 4706 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-z9k48/crc-debug-ldvgj"] Nov 25 12:41:05 crc kubenswrapper[4706]: I1125 12:41:05.372432 4706 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-z9k48/crc-debug-ldvgj" Nov 25 12:41:05 crc kubenswrapper[4706]: I1125 12:41:05.462741 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4z2fq\" (UniqueName: \"kubernetes.io/projected/6d066d0e-0894-40a4-94df-d503e2b2cbf2-kube-api-access-4z2fq\") pod \"6d066d0e-0894-40a4-94df-d503e2b2cbf2\" (UID: \"6d066d0e-0894-40a4-94df-d503e2b2cbf2\") " Nov 25 12:41:05 crc kubenswrapper[4706]: I1125 12:41:05.463044 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/6d066d0e-0894-40a4-94df-d503e2b2cbf2-host\") pod \"6d066d0e-0894-40a4-94df-d503e2b2cbf2\" (UID: \"6d066d0e-0894-40a4-94df-d503e2b2cbf2\") " Nov 25 12:41:05 crc kubenswrapper[4706]: I1125 12:41:05.463254 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6d066d0e-0894-40a4-94df-d503e2b2cbf2-host" (OuterVolumeSpecName: "host") pod "6d066d0e-0894-40a4-94df-d503e2b2cbf2" (UID: "6d066d0e-0894-40a4-94df-d503e2b2cbf2"). InnerVolumeSpecName "host". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 25 12:41:05 crc kubenswrapper[4706]: I1125 12:41:05.463800 4706 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/6d066d0e-0894-40a4-94df-d503e2b2cbf2-host\") on node \"crc\" DevicePath \"\"" Nov 25 12:41:05 crc kubenswrapper[4706]: I1125 12:41:05.469695 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6d066d0e-0894-40a4-94df-d503e2b2cbf2-kube-api-access-4z2fq" (OuterVolumeSpecName: "kube-api-access-4z2fq") pod "6d066d0e-0894-40a4-94df-d503e2b2cbf2" (UID: "6d066d0e-0894-40a4-94df-d503e2b2cbf2"). InnerVolumeSpecName "kube-api-access-4z2fq". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 12:41:05 crc kubenswrapper[4706]: I1125 12:41:05.567087 4706 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4z2fq\" (UniqueName: \"kubernetes.io/projected/6d066d0e-0894-40a4-94df-d503e2b2cbf2-kube-api-access-4z2fq\") on node \"crc\" DevicePath \"\"" Nov 25 12:41:05 crc kubenswrapper[4706]: I1125 12:41:05.947174 4706 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6d066d0e-0894-40a4-94df-d503e2b2cbf2" path="/var/lib/kubelet/pods/6d066d0e-0894-40a4-94df-d503e2b2cbf2/volumes" Nov 25 12:41:06 crc kubenswrapper[4706]: I1125 12:41:06.259313 4706 scope.go:117] "RemoveContainer" containerID="740288fa829a6495ab46ebeaebf6d8c0556e1beaa3970253f51e844c738e610c" Nov 25 12:41:06 crc kubenswrapper[4706]: I1125 12:41:06.259331 4706 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-z9k48/crc-debug-ldvgj" Nov 25 12:41:11 crc kubenswrapper[4706]: I1125 12:41:11.932790 4706 scope.go:117] "RemoveContainer" containerID="f7d4f2bc57b2d7499bb910a36c7d647ec55fac45e9295616e11685165a93deff" Nov 25 12:41:11 crc kubenswrapper[4706]: E1125 12:41:11.933705 4706 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dhfpm_openshift-machine-config-operator(0930887a-320c-4506-8c9c-f94d6d64516a)\"" pod="openshift-machine-config-operator/machine-config-daemon-dhfpm" podUID="0930887a-320c-4506-8c9c-f94d6d64516a" Nov 25 12:41:21 crc kubenswrapper[4706]: I1125 12:41:21.697737 4706 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-85c7db76fd-f64jq_500c37cc-45dd-444f-a630-19356ac8d1e3/barbican-api/0.log" Nov 25 12:41:21 crc kubenswrapper[4706]: I1125 12:41:21.904711 4706 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-6c9c496566-jrgpl_2ea4caef-6e53-42ac-9202-cf4b05a28041/barbican-keystone-listener/0.log" Nov 25 12:41:21 crc kubenswrapper[4706]: I1125 12:41:21.914819 4706 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-85c7db76fd-f64jq_500c37cc-45dd-444f-a630-19356ac8d1e3/barbican-api-log/0.log" Nov 25 12:41:21 crc kubenswrapper[4706]: I1125 12:41:21.930119 4706 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-6c9c496566-jrgpl_2ea4caef-6e53-42ac-9202-cf4b05a28041/barbican-keystone-listener-log/0.log" Nov 25 12:41:22 crc kubenswrapper[4706]: I1125 12:41:22.115826 4706 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-7fc64dc5d7-m6cqm_ac9c3625-3935-48b4-abf3-a8330d99152d/barbican-worker/0.log" Nov 25 12:41:22 crc kubenswrapper[4706]: I1125 12:41:22.121983 
4706 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-7fc64dc5d7-m6cqm_ac9c3625-3935-48b4-abf3-a8330d99152d/barbican-worker-log/0.log" Nov 25 12:41:22 crc kubenswrapper[4706]: I1125 12:41:22.406374 4706 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_340a9043-f74e-40cb-aeea-bbcabe4d865f/ceilometer-central-agent/0.log" Nov 25 12:41:22 crc kubenswrapper[4706]: I1125 12:41:22.407105 4706 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_bootstrap-edpm-deployment-openstack-edpm-ipam-ntv4r_50dff0a2-b50d-43ee-8951-e49958b3cd5a/bootstrap-edpm-deployment-openstack-edpm-ipam/0.log" Nov 25 12:41:22 crc kubenswrapper[4706]: I1125 12:41:22.466197 4706 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_340a9043-f74e-40cb-aeea-bbcabe4d865f/ceilometer-notification-agent/0.log" Nov 25 12:41:22 crc kubenswrapper[4706]: I1125 12:41:22.587570 4706 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_340a9043-f74e-40cb-aeea-bbcabe4d865f/proxy-httpd/0.log" Nov 25 12:41:22 crc kubenswrapper[4706]: I1125 12:41:22.606751 4706 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_340a9043-f74e-40cb-aeea-bbcabe4d865f/sg-core/0.log" Nov 25 12:41:22 crc kubenswrapper[4706]: I1125 12:41:22.762143 4706 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_3f35fbd6-a7c7-4d44-af30-601512a5dfa4/cinder-api/0.log" Nov 25 12:41:22 crc kubenswrapper[4706]: I1125 12:41:22.905818 4706 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_f4dd78e0-575d-4188-b6f5-17ab8a12383c/cinder-scheduler/0.log" Nov 25 12:41:22 crc kubenswrapper[4706]: I1125 12:41:22.922624 4706 scope.go:117] "RemoveContainer" containerID="f7d4f2bc57b2d7499bb910a36c7d647ec55fac45e9295616e11685165a93deff" Nov 25 12:41:22 crc kubenswrapper[4706]: E1125 12:41:22.922875 4706 pod_workers.go:1301] "Error 
syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dhfpm_openshift-machine-config-operator(0930887a-320c-4506-8c9c-f94d6d64516a)\"" pod="openshift-machine-config-operator/machine-config-daemon-dhfpm" podUID="0930887a-320c-4506-8c9c-f94d6d64516a" Nov 25 12:41:22 crc kubenswrapper[4706]: I1125 12:41:22.947650 4706 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_3f35fbd6-a7c7-4d44-af30-601512a5dfa4/cinder-api-log/0.log" Nov 25 12:41:23 crc kubenswrapper[4706]: I1125 12:41:23.057740 4706 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_f4dd78e0-575d-4188-b6f5-17ab8a12383c/probe/0.log" Nov 25 12:41:23 crc kubenswrapper[4706]: I1125 12:41:23.141620 4706 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-network-edpm-deployment-openstack-edpm-ipam-wtp98_81138548-0b1d-43b6-af7c-fdf31598a28d/configure-network-edpm-deployment-openstack-edpm-ipam/0.log" Nov 25 12:41:23 crc kubenswrapper[4706]: I1125 12:41:23.289133 4706 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-os-edpm-deployment-openstack-edpm-ipam-h4crd_04cc6fd1-5a4f-4d7d-aed4-849709bb005d/configure-os-edpm-deployment-openstack-edpm-ipam/0.log" Nov 25 12:41:23 crc kubenswrapper[4706]: I1125 12:41:23.367063 4706 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-55478c4467-777cf_3ab6dcdf-bba1-4c4c-aa91-47a06fd22366/init/0.log" Nov 25 12:41:23 crc kubenswrapper[4706]: I1125 12:41:23.544479 4706 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-55478c4467-777cf_3ab6dcdf-bba1-4c4c-aa91-47a06fd22366/dnsmasq-dns/0.log" Nov 25 12:41:23 crc kubenswrapper[4706]: I1125 12:41:23.567635 4706 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_dnsmasq-dns-55478c4467-777cf_3ab6dcdf-bba1-4c4c-aa91-47a06fd22366/init/0.log" Nov 25 12:41:23 crc kubenswrapper[4706]: I1125 12:41:23.576465 4706 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_download-cache-edpm-deployment-openstack-edpm-ipam-9hvc8_c905bf42-3156-4c1f-8f93-4ab4c0141fdd/download-cache-edpm-deployment-openstack-edpm-ipam/0.log" Nov 25 12:41:23 crc kubenswrapper[4706]: I1125 12:41:23.752938 4706 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_d0c5bfae-397f-432d-bdb6-8bb27d43f68c/glance-httpd/0.log" Nov 25 12:41:23 crc kubenswrapper[4706]: I1125 12:41:23.802594 4706 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_d0c5bfae-397f-432d-bdb6-8bb27d43f68c/glance-log/0.log" Nov 25 12:41:23 crc kubenswrapper[4706]: I1125 12:41:23.945849 4706 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_56ae92e0-a5ff-4b66-b471-6e38781e51da/glance-log/0.log" Nov 25 12:41:23 crc kubenswrapper[4706]: I1125 12:41:23.978864 4706 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_56ae92e0-a5ff-4b66-b471-6e38781e51da/glance-httpd/0.log" Nov 25 12:41:24 crc kubenswrapper[4706]: I1125 12:41:24.207660 4706 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-85664bf4f6-ws67w_66bfb4a4-e60d-4f75-ad0b-1ad3e8ff1bf5/horizon/0.log" Nov 25 12:41:24 crc kubenswrapper[4706]: I1125 12:41:24.328463 4706 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-certs-edpm-deployment-openstack-edpm-ipam-595gj_baaa73b2-135d-4ce5-8e1a-4c7ffde4e639/install-certs-edpm-deployment-openstack-edpm-ipam/0.log" Nov 25 12:41:24 crc kubenswrapper[4706]: I1125 12:41:24.599481 4706 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_horizon-85664bf4f6-ws67w_66bfb4a4-e60d-4f75-ad0b-1ad3e8ff1bf5/horizon-log/0.log" Nov 25 12:41:24 crc kubenswrapper[4706]: I1125 12:41:24.661159 4706 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-os-edpm-deployment-openstack-edpm-ipam-zlncj_5f5a244b-95ce-4443-9951-780763117499/install-os-edpm-deployment-openstack-edpm-ipam/0.log" Nov 25 12:41:24 crc kubenswrapper[4706]: I1125 12:41:24.833003 4706 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-cron-29401201-6qr5x_6e578ce4-062a-47d6-ad7e-c1e36d257077/keystone-cron/0.log" Nov 25 12:41:25 crc kubenswrapper[4706]: I1125 12:41:25.011065 4706 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_kube-state-metrics-0_04e7a5d0-b5fe-4a58-b015-339cc1218c6e/kube-state-metrics/3.log" Nov 25 12:41:25 crc kubenswrapper[4706]: I1125 12:41:25.067905 4706 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_kube-state-metrics-0_04e7a5d0-b5fe-4a58-b015-339cc1218c6e/kube-state-metrics/2.log" Nov 25 12:41:25 crc kubenswrapper[4706]: I1125 12:41:25.105734 4706 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-854bff779d-k8bjv_df1ddb84-cafd-4f7f-b1cf-c6fb37b7e92e/keystone-api/0.log" Nov 25 12:41:25 crc kubenswrapper[4706]: I1125 12:41:25.273497 4706 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_libvirt-edpm-deployment-openstack-edpm-ipam-g6fp7_90e48cbb-dd1b-466b-a72f-5e2913554a5b/libvirt-edpm-deployment-openstack-edpm-ipam/0.log" Nov 25 12:41:25 crc kubenswrapper[4706]: I1125 12:41:25.653286 4706 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-7964f7f8cc-7zjzw_b108b69d-0dd8-4945-aa38-c2caee99bac1/neutron-httpd/0.log" Nov 25 12:41:25 crc kubenswrapper[4706]: I1125 12:41:25.668617 4706 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_neutron-metadata-edpm-deployment-openstack-edpm-ipam-q68jk_5686661c-4510-41ab-aed3-7ab5fa576b60/neutron-metadata-edpm-deployment-openstack-edpm-ipam/0.log" Nov 25 12:41:25 crc kubenswrapper[4706]: I1125 12:41:25.699491 4706 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-7964f7f8cc-7zjzw_b108b69d-0dd8-4945-aa38-c2caee99bac1/neutron-api/0.log" Nov 25 12:41:26 crc kubenswrapper[4706]: I1125 12:41:26.313389 4706 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_0608285b-d97c-42b6-abc5-32cff6509d9e/nova-api-log/0.log" Nov 25 12:41:26 crc kubenswrapper[4706]: I1125 12:41:26.386459 4706 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell0-conductor-0_f550fc56-7c91-4ca6-b10e-6394166b34c8/nova-cell0-conductor-conductor/0.log" Nov 25 12:41:26 crc kubenswrapper[4706]: I1125 12:41:26.612246 4706 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_0608285b-d97c-42b6-abc5-32cff6509d9e/nova-api-api/0.log" Nov 25 12:41:26 crc kubenswrapper[4706]: I1125 12:41:26.706709 4706 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-conductor-0_125dfab1-ad73-40ed-bd12-3e061e6b0ec2/nova-cell1-conductor-conductor/0.log" Nov 25 12:41:27 crc kubenswrapper[4706]: I1125 12:41:27.035961 4706 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-edpm-deployment-openstack-edpm-ipam-67xt7_f74a1106-ae1e-464c-a761-dc47c54c361c/nova-edpm-deployment-openstack-edpm-ipam/0.log" Nov 25 12:41:27 crc kubenswrapper[4706]: I1125 12:41:27.052941 4706 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-novncproxy-0_562e456e-a719-47cb-b220-06ccb6fc06cc/nova-cell1-novncproxy-novncproxy/0.log" Nov 25 12:41:27 crc kubenswrapper[4706]: I1125 12:41:27.220184 4706 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_nova-metadata-0_4169a8fb-29dd-4d0a-851f-58055dcfff18/nova-metadata-log/0.log" Nov 25 12:41:27 crc kubenswrapper[4706]: I1125 12:41:27.419432 4706 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-scheduler-0_dea70033-299d-4ca8-9249-c909449f24c9/nova-scheduler-scheduler/0.log" Nov 25 12:41:27 crc kubenswrapper[4706]: I1125 12:41:27.547446 4706 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_49e77cd2-5940-4ae6-9418-d069ce012ad7/mysql-bootstrap/0.log" Nov 25 12:41:27 crc kubenswrapper[4706]: I1125 12:41:27.795130 4706 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_49e77cd2-5940-4ae6-9418-d069ce012ad7/mysql-bootstrap/0.log" Nov 25 12:41:27 crc kubenswrapper[4706]: I1125 12:41:27.943755 4706 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_49e77cd2-5940-4ae6-9418-d069ce012ad7/galera/0.log" Nov 25 12:41:28 crc kubenswrapper[4706]: I1125 12:41:28.098256 4706 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_64ca6766-8491-40bc-a14e-eb866edf3fe8/mysql-bootstrap/0.log" Nov 25 12:41:28 crc kubenswrapper[4706]: I1125 12:41:28.277280 4706 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_64ca6766-8491-40bc-a14e-eb866edf3fe8/galera/0.log" Nov 25 12:41:28 crc kubenswrapper[4706]: I1125 12:41:28.314349 4706 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_64ca6766-8491-40bc-a14e-eb866edf3fe8/mysql-bootstrap/0.log" Nov 25 12:41:28 crc kubenswrapper[4706]: I1125 12:41:28.647945 4706 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_4169a8fb-29dd-4d0a-851f-58055dcfff18/nova-metadata-metadata/0.log" Nov 25 12:41:28 crc kubenswrapper[4706]: I1125 12:41:28.657757 4706 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_openstackclient_b8a85f10-0dcd-42f8-a4bc-f0b25f59cfe8/openstackclient/0.log" Nov 25 12:41:28 crc kubenswrapper[4706]: I1125 12:41:28.749727 4706 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-kd65v_23b72526-ef77-4128-a880-6df46f5db440/ovn-controller/0.log" Nov 25 12:41:29 crc kubenswrapper[4706]: I1125 12:41:29.172418 4706 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-9sjfp_39f1459f-1764-4a48-8363-b32ac9350cdb/openstack-network-exporter/0.log" Nov 25 12:41:29 crc kubenswrapper[4706]: I1125 12:41:29.247263 4706 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-q8rmg_a2035192-0066-4761-b5a8-2684c95f20ff/ovsdb-server-init/0.log" Nov 25 12:41:29 crc kubenswrapper[4706]: I1125 12:41:29.530178 4706 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-q8rmg_a2035192-0066-4761-b5a8-2684c95f20ff/ovs-vswitchd/0.log" Nov 25 12:41:29 crc kubenswrapper[4706]: I1125 12:41:29.552501 4706 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-q8rmg_a2035192-0066-4761-b5a8-2684c95f20ff/ovsdb-server/0.log" Nov 25 12:41:29 crc kubenswrapper[4706]: I1125 12:41:29.598275 4706 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-q8rmg_a2035192-0066-4761-b5a8-2684c95f20ff/ovsdb-server-init/0.log" Nov 25 12:41:29 crc kubenswrapper[4706]: I1125 12:41:29.792184 4706 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-edpm-deployment-openstack-edpm-ipam-6kxnq_97dd7a8b-3605-49a2-ad4d-72dd946605aa/ovn-edpm-deployment-openstack-edpm-ipam/0.log" Nov 25 12:41:29 crc kubenswrapper[4706]: I1125 12:41:29.837432 4706 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_655006b1-956d-49e9-b15f-c00cd945c024/openstack-network-exporter/0.log" Nov 25 12:41:29 crc kubenswrapper[4706]: I1125 12:41:29.840795 
4706 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_655006b1-956d-49e9-b15f-c00cd945c024/ovn-northd/0.log" Nov 25 12:41:30 crc kubenswrapper[4706]: I1125 12:41:30.075853 4706 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_3c49be9b-0e12-4db2-82be-3415441f57d4/openstack-network-exporter/0.log" Nov 25 12:41:30 crc kubenswrapper[4706]: I1125 12:41:30.117272 4706 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_3c49be9b-0e12-4db2-82be-3415441f57d4/ovsdbserver-nb/0.log" Nov 25 12:41:30 crc kubenswrapper[4706]: I1125 12:41:30.330085 4706 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_752cf7db-684f-4a5a-8a03-717e69810056/openstack-network-exporter/0.log" Nov 25 12:41:30 crc kubenswrapper[4706]: I1125 12:41:30.334382 4706 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_752cf7db-684f-4a5a-8a03-717e69810056/ovsdbserver-sb/0.log" Nov 25 12:41:30 crc kubenswrapper[4706]: I1125 12:41:30.579257 4706 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-5bfcb97b8-lmwjc_2dab0780-5792-4f20-9553-a780aa94ebba/placement-api/0.log" Nov 25 12:41:30 crc kubenswrapper[4706]: I1125 12:41:30.695710 4706 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_6ea2e87f-dc81-49cc-81a8-e08a8ed11f12/setup-container/0.log" Nov 25 12:41:30 crc kubenswrapper[4706]: I1125 12:41:30.776906 4706 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-5bfcb97b8-lmwjc_2dab0780-5792-4f20-9553-a780aa94ebba/placement-log/0.log" Nov 25 12:41:31 crc kubenswrapper[4706]: I1125 12:41:31.076395 4706 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_6ea2e87f-dc81-49cc-81a8-e08a8ed11f12/rabbitmq/0.log" Nov 25 12:41:31 crc kubenswrapper[4706]: I1125 12:41:31.118748 4706 log.go:25] "Finished parsing log 
file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_6ea2e87f-dc81-49cc-81a8-e08a8ed11f12/setup-container/0.log" Nov 25 12:41:31 crc kubenswrapper[4706]: I1125 12:41:31.140213 4706 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_a9a6207a-78de-492d-8c88-9a1d2a6f703d/setup-container/0.log" Nov 25 12:41:31 crc kubenswrapper[4706]: I1125 12:41:31.396145 4706 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_a9a6207a-78de-492d-8c88-9a1d2a6f703d/setup-container/0.log" Nov 25 12:41:31 crc kubenswrapper[4706]: I1125 12:41:31.426738 4706 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_a9a6207a-78de-492d-8c88-9a1d2a6f703d/rabbitmq/0.log" Nov 25 12:41:31 crc kubenswrapper[4706]: I1125 12:41:31.541205 4706 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_reboot-os-edpm-deployment-openstack-edpm-ipam-29gdm_9357f592-809a-450b-b052-fbb438c6d98f/reboot-os-edpm-deployment-openstack-edpm-ipam/0.log" Nov 25 12:41:31 crc kubenswrapper[4706]: I1125 12:41:31.923663 4706 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_redhat-edpm-deployment-openstack-edpm-ipam-qn78f_b86d7293-ea09-42c5-948d-27c51a31d886/redhat-edpm-deployment-openstack-edpm-ipam/0.log" Nov 25 12:41:32 crc kubenswrapper[4706]: I1125 12:41:32.014037 4706 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_repo-setup-edpm-deployment-openstack-edpm-ipam-mtxnw_e0e1584c-f1bf-45e7-ac6c-2768ffc5c1c3/repo-setup-edpm-deployment-openstack-edpm-ipam/0.log" Nov 25 12:41:32 crc kubenswrapper[4706]: I1125 12:41:32.224740 4706 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_run-os-edpm-deployment-openstack-edpm-ipam-4j6mw_2976f69c-c134-429f-98c4-f7d54d9245b1/run-os-edpm-deployment-openstack-edpm-ipam/0.log" Nov 25 12:41:32 crc kubenswrapper[4706]: I1125 12:41:32.440355 4706 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_ssh-known-hosts-edpm-deployment-d2qht_ab590c42-c26e-49b8-8fd1-e1c535dd7e8c/ssh-known-hosts-edpm-deployment/0.log" Nov 25 12:41:32 crc kubenswrapper[4706]: I1125 12:41:32.678133 4706 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-65d9589979-xw964_64d9e8db-d554-4623-9a76-719df27fffef/proxy-server/0.log" Nov 25 12:41:32 crc kubenswrapper[4706]: I1125 12:41:32.714095 4706 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-65d9589979-xw964_64d9e8db-d554-4623-9a76-719df27fffef/proxy-httpd/0.log" Nov 25 12:41:32 crc kubenswrapper[4706]: I1125 12:41:32.843146 4706 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-ring-rebalance-ww65d_687ee889-8ec7-4754-b45f-b0f087368a37/swift-ring-rebalance/0.log" Nov 25 12:41:32 crc kubenswrapper[4706]: I1125 12:41:32.941817 4706 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_9225b01e-1067-47de-812a-d9be36adf9d0/account-auditor/0.log" Nov 25 12:41:32 crc kubenswrapper[4706]: I1125 12:41:32.994870 4706 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_9225b01e-1067-47de-812a-d9be36adf9d0/account-reaper/0.log" Nov 25 12:41:33 crc kubenswrapper[4706]: I1125 12:41:33.097633 4706 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_9225b01e-1067-47de-812a-d9be36adf9d0/account-replicator/0.log" Nov 25 12:41:33 crc kubenswrapper[4706]: I1125 12:41:33.185840 4706 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_9225b01e-1067-47de-812a-d9be36adf9d0/container-auditor/0.log" Nov 25 12:41:33 crc kubenswrapper[4706]: I1125 12:41:33.214776 4706 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_9225b01e-1067-47de-812a-d9be36adf9d0/account-server/0.log" Nov 25 12:41:33 crc kubenswrapper[4706]: I1125 12:41:33.336803 4706 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_swift-storage-0_9225b01e-1067-47de-812a-d9be36adf9d0/container-replicator/0.log" Nov 25 12:41:33 crc kubenswrapper[4706]: I1125 12:41:33.357909 4706 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_9225b01e-1067-47de-812a-d9be36adf9d0/container-server/0.log" Nov 25 12:41:33 crc kubenswrapper[4706]: I1125 12:41:33.474122 4706 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_9225b01e-1067-47de-812a-d9be36adf9d0/object-auditor/0.log" Nov 25 12:41:33 crc kubenswrapper[4706]: I1125 12:41:33.526411 4706 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_9225b01e-1067-47de-812a-d9be36adf9d0/object-expirer/0.log" Nov 25 12:41:33 crc kubenswrapper[4706]: I1125 12:41:33.531328 4706 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_9225b01e-1067-47de-812a-d9be36adf9d0/container-updater/0.log" Nov 25 12:41:33 crc kubenswrapper[4706]: I1125 12:41:33.638261 4706 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_9225b01e-1067-47de-812a-d9be36adf9d0/object-replicator/0.log" Nov 25 12:41:33 crc kubenswrapper[4706]: I1125 12:41:33.695633 4706 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_9225b01e-1067-47de-812a-d9be36adf9d0/object-server/0.log" Nov 25 12:41:33 crc kubenswrapper[4706]: I1125 12:41:33.726593 4706 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_9225b01e-1067-47de-812a-d9be36adf9d0/rsync/0.log" Nov 25 12:41:33 crc kubenswrapper[4706]: I1125 12:41:33.764745 4706 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_9225b01e-1067-47de-812a-d9be36adf9d0/object-updater/0.log" Nov 25 12:41:33 crc kubenswrapper[4706]: I1125 12:41:33.884342 4706 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_swift-storage-0_9225b01e-1067-47de-812a-d9be36adf9d0/swift-recon-cron/0.log" Nov 25 12:41:33 crc kubenswrapper[4706]: I1125 12:41:33.982078 4706 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_telemetry-edpm-deployment-openstack-edpm-ipam-rtmfj_10becdf1-f704-46ec-aee6-b4ef4fdbed09/telemetry-edpm-deployment-openstack-edpm-ipam/0.log" Nov 25 12:41:34 crc kubenswrapper[4706]: I1125 12:41:34.318870 4706 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_tempest-tests-tempest_a3e38444-7907-4d48-bc07-b6b7dc4854a8/tempest-tests-tempest-tests-runner/0.log" Nov 25 12:41:34 crc kubenswrapper[4706]: I1125 12:41:34.450151 4706 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_test-operator-logs-pod-tempest-tempest-tests-tempest_586b9083-1af0-4687-886b-bdaf4041ba31/test-operator-logs-container/0.log" Nov 25 12:41:34 crc kubenswrapper[4706]: I1125 12:41:34.664345 4706 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_validate-network-edpm-deployment-openstack-edpm-ipam-2j66d_29e15319-39a4-4af6-869c-3f49b55997bc/validate-network-edpm-deployment-openstack-edpm-ipam/0.log" Nov 25 12:41:34 crc kubenswrapper[4706]: I1125 12:41:34.921812 4706 scope.go:117] "RemoveContainer" containerID="f7d4f2bc57b2d7499bb910a36c7d647ec55fac45e9295616e11685165a93deff" Nov 25 12:41:34 crc kubenswrapper[4706]: E1125 12:41:34.922216 4706 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dhfpm_openshift-machine-config-operator(0930887a-320c-4506-8c9c-f94d6d64516a)\"" pod="openshift-machine-config-operator/machine-config-daemon-dhfpm" podUID="0930887a-320c-4506-8c9c-f94d6d64516a" Nov 25 12:41:43 crc kubenswrapper[4706]: I1125 12:41:43.369081 4706 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_memcached-0_37118d82-a55d-4a10-8b2c-6e5cf036474c/memcached/0.log" Nov 25 12:41:45 crc kubenswrapper[4706]: I1125 12:41:45.922370 4706 scope.go:117] "RemoveContainer" containerID="f7d4f2bc57b2d7499bb910a36c7d647ec55fac45e9295616e11685165a93deff" Nov 25 12:41:45 crc kubenswrapper[4706]: E1125 12:41:45.922917 4706 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dhfpm_openshift-machine-config-operator(0930887a-320c-4506-8c9c-f94d6d64516a)\"" pod="openshift-machine-config-operator/machine-config-daemon-dhfpm" podUID="0930887a-320c-4506-8c9c-f94d6d64516a" Nov 25 12:41:56 crc kubenswrapper[4706]: I1125 12:41:56.923325 4706 scope.go:117] "RemoveContainer" containerID="f7d4f2bc57b2d7499bb910a36c7d647ec55fac45e9295616e11685165a93deff" Nov 25 12:41:56 crc kubenswrapper[4706]: E1125 12:41:56.924322 4706 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dhfpm_openshift-machine-config-operator(0930887a-320c-4506-8c9c-f94d6d64516a)\"" pod="openshift-machine-config-operator/machine-config-daemon-dhfpm" podUID="0930887a-320c-4506-8c9c-f94d6d64516a" Nov 25 12:42:01 crc kubenswrapper[4706]: I1125 12:42:01.301284 4706 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_6cf372469a5f9156fbb7e5b80b05d9810593b0772b02df8e6f722f5cd17d8fv_787337fb-0b33-488b-a1b5-c680273f2c5b/util/0.log" Nov 25 12:42:01 crc kubenswrapper[4706]: I1125 12:42:01.531440 4706 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_6cf372469a5f9156fbb7e5b80b05d9810593b0772b02df8e6f722f5cd17d8fv_787337fb-0b33-488b-a1b5-c680273f2c5b/util/0.log" Nov 25 12:42:01 crc 
kubenswrapper[4706]: I1125 12:42:01.639374 4706 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_6cf372469a5f9156fbb7e5b80b05d9810593b0772b02df8e6f722f5cd17d8fv_787337fb-0b33-488b-a1b5-c680273f2c5b/pull/0.log" Nov 25 12:42:01 crc kubenswrapper[4706]: I1125 12:42:01.655200 4706 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_6cf372469a5f9156fbb7e5b80b05d9810593b0772b02df8e6f722f5cd17d8fv_787337fb-0b33-488b-a1b5-c680273f2c5b/pull/0.log" Nov 25 12:42:01 crc kubenswrapper[4706]: I1125 12:42:01.784492 4706 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_6cf372469a5f9156fbb7e5b80b05d9810593b0772b02df8e6f722f5cd17d8fv_787337fb-0b33-488b-a1b5-c680273f2c5b/util/0.log" Nov 25 12:42:01 crc kubenswrapper[4706]: I1125 12:42:01.886754 4706 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_6cf372469a5f9156fbb7e5b80b05d9810593b0772b02df8e6f722f5cd17d8fv_787337fb-0b33-488b-a1b5-c680273f2c5b/extract/0.log" Nov 25 12:42:01 crc kubenswrapper[4706]: I1125 12:42:01.888428 4706 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_6cf372469a5f9156fbb7e5b80b05d9810593b0772b02df8e6f722f5cd17d8fv_787337fb-0b33-488b-a1b5-c680273f2c5b/pull/0.log" Nov 25 12:42:02 crc kubenswrapper[4706]: I1125 12:42:02.042947 4706 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-86dc4d89c8-jh5hc_23155e14-a775-48c5-adf9-55dcfd008040/kube-rbac-proxy/0.log" Nov 25 12:42:02 crc kubenswrapper[4706]: I1125 12:42:02.077033 4706 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-86dc4d89c8-jh5hc_23155e14-a775-48c5-adf9-55dcfd008040/manager/1.log" Nov 25 12:42:02 crc kubenswrapper[4706]: I1125 12:42:02.122206 4706 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-86dc4d89c8-jh5hc_23155e14-a775-48c5-adf9-55dcfd008040/manager/2.log" Nov 25 12:42:02 crc kubenswrapper[4706]: I1125 12:42:02.257888 4706 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-79856dc55c-4bsmv_ee655c82-6748-4bba-9da4-dcf73e0cff37/kube-rbac-proxy/0.log" Nov 25 12:42:02 crc kubenswrapper[4706]: I1125 12:42:02.305254 4706 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-79856dc55c-4bsmv_ee655c82-6748-4bba-9da4-dcf73e0cff37/manager/2.log" Nov 25 12:42:02 crc kubenswrapper[4706]: I1125 12:42:02.433225 4706 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-79856dc55c-4bsmv_ee655c82-6748-4bba-9da4-dcf73e0cff37/manager/1.log" Nov 25 12:42:02 crc kubenswrapper[4706]: I1125 12:42:02.503666 4706 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-7d695c9b56-hqsp5_9fa65252-7bf5-4e83-beb7-dfcfa63db10d/kube-rbac-proxy/0.log" Nov 25 12:42:02 crc kubenswrapper[4706]: I1125 12:42:02.561351 4706 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-7d695c9b56-hqsp5_9fa65252-7bf5-4e83-beb7-dfcfa63db10d/manager/2.log" Nov 25 12:42:02 crc kubenswrapper[4706]: I1125 12:42:02.677724 4706 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-7d695c9b56-hqsp5_9fa65252-7bf5-4e83-beb7-dfcfa63db10d/manager/1.log" Nov 25 12:42:02 crc kubenswrapper[4706]: I1125 12:42:02.714309 4706 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-68b95954c9-t6c78_4857e509-acac-422c-87e8-2662708da599/kube-rbac-proxy/0.log" Nov 25 12:42:02 crc kubenswrapper[4706]: I1125 12:42:02.832067 
4706 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-68b95954c9-t6c78_4857e509-acac-422c-87e8-2662708da599/manager/2.log" Nov 25 12:42:02 crc kubenswrapper[4706]: I1125 12:42:02.926981 4706 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-68b95954c9-t6c78_4857e509-acac-422c-87e8-2662708da599/manager/1.log" Nov 25 12:42:02 crc kubenswrapper[4706]: I1125 12:42:02.956827 4706 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-774b86978c-9bz4f_c6de3b19-c207-4c00-8350-de810fb1f555/kube-rbac-proxy/0.log" Nov 25 12:42:03 crc kubenswrapper[4706]: I1125 12:42:03.037690 4706 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-774b86978c-9bz4f_c6de3b19-c207-4c00-8350-de810fb1f555/manager/2.log" Nov 25 12:42:03 crc kubenswrapper[4706]: I1125 12:42:03.106567 4706 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-774b86978c-9bz4f_c6de3b19-c207-4c00-8350-de810fb1f555/manager/1.log" Nov 25 12:42:03 crc kubenswrapper[4706]: I1125 12:42:03.158971 4706 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-68c9694994-zx4v6_72bbe536-121d-47c0-b473-2974b238f271/kube-rbac-proxy/0.log" Nov 25 12:42:03 crc kubenswrapper[4706]: I1125 12:42:03.361801 4706 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-68c9694994-zx4v6_72bbe536-121d-47c0-b473-2974b238f271/manager/2.log" Nov 25 12:42:03 crc kubenswrapper[4706]: I1125 12:42:03.466254 4706 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-68c9694994-zx4v6_72bbe536-121d-47c0-b473-2974b238f271/manager/1.log" Nov 25 12:42:03 crc kubenswrapper[4706]: 
I1125 12:42:03.555196 4706 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-d5cc86f4b-rfz7f_e204aa88-c108-491e-9a73-2fca5c2ef15c/kube-rbac-proxy/0.log" Nov 25 12:42:03 crc kubenswrapper[4706]: I1125 12:42:03.603197 4706 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-d5cc86f4b-rfz7f_e204aa88-c108-491e-9a73-2fca5c2ef15c/manager/2.log" Nov 25 12:42:03 crc kubenswrapper[4706]: I1125 12:42:03.718561 4706 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-d5cc86f4b-rfz7f_e204aa88-c108-491e-9a73-2fca5c2ef15c/manager/1.log" Nov 25 12:42:03 crc kubenswrapper[4706]: I1125 12:42:03.733211 4706 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-5bfcdc958c-l4m6r_9e5a3424-dd89-4411-872f-70447506cf73/kube-rbac-proxy/0.log" Nov 25 12:42:03 crc kubenswrapper[4706]: I1125 12:42:03.807191 4706 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-5bfcdc958c-l4m6r_9e5a3424-dd89-4411-872f-70447506cf73/manager/2.log" Nov 25 12:42:03 crc kubenswrapper[4706]: I1125 12:42:03.936860 4706 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-5bfcdc958c-l4m6r_9e5a3424-dd89-4411-872f-70447506cf73/manager/1.log" Nov 25 12:42:04 crc kubenswrapper[4706]: I1125 12:42:04.012191 4706 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-748dc6576f-nf6gr_6c41fff9-feeb-4311-a7ce-7da3a71b3e9c/kube-rbac-proxy/0.log" Nov 25 12:42:04 crc kubenswrapper[4706]: I1125 12:42:04.103153 4706 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-748dc6576f-nf6gr_6c41fff9-feeb-4311-a7ce-7da3a71b3e9c/manager/2.log" Nov 25 
12:42:04 crc kubenswrapper[4706]: I1125 12:42:04.180134 4706 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-748dc6576f-nf6gr_6c41fff9-feeb-4311-a7ce-7da3a71b3e9c/manager/1.log" Nov 25 12:42:04 crc kubenswrapper[4706]: I1125 12:42:04.265762 4706 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-58bb8d67cc-fslzs_70fa0d16-065a-463f-8198-06a03414a128/kube-rbac-proxy/0.log" Nov 25 12:42:04 crc kubenswrapper[4706]: I1125 12:42:04.309796 4706 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-58bb8d67cc-fslzs_70fa0d16-065a-463f-8198-06a03414a128/manager/2.log" Nov 25 12:42:04 crc kubenswrapper[4706]: I1125 12:42:04.405914 4706 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-58bb8d67cc-fslzs_70fa0d16-065a-463f-8198-06a03414a128/manager/1.log" Nov 25 12:42:04 crc kubenswrapper[4706]: I1125 12:42:04.491950 4706 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-cb6c4fdb7-bpcjw_62e72e86-38e3-4acc-8aa1-664684f27760/kube-rbac-proxy/0.log" Nov 25 12:42:04 crc kubenswrapper[4706]: I1125 12:42:04.516040 4706 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-cb6c4fdb7-bpcjw_62e72e86-38e3-4acc-8aa1-664684f27760/manager/2.log" Nov 25 12:42:04 crc kubenswrapper[4706]: I1125 12:42:04.628377 4706 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-cb6c4fdb7-bpcjw_62e72e86-38e3-4acc-8aa1-664684f27760/manager/1.log" Nov 25 12:42:04 crc kubenswrapper[4706]: I1125 12:42:04.721482 4706 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-7c57c8bbc4-tfn29_3c582966-ab32-499d-8f1c-95c942dd6bb4/kube-rbac-proxy/0.log" Nov 25 12:42:04 crc kubenswrapper[4706]: I1125 12:42:04.806336 4706 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-7c57c8bbc4-tfn29_3c582966-ab32-499d-8f1c-95c942dd6bb4/manager/2.log" Nov 25 12:42:04 crc kubenswrapper[4706]: I1125 12:42:04.837026 4706 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-7c57c8bbc4-tfn29_3c582966-ab32-499d-8f1c-95c942dd6bb4/manager/1.log" Nov 25 12:42:04 crc kubenswrapper[4706]: I1125 12:42:04.925716 4706 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-79556f57fc-f47gl_1c035858-a349-4415-8a5d-f3f2edb7c84e/kube-rbac-proxy/0.log" Nov 25 12:42:05 crc kubenswrapper[4706]: I1125 12:42:05.002459 4706 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-79556f57fc-f47gl_1c035858-a349-4415-8a5d-f3f2edb7c84e/manager/2.log" Nov 25 12:42:05 crc kubenswrapper[4706]: I1125 12:42:05.058019 4706 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-79556f57fc-f47gl_1c035858-a349-4415-8a5d-f3f2edb7c84e/manager/1.log" Nov 25 12:42:05 crc kubenswrapper[4706]: I1125 12:42:05.147786 4706 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-fd75fd47d-2tmzq_063b2f44-faa1-4a58-b77b-f2140f569b01/kube-rbac-proxy/0.log" Nov 25 12:42:05 crc kubenswrapper[4706]: I1125 12:42:05.185145 4706 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-fd75fd47d-2tmzq_063b2f44-faa1-4a58-b77b-f2140f569b01/manager/2.log" Nov 25 12:42:05 crc kubenswrapper[4706]: I1125 12:42:05.191691 4706 
log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-fd75fd47d-2tmzq_063b2f44-faa1-4a58-b77b-f2140f569b01/manager/1.log" Nov 25 12:42:05 crc kubenswrapper[4706]: I1125 12:42:05.303726 4706 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-544b9bb9-qg7kk_e318ee27-6b61-4c03-b697-782b25461b09/kube-rbac-proxy/0.log" Nov 25 12:42:05 crc kubenswrapper[4706]: I1125 12:42:05.364331 4706 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-544b9bb9-qg7kk_e318ee27-6b61-4c03-b697-782b25461b09/manager/1.log" Nov 25 12:42:05 crc kubenswrapper[4706]: I1125 12:42:05.427733 4706 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-544b9bb9-qg7kk_e318ee27-6b61-4c03-b697-782b25461b09/manager/0.log" Nov 25 12:42:05 crc kubenswrapper[4706]: I1125 12:42:05.513130 4706 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-9cb9fb586-5854z_2a90e9e4-814b-4c09-a6d3-f7ad3792f6b1/manager/1.log" Nov 25 12:42:05 crc kubenswrapper[4706]: I1125 12:42:05.677477 4706 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-operator-5789f9b844-cfvkd_2df5f121-0564-4647-acf6-d09283ff5a94/operator/1.log" Nov 25 12:42:05 crc kubenswrapper[4706]: I1125 12:42:05.776734 4706 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-operator-5789f9b844-cfvkd_2df5f121-0564-4647-acf6-d09283ff5a94/operator/0.log" Nov 25 12:42:05 crc kubenswrapper[4706]: I1125 12:42:05.818928 4706 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-g64cw_fa3da9d1-2214-4436-951b-2f2ec4c05104/registry-server/0.log" Nov 25 12:42:05 crc 
kubenswrapper[4706]: I1125 12:42:05.854770 4706 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-9cb9fb586-5854z_2a90e9e4-814b-4c09-a6d3-f7ad3792f6b1/manager/2.log" Nov 25 12:42:06 crc kubenswrapper[4706]: I1125 12:42:06.136252 4706 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-66cf5c67ff-nc6f7_61b1ec50-3228-43bc-bb09-d74a7f02be52/manager/2.log" Nov 25 12:42:06 crc kubenswrapper[4706]: I1125 12:42:06.137995 4706 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-66cf5c67ff-nc6f7_61b1ec50-3228-43bc-bb09-d74a7f02be52/kube-rbac-proxy/0.log" Nov 25 12:42:06 crc kubenswrapper[4706]: I1125 12:42:06.156338 4706 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-66cf5c67ff-nc6f7_61b1ec50-3228-43bc-bb09-d74a7f02be52/manager/1.log" Nov 25 12:42:06 crc kubenswrapper[4706]: I1125 12:42:06.255151 4706 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-5db546f9d9-k7crl_eab1279c-c99a-450e-887b-d246a2ff01aa/kube-rbac-proxy/0.log" Nov 25 12:42:06 crc kubenswrapper[4706]: I1125 12:42:06.319893 4706 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-5db546f9d9-k7crl_eab1279c-c99a-450e-887b-d246a2ff01aa/manager/2.log" Nov 25 12:42:06 crc kubenswrapper[4706]: I1125 12:42:06.330228 4706 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-5db546f9d9-k7crl_eab1279c-c99a-450e-887b-d246a2ff01aa/manager/1.log" Nov 25 12:42:06 crc kubenswrapper[4706]: I1125 12:42:06.354280 4706 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-668c99d594-x9x4q_5726a389-32eb-4f0c-938b-6f2ddbb762e7/operator/2.log" Nov 25 12:42:06 crc kubenswrapper[4706]: I1125 12:42:06.497123 4706 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-668c99d594-x9x4q_5726a389-32eb-4f0c-938b-6f2ddbb762e7/operator/1.log" Nov 25 12:42:06 crc kubenswrapper[4706]: I1125 12:42:06.557901 4706 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-6fdc4fcf86-rwbvj_a0668604-b184-4265-b9af-fc6f526d8351/kube-rbac-proxy/0.log" Nov 25 12:42:06 crc kubenswrapper[4706]: I1125 12:42:06.607504 4706 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-6fdc4fcf86-rwbvj_a0668604-b184-4265-b9af-fc6f526d8351/manager/2.log" Nov 25 12:42:06 crc kubenswrapper[4706]: I1125 12:42:06.661682 4706 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-6fdc4fcf86-rwbvj_a0668604-b184-4265-b9af-fc6f526d8351/manager/1.log" Nov 25 12:42:06 crc kubenswrapper[4706]: I1125 12:42:06.754483 4706 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-567f98c9d-8p5t2_a7a52f28-6bc4-481d-8513-16dbb7b37ae1/manager/2.log" Nov 25 12:42:06 crc kubenswrapper[4706]: I1125 12:42:06.761082 4706 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-567f98c9d-8p5t2_a7a52f28-6bc4-481d-8513-16dbb7b37ae1/kube-rbac-proxy/0.log" Nov 25 12:42:06 crc kubenswrapper[4706]: I1125 12:42:06.838158 4706 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-567f98c9d-8p5t2_a7a52f28-6bc4-481d-8513-16dbb7b37ae1/manager/1.log" Nov 25 12:42:06 crc kubenswrapper[4706]: I1125 12:42:06.925331 4706 log.go:25] 
"Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-5cb74df96-8rlr7_d256078e-afd5-4218-ad5c-d5211eb846a8/kube-rbac-proxy/0.log" Nov 25 12:42:06 crc kubenswrapper[4706]: I1125 12:42:06.967698 4706 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-5cb74df96-8rlr7_d256078e-afd5-4218-ad5c-d5211eb846a8/manager/1.log" Nov 25 12:42:07 crc kubenswrapper[4706]: I1125 12:42:07.005894 4706 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-5cb74df96-8rlr7_d256078e-afd5-4218-ad5c-d5211eb846a8/manager/0.log" Nov 25 12:42:07 crc kubenswrapper[4706]: I1125 12:42:07.095459 4706 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-864885998-9s7hm_6b8e15c0-a70f-4b4c-8836-a2c4e7b23f60/kube-rbac-proxy/0.log" Nov 25 12:42:07 crc kubenswrapper[4706]: I1125 12:42:07.139891 4706 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-864885998-9s7hm_6b8e15c0-a70f-4b4c-8836-a2c4e7b23f60/manager/2.log" Nov 25 12:42:07 crc kubenswrapper[4706]: I1125 12:42:07.214272 4706 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-864885998-9s7hm_6b8e15c0-a70f-4b4c-8836-a2c4e7b23f60/manager/1.log" Nov 25 12:42:07 crc kubenswrapper[4706]: I1125 12:42:07.922981 4706 scope.go:117] "RemoveContainer" containerID="f7d4f2bc57b2d7499bb910a36c7d647ec55fac45e9295616e11685165a93deff" Nov 25 12:42:07 crc kubenswrapper[4706]: E1125 12:42:07.923329 4706 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dhfpm_openshift-machine-config-operator(0930887a-320c-4506-8c9c-f94d6d64516a)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-dhfpm" podUID="0930887a-320c-4506-8c9c-f94d6d64516a" Nov 25 12:42:19 crc kubenswrapper[4706]: I1125 12:42:19.922108 4706 scope.go:117] "RemoveContainer" containerID="f7d4f2bc57b2d7499bb910a36c7d647ec55fac45e9295616e11685165a93deff" Nov 25 12:42:19 crc kubenswrapper[4706]: E1125 12:42:19.923220 4706 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dhfpm_openshift-machine-config-operator(0930887a-320c-4506-8c9c-f94d6d64516a)\"" pod="openshift-machine-config-operator/machine-config-daemon-dhfpm" podUID="0930887a-320c-4506-8c9c-f94d6d64516a" Nov 25 12:42:25 crc kubenswrapper[4706]: I1125 12:42:25.895835 4706 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-hhh7q_825f088d-44aa-4f48-b95d-6245da5b1775/control-plane-machine-set-operator/0.log" Nov 25 12:42:26 crc kubenswrapper[4706]: I1125 12:42:26.058876 4706 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-9z28x_ab2dd029-844e-4783-8fda-bfab6a6d9243/kube-rbac-proxy/0.log" Nov 25 12:42:26 crc kubenswrapper[4706]: I1125 12:42:26.095823 4706 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-9z28x_ab2dd029-844e-4783-8fda-bfab6a6d9243/machine-api-operator/0.log" Nov 25 12:42:32 crc kubenswrapper[4706]: I1125 12:42:32.922943 4706 scope.go:117] "RemoveContainer" containerID="f7d4f2bc57b2d7499bb910a36c7d647ec55fac45e9295616e11685165a93deff" Nov 25 12:42:32 crc kubenswrapper[4706]: E1125 12:42:32.923809 4706 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-dhfpm_openshift-machine-config-operator(0930887a-320c-4506-8c9c-f94d6d64516a)\"" pod="openshift-machine-config-operator/machine-config-daemon-dhfpm" podUID="0930887a-320c-4506-8c9c-f94d6d64516a" Nov 25 12:42:39 crc kubenswrapper[4706]: I1125 12:42:39.016261 4706 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-5b446d88c5-qv4vk_a9733b54-d1c6-48b7-9e7f-4c09ed97b604/cert-manager-controller/0.log" Nov 25 12:42:39 crc kubenswrapper[4706]: I1125 12:42:39.160678 4706 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-7f985d654d-8qfjm_96496646-6a16-483a-a71d-c6debd0e44d7/cert-manager-cainjector/0.log" Nov 25 12:42:39 crc kubenswrapper[4706]: I1125 12:42:39.226893 4706 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-5655c58dd6-bk58z_3a171d39-2023-41e0-b928-710c5b9eff19/cert-manager-webhook/0.log" Nov 25 12:42:44 crc kubenswrapper[4706]: I1125 12:42:44.922073 4706 scope.go:117] "RemoveContainer" containerID="f7d4f2bc57b2d7499bb910a36c7d647ec55fac45e9295616e11685165a93deff" Nov 25 12:42:44 crc kubenswrapper[4706]: E1125 12:42:44.922778 4706 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dhfpm_openshift-machine-config-operator(0930887a-320c-4506-8c9c-f94d6d64516a)\"" pod="openshift-machine-config-operator/machine-config-daemon-dhfpm" podUID="0930887a-320c-4506-8c9c-f94d6d64516a" Nov 25 12:42:52 crc kubenswrapper[4706]: I1125 12:42:52.352648 4706 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-console-plugin-5874bd7bc5-4k4ff_502cb16b-4f8d-47ba-96a0-41e42768fe63/nmstate-console-plugin/0.log" Nov 25 12:42:52 crc kubenswrapper[4706]: I1125 12:42:52.545763 4706 log.go:25] "Finished 
parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-5dcf9c57c5-rd4nq_a206555f-6ea8-4dbc-83db-801c57226c13/kube-rbac-proxy/0.log" Nov 25 12:42:52 crc kubenswrapper[4706]: I1125 12:42:52.568792 4706 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-handler-qkksf_2454859f-90ab-4942-a300-36e465597289/nmstate-handler/0.log" Nov 25 12:42:52 crc kubenswrapper[4706]: I1125 12:42:52.642981 4706 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-5dcf9c57c5-rd4nq_a206555f-6ea8-4dbc-83db-801c57226c13/nmstate-metrics/0.log" Nov 25 12:42:52 crc kubenswrapper[4706]: I1125 12:42:52.775243 4706 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-operator-557fdffb88-4wx96_e4a0ddea-a6b5-456d-9243-3a7576fcdac8/nmstate-operator/0.log" Nov 25 12:42:52 crc kubenswrapper[4706]: I1125 12:42:52.873179 4706 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-webhook-6b89b748d8-k7vl7_9220b323-ff51-4a2d-95fc-dc3274e8fbeb/nmstate-webhook/0.log" Nov 25 12:42:59 crc kubenswrapper[4706]: I1125 12:42:59.923507 4706 scope.go:117] "RemoveContainer" containerID="f7d4f2bc57b2d7499bb910a36c7d647ec55fac45e9295616e11685165a93deff" Nov 25 12:42:59 crc kubenswrapper[4706]: E1125 12:42:59.924272 4706 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dhfpm_openshift-machine-config-operator(0930887a-320c-4506-8c9c-f94d6d64516a)\"" pod="openshift-machine-config-operator/machine-config-daemon-dhfpm" podUID="0930887a-320c-4506-8c9c-f94d6d64516a" Nov 25 12:43:07 crc kubenswrapper[4706]: I1125 12:43:07.434902 4706 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6c7b4b5f48-5gnwd_67dd43bc-7fe1-4585-8fc3-2d2a52b8c974/kube-rbac-proxy/0.log" Nov 
25 12:43:07 crc kubenswrapper[4706]: I1125 12:43:07.626186 4706 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6c7b4b5f48-5gnwd_67dd43bc-7fe1-4585-8fc3-2d2a52b8c974/controller/0.log" Nov 25 12:43:07 crc kubenswrapper[4706]: I1125 12:43:07.998218 4706 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-gfpwp_4fe1be78-8453-460d-abc1-7c4b89923fe5/cp-frr-files/0.log" Nov 25 12:43:08 crc kubenswrapper[4706]: I1125 12:43:08.144414 4706 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-gfpwp_4fe1be78-8453-460d-abc1-7c4b89923fe5/cp-frr-files/0.log" Nov 25 12:43:08 crc kubenswrapper[4706]: I1125 12:43:08.146641 4706 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-gfpwp_4fe1be78-8453-460d-abc1-7c4b89923fe5/cp-reloader/0.log" Nov 25 12:43:08 crc kubenswrapper[4706]: I1125 12:43:08.152753 4706 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-gfpwp_4fe1be78-8453-460d-abc1-7c4b89923fe5/cp-reloader/0.log" Nov 25 12:43:08 crc kubenswrapper[4706]: I1125 12:43:08.195374 4706 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-gfpwp_4fe1be78-8453-460d-abc1-7c4b89923fe5/cp-metrics/0.log" Nov 25 12:43:08 crc kubenswrapper[4706]: I1125 12:43:08.365627 4706 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-gfpwp_4fe1be78-8453-460d-abc1-7c4b89923fe5/cp-frr-files/0.log" Nov 25 12:43:08 crc kubenswrapper[4706]: I1125 12:43:08.386040 4706 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-gfpwp_4fe1be78-8453-460d-abc1-7c4b89923fe5/cp-reloader/0.log" Nov 25 12:43:08 crc kubenswrapper[4706]: I1125 12:43:08.391968 4706 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-gfpwp_4fe1be78-8453-460d-abc1-7c4b89923fe5/cp-metrics/0.log" Nov 25 12:43:08 crc kubenswrapper[4706]: I1125 12:43:08.431528 4706 log.go:25] 
"Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-gfpwp_4fe1be78-8453-460d-abc1-7c4b89923fe5/cp-metrics/0.log" Nov 25 12:43:08 crc kubenswrapper[4706]: I1125 12:43:08.570018 4706 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-gfpwp_4fe1be78-8453-460d-abc1-7c4b89923fe5/cp-reloader/0.log" Nov 25 12:43:08 crc kubenswrapper[4706]: I1125 12:43:08.579028 4706 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-gfpwp_4fe1be78-8453-460d-abc1-7c4b89923fe5/cp-frr-files/0.log" Nov 25 12:43:08 crc kubenswrapper[4706]: I1125 12:43:08.585801 4706 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-gfpwp_4fe1be78-8453-460d-abc1-7c4b89923fe5/cp-metrics/0.log" Nov 25 12:43:08 crc kubenswrapper[4706]: I1125 12:43:08.665001 4706 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-gfpwp_4fe1be78-8453-460d-abc1-7c4b89923fe5/controller/0.log" Nov 25 12:43:08 crc kubenswrapper[4706]: I1125 12:43:08.797361 4706 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-gfpwp_4fe1be78-8453-460d-abc1-7c4b89923fe5/kube-rbac-proxy/0.log" Nov 25 12:43:08 crc kubenswrapper[4706]: I1125 12:43:08.825203 4706 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-gfpwp_4fe1be78-8453-460d-abc1-7c4b89923fe5/frr-metrics/0.log" Nov 25 12:43:08 crc kubenswrapper[4706]: I1125 12:43:08.854451 4706 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-gfpwp_4fe1be78-8453-460d-abc1-7c4b89923fe5/kube-rbac-proxy-frr/0.log" Nov 25 12:43:09 crc kubenswrapper[4706]: I1125 12:43:09.040474 4706 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-6998585d5-9gk5w_d6a1f7a2-b220-49a7-b12a-8cc3cf093dbc/frr-k8s-webhook-server/0.log" Nov 25 12:43:09 crc kubenswrapper[4706]: I1125 12:43:09.068192 4706 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_frr-k8s-gfpwp_4fe1be78-8453-460d-abc1-7c4b89923fe5/reloader/0.log" Nov 25 12:43:09 crc kubenswrapper[4706]: I1125 12:43:09.823948 4706 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-7d76b4f6c7-xxkgj_cdb2d830-fbc9-4336-83b7-0392051670cb/manager/2.log" Nov 25 12:43:09 crc kubenswrapper[4706]: I1125 12:43:09.837850 4706 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-7d76b4f6c7-xxkgj_cdb2d830-fbc9-4336-83b7-0392051670cb/manager/3.log" Nov 25 12:43:10 crc kubenswrapper[4706]: I1125 12:43:10.059457 4706 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-7c9ff6b49c-x86mq_2cb3fa9d-f614-42af-80c5-deb2e1fdb90d/webhook-server/0.log" Nov 25 12:43:10 crc kubenswrapper[4706]: I1125 12:43:10.217609 4706 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-2w52p_5570c11b-30c6-4ba6-adb5-3fc12ca26ae9/kube-rbac-proxy/0.log" Nov 25 12:43:10 crc kubenswrapper[4706]: I1125 12:43:10.265473 4706 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-gfpwp_4fe1be78-8453-460d-abc1-7c4b89923fe5/frr/0.log" Nov 25 12:43:10 crc kubenswrapper[4706]: I1125 12:43:10.658470 4706 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-2w52p_5570c11b-30c6-4ba6-adb5-3fc12ca26ae9/speaker/0.log" Nov 25 12:43:10 crc kubenswrapper[4706]: I1125 12:43:10.923091 4706 scope.go:117] "RemoveContainer" containerID="f7d4f2bc57b2d7499bb910a36c7d647ec55fac45e9295616e11685165a93deff" Nov 25 12:43:11 crc kubenswrapper[4706]: I1125 12:43:11.708917 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-dhfpm" event={"ID":"0930887a-320c-4506-8c9c-f94d6d64516a","Type":"ContainerStarted","Data":"ec124d7ca75771b4c4c8fe512ca2efc5a14229d016e5175e85c0e297e332d27e"} Nov 25 
12:43:22 crc kubenswrapper[4706]: I1125 12:43:22.836411 4706 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772ewtpqc_05fa0078-a8e0-4b75-a7a8-d5ec5f21e034/util/0.log" Nov 25 12:43:23 crc kubenswrapper[4706]: I1125 12:43:23.098801 4706 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772ewtpqc_05fa0078-a8e0-4b75-a7a8-d5ec5f21e034/util/0.log" Nov 25 12:43:23 crc kubenswrapper[4706]: I1125 12:43:23.118103 4706 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772ewtpqc_05fa0078-a8e0-4b75-a7a8-d5ec5f21e034/pull/0.log" Nov 25 12:43:23 crc kubenswrapper[4706]: I1125 12:43:23.130337 4706 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772ewtpqc_05fa0078-a8e0-4b75-a7a8-d5ec5f21e034/pull/0.log" Nov 25 12:43:23 crc kubenswrapper[4706]: I1125 12:43:23.265401 4706 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772ewtpqc_05fa0078-a8e0-4b75-a7a8-d5ec5f21e034/util/0.log" Nov 25 12:43:23 crc kubenswrapper[4706]: I1125 12:43:23.320559 4706 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772ewtpqc_05fa0078-a8e0-4b75-a7a8-d5ec5f21e034/pull/0.log" Nov 25 12:43:23 crc kubenswrapper[4706]: I1125 12:43:23.340174 4706 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772ewtpqc_05fa0078-a8e0-4b75-a7a8-d5ec5f21e034/extract/0.log" Nov 25 12:43:23 crc kubenswrapper[4706]: I1125 12:43:23.482457 4706 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_certified-operators-k7lhm_f25c7d8b-b341-4fb2-bef0-e43d83905a9b/extract-utilities/0.log" Nov 25 12:43:23 crc kubenswrapper[4706]: I1125 12:43:23.701714 4706 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-k7lhm_f25c7d8b-b341-4fb2-bef0-e43d83905a9b/extract-content/0.log" Nov 25 12:43:23 crc kubenswrapper[4706]: I1125 12:43:23.702254 4706 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-k7lhm_f25c7d8b-b341-4fb2-bef0-e43d83905a9b/extract-content/0.log" Nov 25 12:43:23 crc kubenswrapper[4706]: I1125 12:43:23.718136 4706 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-k7lhm_f25c7d8b-b341-4fb2-bef0-e43d83905a9b/extract-utilities/0.log" Nov 25 12:43:23 crc kubenswrapper[4706]: I1125 12:43:23.862532 4706 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-k7lhm_f25c7d8b-b341-4fb2-bef0-e43d83905a9b/extract-utilities/0.log" Nov 25 12:43:23 crc kubenswrapper[4706]: I1125 12:43:23.877858 4706 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-k7lhm_f25c7d8b-b341-4fb2-bef0-e43d83905a9b/extract-content/0.log" Nov 25 12:43:24 crc kubenswrapper[4706]: I1125 12:43:24.121576 4706 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-fq7cn_8e544967-24c9-4190-a1d7-5ed07fdaaeef/extract-utilities/0.log" Nov 25 12:43:24 crc kubenswrapper[4706]: I1125 12:43:24.484805 4706 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-fq7cn_8e544967-24c9-4190-a1d7-5ed07fdaaeef/extract-content/0.log" Nov 25 12:43:24 crc kubenswrapper[4706]: I1125 12:43:24.500497 4706 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_community-operators-fq7cn_8e544967-24c9-4190-a1d7-5ed07fdaaeef/extract-utilities/0.log" Nov 25 12:43:24 crc kubenswrapper[4706]: I1125 12:43:24.538660 4706 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-fq7cn_8e544967-24c9-4190-a1d7-5ed07fdaaeef/extract-content/0.log" Nov 25 12:43:24 crc kubenswrapper[4706]: I1125 12:43:24.563111 4706 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-k7lhm_f25c7d8b-b341-4fb2-bef0-e43d83905a9b/registry-server/0.log" Nov 25 12:43:24 crc kubenswrapper[4706]: I1125 12:43:24.696381 4706 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-fq7cn_8e544967-24c9-4190-a1d7-5ed07fdaaeef/extract-utilities/0.log" Nov 25 12:43:24 crc kubenswrapper[4706]: I1125 12:43:24.715978 4706 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-fq7cn_8e544967-24c9-4190-a1d7-5ed07fdaaeef/extract-content/0.log" Nov 25 12:43:24 crc kubenswrapper[4706]: I1125 12:43:24.923053 4706 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6dm4cn_8c6ba0d0-db1d-4b2b-8c48-f3d9432a2532/util/0.log" Nov 25 12:43:25 crc kubenswrapper[4706]: I1125 12:43:25.048028 4706 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6dm4cn_8c6ba0d0-db1d-4b2b-8c48-f3d9432a2532/util/0.log" Nov 25 12:43:25 crc kubenswrapper[4706]: I1125 12:43:25.184449 4706 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6dm4cn_8c6ba0d0-db1d-4b2b-8c48-f3d9432a2532/pull/0.log" Nov 25 12:43:25 crc kubenswrapper[4706]: I1125 12:43:25.223655 4706 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6dm4cn_8c6ba0d0-db1d-4b2b-8c48-f3d9432a2532/pull/0.log" Nov 25 12:43:25 crc kubenswrapper[4706]: I1125 12:43:25.468037 4706 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6dm4cn_8c6ba0d0-db1d-4b2b-8c48-f3d9432a2532/pull/0.log" Nov 25 12:43:25 crc kubenswrapper[4706]: I1125 12:43:25.477274 4706 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6dm4cn_8c6ba0d0-db1d-4b2b-8c48-f3d9432a2532/extract/0.log" Nov 25 12:43:25 crc kubenswrapper[4706]: I1125 12:43:25.479126 4706 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-fq7cn_8e544967-24c9-4190-a1d7-5ed07fdaaeef/registry-server/0.log" Nov 25 12:43:25 crc kubenswrapper[4706]: I1125 12:43:25.483213 4706 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6dm4cn_8c6ba0d0-db1d-4b2b-8c48-f3d9432a2532/util/0.log" Nov 25 12:43:25 crc kubenswrapper[4706]: I1125 12:43:25.685155 4706 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-vnd8s_57792378-6c0b-415c-aeb2-4cbb2c3c1702/marketplace-operator/0.log" Nov 25 12:43:25 crc kubenswrapper[4706]: I1125 12:43:25.703394 4706 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-q9pfj_ade36961-cf56-40fd-9d5b-202d3e937bfd/extract-utilities/0.log" Nov 25 12:43:25 crc kubenswrapper[4706]: I1125 12:43:25.894790 4706 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-q9pfj_ade36961-cf56-40fd-9d5b-202d3e937bfd/extract-content/0.log" Nov 25 12:43:25 crc kubenswrapper[4706]: I1125 12:43:25.907902 4706 log.go:25] "Finished parsing 
log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-q9pfj_ade36961-cf56-40fd-9d5b-202d3e937bfd/extract-utilities/0.log" Nov 25 12:43:25 crc kubenswrapper[4706]: I1125 12:43:25.941027 4706 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-q9pfj_ade36961-cf56-40fd-9d5b-202d3e937bfd/extract-content/0.log" Nov 25 12:43:26 crc kubenswrapper[4706]: I1125 12:43:26.127369 4706 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-q9pfj_ade36961-cf56-40fd-9d5b-202d3e937bfd/extract-content/0.log" Nov 25 12:43:26 crc kubenswrapper[4706]: I1125 12:43:26.171443 4706 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-q9pfj_ade36961-cf56-40fd-9d5b-202d3e937bfd/extract-utilities/0.log" Nov 25 12:43:26 crc kubenswrapper[4706]: I1125 12:43:26.247892 4706 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-q9pfj_ade36961-cf56-40fd-9d5b-202d3e937bfd/registry-server/0.log" Nov 25 12:43:26 crc kubenswrapper[4706]: I1125 12:43:26.371556 4706 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-hcv5z_3e0ba231-93b2-4bf1-9d67-66b3f2ee62b9/extract-utilities/0.log" Nov 25 12:43:26 crc kubenswrapper[4706]: I1125 12:43:26.541699 4706 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-hcv5z_3e0ba231-93b2-4bf1-9d67-66b3f2ee62b9/extract-content/0.log" Nov 25 12:43:26 crc kubenswrapper[4706]: I1125 12:43:26.601615 4706 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-hcv5z_3e0ba231-93b2-4bf1-9d67-66b3f2ee62b9/extract-utilities/0.log" Nov 25 12:43:26 crc kubenswrapper[4706]: I1125 12:43:26.625874 4706 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_redhat-operators-hcv5z_3e0ba231-93b2-4bf1-9d67-66b3f2ee62b9/extract-content/0.log" Nov 25 12:43:26 crc kubenswrapper[4706]: I1125 12:43:26.787898 4706 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-hcv5z_3e0ba231-93b2-4bf1-9d67-66b3f2ee62b9/extract-utilities/0.log" Nov 25 12:43:26 crc kubenswrapper[4706]: I1125 12:43:26.800160 4706 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-hcv5z_3e0ba231-93b2-4bf1-9d67-66b3f2ee62b9/extract-content/0.log" Nov 25 12:43:27 crc kubenswrapper[4706]: I1125 12:43:27.288240 4706 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-hcv5z_3e0ba231-93b2-4bf1-9d67-66b3f2ee62b9/registry-server/0.log" Nov 25 12:43:55 crc kubenswrapper[4706]: E1125 12:43:55.852643 4706 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 38.102.83.13:56862->38.102.83.13:39835: write tcp 38.102.83.13:56862->38.102.83.13:39835: write: broken pipe Nov 25 12:44:15 crc kubenswrapper[4706]: I1125 12:44:15.942854 4706 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-g2cng"] Nov 25 12:44:15 crc kubenswrapper[4706]: E1125 12:44:15.943926 4706 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6d066d0e-0894-40a4-94df-d503e2b2cbf2" containerName="container-00" Nov 25 12:44:15 crc kubenswrapper[4706]: I1125 12:44:15.943946 4706 state_mem.go:107] "Deleted CPUSet assignment" podUID="6d066d0e-0894-40a4-94df-d503e2b2cbf2" containerName="container-00" Nov 25 12:44:15 crc kubenswrapper[4706]: I1125 12:44:15.944242 4706 memory_manager.go:354] "RemoveStaleState removing state" podUID="6d066d0e-0894-40a4-94df-d503e2b2cbf2" containerName="container-00" Nov 25 12:44:15 crc kubenswrapper[4706]: I1125 12:44:15.945881 4706 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-g2cng" Nov 25 12:44:15 crc kubenswrapper[4706]: I1125 12:44:15.963251 4706 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-g2cng"] Nov 25 12:44:16 crc kubenswrapper[4706]: I1125 12:44:16.037789 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7134d13f-ed64-4c99-b4e3-60cce051c14e-utilities\") pod \"redhat-marketplace-g2cng\" (UID: \"7134d13f-ed64-4c99-b4e3-60cce051c14e\") " pod="openshift-marketplace/redhat-marketplace-g2cng" Nov 25 12:44:16 crc kubenswrapper[4706]: I1125 12:44:16.037875 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7134d13f-ed64-4c99-b4e3-60cce051c14e-catalog-content\") pod \"redhat-marketplace-g2cng\" (UID: \"7134d13f-ed64-4c99-b4e3-60cce051c14e\") " pod="openshift-marketplace/redhat-marketplace-g2cng" Nov 25 12:44:16 crc kubenswrapper[4706]: I1125 12:44:16.037928 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qjtsh\" (UniqueName: \"kubernetes.io/projected/7134d13f-ed64-4c99-b4e3-60cce051c14e-kube-api-access-qjtsh\") pod \"redhat-marketplace-g2cng\" (UID: \"7134d13f-ed64-4c99-b4e3-60cce051c14e\") " pod="openshift-marketplace/redhat-marketplace-g2cng" Nov 25 12:44:16 crc kubenswrapper[4706]: I1125 12:44:16.141200 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7134d13f-ed64-4c99-b4e3-60cce051c14e-catalog-content\") pod \"redhat-marketplace-g2cng\" (UID: \"7134d13f-ed64-4c99-b4e3-60cce051c14e\") " pod="openshift-marketplace/redhat-marketplace-g2cng" Nov 25 12:44:16 crc kubenswrapper[4706]: I1125 12:44:16.140455 4706 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7134d13f-ed64-4c99-b4e3-60cce051c14e-catalog-content\") pod \"redhat-marketplace-g2cng\" (UID: \"7134d13f-ed64-4c99-b4e3-60cce051c14e\") " pod="openshift-marketplace/redhat-marketplace-g2cng" Nov 25 12:44:16 crc kubenswrapper[4706]: I1125 12:44:16.141641 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qjtsh\" (UniqueName: \"kubernetes.io/projected/7134d13f-ed64-4c99-b4e3-60cce051c14e-kube-api-access-qjtsh\") pod \"redhat-marketplace-g2cng\" (UID: \"7134d13f-ed64-4c99-b4e3-60cce051c14e\") " pod="openshift-marketplace/redhat-marketplace-g2cng" Nov 25 12:44:16 crc kubenswrapper[4706]: I1125 12:44:16.142612 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7134d13f-ed64-4c99-b4e3-60cce051c14e-utilities\") pod \"redhat-marketplace-g2cng\" (UID: \"7134d13f-ed64-4c99-b4e3-60cce051c14e\") " pod="openshift-marketplace/redhat-marketplace-g2cng" Nov 25 12:44:16 crc kubenswrapper[4706]: I1125 12:44:16.142969 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7134d13f-ed64-4c99-b4e3-60cce051c14e-utilities\") pod \"redhat-marketplace-g2cng\" (UID: \"7134d13f-ed64-4c99-b4e3-60cce051c14e\") " pod="openshift-marketplace/redhat-marketplace-g2cng" Nov 25 12:44:16 crc kubenswrapper[4706]: I1125 12:44:16.608081 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qjtsh\" (UniqueName: \"kubernetes.io/projected/7134d13f-ed64-4c99-b4e3-60cce051c14e-kube-api-access-qjtsh\") pod \"redhat-marketplace-g2cng\" (UID: \"7134d13f-ed64-4c99-b4e3-60cce051c14e\") " pod="openshift-marketplace/redhat-marketplace-g2cng" Nov 25 12:44:16 crc kubenswrapper[4706]: I1125 12:44:16.868743 4706 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-g2cng" Nov 25 12:44:17 crc kubenswrapper[4706]: I1125 12:44:17.308158 4706 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-g2cng"] Nov 25 12:44:17 crc kubenswrapper[4706]: I1125 12:44:17.327564 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-g2cng" event={"ID":"7134d13f-ed64-4c99-b4e3-60cce051c14e","Type":"ContainerStarted","Data":"ceba066a64612d4730af8da3df9108ad5af7ba8be0c9f2e79cbd7bd086f66a0e"} Nov 25 12:44:18 crc kubenswrapper[4706]: I1125 12:44:18.336701 4706 generic.go:334] "Generic (PLEG): container finished" podID="7134d13f-ed64-4c99-b4e3-60cce051c14e" containerID="045d2d8f1367abd49bbf3a635ac32bfb8d9acc390c8e7d012e66d72045a59268" exitCode=0 Nov 25 12:44:18 crc kubenswrapper[4706]: I1125 12:44:18.337645 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-g2cng" event={"ID":"7134d13f-ed64-4c99-b4e3-60cce051c14e","Type":"ContainerDied","Data":"045d2d8f1367abd49bbf3a635ac32bfb8d9acc390c8e7d012e66d72045a59268"} Nov 25 12:44:20 crc kubenswrapper[4706]: I1125 12:44:20.355390 4706 generic.go:334] "Generic (PLEG): container finished" podID="7134d13f-ed64-4c99-b4e3-60cce051c14e" containerID="623ba7bde120849a043e1c105bb4174e0ca0add4e0798caa9a33cea3e1ef0514" exitCode=0 Nov 25 12:44:20 crc kubenswrapper[4706]: I1125 12:44:20.355515 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-g2cng" event={"ID":"7134d13f-ed64-4c99-b4e3-60cce051c14e","Type":"ContainerDied","Data":"623ba7bde120849a043e1c105bb4174e0ca0add4e0798caa9a33cea3e1ef0514"} Nov 25 12:44:21 crc kubenswrapper[4706]: I1125 12:44:21.366542 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-g2cng" 
event={"ID":"7134d13f-ed64-4c99-b4e3-60cce051c14e","Type":"ContainerStarted","Data":"f472c6d13a805bcfc4d5455ad5c53043544f3160bb1f7a0906117ac038c0efcc"} Nov 25 12:44:21 crc kubenswrapper[4706]: I1125 12:44:21.391759 4706 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-g2cng" podStartSLOduration=3.9817914930000002 podStartE2EDuration="6.391737586s" podCreationTimestamp="2025-11-25 12:44:15 +0000 UTC" firstStartedPulling="2025-11-25 12:44:18.339514176 +0000 UTC m=+4067.254071557" lastFinishedPulling="2025-11-25 12:44:20.749460279 +0000 UTC m=+4069.664017650" observedRunningTime="2025-11-25 12:44:21.387038357 +0000 UTC m=+4070.301595768" watchObservedRunningTime="2025-11-25 12:44:21.391737586 +0000 UTC m=+4070.306294987" Nov 25 12:44:26 crc kubenswrapper[4706]: I1125 12:44:26.869943 4706 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-g2cng" Nov 25 12:44:26 crc kubenswrapper[4706]: I1125 12:44:26.870608 4706 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-g2cng" Nov 25 12:44:27 crc kubenswrapper[4706]: I1125 12:44:27.247300 4706 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-g2cng" Nov 25 12:44:27 crc kubenswrapper[4706]: I1125 12:44:27.468830 4706 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-g2cng" Nov 25 12:44:27 crc kubenswrapper[4706]: I1125 12:44:27.524362 4706 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-g2cng"] Nov 25 12:44:29 crc kubenswrapper[4706]: I1125 12:44:29.441074 4706 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-g2cng" podUID="7134d13f-ed64-4c99-b4e3-60cce051c14e" containerName="registry-server" 
containerID="cri-o://f472c6d13a805bcfc4d5455ad5c53043544f3160bb1f7a0906117ac038c0efcc" gracePeriod=2 Nov 25 12:44:29 crc kubenswrapper[4706]: I1125 12:44:29.913060 4706 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-g2cng" Nov 25 12:44:30 crc kubenswrapper[4706]: I1125 12:44:30.033220 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7134d13f-ed64-4c99-b4e3-60cce051c14e-catalog-content\") pod \"7134d13f-ed64-4c99-b4e3-60cce051c14e\" (UID: \"7134d13f-ed64-4c99-b4e3-60cce051c14e\") " Nov 25 12:44:30 crc kubenswrapper[4706]: I1125 12:44:30.033637 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7134d13f-ed64-4c99-b4e3-60cce051c14e-utilities\") pod \"7134d13f-ed64-4c99-b4e3-60cce051c14e\" (UID: \"7134d13f-ed64-4c99-b4e3-60cce051c14e\") " Nov 25 12:44:30 crc kubenswrapper[4706]: I1125 12:44:30.033993 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qjtsh\" (UniqueName: \"kubernetes.io/projected/7134d13f-ed64-4c99-b4e3-60cce051c14e-kube-api-access-qjtsh\") pod \"7134d13f-ed64-4c99-b4e3-60cce051c14e\" (UID: \"7134d13f-ed64-4c99-b4e3-60cce051c14e\") " Nov 25 12:44:30 crc kubenswrapper[4706]: I1125 12:44:30.034246 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7134d13f-ed64-4c99-b4e3-60cce051c14e-utilities" (OuterVolumeSpecName: "utilities") pod "7134d13f-ed64-4c99-b4e3-60cce051c14e" (UID: "7134d13f-ed64-4c99-b4e3-60cce051c14e"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 12:44:30 crc kubenswrapper[4706]: I1125 12:44:30.035213 4706 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7134d13f-ed64-4c99-b4e3-60cce051c14e-utilities\") on node \"crc\" DevicePath \"\"" Nov 25 12:44:30 crc kubenswrapper[4706]: I1125 12:44:30.041439 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7134d13f-ed64-4c99-b4e3-60cce051c14e-kube-api-access-qjtsh" (OuterVolumeSpecName: "kube-api-access-qjtsh") pod "7134d13f-ed64-4c99-b4e3-60cce051c14e" (UID: "7134d13f-ed64-4c99-b4e3-60cce051c14e"). InnerVolumeSpecName "kube-api-access-qjtsh". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 12:44:30 crc kubenswrapper[4706]: I1125 12:44:30.056474 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7134d13f-ed64-4c99-b4e3-60cce051c14e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "7134d13f-ed64-4c99-b4e3-60cce051c14e" (UID: "7134d13f-ed64-4c99-b4e3-60cce051c14e"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 12:44:30 crc kubenswrapper[4706]: I1125 12:44:30.136534 4706 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qjtsh\" (UniqueName: \"kubernetes.io/projected/7134d13f-ed64-4c99-b4e3-60cce051c14e-kube-api-access-qjtsh\") on node \"crc\" DevicePath \"\"" Nov 25 12:44:30 crc kubenswrapper[4706]: I1125 12:44:30.136830 4706 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7134d13f-ed64-4c99-b4e3-60cce051c14e-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 25 12:44:30 crc kubenswrapper[4706]: I1125 12:44:30.456561 4706 generic.go:334] "Generic (PLEG): container finished" podID="7134d13f-ed64-4c99-b4e3-60cce051c14e" containerID="f472c6d13a805bcfc4d5455ad5c53043544f3160bb1f7a0906117ac038c0efcc" exitCode=0 Nov 25 12:44:30 crc kubenswrapper[4706]: I1125 12:44:30.456617 4706 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-g2cng" Nov 25 12:44:30 crc kubenswrapper[4706]: I1125 12:44:30.456651 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-g2cng" event={"ID":"7134d13f-ed64-4c99-b4e3-60cce051c14e","Type":"ContainerDied","Data":"f472c6d13a805bcfc4d5455ad5c53043544f3160bb1f7a0906117ac038c0efcc"} Nov 25 12:44:30 crc kubenswrapper[4706]: I1125 12:44:30.456703 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-g2cng" event={"ID":"7134d13f-ed64-4c99-b4e3-60cce051c14e","Type":"ContainerDied","Data":"ceba066a64612d4730af8da3df9108ad5af7ba8be0c9f2e79cbd7bd086f66a0e"} Nov 25 12:44:30 crc kubenswrapper[4706]: I1125 12:44:30.456723 4706 scope.go:117] "RemoveContainer" containerID="f472c6d13a805bcfc4d5455ad5c53043544f3160bb1f7a0906117ac038c0efcc" Nov 25 12:44:30 crc kubenswrapper[4706]: I1125 12:44:30.478942 4706 scope.go:117] "RemoveContainer" 
containerID="623ba7bde120849a043e1c105bb4174e0ca0add4e0798caa9a33cea3e1ef0514" Nov 25 12:44:30 crc kubenswrapper[4706]: I1125 12:44:30.497584 4706 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-g2cng"] Nov 25 12:44:30 crc kubenswrapper[4706]: I1125 12:44:30.507048 4706 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-g2cng"] Nov 25 12:44:30 crc kubenswrapper[4706]: I1125 12:44:30.540211 4706 scope.go:117] "RemoveContainer" containerID="045d2d8f1367abd49bbf3a635ac32bfb8d9acc390c8e7d012e66d72045a59268" Nov 25 12:44:30 crc kubenswrapper[4706]: I1125 12:44:30.559692 4706 scope.go:117] "RemoveContainer" containerID="f472c6d13a805bcfc4d5455ad5c53043544f3160bb1f7a0906117ac038c0efcc" Nov 25 12:44:30 crc kubenswrapper[4706]: E1125 12:44:30.560197 4706 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f472c6d13a805bcfc4d5455ad5c53043544f3160bb1f7a0906117ac038c0efcc\": container with ID starting with f472c6d13a805bcfc4d5455ad5c53043544f3160bb1f7a0906117ac038c0efcc not found: ID does not exist" containerID="f472c6d13a805bcfc4d5455ad5c53043544f3160bb1f7a0906117ac038c0efcc" Nov 25 12:44:30 crc kubenswrapper[4706]: I1125 12:44:30.560246 4706 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f472c6d13a805bcfc4d5455ad5c53043544f3160bb1f7a0906117ac038c0efcc"} err="failed to get container status \"f472c6d13a805bcfc4d5455ad5c53043544f3160bb1f7a0906117ac038c0efcc\": rpc error: code = NotFound desc = could not find container \"f472c6d13a805bcfc4d5455ad5c53043544f3160bb1f7a0906117ac038c0efcc\": container with ID starting with f472c6d13a805bcfc4d5455ad5c53043544f3160bb1f7a0906117ac038c0efcc not found: ID does not exist" Nov 25 12:44:30 crc kubenswrapper[4706]: I1125 12:44:30.560277 4706 scope.go:117] "RemoveContainer" 
containerID="623ba7bde120849a043e1c105bb4174e0ca0add4e0798caa9a33cea3e1ef0514" Nov 25 12:44:30 crc kubenswrapper[4706]: E1125 12:44:30.560849 4706 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"623ba7bde120849a043e1c105bb4174e0ca0add4e0798caa9a33cea3e1ef0514\": container with ID starting with 623ba7bde120849a043e1c105bb4174e0ca0add4e0798caa9a33cea3e1ef0514 not found: ID does not exist" containerID="623ba7bde120849a043e1c105bb4174e0ca0add4e0798caa9a33cea3e1ef0514" Nov 25 12:44:30 crc kubenswrapper[4706]: I1125 12:44:30.560991 4706 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"623ba7bde120849a043e1c105bb4174e0ca0add4e0798caa9a33cea3e1ef0514"} err="failed to get container status \"623ba7bde120849a043e1c105bb4174e0ca0add4e0798caa9a33cea3e1ef0514\": rpc error: code = NotFound desc = could not find container \"623ba7bde120849a043e1c105bb4174e0ca0add4e0798caa9a33cea3e1ef0514\": container with ID starting with 623ba7bde120849a043e1c105bb4174e0ca0add4e0798caa9a33cea3e1ef0514 not found: ID does not exist" Nov 25 12:44:30 crc kubenswrapper[4706]: I1125 12:44:30.561148 4706 scope.go:117] "RemoveContainer" containerID="045d2d8f1367abd49bbf3a635ac32bfb8d9acc390c8e7d012e66d72045a59268" Nov 25 12:44:30 crc kubenswrapper[4706]: E1125 12:44:30.561824 4706 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"045d2d8f1367abd49bbf3a635ac32bfb8d9acc390c8e7d012e66d72045a59268\": container with ID starting with 045d2d8f1367abd49bbf3a635ac32bfb8d9acc390c8e7d012e66d72045a59268 not found: ID does not exist" containerID="045d2d8f1367abd49bbf3a635ac32bfb8d9acc390c8e7d012e66d72045a59268" Nov 25 12:44:30 crc kubenswrapper[4706]: I1125 12:44:30.561878 4706 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"045d2d8f1367abd49bbf3a635ac32bfb8d9acc390c8e7d012e66d72045a59268"} err="failed to get container status \"045d2d8f1367abd49bbf3a635ac32bfb8d9acc390c8e7d012e66d72045a59268\": rpc error: code = NotFound desc = could not find container \"045d2d8f1367abd49bbf3a635ac32bfb8d9acc390c8e7d012e66d72045a59268\": container with ID starting with 045d2d8f1367abd49bbf3a635ac32bfb8d9acc390c8e7d012e66d72045a59268 not found: ID does not exist" Nov 25 12:44:31 crc kubenswrapper[4706]: I1125 12:44:31.935100 4706 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7134d13f-ed64-4c99-b4e3-60cce051c14e" path="/var/lib/kubelet/pods/7134d13f-ed64-4c99-b4e3-60cce051c14e/volumes" Nov 25 12:45:00 crc kubenswrapper[4706]: I1125 12:45:00.143806 4706 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29401245-898zg"] Nov 25 12:45:00 crc kubenswrapper[4706]: E1125 12:45:00.145209 4706 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7134d13f-ed64-4c99-b4e3-60cce051c14e" containerName="extract-utilities" Nov 25 12:45:00 crc kubenswrapper[4706]: I1125 12:45:00.145225 4706 state_mem.go:107] "Deleted CPUSet assignment" podUID="7134d13f-ed64-4c99-b4e3-60cce051c14e" containerName="extract-utilities" Nov 25 12:45:00 crc kubenswrapper[4706]: E1125 12:45:00.145238 4706 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7134d13f-ed64-4c99-b4e3-60cce051c14e" containerName="registry-server" Nov 25 12:45:00 crc kubenswrapper[4706]: I1125 12:45:00.145246 4706 state_mem.go:107] "Deleted CPUSet assignment" podUID="7134d13f-ed64-4c99-b4e3-60cce051c14e" containerName="registry-server" Nov 25 12:45:00 crc kubenswrapper[4706]: E1125 12:45:00.145270 4706 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7134d13f-ed64-4c99-b4e3-60cce051c14e" containerName="extract-content" Nov 25 12:45:00 crc kubenswrapper[4706]: I1125 12:45:00.145276 4706 state_mem.go:107] 
"Deleted CPUSet assignment" podUID="7134d13f-ed64-4c99-b4e3-60cce051c14e" containerName="extract-content" Nov 25 12:45:00 crc kubenswrapper[4706]: I1125 12:45:00.145495 4706 memory_manager.go:354] "RemoveStaleState removing state" podUID="7134d13f-ed64-4c99-b4e3-60cce051c14e" containerName="registry-server" Nov 25 12:45:00 crc kubenswrapper[4706]: I1125 12:45:00.146174 4706 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29401245-898zg" Nov 25 12:45:00 crc kubenswrapper[4706]: I1125 12:45:00.149398 4706 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Nov 25 12:45:00 crc kubenswrapper[4706]: I1125 12:45:00.149627 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Nov 25 12:45:00 crc kubenswrapper[4706]: I1125 12:45:00.165588 4706 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29401245-898zg"] Nov 25 12:45:00 crc kubenswrapper[4706]: I1125 12:45:00.256664 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/090ad946-5a2e-44fb-9610-b825821d50c8-config-volume\") pod \"collect-profiles-29401245-898zg\" (UID: \"090ad946-5a2e-44fb-9610-b825821d50c8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29401245-898zg" Nov 25 12:45:00 crc kubenswrapper[4706]: I1125 12:45:00.256796 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/090ad946-5a2e-44fb-9610-b825821d50c8-secret-volume\") pod \"collect-profiles-29401245-898zg\" (UID: \"090ad946-5a2e-44fb-9610-b825821d50c8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29401245-898zg" Nov 
25 12:45:00 crc kubenswrapper[4706]: I1125 12:45:00.256939 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c9k6q\" (UniqueName: \"kubernetes.io/projected/090ad946-5a2e-44fb-9610-b825821d50c8-kube-api-access-c9k6q\") pod \"collect-profiles-29401245-898zg\" (UID: \"090ad946-5a2e-44fb-9610-b825821d50c8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29401245-898zg" Nov 25 12:45:00 crc kubenswrapper[4706]: I1125 12:45:00.359002 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c9k6q\" (UniqueName: \"kubernetes.io/projected/090ad946-5a2e-44fb-9610-b825821d50c8-kube-api-access-c9k6q\") pod \"collect-profiles-29401245-898zg\" (UID: \"090ad946-5a2e-44fb-9610-b825821d50c8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29401245-898zg" Nov 25 12:45:00 crc kubenswrapper[4706]: I1125 12:45:00.359142 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/090ad946-5a2e-44fb-9610-b825821d50c8-config-volume\") pod \"collect-profiles-29401245-898zg\" (UID: \"090ad946-5a2e-44fb-9610-b825821d50c8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29401245-898zg" Nov 25 12:45:00 crc kubenswrapper[4706]: I1125 12:45:00.359193 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/090ad946-5a2e-44fb-9610-b825821d50c8-secret-volume\") pod \"collect-profiles-29401245-898zg\" (UID: \"090ad946-5a2e-44fb-9610-b825821d50c8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29401245-898zg" Nov 25 12:45:00 crc kubenswrapper[4706]: I1125 12:45:00.360293 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/090ad946-5a2e-44fb-9610-b825821d50c8-config-volume\") pod 
\"collect-profiles-29401245-898zg\" (UID: \"090ad946-5a2e-44fb-9610-b825821d50c8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29401245-898zg" Nov 25 12:45:00 crc kubenswrapper[4706]: I1125 12:45:00.366153 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/090ad946-5a2e-44fb-9610-b825821d50c8-secret-volume\") pod \"collect-profiles-29401245-898zg\" (UID: \"090ad946-5a2e-44fb-9610-b825821d50c8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29401245-898zg" Nov 25 12:45:00 crc kubenswrapper[4706]: I1125 12:45:00.379106 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c9k6q\" (UniqueName: \"kubernetes.io/projected/090ad946-5a2e-44fb-9610-b825821d50c8-kube-api-access-c9k6q\") pod \"collect-profiles-29401245-898zg\" (UID: \"090ad946-5a2e-44fb-9610-b825821d50c8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29401245-898zg" Nov 25 12:45:00 crc kubenswrapper[4706]: I1125 12:45:00.471893 4706 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29401245-898zg" Nov 25 12:45:00 crc kubenswrapper[4706]: I1125 12:45:00.934422 4706 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29401245-898zg"] Nov 25 12:45:01 crc kubenswrapper[4706]: I1125 12:45:01.771192 4706 generic.go:334] "Generic (PLEG): container finished" podID="090ad946-5a2e-44fb-9610-b825821d50c8" containerID="b9b3f29110e850f6c7e26f270970dfecb396125e74c73fbf146d08b78c1da641" exitCode=0 Nov 25 12:45:01 crc kubenswrapper[4706]: I1125 12:45:01.772519 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29401245-898zg" event={"ID":"090ad946-5a2e-44fb-9610-b825821d50c8","Type":"ContainerDied","Data":"b9b3f29110e850f6c7e26f270970dfecb396125e74c73fbf146d08b78c1da641"} Nov 25 12:45:01 crc kubenswrapper[4706]: I1125 12:45:01.772818 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29401245-898zg" event={"ID":"090ad946-5a2e-44fb-9610-b825821d50c8","Type":"ContainerStarted","Data":"553a7c6c5db96c3e4a6a2308e0188a3f9935cb46fdd3c90ab5a57cb59a0dce18"} Nov 25 12:45:03 crc kubenswrapper[4706]: I1125 12:45:03.154940 4706 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29401245-898zg" Nov 25 12:45:03 crc kubenswrapper[4706]: I1125 12:45:03.225638 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/090ad946-5a2e-44fb-9610-b825821d50c8-config-volume\") pod \"090ad946-5a2e-44fb-9610-b825821d50c8\" (UID: \"090ad946-5a2e-44fb-9610-b825821d50c8\") " Nov 25 12:45:03 crc kubenswrapper[4706]: I1125 12:45:03.225719 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c9k6q\" (UniqueName: \"kubernetes.io/projected/090ad946-5a2e-44fb-9610-b825821d50c8-kube-api-access-c9k6q\") pod \"090ad946-5a2e-44fb-9610-b825821d50c8\" (UID: \"090ad946-5a2e-44fb-9610-b825821d50c8\") " Nov 25 12:45:03 crc kubenswrapper[4706]: I1125 12:45:03.226645 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/090ad946-5a2e-44fb-9610-b825821d50c8-config-volume" (OuterVolumeSpecName: "config-volume") pod "090ad946-5a2e-44fb-9610-b825821d50c8" (UID: "090ad946-5a2e-44fb-9610-b825821d50c8"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 12:45:03 crc kubenswrapper[4706]: I1125 12:45:03.226747 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/090ad946-5a2e-44fb-9610-b825821d50c8-secret-volume\") pod \"090ad946-5a2e-44fb-9610-b825821d50c8\" (UID: \"090ad946-5a2e-44fb-9610-b825821d50c8\") " Nov 25 12:45:03 crc kubenswrapper[4706]: I1125 12:45:03.227230 4706 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/090ad946-5a2e-44fb-9610-b825821d50c8-config-volume\") on node \"crc\" DevicePath \"\"" Nov 25 12:45:03 crc kubenswrapper[4706]: I1125 12:45:03.230928 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/090ad946-5a2e-44fb-9610-b825821d50c8-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "090ad946-5a2e-44fb-9610-b825821d50c8" (UID: "090ad946-5a2e-44fb-9610-b825821d50c8"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 12:45:03 crc kubenswrapper[4706]: I1125 12:45:03.231515 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/090ad946-5a2e-44fb-9610-b825821d50c8-kube-api-access-c9k6q" (OuterVolumeSpecName: "kube-api-access-c9k6q") pod "090ad946-5a2e-44fb-9610-b825821d50c8" (UID: "090ad946-5a2e-44fb-9610-b825821d50c8"). InnerVolumeSpecName "kube-api-access-c9k6q". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 12:45:03 crc kubenswrapper[4706]: I1125 12:45:03.329399 4706 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c9k6q\" (UniqueName: \"kubernetes.io/projected/090ad946-5a2e-44fb-9610-b825821d50c8-kube-api-access-c9k6q\") on node \"crc\" DevicePath \"\"" Nov 25 12:45:03 crc kubenswrapper[4706]: I1125 12:45:03.329439 4706 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/090ad946-5a2e-44fb-9610-b825821d50c8-secret-volume\") on node \"crc\" DevicePath \"\"" Nov 25 12:45:03 crc kubenswrapper[4706]: I1125 12:45:03.790563 4706 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29401245-898zg" Nov 25 12:45:03 crc kubenswrapper[4706]: I1125 12:45:03.790540 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29401245-898zg" event={"ID":"090ad946-5a2e-44fb-9610-b825821d50c8","Type":"ContainerDied","Data":"553a7c6c5db96c3e4a6a2308e0188a3f9935cb46fdd3c90ab5a57cb59a0dce18"} Nov 25 12:45:03 crc kubenswrapper[4706]: I1125 12:45:03.790688 4706 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="553a7c6c5db96c3e4a6a2308e0188a3f9935cb46fdd3c90ab5a57cb59a0dce18" Nov 25 12:45:04 crc kubenswrapper[4706]: I1125 12:45:04.251030 4706 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29401200-kx9b7"] Nov 25 12:45:04 crc kubenswrapper[4706]: I1125 12:45:04.262650 4706 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29401200-kx9b7"] Nov 25 12:45:05 crc kubenswrapper[4706]: I1125 12:45:05.933604 4706 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6a3962fd-978c-4b10-9dfc-19e83a738f9c" 
path="/var/lib/kubelet/pods/6a3962fd-978c-4b10-9dfc-19e83a738f9c/volumes" Nov 25 12:45:12 crc kubenswrapper[4706]: I1125 12:45:12.890813 4706 generic.go:334] "Generic (PLEG): container finished" podID="b5c81809-b0fb-48c6-b164-eef64ca8a7b1" containerID="a977f7e11abbfd54b6a17fddc36076506bd9c968961f6004264f3c30943cf7ab" exitCode=0 Nov 25 12:45:12 crc kubenswrapper[4706]: I1125 12:45:12.890920 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-z9k48/must-gather-rvs9t" event={"ID":"b5c81809-b0fb-48c6-b164-eef64ca8a7b1","Type":"ContainerDied","Data":"a977f7e11abbfd54b6a17fddc36076506bd9c968961f6004264f3c30943cf7ab"} Nov 25 12:45:12 crc kubenswrapper[4706]: I1125 12:45:12.891843 4706 scope.go:117] "RemoveContainer" containerID="a977f7e11abbfd54b6a17fddc36076506bd9c968961f6004264f3c30943cf7ab" Nov 25 12:45:13 crc kubenswrapper[4706]: I1125 12:45:13.824322 4706 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-z9k48_must-gather-rvs9t_b5c81809-b0fb-48c6-b164-eef64ca8a7b1/gather/0.log" Nov 25 12:45:21 crc kubenswrapper[4706]: I1125 12:45:21.628363 4706 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-z9k48/must-gather-rvs9t"] Nov 25 12:45:21 crc kubenswrapper[4706]: I1125 12:45:21.629254 4706 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-must-gather-z9k48/must-gather-rvs9t" podUID="b5c81809-b0fb-48c6-b164-eef64ca8a7b1" containerName="copy" containerID="cri-o://3df742aae4e36caeb7bde5876e3042c1fe842013760b2ebba2416c6122fa6096" gracePeriod=2 Nov 25 12:45:21 crc kubenswrapper[4706]: I1125 12:45:21.650709 4706 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-z9k48/must-gather-rvs9t"] Nov 25 12:45:21 crc kubenswrapper[4706]: I1125 12:45:21.991185 4706 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-z9k48_must-gather-rvs9t_b5c81809-b0fb-48c6-b164-eef64ca8a7b1/copy/0.log" Nov 25 12:45:21 crc 
kubenswrapper[4706]: I1125 12:45:21.991855 4706 generic.go:334] "Generic (PLEG): container finished" podID="b5c81809-b0fb-48c6-b164-eef64ca8a7b1" containerID="3df742aae4e36caeb7bde5876e3042c1fe842013760b2ebba2416c6122fa6096" exitCode=143 Nov 25 12:45:21 crc kubenswrapper[4706]: I1125 12:45:21.991920 4706 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e2e8dab122a316bc6432345628e5dfd074a47decb76df9e6f27eb5624cf80ffb" Nov 25 12:45:22 crc kubenswrapper[4706]: I1125 12:45:22.060025 4706 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-z9k48_must-gather-rvs9t_b5c81809-b0fb-48c6-b164-eef64ca8a7b1/copy/0.log" Nov 25 12:45:22 crc kubenswrapper[4706]: I1125 12:45:22.060569 4706 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-z9k48/must-gather-rvs9t" Nov 25 12:45:22 crc kubenswrapper[4706]: I1125 12:45:22.083713 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fgqxc\" (UniqueName: \"kubernetes.io/projected/b5c81809-b0fb-48c6-b164-eef64ca8a7b1-kube-api-access-fgqxc\") pod \"b5c81809-b0fb-48c6-b164-eef64ca8a7b1\" (UID: \"b5c81809-b0fb-48c6-b164-eef64ca8a7b1\") " Nov 25 12:45:22 crc kubenswrapper[4706]: I1125 12:45:22.083849 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/b5c81809-b0fb-48c6-b164-eef64ca8a7b1-must-gather-output\") pod \"b5c81809-b0fb-48c6-b164-eef64ca8a7b1\" (UID: \"b5c81809-b0fb-48c6-b164-eef64ca8a7b1\") " Nov 25 12:45:22 crc kubenswrapper[4706]: I1125 12:45:22.092930 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b5c81809-b0fb-48c6-b164-eef64ca8a7b1-kube-api-access-fgqxc" (OuterVolumeSpecName: "kube-api-access-fgqxc") pod "b5c81809-b0fb-48c6-b164-eef64ca8a7b1" (UID: "b5c81809-b0fb-48c6-b164-eef64ca8a7b1"). 
InnerVolumeSpecName "kube-api-access-fgqxc". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 12:45:22 crc kubenswrapper[4706]: I1125 12:45:22.185852 4706 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fgqxc\" (UniqueName: \"kubernetes.io/projected/b5c81809-b0fb-48c6-b164-eef64ca8a7b1-kube-api-access-fgqxc\") on node \"crc\" DevicePath \"\"" Nov 25 12:45:22 crc kubenswrapper[4706]: I1125 12:45:22.256647 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b5c81809-b0fb-48c6-b164-eef64ca8a7b1-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "b5c81809-b0fb-48c6-b164-eef64ca8a7b1" (UID: "b5c81809-b0fb-48c6-b164-eef64ca8a7b1"). InnerVolumeSpecName "must-gather-output". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 12:45:22 crc kubenswrapper[4706]: I1125 12:45:22.287653 4706 reconciler_common.go:293] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/b5c81809-b0fb-48c6-b164-eef64ca8a7b1-must-gather-output\") on node \"crc\" DevicePath \"\"" Nov 25 12:45:23 crc kubenswrapper[4706]: I1125 12:45:23.000955 4706 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-z9k48/must-gather-rvs9t" Nov 25 12:45:23 crc kubenswrapper[4706]: I1125 12:45:23.935037 4706 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b5c81809-b0fb-48c6-b164-eef64ca8a7b1" path="/var/lib/kubelet/pods/b5c81809-b0fb-48c6-b164-eef64ca8a7b1/volumes" Nov 25 12:45:25 crc kubenswrapper[4706]: I1125 12:45:25.047218 4706 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-zsdmv"] Nov 25 12:45:25 crc kubenswrapper[4706]: E1125 12:45:25.048580 4706 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="090ad946-5a2e-44fb-9610-b825821d50c8" containerName="collect-profiles" Nov 25 12:45:25 crc kubenswrapper[4706]: I1125 12:45:25.048664 4706 state_mem.go:107] "Deleted CPUSet assignment" podUID="090ad946-5a2e-44fb-9610-b825821d50c8" containerName="collect-profiles" Nov 25 12:45:25 crc kubenswrapper[4706]: E1125 12:45:25.048761 4706 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b5c81809-b0fb-48c6-b164-eef64ca8a7b1" containerName="copy" Nov 25 12:45:25 crc kubenswrapper[4706]: I1125 12:45:25.048810 4706 state_mem.go:107] "Deleted CPUSet assignment" podUID="b5c81809-b0fb-48c6-b164-eef64ca8a7b1" containerName="copy" Nov 25 12:45:25 crc kubenswrapper[4706]: E1125 12:45:25.048873 4706 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b5c81809-b0fb-48c6-b164-eef64ca8a7b1" containerName="gather" Nov 25 12:45:25 crc kubenswrapper[4706]: I1125 12:45:25.049019 4706 state_mem.go:107] "Deleted CPUSet assignment" podUID="b5c81809-b0fb-48c6-b164-eef64ca8a7b1" containerName="gather" Nov 25 12:45:25 crc kubenswrapper[4706]: I1125 12:45:25.049274 4706 memory_manager.go:354] "RemoveStaleState removing state" podUID="090ad946-5a2e-44fb-9610-b825821d50c8" containerName="collect-profiles" Nov 25 12:45:25 crc kubenswrapper[4706]: I1125 12:45:25.049357 4706 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="b5c81809-b0fb-48c6-b164-eef64ca8a7b1" containerName="copy" Nov 25 12:45:25 crc kubenswrapper[4706]: I1125 12:45:25.049425 4706 memory_manager.go:354] "RemoveStaleState removing state" podUID="b5c81809-b0fb-48c6-b164-eef64ca8a7b1" containerName="gather" Nov 25 12:45:25 crc kubenswrapper[4706]: I1125 12:45:25.050769 4706 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-zsdmv" Nov 25 12:45:25 crc kubenswrapper[4706]: I1125 12:45:25.062711 4706 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-zsdmv"] Nov 25 12:45:25 crc kubenswrapper[4706]: I1125 12:45:25.140444 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dfe4bca7-9a2e-4fd2-a12d-b68550d86e5e-utilities\") pod \"community-operators-zsdmv\" (UID: \"dfe4bca7-9a2e-4fd2-a12d-b68550d86e5e\") " pod="openshift-marketplace/community-operators-zsdmv" Nov 25 12:45:25 crc kubenswrapper[4706]: I1125 12:45:25.140592 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dfe4bca7-9a2e-4fd2-a12d-b68550d86e5e-catalog-content\") pod \"community-operators-zsdmv\" (UID: \"dfe4bca7-9a2e-4fd2-a12d-b68550d86e5e\") " pod="openshift-marketplace/community-operators-zsdmv" Nov 25 12:45:25 crc kubenswrapper[4706]: I1125 12:45:25.140656 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hb9qh\" (UniqueName: \"kubernetes.io/projected/dfe4bca7-9a2e-4fd2-a12d-b68550d86e5e-kube-api-access-hb9qh\") pod \"community-operators-zsdmv\" (UID: \"dfe4bca7-9a2e-4fd2-a12d-b68550d86e5e\") " pod="openshift-marketplace/community-operators-zsdmv" Nov 25 12:45:25 crc kubenswrapper[4706]: I1125 12:45:25.242759 4706 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dfe4bca7-9a2e-4fd2-a12d-b68550d86e5e-utilities\") pod \"community-operators-zsdmv\" (UID: \"dfe4bca7-9a2e-4fd2-a12d-b68550d86e5e\") " pod="openshift-marketplace/community-operators-zsdmv" Nov 25 12:45:25 crc kubenswrapper[4706]: I1125 12:45:25.242870 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dfe4bca7-9a2e-4fd2-a12d-b68550d86e5e-catalog-content\") pod \"community-operators-zsdmv\" (UID: \"dfe4bca7-9a2e-4fd2-a12d-b68550d86e5e\") " pod="openshift-marketplace/community-operators-zsdmv" Nov 25 12:45:25 crc kubenswrapper[4706]: I1125 12:45:25.242924 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hb9qh\" (UniqueName: \"kubernetes.io/projected/dfe4bca7-9a2e-4fd2-a12d-b68550d86e5e-kube-api-access-hb9qh\") pod \"community-operators-zsdmv\" (UID: \"dfe4bca7-9a2e-4fd2-a12d-b68550d86e5e\") " pod="openshift-marketplace/community-operators-zsdmv" Nov 25 12:45:25 crc kubenswrapper[4706]: I1125 12:45:25.243864 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dfe4bca7-9a2e-4fd2-a12d-b68550d86e5e-utilities\") pod \"community-operators-zsdmv\" (UID: \"dfe4bca7-9a2e-4fd2-a12d-b68550d86e5e\") " pod="openshift-marketplace/community-operators-zsdmv" Nov 25 12:45:25 crc kubenswrapper[4706]: I1125 12:45:25.257873 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dfe4bca7-9a2e-4fd2-a12d-b68550d86e5e-catalog-content\") pod \"community-operators-zsdmv\" (UID: \"dfe4bca7-9a2e-4fd2-a12d-b68550d86e5e\") " pod="openshift-marketplace/community-operators-zsdmv" Nov 25 12:45:25 crc kubenswrapper[4706]: I1125 12:45:25.282422 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hb9qh\" 
(UniqueName: \"kubernetes.io/projected/dfe4bca7-9a2e-4fd2-a12d-b68550d86e5e-kube-api-access-hb9qh\") pod \"community-operators-zsdmv\" (UID: \"dfe4bca7-9a2e-4fd2-a12d-b68550d86e5e\") " pod="openshift-marketplace/community-operators-zsdmv" Nov 25 12:45:25 crc kubenswrapper[4706]: I1125 12:45:25.368373 4706 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-zsdmv" Nov 25 12:45:25 crc kubenswrapper[4706]: I1125 12:45:25.888355 4706 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-zsdmv"] Nov 25 12:45:26 crc kubenswrapper[4706]: I1125 12:45:26.028229 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zsdmv" event={"ID":"dfe4bca7-9a2e-4fd2-a12d-b68550d86e5e","Type":"ContainerStarted","Data":"3a2d0bf164f28675198f7dab6e99d440df3d8340d688589c75334a74b056aa51"} Nov 25 12:45:27 crc kubenswrapper[4706]: I1125 12:45:27.037851 4706 generic.go:334] "Generic (PLEG): container finished" podID="dfe4bca7-9a2e-4fd2-a12d-b68550d86e5e" containerID="d8a1daff8906e9286a1ce6cf03d0d43ab051b8e87312437bbb907b5921d12f0a" exitCode=0 Nov 25 12:45:27 crc kubenswrapper[4706]: I1125 12:45:27.037987 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zsdmv" event={"ID":"dfe4bca7-9a2e-4fd2-a12d-b68550d86e5e","Type":"ContainerDied","Data":"d8a1daff8906e9286a1ce6cf03d0d43ab051b8e87312437bbb907b5921d12f0a"} Nov 25 12:45:27 crc kubenswrapper[4706]: I1125 12:45:27.039818 4706 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 25 12:45:29 crc kubenswrapper[4706]: I1125 12:45:29.056417 4706 generic.go:334] "Generic (PLEG): container finished" podID="dfe4bca7-9a2e-4fd2-a12d-b68550d86e5e" containerID="d8aba134eb93bd9a14cdcd575f0d6220a68b03fa2808f7aec91394d1800db1ed" exitCode=0 Nov 25 12:45:29 crc kubenswrapper[4706]: I1125 12:45:29.056463 4706 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zsdmv" event={"ID":"dfe4bca7-9a2e-4fd2-a12d-b68550d86e5e","Type":"ContainerDied","Data":"d8aba134eb93bd9a14cdcd575f0d6220a68b03fa2808f7aec91394d1800db1ed"} Nov 25 12:45:30 crc kubenswrapper[4706]: I1125 12:45:30.067599 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zsdmv" event={"ID":"dfe4bca7-9a2e-4fd2-a12d-b68550d86e5e","Type":"ContainerStarted","Data":"e065448614f3100f4b3b03d6671975743664696d4b7b6fa6091ef17f084f6a54"} Nov 25 12:45:30 crc kubenswrapper[4706]: I1125 12:45:30.096441 4706 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-zsdmv" podStartSLOduration=2.646746731 podStartE2EDuration="5.096414562s" podCreationTimestamp="2025-11-25 12:45:25 +0000 UTC" firstStartedPulling="2025-11-25 12:45:27.039544006 +0000 UTC m=+4135.954101397" lastFinishedPulling="2025-11-25 12:45:29.489211847 +0000 UTC m=+4138.403769228" observedRunningTime="2025-11-25 12:45:30.089020226 +0000 UTC m=+4139.003577607" watchObservedRunningTime="2025-11-25 12:45:30.096414562 +0000 UTC m=+4139.010971943" Nov 25 12:45:31 crc kubenswrapper[4706]: I1125 12:45:31.125103 4706 patch_prober.go:28] interesting pod/machine-config-daemon-dhfpm container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 25 12:45:31 crc kubenswrapper[4706]: I1125 12:45:31.125443 4706 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dhfpm" podUID="0930887a-320c-4506-8c9c-f94d6d64516a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 25 12:45:35 crc kubenswrapper[4706]: 
I1125 12:45:35.368822 4706 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-zsdmv" Nov 25 12:45:35 crc kubenswrapper[4706]: I1125 12:45:35.369559 4706 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-zsdmv" Nov 25 12:45:35 crc kubenswrapper[4706]: I1125 12:45:35.413336 4706 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-zsdmv" Nov 25 12:45:36 crc kubenswrapper[4706]: I1125 12:45:36.167714 4706 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-zsdmv" Nov 25 12:45:36 crc kubenswrapper[4706]: I1125 12:45:36.215144 4706 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-zsdmv"] Nov 25 12:45:38 crc kubenswrapper[4706]: I1125 12:45:38.112171 4706 scope.go:117] "RemoveContainer" containerID="1531a26ae612faff3acdfdcf02e009f0b100b31157cd5ebab990de2005370a84" Nov 25 12:45:38 crc kubenswrapper[4706]: I1125 12:45:38.141107 4706 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-zsdmv" podUID="dfe4bca7-9a2e-4fd2-a12d-b68550d86e5e" containerName="registry-server" containerID="cri-o://e065448614f3100f4b3b03d6671975743664696d4b7b6fa6091ef17f084f6a54" gracePeriod=2 Nov 25 12:45:38 crc kubenswrapper[4706]: I1125 12:45:38.655517 4706 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-zsdmv" Nov 25 12:45:38 crc kubenswrapper[4706]: I1125 12:45:38.799036 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hb9qh\" (UniqueName: \"kubernetes.io/projected/dfe4bca7-9a2e-4fd2-a12d-b68550d86e5e-kube-api-access-hb9qh\") pod \"dfe4bca7-9a2e-4fd2-a12d-b68550d86e5e\" (UID: \"dfe4bca7-9a2e-4fd2-a12d-b68550d86e5e\") " Nov 25 12:45:38 crc kubenswrapper[4706]: I1125 12:45:38.799191 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dfe4bca7-9a2e-4fd2-a12d-b68550d86e5e-catalog-content\") pod \"dfe4bca7-9a2e-4fd2-a12d-b68550d86e5e\" (UID: \"dfe4bca7-9a2e-4fd2-a12d-b68550d86e5e\") " Nov 25 12:45:38 crc kubenswrapper[4706]: I1125 12:45:38.799288 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dfe4bca7-9a2e-4fd2-a12d-b68550d86e5e-utilities\") pod \"dfe4bca7-9a2e-4fd2-a12d-b68550d86e5e\" (UID: \"dfe4bca7-9a2e-4fd2-a12d-b68550d86e5e\") " Nov 25 12:45:38 crc kubenswrapper[4706]: I1125 12:45:38.800291 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dfe4bca7-9a2e-4fd2-a12d-b68550d86e5e-utilities" (OuterVolumeSpecName: "utilities") pod "dfe4bca7-9a2e-4fd2-a12d-b68550d86e5e" (UID: "dfe4bca7-9a2e-4fd2-a12d-b68550d86e5e"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 12:45:38 crc kubenswrapper[4706]: I1125 12:45:38.806637 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dfe4bca7-9a2e-4fd2-a12d-b68550d86e5e-kube-api-access-hb9qh" (OuterVolumeSpecName: "kube-api-access-hb9qh") pod "dfe4bca7-9a2e-4fd2-a12d-b68550d86e5e" (UID: "dfe4bca7-9a2e-4fd2-a12d-b68550d86e5e"). InnerVolumeSpecName "kube-api-access-hb9qh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 12:45:38 crc kubenswrapper[4706]: I1125 12:45:38.860623 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dfe4bca7-9a2e-4fd2-a12d-b68550d86e5e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "dfe4bca7-9a2e-4fd2-a12d-b68550d86e5e" (UID: "dfe4bca7-9a2e-4fd2-a12d-b68550d86e5e"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 12:45:38 crc kubenswrapper[4706]: I1125 12:45:38.902001 4706 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dfe4bca7-9a2e-4fd2-a12d-b68550d86e5e-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 25 12:45:38 crc kubenswrapper[4706]: I1125 12:45:38.902052 4706 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dfe4bca7-9a2e-4fd2-a12d-b68550d86e5e-utilities\") on node \"crc\" DevicePath \"\"" Nov 25 12:45:38 crc kubenswrapper[4706]: I1125 12:45:38.902069 4706 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hb9qh\" (UniqueName: \"kubernetes.io/projected/dfe4bca7-9a2e-4fd2-a12d-b68550d86e5e-kube-api-access-hb9qh\") on node \"crc\" DevicePath \"\"" Nov 25 12:45:39 crc kubenswrapper[4706]: I1125 12:45:39.158578 4706 generic.go:334] "Generic (PLEG): container finished" podID="dfe4bca7-9a2e-4fd2-a12d-b68550d86e5e" containerID="e065448614f3100f4b3b03d6671975743664696d4b7b6fa6091ef17f084f6a54" exitCode=0 Nov 25 12:45:39 crc kubenswrapper[4706]: I1125 12:45:39.158653 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zsdmv" event={"ID":"dfe4bca7-9a2e-4fd2-a12d-b68550d86e5e","Type":"ContainerDied","Data":"e065448614f3100f4b3b03d6671975743664696d4b7b6fa6091ef17f084f6a54"} Nov 25 12:45:39 crc kubenswrapper[4706]: I1125 12:45:39.158687 4706 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-marketplace/community-operators-zsdmv" event={"ID":"dfe4bca7-9a2e-4fd2-a12d-b68550d86e5e","Type":"ContainerDied","Data":"3a2d0bf164f28675198f7dab6e99d440df3d8340d688589c75334a74b056aa51"} Nov 25 12:45:39 crc kubenswrapper[4706]: I1125 12:45:39.158712 4706 scope.go:117] "RemoveContainer" containerID="e065448614f3100f4b3b03d6671975743664696d4b7b6fa6091ef17f084f6a54" Nov 25 12:45:39 crc kubenswrapper[4706]: I1125 12:45:39.159121 4706 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-zsdmv" Nov 25 12:45:39 crc kubenswrapper[4706]: I1125 12:45:39.180402 4706 scope.go:117] "RemoveContainer" containerID="d8aba134eb93bd9a14cdcd575f0d6220a68b03fa2808f7aec91394d1800db1ed" Nov 25 12:45:39 crc kubenswrapper[4706]: I1125 12:45:39.201906 4706 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-zsdmv"] Nov 25 12:45:39 crc kubenswrapper[4706]: I1125 12:45:39.211243 4706 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-zsdmv"] Nov 25 12:45:39 crc kubenswrapper[4706]: I1125 12:45:39.229795 4706 scope.go:117] "RemoveContainer" containerID="d8a1daff8906e9286a1ce6cf03d0d43ab051b8e87312437bbb907b5921d12f0a" Nov 25 12:45:39 crc kubenswrapper[4706]: I1125 12:45:39.260244 4706 scope.go:117] "RemoveContainer" containerID="e065448614f3100f4b3b03d6671975743664696d4b7b6fa6091ef17f084f6a54" Nov 25 12:45:39 crc kubenswrapper[4706]: E1125 12:45:39.261010 4706 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e065448614f3100f4b3b03d6671975743664696d4b7b6fa6091ef17f084f6a54\": container with ID starting with e065448614f3100f4b3b03d6671975743664696d4b7b6fa6091ef17f084f6a54 not found: ID does not exist" containerID="e065448614f3100f4b3b03d6671975743664696d4b7b6fa6091ef17f084f6a54" Nov 25 12:45:39 crc kubenswrapper[4706]: I1125 
12:45:39.261083 4706 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e065448614f3100f4b3b03d6671975743664696d4b7b6fa6091ef17f084f6a54"} err="failed to get container status \"e065448614f3100f4b3b03d6671975743664696d4b7b6fa6091ef17f084f6a54\": rpc error: code = NotFound desc = could not find container \"e065448614f3100f4b3b03d6671975743664696d4b7b6fa6091ef17f084f6a54\": container with ID starting with e065448614f3100f4b3b03d6671975743664696d4b7b6fa6091ef17f084f6a54 not found: ID does not exist" Nov 25 12:45:39 crc kubenswrapper[4706]: I1125 12:45:39.261112 4706 scope.go:117] "RemoveContainer" containerID="d8aba134eb93bd9a14cdcd575f0d6220a68b03fa2808f7aec91394d1800db1ed" Nov 25 12:45:39 crc kubenswrapper[4706]: E1125 12:45:39.261668 4706 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d8aba134eb93bd9a14cdcd575f0d6220a68b03fa2808f7aec91394d1800db1ed\": container with ID starting with d8aba134eb93bd9a14cdcd575f0d6220a68b03fa2808f7aec91394d1800db1ed not found: ID does not exist" containerID="d8aba134eb93bd9a14cdcd575f0d6220a68b03fa2808f7aec91394d1800db1ed" Nov 25 12:45:39 crc kubenswrapper[4706]: I1125 12:45:39.261732 4706 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d8aba134eb93bd9a14cdcd575f0d6220a68b03fa2808f7aec91394d1800db1ed"} err="failed to get container status \"d8aba134eb93bd9a14cdcd575f0d6220a68b03fa2808f7aec91394d1800db1ed\": rpc error: code = NotFound desc = could not find container \"d8aba134eb93bd9a14cdcd575f0d6220a68b03fa2808f7aec91394d1800db1ed\": container with ID starting with d8aba134eb93bd9a14cdcd575f0d6220a68b03fa2808f7aec91394d1800db1ed not found: ID does not exist" Nov 25 12:45:39 crc kubenswrapper[4706]: I1125 12:45:39.261770 4706 scope.go:117] "RemoveContainer" containerID="d8a1daff8906e9286a1ce6cf03d0d43ab051b8e87312437bbb907b5921d12f0a" Nov 25 12:45:39 crc 
kubenswrapper[4706]: E1125 12:45:39.262226 4706 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d8a1daff8906e9286a1ce6cf03d0d43ab051b8e87312437bbb907b5921d12f0a\": container with ID starting with d8a1daff8906e9286a1ce6cf03d0d43ab051b8e87312437bbb907b5921d12f0a not found: ID does not exist" containerID="d8a1daff8906e9286a1ce6cf03d0d43ab051b8e87312437bbb907b5921d12f0a" Nov 25 12:45:39 crc kubenswrapper[4706]: I1125 12:45:39.262258 4706 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d8a1daff8906e9286a1ce6cf03d0d43ab051b8e87312437bbb907b5921d12f0a"} err="failed to get container status \"d8a1daff8906e9286a1ce6cf03d0d43ab051b8e87312437bbb907b5921d12f0a\": rpc error: code = NotFound desc = could not find container \"d8a1daff8906e9286a1ce6cf03d0d43ab051b8e87312437bbb907b5921d12f0a\": container with ID starting with d8a1daff8906e9286a1ce6cf03d0d43ab051b8e87312437bbb907b5921d12f0a not found: ID does not exist" Nov 25 12:45:39 crc kubenswrapper[4706]: I1125 12:45:39.934920 4706 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dfe4bca7-9a2e-4fd2-a12d-b68550d86e5e" path="/var/lib/kubelet/pods/dfe4bca7-9a2e-4fd2-a12d-b68550d86e5e/volumes" Nov 25 12:46:01 crc kubenswrapper[4706]: I1125 12:46:01.125696 4706 patch_prober.go:28] interesting pod/machine-config-daemon-dhfpm container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 25 12:46:01 crc kubenswrapper[4706]: I1125 12:46:01.126293 4706 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dhfpm" podUID="0930887a-320c-4506-8c9c-f94d6d64516a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 
127.0.0.1:8798: connect: connection refused" Nov 25 12:46:31 crc kubenswrapper[4706]: I1125 12:46:31.125228 4706 patch_prober.go:28] interesting pod/machine-config-daemon-dhfpm container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 25 12:46:31 crc kubenswrapper[4706]: I1125 12:46:31.125813 4706 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dhfpm" podUID="0930887a-320c-4506-8c9c-f94d6d64516a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 25 12:46:31 crc kubenswrapper[4706]: I1125 12:46:31.125872 4706 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-dhfpm" Nov 25 12:46:31 crc kubenswrapper[4706]: I1125 12:46:31.126776 4706 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"ec124d7ca75771b4c4c8fe512ca2efc5a14229d016e5175e85c0e297e332d27e"} pod="openshift-machine-config-operator/machine-config-daemon-dhfpm" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 25 12:46:31 crc kubenswrapper[4706]: I1125 12:46:31.126840 4706 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-dhfpm" podUID="0930887a-320c-4506-8c9c-f94d6d64516a" containerName="machine-config-daemon" containerID="cri-o://ec124d7ca75771b4c4c8fe512ca2efc5a14229d016e5175e85c0e297e332d27e" gracePeriod=600 Nov 25 12:46:31 crc kubenswrapper[4706]: I1125 12:46:31.631235 4706 generic.go:334] "Generic (PLEG): container finished" podID="0930887a-320c-4506-8c9c-f94d6d64516a" 
containerID="ec124d7ca75771b4c4c8fe512ca2efc5a14229d016e5175e85c0e297e332d27e" exitCode=0 Nov 25 12:46:31 crc kubenswrapper[4706]: I1125 12:46:31.631324 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-dhfpm" event={"ID":"0930887a-320c-4506-8c9c-f94d6d64516a","Type":"ContainerDied","Data":"ec124d7ca75771b4c4c8fe512ca2efc5a14229d016e5175e85c0e297e332d27e"} Nov 25 12:46:31 crc kubenswrapper[4706]: I1125 12:46:31.631685 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-dhfpm" event={"ID":"0930887a-320c-4506-8c9c-f94d6d64516a","Type":"ContainerStarted","Data":"26d31244857a0be0aea5023b5f648b4e573312d8ff6419d5d6b048bd70f84083"} Nov 25 12:46:31 crc kubenswrapper[4706]: I1125 12:46:31.631716 4706 scope.go:117] "RemoveContainer" containerID="f7d4f2bc57b2d7499bb910a36c7d647ec55fac45e9295616e11685165a93deff" Nov 25 12:46:38 crc kubenswrapper[4706]: I1125 12:46:38.254152 4706 scope.go:117] "RemoveContainer" containerID="a977f7e11abbfd54b6a17fddc36076506bd9c968961f6004264f3c30943cf7ab" Nov 25 12:46:38 crc kubenswrapper[4706]: I1125 12:46:38.300567 4706 scope.go:117] "RemoveContainer" containerID="3df742aae4e36caeb7bde5876e3042c1fe842013760b2ebba2416c6122fa6096" Nov 25 12:47:54 crc kubenswrapper[4706]: I1125 12:47:54.656499 4706 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-gdrsm/must-gather-6mkxl"] Nov 25 12:47:54 crc kubenswrapper[4706]: E1125 12:47:54.657497 4706 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dfe4bca7-9a2e-4fd2-a12d-b68550d86e5e" containerName="extract-utilities" Nov 25 12:47:54 crc kubenswrapper[4706]: I1125 12:47:54.657513 4706 state_mem.go:107] "Deleted CPUSet assignment" podUID="dfe4bca7-9a2e-4fd2-a12d-b68550d86e5e" containerName="extract-utilities" Nov 25 12:47:54 crc kubenswrapper[4706]: E1125 12:47:54.657521 4706 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="dfe4bca7-9a2e-4fd2-a12d-b68550d86e5e" containerName="extract-content" Nov 25 12:47:54 crc kubenswrapper[4706]: I1125 12:47:54.657527 4706 state_mem.go:107] "Deleted CPUSet assignment" podUID="dfe4bca7-9a2e-4fd2-a12d-b68550d86e5e" containerName="extract-content" Nov 25 12:47:54 crc kubenswrapper[4706]: E1125 12:47:54.657545 4706 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dfe4bca7-9a2e-4fd2-a12d-b68550d86e5e" containerName="registry-server" Nov 25 12:47:54 crc kubenswrapper[4706]: I1125 12:47:54.657551 4706 state_mem.go:107] "Deleted CPUSet assignment" podUID="dfe4bca7-9a2e-4fd2-a12d-b68550d86e5e" containerName="registry-server" Nov 25 12:47:54 crc kubenswrapper[4706]: I1125 12:47:54.657800 4706 memory_manager.go:354] "RemoveStaleState removing state" podUID="dfe4bca7-9a2e-4fd2-a12d-b68550d86e5e" containerName="registry-server" Nov 25 12:47:54 crc kubenswrapper[4706]: I1125 12:47:54.658981 4706 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-gdrsm/must-gather-6mkxl" Nov 25 12:47:54 crc kubenswrapper[4706]: I1125 12:47:54.663429 4706 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-gdrsm"/"kube-root-ca.crt" Nov 25 12:47:54 crc kubenswrapper[4706]: I1125 12:47:54.664585 4706 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-gdrsm"/"openshift-service-ca.crt" Nov 25 12:47:54 crc kubenswrapper[4706]: I1125 12:47:54.670822 4706 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-gdrsm/must-gather-6mkxl"] Nov 25 12:47:54 crc kubenswrapper[4706]: I1125 12:47:54.742264 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mw2xc\" (UniqueName: \"kubernetes.io/projected/f12cb3ac-00df-48d8-8a57-ab012d97d481-kube-api-access-mw2xc\") pod \"must-gather-6mkxl\" (UID: \"f12cb3ac-00df-48d8-8a57-ab012d97d481\") " 
pod="openshift-must-gather-gdrsm/must-gather-6mkxl" Nov 25 12:47:54 crc kubenswrapper[4706]: I1125 12:47:54.742423 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/f12cb3ac-00df-48d8-8a57-ab012d97d481-must-gather-output\") pod \"must-gather-6mkxl\" (UID: \"f12cb3ac-00df-48d8-8a57-ab012d97d481\") " pod="openshift-must-gather-gdrsm/must-gather-6mkxl" Nov 25 12:47:54 crc kubenswrapper[4706]: I1125 12:47:54.844163 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mw2xc\" (UniqueName: \"kubernetes.io/projected/f12cb3ac-00df-48d8-8a57-ab012d97d481-kube-api-access-mw2xc\") pod \"must-gather-6mkxl\" (UID: \"f12cb3ac-00df-48d8-8a57-ab012d97d481\") " pod="openshift-must-gather-gdrsm/must-gather-6mkxl" Nov 25 12:47:54 crc kubenswrapper[4706]: I1125 12:47:54.844229 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/f12cb3ac-00df-48d8-8a57-ab012d97d481-must-gather-output\") pod \"must-gather-6mkxl\" (UID: \"f12cb3ac-00df-48d8-8a57-ab012d97d481\") " pod="openshift-must-gather-gdrsm/must-gather-6mkxl" Nov 25 12:47:54 crc kubenswrapper[4706]: I1125 12:47:54.844646 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/f12cb3ac-00df-48d8-8a57-ab012d97d481-must-gather-output\") pod \"must-gather-6mkxl\" (UID: \"f12cb3ac-00df-48d8-8a57-ab012d97d481\") " pod="openshift-must-gather-gdrsm/must-gather-6mkxl" Nov 25 12:47:54 crc kubenswrapper[4706]: I1125 12:47:54.908493 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mw2xc\" (UniqueName: \"kubernetes.io/projected/f12cb3ac-00df-48d8-8a57-ab012d97d481-kube-api-access-mw2xc\") pod \"must-gather-6mkxl\" (UID: \"f12cb3ac-00df-48d8-8a57-ab012d97d481\") " 
pod="openshift-must-gather-gdrsm/must-gather-6mkxl" Nov 25 12:47:54 crc kubenswrapper[4706]: I1125 12:47:54.977623 4706 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-gdrsm/must-gather-6mkxl" Nov 25 12:47:55 crc kubenswrapper[4706]: I1125 12:47:55.445107 4706 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-gdrsm/must-gather-6mkxl"] Nov 25 12:47:56 crc kubenswrapper[4706]: I1125 12:47:56.426868 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-gdrsm/must-gather-6mkxl" event={"ID":"f12cb3ac-00df-48d8-8a57-ab012d97d481","Type":"ContainerStarted","Data":"cd38d7f0eb91fb224087640fc1b4c1c7fff4d9348794934fec3744c855648b1d"} Nov 25 12:47:56 crc kubenswrapper[4706]: I1125 12:47:56.427455 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-gdrsm/must-gather-6mkxl" event={"ID":"f12cb3ac-00df-48d8-8a57-ab012d97d481","Type":"ContainerStarted","Data":"7c6f480730951901446414868a5e6fbce5374232af68b4256a939265e5a5377c"} Nov 25 12:47:56 crc kubenswrapper[4706]: I1125 12:47:56.427473 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-gdrsm/must-gather-6mkxl" event={"ID":"f12cb3ac-00df-48d8-8a57-ab012d97d481","Type":"ContainerStarted","Data":"ac139b016e7d150db5a8c1c487c879d255a2bd58e742d90856cbfa38fadc61f3"} Nov 25 12:47:56 crc kubenswrapper[4706]: I1125 12:47:56.451235 4706 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-gdrsm/must-gather-6mkxl" podStartSLOduration=2.451217511 podStartE2EDuration="2.451217511s" podCreationTimestamp="2025-11-25 12:47:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 12:47:56.44642188 +0000 UTC m=+4285.360979271" watchObservedRunningTime="2025-11-25 12:47:56.451217511 +0000 UTC m=+4285.365774892" Nov 25 12:47:59 crc kubenswrapper[4706]: 
I1125 12:47:59.359881 4706 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-gdrsm/crc-debug-cct72"] Nov 25 12:47:59 crc kubenswrapper[4706]: I1125 12:47:59.361920 4706 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-gdrsm/crc-debug-cct72" Nov 25 12:47:59 crc kubenswrapper[4706]: I1125 12:47:59.366989 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-gdrsm"/"default-dockercfg-rnk7d" Nov 25 12:47:59 crc kubenswrapper[4706]: I1125 12:47:59.486516 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/962651b2-14e9-475d-88b4-2a949a2523cb-host\") pod \"crc-debug-cct72\" (UID: \"962651b2-14e9-475d-88b4-2a949a2523cb\") " pod="openshift-must-gather-gdrsm/crc-debug-cct72" Nov 25 12:47:59 crc kubenswrapper[4706]: I1125 12:47:59.486623 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z22hp\" (UniqueName: \"kubernetes.io/projected/962651b2-14e9-475d-88b4-2a949a2523cb-kube-api-access-z22hp\") pod \"crc-debug-cct72\" (UID: \"962651b2-14e9-475d-88b4-2a949a2523cb\") " pod="openshift-must-gather-gdrsm/crc-debug-cct72" Nov 25 12:47:59 crc kubenswrapper[4706]: I1125 12:47:59.588663 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z22hp\" (UniqueName: \"kubernetes.io/projected/962651b2-14e9-475d-88b4-2a949a2523cb-kube-api-access-z22hp\") pod \"crc-debug-cct72\" (UID: \"962651b2-14e9-475d-88b4-2a949a2523cb\") " pod="openshift-must-gather-gdrsm/crc-debug-cct72" Nov 25 12:47:59 crc kubenswrapper[4706]: I1125 12:47:59.588816 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/962651b2-14e9-475d-88b4-2a949a2523cb-host\") pod \"crc-debug-cct72\" (UID: \"962651b2-14e9-475d-88b4-2a949a2523cb\") " 
pod="openshift-must-gather-gdrsm/crc-debug-cct72" Nov 25 12:47:59 crc kubenswrapper[4706]: I1125 12:47:59.588927 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/962651b2-14e9-475d-88b4-2a949a2523cb-host\") pod \"crc-debug-cct72\" (UID: \"962651b2-14e9-475d-88b4-2a949a2523cb\") " pod="openshift-must-gather-gdrsm/crc-debug-cct72" Nov 25 12:47:59 crc kubenswrapper[4706]: I1125 12:47:59.608829 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z22hp\" (UniqueName: \"kubernetes.io/projected/962651b2-14e9-475d-88b4-2a949a2523cb-kube-api-access-z22hp\") pod \"crc-debug-cct72\" (UID: \"962651b2-14e9-475d-88b4-2a949a2523cb\") " pod="openshift-must-gather-gdrsm/crc-debug-cct72" Nov 25 12:47:59 crc kubenswrapper[4706]: I1125 12:47:59.689211 4706 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-gdrsm/crc-debug-cct72" Nov 25 12:48:00 crc kubenswrapper[4706]: I1125 12:48:00.465535 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-gdrsm/crc-debug-cct72" event={"ID":"962651b2-14e9-475d-88b4-2a949a2523cb","Type":"ContainerStarted","Data":"5e099d2ca034c736e522c65f7fd2981ea02baf10f16322960b6d60756eb95235"} Nov 25 12:48:00 crc kubenswrapper[4706]: I1125 12:48:00.466109 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-gdrsm/crc-debug-cct72" event={"ID":"962651b2-14e9-475d-88b4-2a949a2523cb","Type":"ContainerStarted","Data":"12a0cf406732bdb0e5e30d7259aeafc1a9e6901cc8154127089b33087953d19d"} Nov 25 12:48:00 crc kubenswrapper[4706]: I1125 12:48:00.485563 4706 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-gdrsm/crc-debug-cct72" podStartSLOduration=1.48554591 podStartE2EDuration="1.48554591s" podCreationTimestamp="2025-11-25 12:47:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 12:48:00.477574449 +0000 UTC m=+4289.392131830" watchObservedRunningTime="2025-11-25 12:48:00.48554591 +0000 UTC m=+4289.400103291" Nov 25 12:48:31 crc kubenswrapper[4706]: I1125 12:48:31.125055 4706 patch_prober.go:28] interesting pod/machine-config-daemon-dhfpm container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 25 12:48:31 crc kubenswrapper[4706]: I1125 12:48:31.126791 4706 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dhfpm" podUID="0930887a-320c-4506-8c9c-f94d6d64516a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 25 12:48:33 crc kubenswrapper[4706]: I1125 12:48:33.766403 4706 generic.go:334] "Generic (PLEG): container finished" podID="962651b2-14e9-475d-88b4-2a949a2523cb" containerID="5e099d2ca034c736e522c65f7fd2981ea02baf10f16322960b6d60756eb95235" exitCode=0 Nov 25 12:48:33 crc kubenswrapper[4706]: I1125 12:48:33.766509 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-gdrsm/crc-debug-cct72" event={"ID":"962651b2-14e9-475d-88b4-2a949a2523cb","Type":"ContainerDied","Data":"5e099d2ca034c736e522c65f7fd2981ea02baf10f16322960b6d60756eb95235"} Nov 25 12:48:34 crc kubenswrapper[4706]: I1125 12:48:34.884291 4706 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-gdrsm/crc-debug-cct72" Nov 25 12:48:34 crc kubenswrapper[4706]: I1125 12:48:34.922660 4706 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-gdrsm/crc-debug-cct72"] Nov 25 12:48:34 crc kubenswrapper[4706]: I1125 12:48:34.926378 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/962651b2-14e9-475d-88b4-2a949a2523cb-host\") pod \"962651b2-14e9-475d-88b4-2a949a2523cb\" (UID: \"962651b2-14e9-475d-88b4-2a949a2523cb\") " Nov 25 12:48:34 crc kubenswrapper[4706]: I1125 12:48:34.926709 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z22hp\" (UniqueName: \"kubernetes.io/projected/962651b2-14e9-475d-88b4-2a949a2523cb-kube-api-access-z22hp\") pod \"962651b2-14e9-475d-88b4-2a949a2523cb\" (UID: \"962651b2-14e9-475d-88b4-2a949a2523cb\") " Nov 25 12:48:34 crc kubenswrapper[4706]: I1125 12:48:34.926498 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/962651b2-14e9-475d-88b4-2a949a2523cb-host" (OuterVolumeSpecName: "host") pod "962651b2-14e9-475d-88b4-2a949a2523cb" (UID: "962651b2-14e9-475d-88b4-2a949a2523cb"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 25 12:48:34 crc kubenswrapper[4706]: I1125 12:48:34.932456 4706 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-gdrsm/crc-debug-cct72"] Nov 25 12:48:34 crc kubenswrapper[4706]: I1125 12:48:34.933834 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/962651b2-14e9-475d-88b4-2a949a2523cb-kube-api-access-z22hp" (OuterVolumeSpecName: "kube-api-access-z22hp") pod "962651b2-14e9-475d-88b4-2a949a2523cb" (UID: "962651b2-14e9-475d-88b4-2a949a2523cb"). InnerVolumeSpecName "kube-api-access-z22hp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 12:48:35 crc kubenswrapper[4706]: I1125 12:48:35.029348 4706 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/962651b2-14e9-475d-88b4-2a949a2523cb-host\") on node \"crc\" DevicePath \"\"" Nov 25 12:48:35 crc kubenswrapper[4706]: I1125 12:48:35.029446 4706 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z22hp\" (UniqueName: \"kubernetes.io/projected/962651b2-14e9-475d-88b4-2a949a2523cb-kube-api-access-z22hp\") on node \"crc\" DevicePath \"\"" Nov 25 12:48:35 crc kubenswrapper[4706]: I1125 12:48:35.784202 4706 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="12a0cf406732bdb0e5e30d7259aeafc1a9e6901cc8154127089b33087953d19d" Nov 25 12:48:35 crc kubenswrapper[4706]: I1125 12:48:35.784274 4706 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-gdrsm/crc-debug-cct72" Nov 25 12:48:35 crc kubenswrapper[4706]: I1125 12:48:35.934229 4706 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="962651b2-14e9-475d-88b4-2a949a2523cb" path="/var/lib/kubelet/pods/962651b2-14e9-475d-88b4-2a949a2523cb/volumes" Nov 25 12:48:36 crc kubenswrapper[4706]: I1125 12:48:36.090988 4706 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-gdrsm/crc-debug-w8n8b"] Nov 25 12:48:36 crc kubenswrapper[4706]: E1125 12:48:36.091460 4706 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="962651b2-14e9-475d-88b4-2a949a2523cb" containerName="container-00" Nov 25 12:48:36 crc kubenswrapper[4706]: I1125 12:48:36.091477 4706 state_mem.go:107] "Deleted CPUSet assignment" podUID="962651b2-14e9-475d-88b4-2a949a2523cb" containerName="container-00" Nov 25 12:48:36 crc kubenswrapper[4706]: I1125 12:48:36.091649 4706 memory_manager.go:354] "RemoveStaleState removing state" podUID="962651b2-14e9-475d-88b4-2a949a2523cb" 
containerName="container-00" Nov 25 12:48:36 crc kubenswrapper[4706]: I1125 12:48:36.092258 4706 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-gdrsm/crc-debug-w8n8b" Nov 25 12:48:36 crc kubenswrapper[4706]: I1125 12:48:36.094020 4706 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-gdrsm"/"default-dockercfg-rnk7d" Nov 25 12:48:36 crc kubenswrapper[4706]: I1125 12:48:36.154312 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bxgss\" (UniqueName: \"kubernetes.io/projected/d1f305fb-4f0f-49d6-84f4-78ea6c65956d-kube-api-access-bxgss\") pod \"crc-debug-w8n8b\" (UID: \"d1f305fb-4f0f-49d6-84f4-78ea6c65956d\") " pod="openshift-must-gather-gdrsm/crc-debug-w8n8b" Nov 25 12:48:36 crc kubenswrapper[4706]: I1125 12:48:36.154737 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/d1f305fb-4f0f-49d6-84f4-78ea6c65956d-host\") pod \"crc-debug-w8n8b\" (UID: \"d1f305fb-4f0f-49d6-84f4-78ea6c65956d\") " pod="openshift-must-gather-gdrsm/crc-debug-w8n8b" Nov 25 12:48:36 crc kubenswrapper[4706]: I1125 12:48:36.257116 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bxgss\" (UniqueName: \"kubernetes.io/projected/d1f305fb-4f0f-49d6-84f4-78ea6c65956d-kube-api-access-bxgss\") pod \"crc-debug-w8n8b\" (UID: \"d1f305fb-4f0f-49d6-84f4-78ea6c65956d\") " pod="openshift-must-gather-gdrsm/crc-debug-w8n8b" Nov 25 12:48:36 crc kubenswrapper[4706]: I1125 12:48:36.257212 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/d1f305fb-4f0f-49d6-84f4-78ea6c65956d-host\") pod \"crc-debug-w8n8b\" (UID: \"d1f305fb-4f0f-49d6-84f4-78ea6c65956d\") " pod="openshift-must-gather-gdrsm/crc-debug-w8n8b" Nov 25 12:48:36 crc kubenswrapper[4706]: 
I1125 12:48:36.257339 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/d1f305fb-4f0f-49d6-84f4-78ea6c65956d-host\") pod \"crc-debug-w8n8b\" (UID: \"d1f305fb-4f0f-49d6-84f4-78ea6c65956d\") " pod="openshift-must-gather-gdrsm/crc-debug-w8n8b" Nov 25 12:48:36 crc kubenswrapper[4706]: I1125 12:48:36.282805 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bxgss\" (UniqueName: \"kubernetes.io/projected/d1f305fb-4f0f-49d6-84f4-78ea6c65956d-kube-api-access-bxgss\") pod \"crc-debug-w8n8b\" (UID: \"d1f305fb-4f0f-49d6-84f4-78ea6c65956d\") " pod="openshift-must-gather-gdrsm/crc-debug-w8n8b" Nov 25 12:48:36 crc kubenswrapper[4706]: I1125 12:48:36.413560 4706 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-gdrsm/crc-debug-w8n8b" Nov 25 12:48:36 crc kubenswrapper[4706]: I1125 12:48:36.799602 4706 generic.go:334] "Generic (PLEG): container finished" podID="d1f305fb-4f0f-49d6-84f4-78ea6c65956d" containerID="1368b426c1969cfea7152036fff9955145b851f1c87fe254d0043345f0892eb5" exitCode=0 Nov 25 12:48:36 crc kubenswrapper[4706]: I1125 12:48:36.799682 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-gdrsm/crc-debug-w8n8b" event={"ID":"d1f305fb-4f0f-49d6-84f4-78ea6c65956d","Type":"ContainerDied","Data":"1368b426c1969cfea7152036fff9955145b851f1c87fe254d0043345f0892eb5"} Nov 25 12:48:36 crc kubenswrapper[4706]: I1125 12:48:36.800062 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-gdrsm/crc-debug-w8n8b" event={"ID":"d1f305fb-4f0f-49d6-84f4-78ea6c65956d","Type":"ContainerStarted","Data":"b7451666c3e9a7338d368bae710bb64222c87b8e797dc8d83426a72c2f055942"} Nov 25 12:48:37 crc kubenswrapper[4706]: I1125 12:48:37.387289 4706 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-gdrsm/crc-debug-w8n8b"] Nov 25 12:48:37 crc kubenswrapper[4706]: I1125 
12:48:37.399838 4706 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-gdrsm/crc-debug-w8n8b"] Nov 25 12:48:37 crc kubenswrapper[4706]: I1125 12:48:37.934076 4706 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-gdrsm/crc-debug-w8n8b" Nov 25 12:48:37 crc kubenswrapper[4706]: I1125 12:48:37.985318 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bxgss\" (UniqueName: \"kubernetes.io/projected/d1f305fb-4f0f-49d6-84f4-78ea6c65956d-kube-api-access-bxgss\") pod \"d1f305fb-4f0f-49d6-84f4-78ea6c65956d\" (UID: \"d1f305fb-4f0f-49d6-84f4-78ea6c65956d\") " Nov 25 12:48:37 crc kubenswrapper[4706]: I1125 12:48:37.985491 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/d1f305fb-4f0f-49d6-84f4-78ea6c65956d-host\") pod \"d1f305fb-4f0f-49d6-84f4-78ea6c65956d\" (UID: \"d1f305fb-4f0f-49d6-84f4-78ea6c65956d\") " Nov 25 12:48:37 crc kubenswrapper[4706]: I1125 12:48:37.985542 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d1f305fb-4f0f-49d6-84f4-78ea6c65956d-host" (OuterVolumeSpecName: "host") pod "d1f305fb-4f0f-49d6-84f4-78ea6c65956d" (UID: "d1f305fb-4f0f-49d6-84f4-78ea6c65956d"). InnerVolumeSpecName "host". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 25 12:48:37 crc kubenswrapper[4706]: I1125 12:48:37.985995 4706 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/d1f305fb-4f0f-49d6-84f4-78ea6c65956d-host\") on node \"crc\" DevicePath \"\"" Nov 25 12:48:37 crc kubenswrapper[4706]: I1125 12:48:37.991505 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d1f305fb-4f0f-49d6-84f4-78ea6c65956d-kube-api-access-bxgss" (OuterVolumeSpecName: "kube-api-access-bxgss") pod "d1f305fb-4f0f-49d6-84f4-78ea6c65956d" (UID: "d1f305fb-4f0f-49d6-84f4-78ea6c65956d"). InnerVolumeSpecName "kube-api-access-bxgss". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 12:48:38 crc kubenswrapper[4706]: I1125 12:48:38.087332 4706 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bxgss\" (UniqueName: \"kubernetes.io/projected/d1f305fb-4f0f-49d6-84f4-78ea6c65956d-kube-api-access-bxgss\") on node \"crc\" DevicePath \"\"" Nov 25 12:48:38 crc kubenswrapper[4706]: I1125 12:48:38.551958 4706 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-gdrsm/crc-debug-7hx2z"] Nov 25 12:48:38 crc kubenswrapper[4706]: E1125 12:48:38.552605 4706 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d1f305fb-4f0f-49d6-84f4-78ea6c65956d" containerName="container-00" Nov 25 12:48:38 crc kubenswrapper[4706]: I1125 12:48:38.552621 4706 state_mem.go:107] "Deleted CPUSet assignment" podUID="d1f305fb-4f0f-49d6-84f4-78ea6c65956d" containerName="container-00" Nov 25 12:48:38 crc kubenswrapper[4706]: I1125 12:48:38.552835 4706 memory_manager.go:354] "RemoveStaleState removing state" podUID="d1f305fb-4f0f-49d6-84f4-78ea6c65956d" containerName="container-00" Nov 25 12:48:38 crc kubenswrapper[4706]: I1125 12:48:38.553490 4706 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-gdrsm/crc-debug-7hx2z" Nov 25 12:48:38 crc kubenswrapper[4706]: I1125 12:48:38.597564 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/ad02c6f1-655e-41c4-b6d3-5138dad1356e-host\") pod \"crc-debug-7hx2z\" (UID: \"ad02c6f1-655e-41c4-b6d3-5138dad1356e\") " pod="openshift-must-gather-gdrsm/crc-debug-7hx2z" Nov 25 12:48:38 crc kubenswrapper[4706]: I1125 12:48:38.597738 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hjtkj\" (UniqueName: \"kubernetes.io/projected/ad02c6f1-655e-41c4-b6d3-5138dad1356e-kube-api-access-hjtkj\") pod \"crc-debug-7hx2z\" (UID: \"ad02c6f1-655e-41c4-b6d3-5138dad1356e\") " pod="openshift-must-gather-gdrsm/crc-debug-7hx2z" Nov 25 12:48:38 crc kubenswrapper[4706]: I1125 12:48:38.699240 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hjtkj\" (UniqueName: \"kubernetes.io/projected/ad02c6f1-655e-41c4-b6d3-5138dad1356e-kube-api-access-hjtkj\") pod \"crc-debug-7hx2z\" (UID: \"ad02c6f1-655e-41c4-b6d3-5138dad1356e\") " pod="openshift-must-gather-gdrsm/crc-debug-7hx2z" Nov 25 12:48:38 crc kubenswrapper[4706]: I1125 12:48:38.699368 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/ad02c6f1-655e-41c4-b6d3-5138dad1356e-host\") pod \"crc-debug-7hx2z\" (UID: \"ad02c6f1-655e-41c4-b6d3-5138dad1356e\") " pod="openshift-must-gather-gdrsm/crc-debug-7hx2z" Nov 25 12:48:38 crc kubenswrapper[4706]: I1125 12:48:38.699471 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/ad02c6f1-655e-41c4-b6d3-5138dad1356e-host\") pod \"crc-debug-7hx2z\" (UID: \"ad02c6f1-655e-41c4-b6d3-5138dad1356e\") " pod="openshift-must-gather-gdrsm/crc-debug-7hx2z" Nov 25 12:48:38 crc 
kubenswrapper[4706]: I1125 12:48:38.718157 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hjtkj\" (UniqueName: \"kubernetes.io/projected/ad02c6f1-655e-41c4-b6d3-5138dad1356e-kube-api-access-hjtkj\") pod \"crc-debug-7hx2z\" (UID: \"ad02c6f1-655e-41c4-b6d3-5138dad1356e\") " pod="openshift-must-gather-gdrsm/crc-debug-7hx2z" Nov 25 12:48:38 crc kubenswrapper[4706]: I1125 12:48:38.823200 4706 scope.go:117] "RemoveContainer" containerID="1368b426c1969cfea7152036fff9955145b851f1c87fe254d0043345f0892eb5" Nov 25 12:48:38 crc kubenswrapper[4706]: I1125 12:48:38.823230 4706 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-gdrsm/crc-debug-w8n8b" Nov 25 12:48:38 crc kubenswrapper[4706]: I1125 12:48:38.872179 4706 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-gdrsm/crc-debug-7hx2z" Nov 25 12:48:38 crc kubenswrapper[4706]: W1125 12:48:38.900433 4706 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podad02c6f1_655e_41c4_b6d3_5138dad1356e.slice/crio-85164215a8befcc984b883fd2a18ffb29e9daacc5236d2d0c7aa917df52828a4 WatchSource:0}: Error finding container 85164215a8befcc984b883fd2a18ffb29e9daacc5236d2d0c7aa917df52828a4: Status 404 returned error can't find the container with id 85164215a8befcc984b883fd2a18ffb29e9daacc5236d2d0c7aa917df52828a4 Nov 25 12:48:39 crc kubenswrapper[4706]: I1125 12:48:39.849961 4706 generic.go:334] "Generic (PLEG): container finished" podID="ad02c6f1-655e-41c4-b6d3-5138dad1356e" containerID="51fbea837caa7f3aa2304867b7f1531ceded82c735bcb316611edd1d77c7b619" exitCode=0 Nov 25 12:48:39 crc kubenswrapper[4706]: I1125 12:48:39.850044 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-gdrsm/crc-debug-7hx2z" 
event={"ID":"ad02c6f1-655e-41c4-b6d3-5138dad1356e","Type":"ContainerDied","Data":"51fbea837caa7f3aa2304867b7f1531ceded82c735bcb316611edd1d77c7b619"} Nov 25 12:48:39 crc kubenswrapper[4706]: I1125 12:48:39.850134 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-gdrsm/crc-debug-7hx2z" event={"ID":"ad02c6f1-655e-41c4-b6d3-5138dad1356e","Type":"ContainerStarted","Data":"85164215a8befcc984b883fd2a18ffb29e9daacc5236d2d0c7aa917df52828a4"} Nov 25 12:48:39 crc kubenswrapper[4706]: I1125 12:48:39.898208 4706 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-gdrsm/crc-debug-7hx2z"] Nov 25 12:48:39 crc kubenswrapper[4706]: I1125 12:48:39.913057 4706 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-gdrsm/crc-debug-7hx2z"] Nov 25 12:48:39 crc kubenswrapper[4706]: I1125 12:48:39.933184 4706 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d1f305fb-4f0f-49d6-84f4-78ea6c65956d" path="/var/lib/kubelet/pods/d1f305fb-4f0f-49d6-84f4-78ea6c65956d/volumes" Nov 25 12:48:40 crc kubenswrapper[4706]: I1125 12:48:40.970602 4706 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-gdrsm/crc-debug-7hx2z" Nov 25 12:48:41 crc kubenswrapper[4706]: I1125 12:48:41.044091 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hjtkj\" (UniqueName: \"kubernetes.io/projected/ad02c6f1-655e-41c4-b6d3-5138dad1356e-kube-api-access-hjtkj\") pod \"ad02c6f1-655e-41c4-b6d3-5138dad1356e\" (UID: \"ad02c6f1-655e-41c4-b6d3-5138dad1356e\") " Nov 25 12:48:41 crc kubenswrapper[4706]: I1125 12:48:41.044323 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/ad02c6f1-655e-41c4-b6d3-5138dad1356e-host\") pod \"ad02c6f1-655e-41c4-b6d3-5138dad1356e\" (UID: \"ad02c6f1-655e-41c4-b6d3-5138dad1356e\") " Nov 25 12:48:41 crc kubenswrapper[4706]: I1125 12:48:41.044973 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ad02c6f1-655e-41c4-b6d3-5138dad1356e-host" (OuterVolumeSpecName: "host") pod "ad02c6f1-655e-41c4-b6d3-5138dad1356e" (UID: "ad02c6f1-655e-41c4-b6d3-5138dad1356e"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 25 12:48:41 crc kubenswrapper[4706]: I1125 12:48:41.045831 4706 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/ad02c6f1-655e-41c4-b6d3-5138dad1356e-host\") on node \"crc\" DevicePath \"\"" Nov 25 12:48:41 crc kubenswrapper[4706]: I1125 12:48:41.050479 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ad02c6f1-655e-41c4-b6d3-5138dad1356e-kube-api-access-hjtkj" (OuterVolumeSpecName: "kube-api-access-hjtkj") pod "ad02c6f1-655e-41c4-b6d3-5138dad1356e" (UID: "ad02c6f1-655e-41c4-b6d3-5138dad1356e"). InnerVolumeSpecName "kube-api-access-hjtkj". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 12:48:41 crc kubenswrapper[4706]: I1125 12:48:41.149036 4706 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hjtkj\" (UniqueName: \"kubernetes.io/projected/ad02c6f1-655e-41c4-b6d3-5138dad1356e-kube-api-access-hjtkj\") on node \"crc\" DevicePath \"\"" Nov 25 12:48:41 crc kubenswrapper[4706]: I1125 12:48:41.876932 4706 scope.go:117] "RemoveContainer" containerID="51fbea837caa7f3aa2304867b7f1531ceded82c735bcb316611edd1d77c7b619" Nov 25 12:48:41 crc kubenswrapper[4706]: I1125 12:48:41.876997 4706 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-gdrsm/crc-debug-7hx2z" Nov 25 12:48:41 crc kubenswrapper[4706]: I1125 12:48:41.940577 4706 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ad02c6f1-655e-41c4-b6d3-5138dad1356e" path="/var/lib/kubelet/pods/ad02c6f1-655e-41c4-b6d3-5138dad1356e/volumes" Nov 25 12:48:43 crc kubenswrapper[4706]: I1125 12:48:43.238811 4706 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-njh9w"] Nov 25 12:48:43 crc kubenswrapper[4706]: E1125 12:48:43.239463 4706 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ad02c6f1-655e-41c4-b6d3-5138dad1356e" containerName="container-00" Nov 25 12:48:43 crc kubenswrapper[4706]: I1125 12:48:43.239476 4706 state_mem.go:107] "Deleted CPUSet assignment" podUID="ad02c6f1-655e-41c4-b6d3-5138dad1356e" containerName="container-00" Nov 25 12:48:43 crc kubenswrapper[4706]: I1125 12:48:43.239702 4706 memory_manager.go:354] "RemoveStaleState removing state" podUID="ad02c6f1-655e-41c4-b6d3-5138dad1356e" containerName="container-00" Nov 25 12:48:43 crc kubenswrapper[4706]: I1125 12:48:43.241031 4706 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-njh9w" Nov 25 12:48:43 crc kubenswrapper[4706]: I1125 12:48:43.250359 4706 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-njh9w"] Nov 25 12:48:43 crc kubenswrapper[4706]: I1125 12:48:43.388393 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/755fd1a7-2b9b-497f-af7d-81ff7b55bceb-catalog-content\") pod \"redhat-operators-njh9w\" (UID: \"755fd1a7-2b9b-497f-af7d-81ff7b55bceb\") " pod="openshift-marketplace/redhat-operators-njh9w" Nov 25 12:48:43 crc kubenswrapper[4706]: I1125 12:48:43.388534 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6495t\" (UniqueName: \"kubernetes.io/projected/755fd1a7-2b9b-497f-af7d-81ff7b55bceb-kube-api-access-6495t\") pod \"redhat-operators-njh9w\" (UID: \"755fd1a7-2b9b-497f-af7d-81ff7b55bceb\") " pod="openshift-marketplace/redhat-operators-njh9w" Nov 25 12:48:43 crc kubenswrapper[4706]: I1125 12:48:43.389099 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/755fd1a7-2b9b-497f-af7d-81ff7b55bceb-utilities\") pod \"redhat-operators-njh9w\" (UID: \"755fd1a7-2b9b-497f-af7d-81ff7b55bceb\") " pod="openshift-marketplace/redhat-operators-njh9w" Nov 25 12:48:43 crc kubenswrapper[4706]: I1125 12:48:43.490136 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/755fd1a7-2b9b-497f-af7d-81ff7b55bceb-utilities\") pod \"redhat-operators-njh9w\" (UID: \"755fd1a7-2b9b-497f-af7d-81ff7b55bceb\") " pod="openshift-marketplace/redhat-operators-njh9w" Nov 25 12:48:43 crc kubenswrapper[4706]: I1125 12:48:43.490196 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/755fd1a7-2b9b-497f-af7d-81ff7b55bceb-catalog-content\") pod \"redhat-operators-njh9w\" (UID: \"755fd1a7-2b9b-497f-af7d-81ff7b55bceb\") " pod="openshift-marketplace/redhat-operators-njh9w" Nov 25 12:48:43 crc kubenswrapper[4706]: I1125 12:48:43.490237 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6495t\" (UniqueName: \"kubernetes.io/projected/755fd1a7-2b9b-497f-af7d-81ff7b55bceb-kube-api-access-6495t\") pod \"redhat-operators-njh9w\" (UID: \"755fd1a7-2b9b-497f-af7d-81ff7b55bceb\") " pod="openshift-marketplace/redhat-operators-njh9w" Nov 25 12:48:43 crc kubenswrapper[4706]: I1125 12:48:43.490736 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/755fd1a7-2b9b-497f-af7d-81ff7b55bceb-utilities\") pod \"redhat-operators-njh9w\" (UID: \"755fd1a7-2b9b-497f-af7d-81ff7b55bceb\") " pod="openshift-marketplace/redhat-operators-njh9w" Nov 25 12:48:43 crc kubenswrapper[4706]: I1125 12:48:43.490742 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/755fd1a7-2b9b-497f-af7d-81ff7b55bceb-catalog-content\") pod \"redhat-operators-njh9w\" (UID: \"755fd1a7-2b9b-497f-af7d-81ff7b55bceb\") " pod="openshift-marketplace/redhat-operators-njh9w" Nov 25 12:48:43 crc kubenswrapper[4706]: I1125 12:48:43.516348 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6495t\" (UniqueName: \"kubernetes.io/projected/755fd1a7-2b9b-497f-af7d-81ff7b55bceb-kube-api-access-6495t\") pod \"redhat-operators-njh9w\" (UID: \"755fd1a7-2b9b-497f-af7d-81ff7b55bceb\") " pod="openshift-marketplace/redhat-operators-njh9w" Nov 25 12:48:43 crc kubenswrapper[4706]: I1125 12:48:43.564088 4706 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-njh9w" Nov 25 12:48:44 crc kubenswrapper[4706]: I1125 12:48:44.104744 4706 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-njh9w"] Nov 25 12:48:44 crc kubenswrapper[4706]: I1125 12:48:44.911514 4706 generic.go:334] "Generic (PLEG): container finished" podID="755fd1a7-2b9b-497f-af7d-81ff7b55bceb" containerID="c9e5b40a24f32f67a3e9d38114aebe6418fc01cf87d05fc1758d973e6d15522d" exitCode=0 Nov 25 12:48:44 crc kubenswrapper[4706]: I1125 12:48:44.911605 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-njh9w" event={"ID":"755fd1a7-2b9b-497f-af7d-81ff7b55bceb","Type":"ContainerDied","Data":"c9e5b40a24f32f67a3e9d38114aebe6418fc01cf87d05fc1758d973e6d15522d"} Nov 25 12:48:44 crc kubenswrapper[4706]: I1125 12:48:44.911965 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-njh9w" event={"ID":"755fd1a7-2b9b-497f-af7d-81ff7b55bceb","Type":"ContainerStarted","Data":"76933c0fe77ec3872f3f651653113960263f439d2b33e292611e151546cc785b"} Nov 25 12:48:45 crc kubenswrapper[4706]: I1125 12:48:45.931357 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-njh9w" event={"ID":"755fd1a7-2b9b-497f-af7d-81ff7b55bceb","Type":"ContainerStarted","Data":"cddf261f7f3e27e23247400707a1084e61f38cab82a387d9760f79bbc2166b24"} Nov 25 12:48:48 crc kubenswrapper[4706]: I1125 12:48:48.950351 4706 generic.go:334] "Generic (PLEG): container finished" podID="755fd1a7-2b9b-497f-af7d-81ff7b55bceb" containerID="cddf261f7f3e27e23247400707a1084e61f38cab82a387d9760f79bbc2166b24" exitCode=0 Nov 25 12:48:48 crc kubenswrapper[4706]: I1125 12:48:48.950413 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-njh9w" 
event={"ID":"755fd1a7-2b9b-497f-af7d-81ff7b55bceb","Type":"ContainerDied","Data":"cddf261f7f3e27e23247400707a1084e61f38cab82a387d9760f79bbc2166b24"} Nov 25 12:48:49 crc kubenswrapper[4706]: I1125 12:48:49.961944 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-njh9w" event={"ID":"755fd1a7-2b9b-497f-af7d-81ff7b55bceb","Type":"ContainerStarted","Data":"bbc002d7526f08b824410e3bc46d05ffa11a5abf2fd7fecff47716f23c1f52de"} Nov 25 12:48:49 crc kubenswrapper[4706]: I1125 12:48:49.988578 4706 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-njh9w" podStartSLOduration=2.540639238 podStartE2EDuration="6.988561785s" podCreationTimestamp="2025-11-25 12:48:43 +0000 UTC" firstStartedPulling="2025-11-25 12:48:44.914238001 +0000 UTC m=+4333.828795382" lastFinishedPulling="2025-11-25 12:48:49.362160548 +0000 UTC m=+4338.276717929" observedRunningTime="2025-11-25 12:48:49.979794155 +0000 UTC m=+4338.894351536" watchObservedRunningTime="2025-11-25 12:48:49.988561785 +0000 UTC m=+4338.903119166" Nov 25 12:48:53 crc kubenswrapper[4706]: I1125 12:48:53.565157 4706 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-njh9w" Nov 25 12:48:53 crc kubenswrapper[4706]: I1125 12:48:53.565720 4706 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-njh9w" Nov 25 12:48:54 crc kubenswrapper[4706]: I1125 12:48:54.647492 4706 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-njh9w" podUID="755fd1a7-2b9b-497f-af7d-81ff7b55bceb" containerName="registry-server" probeResult="failure" output=< Nov 25 12:48:54 crc kubenswrapper[4706]: timeout: failed to connect service ":50051" within 1s Nov 25 12:48:54 crc kubenswrapper[4706]: > Nov 25 12:49:01 crc kubenswrapper[4706]: I1125 12:49:01.125670 4706 patch_prober.go:28] interesting 
pod/machine-config-daemon-dhfpm container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 25 12:49:01 crc kubenswrapper[4706]: I1125 12:49:01.128963 4706 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dhfpm" podUID="0930887a-320c-4506-8c9c-f94d6d64516a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 25 12:49:03 crc kubenswrapper[4706]: I1125 12:49:03.612845 4706 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-njh9w" Nov 25 12:49:03 crc kubenswrapper[4706]: I1125 12:49:03.663145 4706 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-njh9w" Nov 25 12:49:03 crc kubenswrapper[4706]: I1125 12:49:03.851007 4706 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-njh9w"] Nov 25 12:49:05 crc kubenswrapper[4706]: I1125 12:49:05.100970 4706 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-njh9w" podUID="755fd1a7-2b9b-497f-af7d-81ff7b55bceb" containerName="registry-server" containerID="cri-o://bbc002d7526f08b824410e3bc46d05ffa11a5abf2fd7fecff47716f23c1f52de" gracePeriod=2 Nov 25 12:49:05 crc kubenswrapper[4706]: I1125 12:49:05.169350 4706 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-85c7db76fd-f64jq_500c37cc-45dd-444f-a630-19356ac8d1e3/barbican-api/0.log" Nov 25 12:49:05 crc kubenswrapper[4706]: I1125 12:49:05.176158 4706 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_barbican-api-85c7db76fd-f64jq_500c37cc-45dd-444f-a630-19356ac8d1e3/barbican-api-log/0.log" Nov 25 12:49:05 crc kubenswrapper[4706]: I1125 12:49:05.424458 4706 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-6c9c496566-jrgpl_2ea4caef-6e53-42ac-9202-cf4b05a28041/barbican-keystone-listener-log/0.log" Nov 25 12:49:05 crc kubenswrapper[4706]: I1125 12:49:05.429004 4706 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-6c9c496566-jrgpl_2ea4caef-6e53-42ac-9202-cf4b05a28041/barbican-keystone-listener/0.log" Nov 25 12:49:05 crc kubenswrapper[4706]: I1125 12:49:05.580532 4706 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-7fc64dc5d7-m6cqm_ac9c3625-3935-48b4-abf3-a8330d99152d/barbican-worker/0.log" Nov 25 12:49:05 crc kubenswrapper[4706]: I1125 12:49:05.733275 4706 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-njh9w" Nov 25 12:49:05 crc kubenswrapper[4706]: I1125 12:49:05.753942 4706 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-7fc64dc5d7-m6cqm_ac9c3625-3935-48b4-abf3-a8330d99152d/barbican-worker-log/0.log" Nov 25 12:49:05 crc kubenswrapper[4706]: I1125 12:49:05.776358 4706 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_bootstrap-edpm-deployment-openstack-edpm-ipam-ntv4r_50dff0a2-b50d-43ee-8951-e49958b3cd5a/bootstrap-edpm-deployment-openstack-edpm-ipam/0.log" Nov 25 12:49:05 crc kubenswrapper[4706]: I1125 12:49:05.840986 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6495t\" (UniqueName: \"kubernetes.io/projected/755fd1a7-2b9b-497f-af7d-81ff7b55bceb-kube-api-access-6495t\") pod \"755fd1a7-2b9b-497f-af7d-81ff7b55bceb\" (UID: \"755fd1a7-2b9b-497f-af7d-81ff7b55bceb\") " Nov 25 12:49:05 crc kubenswrapper[4706]: I1125 12:49:05.841071 
4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/755fd1a7-2b9b-497f-af7d-81ff7b55bceb-utilities\") pod \"755fd1a7-2b9b-497f-af7d-81ff7b55bceb\" (UID: \"755fd1a7-2b9b-497f-af7d-81ff7b55bceb\") " Nov 25 12:49:05 crc kubenswrapper[4706]: I1125 12:49:05.841119 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/755fd1a7-2b9b-497f-af7d-81ff7b55bceb-catalog-content\") pod \"755fd1a7-2b9b-497f-af7d-81ff7b55bceb\" (UID: \"755fd1a7-2b9b-497f-af7d-81ff7b55bceb\") " Nov 25 12:49:05 crc kubenswrapper[4706]: I1125 12:49:05.842053 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/755fd1a7-2b9b-497f-af7d-81ff7b55bceb-utilities" (OuterVolumeSpecName: "utilities") pod "755fd1a7-2b9b-497f-af7d-81ff7b55bceb" (UID: "755fd1a7-2b9b-497f-af7d-81ff7b55bceb"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 12:49:05 crc kubenswrapper[4706]: I1125 12:49:05.847874 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/755fd1a7-2b9b-497f-af7d-81ff7b55bceb-kube-api-access-6495t" (OuterVolumeSpecName: "kube-api-access-6495t") pod "755fd1a7-2b9b-497f-af7d-81ff7b55bceb" (UID: "755fd1a7-2b9b-497f-af7d-81ff7b55bceb"). InnerVolumeSpecName "kube-api-access-6495t". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 12:49:05 crc kubenswrapper[4706]: I1125 12:49:05.943465 4706 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6495t\" (UniqueName: \"kubernetes.io/projected/755fd1a7-2b9b-497f-af7d-81ff7b55bceb-kube-api-access-6495t\") on node \"crc\" DevicePath \"\"" Nov 25 12:49:05 crc kubenswrapper[4706]: I1125 12:49:05.943503 4706 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/755fd1a7-2b9b-497f-af7d-81ff7b55bceb-utilities\") on node \"crc\" DevicePath \"\"" Nov 25 12:49:05 crc kubenswrapper[4706]: I1125 12:49:05.944875 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/755fd1a7-2b9b-497f-af7d-81ff7b55bceb-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "755fd1a7-2b9b-497f-af7d-81ff7b55bceb" (UID: "755fd1a7-2b9b-497f-af7d-81ff7b55bceb"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 12:49:05 crc kubenswrapper[4706]: I1125 12:49:05.998072 4706 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_340a9043-f74e-40cb-aeea-bbcabe4d865f/ceilometer-central-agent/0.log" Nov 25 12:49:06 crc kubenswrapper[4706]: I1125 12:49:06.045865 4706 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/755fd1a7-2b9b-497f-af7d-81ff7b55bceb-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 25 12:49:06 crc kubenswrapper[4706]: I1125 12:49:06.083247 4706 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_340a9043-f74e-40cb-aeea-bbcabe4d865f/proxy-httpd/0.log" Nov 25 12:49:06 crc kubenswrapper[4706]: I1125 12:49:06.115210 4706 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_340a9043-f74e-40cb-aeea-bbcabe4d865f/ceilometer-notification-agent/0.log" Nov 25 12:49:06 crc 
kubenswrapper[4706]: I1125 12:49:06.118343 4706 generic.go:334] "Generic (PLEG): container finished" podID="755fd1a7-2b9b-497f-af7d-81ff7b55bceb" containerID="bbc002d7526f08b824410e3bc46d05ffa11a5abf2fd7fecff47716f23c1f52de" exitCode=0 Nov 25 12:49:06 crc kubenswrapper[4706]: I1125 12:49:06.118409 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-njh9w" event={"ID":"755fd1a7-2b9b-497f-af7d-81ff7b55bceb","Type":"ContainerDied","Data":"bbc002d7526f08b824410e3bc46d05ffa11a5abf2fd7fecff47716f23c1f52de"} Nov 25 12:49:06 crc kubenswrapper[4706]: I1125 12:49:06.118466 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-njh9w" event={"ID":"755fd1a7-2b9b-497f-af7d-81ff7b55bceb","Type":"ContainerDied","Data":"76933c0fe77ec3872f3f651653113960263f439d2b33e292611e151546cc785b"} Nov 25 12:49:06 crc kubenswrapper[4706]: I1125 12:49:06.118491 4706 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-njh9w" Nov 25 12:49:06 crc kubenswrapper[4706]: I1125 12:49:06.118498 4706 scope.go:117] "RemoveContainer" containerID="bbc002d7526f08b824410e3bc46d05ffa11a5abf2fd7fecff47716f23c1f52de" Nov 25 12:49:06 crc kubenswrapper[4706]: I1125 12:49:06.145117 4706 scope.go:117] "RemoveContainer" containerID="cddf261f7f3e27e23247400707a1084e61f38cab82a387d9760f79bbc2166b24" Nov 25 12:49:06 crc kubenswrapper[4706]: I1125 12:49:06.171572 4706 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-njh9w"] Nov 25 12:49:06 crc kubenswrapper[4706]: I1125 12:49:06.194780 4706 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-njh9w"] Nov 25 12:49:06 crc kubenswrapper[4706]: I1125 12:49:06.207527 4706 scope.go:117] "RemoveContainer" containerID="c9e5b40a24f32f67a3e9d38114aebe6418fc01cf87d05fc1758d973e6d15522d" Nov 25 12:49:06 crc kubenswrapper[4706]: I1125 
12:49:06.219881 4706 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_340a9043-f74e-40cb-aeea-bbcabe4d865f/sg-core/0.log" Nov 25 12:49:06 crc kubenswrapper[4706]: I1125 12:49:06.257386 4706 scope.go:117] "RemoveContainer" containerID="bbc002d7526f08b824410e3bc46d05ffa11a5abf2fd7fecff47716f23c1f52de" Nov 25 12:49:06 crc kubenswrapper[4706]: E1125 12:49:06.259626 4706 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bbc002d7526f08b824410e3bc46d05ffa11a5abf2fd7fecff47716f23c1f52de\": container with ID starting with bbc002d7526f08b824410e3bc46d05ffa11a5abf2fd7fecff47716f23c1f52de not found: ID does not exist" containerID="bbc002d7526f08b824410e3bc46d05ffa11a5abf2fd7fecff47716f23c1f52de" Nov 25 12:49:06 crc kubenswrapper[4706]: I1125 12:49:06.259694 4706 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bbc002d7526f08b824410e3bc46d05ffa11a5abf2fd7fecff47716f23c1f52de"} err="failed to get container status \"bbc002d7526f08b824410e3bc46d05ffa11a5abf2fd7fecff47716f23c1f52de\": rpc error: code = NotFound desc = could not find container \"bbc002d7526f08b824410e3bc46d05ffa11a5abf2fd7fecff47716f23c1f52de\": container with ID starting with bbc002d7526f08b824410e3bc46d05ffa11a5abf2fd7fecff47716f23c1f52de not found: ID does not exist" Nov 25 12:49:06 crc kubenswrapper[4706]: I1125 12:49:06.259742 4706 scope.go:117] "RemoveContainer" containerID="cddf261f7f3e27e23247400707a1084e61f38cab82a387d9760f79bbc2166b24" Nov 25 12:49:06 crc kubenswrapper[4706]: E1125 12:49:06.262125 4706 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cddf261f7f3e27e23247400707a1084e61f38cab82a387d9760f79bbc2166b24\": container with ID starting with cddf261f7f3e27e23247400707a1084e61f38cab82a387d9760f79bbc2166b24 not found: ID does not exist" 
containerID="cddf261f7f3e27e23247400707a1084e61f38cab82a387d9760f79bbc2166b24" Nov 25 12:49:06 crc kubenswrapper[4706]: I1125 12:49:06.262160 4706 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cddf261f7f3e27e23247400707a1084e61f38cab82a387d9760f79bbc2166b24"} err="failed to get container status \"cddf261f7f3e27e23247400707a1084e61f38cab82a387d9760f79bbc2166b24\": rpc error: code = NotFound desc = could not find container \"cddf261f7f3e27e23247400707a1084e61f38cab82a387d9760f79bbc2166b24\": container with ID starting with cddf261f7f3e27e23247400707a1084e61f38cab82a387d9760f79bbc2166b24 not found: ID does not exist" Nov 25 12:49:06 crc kubenswrapper[4706]: I1125 12:49:06.262180 4706 scope.go:117] "RemoveContainer" containerID="c9e5b40a24f32f67a3e9d38114aebe6418fc01cf87d05fc1758d973e6d15522d" Nov 25 12:49:06 crc kubenswrapper[4706]: E1125 12:49:06.263048 4706 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c9e5b40a24f32f67a3e9d38114aebe6418fc01cf87d05fc1758d973e6d15522d\": container with ID starting with c9e5b40a24f32f67a3e9d38114aebe6418fc01cf87d05fc1758d973e6d15522d not found: ID does not exist" containerID="c9e5b40a24f32f67a3e9d38114aebe6418fc01cf87d05fc1758d973e6d15522d" Nov 25 12:49:06 crc kubenswrapper[4706]: I1125 12:49:06.263235 4706 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c9e5b40a24f32f67a3e9d38114aebe6418fc01cf87d05fc1758d973e6d15522d"} err="failed to get container status \"c9e5b40a24f32f67a3e9d38114aebe6418fc01cf87d05fc1758d973e6d15522d\": rpc error: code = NotFound desc = could not find container \"c9e5b40a24f32f67a3e9d38114aebe6418fc01cf87d05fc1758d973e6d15522d\": container with ID starting with c9e5b40a24f32f67a3e9d38114aebe6418fc01cf87d05fc1758d973e6d15522d not found: ID does not exist" Nov 25 12:49:06 crc kubenswrapper[4706]: I1125 12:49:06.401478 4706 log.go:25] "Finished 
parsing log file" path="/var/log/pods/openstack_cinder-api-0_3f35fbd6-a7c7-4d44-af30-601512a5dfa4/cinder-api-log/0.log" Nov 25 12:49:06 crc kubenswrapper[4706]: I1125 12:49:06.476770 4706 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_3f35fbd6-a7c7-4d44-af30-601512a5dfa4/cinder-api/0.log" Nov 25 12:49:06 crc kubenswrapper[4706]: I1125 12:49:06.607968 4706 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_f4dd78e0-575d-4188-b6f5-17ab8a12383c/cinder-scheduler/0.log" Nov 25 12:49:06 crc kubenswrapper[4706]: I1125 12:49:06.680766 4706 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_f4dd78e0-575d-4188-b6f5-17ab8a12383c/probe/0.log" Nov 25 12:49:06 crc kubenswrapper[4706]: I1125 12:49:06.848103 4706 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-network-edpm-deployment-openstack-edpm-ipam-wtp98_81138548-0b1d-43b6-af7c-fdf31598a28d/configure-network-edpm-deployment-openstack-edpm-ipam/0.log" Nov 25 12:49:06 crc kubenswrapper[4706]: I1125 12:49:06.918243 4706 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-os-edpm-deployment-openstack-edpm-ipam-h4crd_04cc6fd1-5a4f-4d7d-aed4-849709bb005d/configure-os-edpm-deployment-openstack-edpm-ipam/0.log" Nov 25 12:49:07 crc kubenswrapper[4706]: I1125 12:49:07.078605 4706 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-55478c4467-777cf_3ab6dcdf-bba1-4c4c-aa91-47a06fd22366/init/0.log" Nov 25 12:49:07 crc kubenswrapper[4706]: I1125 12:49:07.301760 4706 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_download-cache-edpm-deployment-openstack-edpm-ipam-9hvc8_c905bf42-3156-4c1f-8f93-4ab4c0141fdd/download-cache-edpm-deployment-openstack-edpm-ipam/0.log" Nov 25 12:49:07 crc kubenswrapper[4706]: I1125 12:49:07.338422 4706 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_dnsmasq-dns-55478c4467-777cf_3ab6dcdf-bba1-4c4c-aa91-47a06fd22366/dnsmasq-dns/0.log" Nov 25 12:49:07 crc kubenswrapper[4706]: I1125 12:49:07.360060 4706 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-55478c4467-777cf_3ab6dcdf-bba1-4c4c-aa91-47a06fd22366/init/0.log" Nov 25 12:49:07 crc kubenswrapper[4706]: I1125 12:49:07.562259 4706 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_d0c5bfae-397f-432d-bdb6-8bb27d43f68c/glance-httpd/0.log" Nov 25 12:49:07 crc kubenswrapper[4706]: I1125 12:49:07.569783 4706 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_d0c5bfae-397f-432d-bdb6-8bb27d43f68c/glance-log/0.log" Nov 25 12:49:07 crc kubenswrapper[4706]: I1125 12:49:07.741953 4706 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_56ae92e0-a5ff-4b66-b471-6e38781e51da/glance-httpd/0.log" Nov 25 12:49:07 crc kubenswrapper[4706]: I1125 12:49:07.764325 4706 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_56ae92e0-a5ff-4b66-b471-6e38781e51da/glance-log/0.log" Nov 25 12:49:07 crc kubenswrapper[4706]: I1125 12:49:07.939675 4706 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="755fd1a7-2b9b-497f-af7d-81ff7b55bceb" path="/var/lib/kubelet/pods/755fd1a7-2b9b-497f-af7d-81ff7b55bceb/volumes" Nov 25 12:49:07 crc kubenswrapper[4706]: I1125 12:49:07.981495 4706 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-85664bf4f6-ws67w_66bfb4a4-e60d-4f75-ad0b-1ad3e8ff1bf5/horizon/0.log" Nov 25 12:49:08 crc kubenswrapper[4706]: I1125 12:49:08.084865 4706 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-certs-edpm-deployment-openstack-edpm-ipam-595gj_baaa73b2-135d-4ce5-8e1a-4c7ffde4e639/install-certs-edpm-deployment-openstack-edpm-ipam/0.log" Nov 25 12:49:08 crc 
kubenswrapper[4706]: I1125 12:49:08.251852 4706 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-os-edpm-deployment-openstack-edpm-ipam-zlncj_5f5a244b-95ce-4443-9951-780763117499/install-os-edpm-deployment-openstack-edpm-ipam/0.log" Nov 25 12:49:08 crc kubenswrapper[4706]: I1125 12:49:08.362197 4706 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-85664bf4f6-ws67w_66bfb4a4-e60d-4f75-ad0b-1ad3e8ff1bf5/horizon-log/0.log" Nov 25 12:49:08 crc kubenswrapper[4706]: I1125 12:49:08.526223 4706 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-cron-29401201-6qr5x_6e578ce4-062a-47d6-ad7e-c1e36d257077/keystone-cron/0.log" Nov 25 12:49:08 crc kubenswrapper[4706]: I1125 12:49:08.592014 4706 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-854bff779d-k8bjv_df1ddb84-cafd-4f7f-b1cf-c6fb37b7e92e/keystone-api/0.log" Nov 25 12:49:08 crc kubenswrapper[4706]: I1125 12:49:08.739213 4706 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_kube-state-metrics-0_04e7a5d0-b5fe-4a58-b015-339cc1218c6e/kube-state-metrics/3.log" Nov 25 12:49:08 crc kubenswrapper[4706]: I1125 12:49:08.800120 4706 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_kube-state-metrics-0_04e7a5d0-b5fe-4a58-b015-339cc1218c6e/kube-state-metrics/2.log" Nov 25 12:49:08 crc kubenswrapper[4706]: I1125 12:49:08.844034 4706 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_libvirt-edpm-deployment-openstack-edpm-ipam-g6fp7_90e48cbb-dd1b-466b-a72f-5e2913554a5b/libvirt-edpm-deployment-openstack-edpm-ipam/0.log" Nov 25 12:49:09 crc kubenswrapper[4706]: I1125 12:49:09.202592 4706 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-7964f7f8cc-7zjzw_b108b69d-0dd8-4945-aa38-c2caee99bac1/neutron-httpd/0.log" Nov 25 12:49:09 crc kubenswrapper[4706]: I1125 12:49:09.240366 4706 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_neutron-metadata-edpm-deployment-openstack-edpm-ipam-q68jk_5686661c-4510-41ab-aed3-7ab5fa576b60/neutron-metadata-edpm-deployment-openstack-edpm-ipam/0.log" Nov 25 12:49:09 crc kubenswrapper[4706]: I1125 12:49:09.274963 4706 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-7964f7f8cc-7zjzw_b108b69d-0dd8-4945-aa38-c2caee99bac1/neutron-api/0.log" Nov 25 12:49:09 crc kubenswrapper[4706]: I1125 12:49:09.888952 4706 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_0608285b-d97c-42b6-abc5-32cff6509d9e/nova-api-log/0.log" Nov 25 12:49:09 crc kubenswrapper[4706]: I1125 12:49:09.914858 4706 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell0-conductor-0_f550fc56-7c91-4ca6-b10e-6394166b34c8/nova-cell0-conductor-conductor/0.log" Nov 25 12:49:10 crc kubenswrapper[4706]: I1125 12:49:10.183457 4706 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-conductor-0_125dfab1-ad73-40ed-bd12-3e061e6b0ec2/nova-cell1-conductor-conductor/0.log" Nov 25 12:49:10 crc kubenswrapper[4706]: I1125 12:49:10.394572 4706 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_0608285b-d97c-42b6-abc5-32cff6509d9e/nova-api-api/0.log" Nov 25 12:49:10 crc kubenswrapper[4706]: I1125 12:49:10.443249 4706 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-novncproxy-0_562e456e-a719-47cb-b220-06ccb6fc06cc/nova-cell1-novncproxy-novncproxy/0.log" Nov 25 12:49:10 crc kubenswrapper[4706]: I1125 12:49:10.470330 4706 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-edpm-deployment-openstack-edpm-ipam-67xt7_f74a1106-ae1e-464c-a761-dc47c54c361c/nova-edpm-deployment-openstack-edpm-ipam/0.log" Nov 25 12:49:10 crc kubenswrapper[4706]: I1125 12:49:10.653453 4706 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_nova-metadata-0_4169a8fb-29dd-4d0a-851f-58055dcfff18/nova-metadata-log/0.log" Nov 25 12:49:10 crc kubenswrapper[4706]: I1125 12:49:10.995958 4706 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_49e77cd2-5940-4ae6-9418-d069ce012ad7/mysql-bootstrap/0.log" Nov 25 12:49:11 crc kubenswrapper[4706]: I1125 12:49:11.136894 4706 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_49e77cd2-5940-4ae6-9418-d069ce012ad7/mysql-bootstrap/0.log" Nov 25 12:49:11 crc kubenswrapper[4706]: I1125 12:49:11.156095 4706 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-scheduler-0_dea70033-299d-4ca8-9249-c909449f24c9/nova-scheduler-scheduler/0.log" Nov 25 12:49:11 crc kubenswrapper[4706]: I1125 12:49:11.188629 4706 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_49e77cd2-5940-4ae6-9418-d069ce012ad7/galera/0.log" Nov 25 12:49:11 crc kubenswrapper[4706]: I1125 12:49:11.387829 4706 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_64ca6766-8491-40bc-a14e-eb866edf3fe8/mysql-bootstrap/0.log" Nov 25 12:49:11 crc kubenswrapper[4706]: I1125 12:49:11.588725 4706 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_64ca6766-8491-40bc-a14e-eb866edf3fe8/mysql-bootstrap/0.log" Nov 25 12:49:11 crc kubenswrapper[4706]: I1125 12:49:11.613957 4706 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_64ca6766-8491-40bc-a14e-eb866edf3fe8/galera/0.log" Nov 25 12:49:11 crc kubenswrapper[4706]: I1125 12:49:11.815007 4706 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstackclient_b8a85f10-0dcd-42f8-a4bc-f0b25f59cfe8/openstackclient/0.log" Nov 25 12:49:11 crc kubenswrapper[4706]: I1125 12:49:11.874499 4706 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_ovn-controller-kd65v_23b72526-ef77-4128-a880-6df46f5db440/ovn-controller/0.log" Nov 25 12:49:12 crc kubenswrapper[4706]: I1125 12:49:12.066050 4706 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-9sjfp_39f1459f-1764-4a48-8363-b32ac9350cdb/openstack-network-exporter/0.log" Nov 25 12:49:12 crc kubenswrapper[4706]: I1125 12:49:12.262937 4706 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_4169a8fb-29dd-4d0a-851f-58055dcfff18/nova-metadata-metadata/0.log" Nov 25 12:49:12 crc kubenswrapper[4706]: I1125 12:49:12.301457 4706 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-q8rmg_a2035192-0066-4761-b5a8-2684c95f20ff/ovsdb-server-init/0.log" Nov 25 12:49:13 crc kubenswrapper[4706]: I1125 12:49:13.132651 4706 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-q8rmg_a2035192-0066-4761-b5a8-2684c95f20ff/ovs-vswitchd/0.log" Nov 25 12:49:13 crc kubenswrapper[4706]: I1125 12:49:13.136776 4706 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-q8rmg_a2035192-0066-4761-b5a8-2684c95f20ff/ovsdb-server-init/0.log" Nov 25 12:49:13 crc kubenswrapper[4706]: I1125 12:49:13.173033 4706 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-q8rmg_a2035192-0066-4761-b5a8-2684c95f20ff/ovsdb-server/0.log" Nov 25 12:49:13 crc kubenswrapper[4706]: I1125 12:49:13.330252 4706 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-edpm-deployment-openstack-edpm-ipam-6kxnq_97dd7a8b-3605-49a2-ad4d-72dd946605aa/ovn-edpm-deployment-openstack-edpm-ipam/0.log" Nov 25 12:49:13 crc kubenswrapper[4706]: I1125 12:49:13.334018 4706 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_655006b1-956d-49e9-b15f-c00cd945c024/openstack-network-exporter/0.log" Nov 25 12:49:13 crc kubenswrapper[4706]: I1125 
12:49:13.395911 4706 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_655006b1-956d-49e9-b15f-c00cd945c024/ovn-northd/0.log" Nov 25 12:49:13 crc kubenswrapper[4706]: I1125 12:49:13.577393 4706 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_3c49be9b-0e12-4db2-82be-3415441f57d4/openstack-network-exporter/0.log" Nov 25 12:49:13 crc kubenswrapper[4706]: I1125 12:49:13.596021 4706 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_3c49be9b-0e12-4db2-82be-3415441f57d4/ovsdbserver-nb/0.log" Nov 25 12:49:13 crc kubenswrapper[4706]: I1125 12:49:13.767177 4706 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_752cf7db-684f-4a5a-8a03-717e69810056/openstack-network-exporter/0.log" Nov 25 12:49:13 crc kubenswrapper[4706]: I1125 12:49:13.847488 4706 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_752cf7db-684f-4a5a-8a03-717e69810056/ovsdbserver-sb/0.log" Nov 25 12:49:13 crc kubenswrapper[4706]: I1125 12:49:13.950553 4706 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-5bfcb97b8-lmwjc_2dab0780-5792-4f20-9553-a780aa94ebba/placement-api/0.log" Nov 25 12:49:14 crc kubenswrapper[4706]: I1125 12:49:14.082116 4706 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_6ea2e87f-dc81-49cc-81a8-e08a8ed11f12/setup-container/0.log" Nov 25 12:49:14 crc kubenswrapper[4706]: I1125 12:49:14.085066 4706 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-5bfcb97b8-lmwjc_2dab0780-5792-4f20-9553-a780aa94ebba/placement-log/0.log" Nov 25 12:49:14 crc kubenswrapper[4706]: I1125 12:49:14.261813 4706 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_6ea2e87f-dc81-49cc-81a8-e08a8ed11f12/setup-container/0.log" Nov 25 12:49:14 crc kubenswrapper[4706]: I1125 12:49:14.283273 4706 log.go:25] 
"Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_6ea2e87f-dc81-49cc-81a8-e08a8ed11f12/rabbitmq/0.log" Nov 25 12:49:14 crc kubenswrapper[4706]: I1125 12:49:14.356891 4706 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_a9a6207a-78de-492d-8c88-9a1d2a6f703d/setup-container/0.log" Nov 25 12:49:15 crc kubenswrapper[4706]: I1125 12:49:15.096796 4706 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_a9a6207a-78de-492d-8c88-9a1d2a6f703d/setup-container/0.log" Nov 25 12:49:15 crc kubenswrapper[4706]: I1125 12:49:15.111324 4706 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_a9a6207a-78de-492d-8c88-9a1d2a6f703d/rabbitmq/0.log" Nov 25 12:49:15 crc kubenswrapper[4706]: I1125 12:49:15.120278 4706 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_reboot-os-edpm-deployment-openstack-edpm-ipam-29gdm_9357f592-809a-450b-b052-fbb438c6d98f/reboot-os-edpm-deployment-openstack-edpm-ipam/0.log" Nov 25 12:49:15 crc kubenswrapper[4706]: I1125 12:49:15.284950 4706 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_redhat-edpm-deployment-openstack-edpm-ipam-qn78f_b86d7293-ea09-42c5-948d-27c51a31d886/redhat-edpm-deployment-openstack-edpm-ipam/0.log" Nov 25 12:49:15 crc kubenswrapper[4706]: I1125 12:49:15.390345 4706 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_repo-setup-edpm-deployment-openstack-edpm-ipam-mtxnw_e0e1584c-f1bf-45e7-ac6c-2768ffc5c1c3/repo-setup-edpm-deployment-openstack-edpm-ipam/0.log" Nov 25 12:49:15 crc kubenswrapper[4706]: I1125 12:49:15.615674 4706 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_run-os-edpm-deployment-openstack-edpm-ipam-4j6mw_2976f69c-c134-429f-98c4-f7d54d9245b1/run-os-edpm-deployment-openstack-edpm-ipam/0.log" Nov 25 12:49:15 crc kubenswrapper[4706]: I1125 12:49:15.662208 4706 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_ssh-known-hosts-edpm-deployment-d2qht_ab590c42-c26e-49b8-8fd1-e1c535dd7e8c/ssh-known-hosts-edpm-deployment/0.log" Nov 25 12:49:15 crc kubenswrapper[4706]: I1125 12:49:15.910635 4706 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-65d9589979-xw964_64d9e8db-d554-4623-9a76-719df27fffef/proxy-server/0.log" Nov 25 12:49:15 crc kubenswrapper[4706]: I1125 12:49:15.997595 4706 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-65d9589979-xw964_64d9e8db-d554-4623-9a76-719df27fffef/proxy-httpd/0.log" Nov 25 12:49:16 crc kubenswrapper[4706]: I1125 12:49:16.070531 4706 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-ring-rebalance-ww65d_687ee889-8ec7-4754-b45f-b0f087368a37/swift-ring-rebalance/0.log" Nov 25 12:49:16 crc kubenswrapper[4706]: I1125 12:49:16.196063 4706 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_9225b01e-1067-47de-812a-d9be36adf9d0/account-reaper/0.log" Nov 25 12:49:16 crc kubenswrapper[4706]: I1125 12:49:16.280562 4706 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_9225b01e-1067-47de-812a-d9be36adf9d0/account-auditor/0.log" Nov 25 12:49:16 crc kubenswrapper[4706]: I1125 12:49:16.371094 4706 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_9225b01e-1067-47de-812a-d9be36adf9d0/account-replicator/0.log" Nov 25 12:49:16 crc kubenswrapper[4706]: I1125 12:49:16.390209 4706 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_9225b01e-1067-47de-812a-d9be36adf9d0/account-server/0.log" Nov 25 12:49:16 crc kubenswrapper[4706]: I1125 12:49:16.461021 4706 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_9225b01e-1067-47de-812a-d9be36adf9d0/container-auditor/0.log" Nov 25 12:49:16 crc kubenswrapper[4706]: I1125 12:49:16.538157 4706 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_swift-storage-0_9225b01e-1067-47de-812a-d9be36adf9d0/container-replicator/0.log" Nov 25 12:49:16 crc kubenswrapper[4706]: I1125 12:49:16.621949 4706 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_9225b01e-1067-47de-812a-d9be36adf9d0/container-updater/0.log" Nov 25 12:49:16 crc kubenswrapper[4706]: I1125 12:49:16.630496 4706 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_9225b01e-1067-47de-812a-d9be36adf9d0/container-server/0.log" Nov 25 12:49:16 crc kubenswrapper[4706]: I1125 12:49:16.726657 4706 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_9225b01e-1067-47de-812a-d9be36adf9d0/object-auditor/0.log" Nov 25 12:49:16 crc kubenswrapper[4706]: I1125 12:49:16.785225 4706 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_9225b01e-1067-47de-812a-d9be36adf9d0/object-expirer/0.log" Nov 25 12:49:16 crc kubenswrapper[4706]: I1125 12:49:16.837568 4706 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_9225b01e-1067-47de-812a-d9be36adf9d0/object-replicator/0.log" Nov 25 12:49:16 crc kubenswrapper[4706]: I1125 12:49:16.866705 4706 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_9225b01e-1067-47de-812a-d9be36adf9d0/object-server/0.log" Nov 25 12:49:16 crc kubenswrapper[4706]: I1125 12:49:16.942563 4706 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_9225b01e-1067-47de-812a-d9be36adf9d0/object-updater/0.log" Nov 25 12:49:16 crc kubenswrapper[4706]: I1125 12:49:16.999292 4706 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_9225b01e-1067-47de-812a-d9be36adf9d0/rsync/0.log" Nov 25 12:49:17 crc kubenswrapper[4706]: I1125 12:49:17.051027 4706 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_swift-storage-0_9225b01e-1067-47de-812a-d9be36adf9d0/swift-recon-cron/0.log" Nov 25 12:49:17 crc kubenswrapper[4706]: I1125 12:49:17.225854 4706 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_telemetry-edpm-deployment-openstack-edpm-ipam-rtmfj_10becdf1-f704-46ec-aee6-b4ef4fdbed09/telemetry-edpm-deployment-openstack-edpm-ipam/0.log" Nov 25 12:49:17 crc kubenswrapper[4706]: I1125 12:49:17.342214 4706 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_tempest-tests-tempest_a3e38444-7907-4d48-bc07-b6b7dc4854a8/tempest-tests-tempest-tests-runner/0.log" Nov 25 12:49:17 crc kubenswrapper[4706]: I1125 12:49:17.451582 4706 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_test-operator-logs-pod-tempest-tempest-tests-tempest_586b9083-1af0-4687-886b-bdaf4041ba31/test-operator-logs-container/0.log" Nov 25 12:49:17 crc kubenswrapper[4706]: I1125 12:49:17.559373 4706 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_validate-network-edpm-deployment-openstack-edpm-ipam-2j66d_29e15319-39a4-4af6-869c-3f49b55997bc/validate-network-edpm-deployment-openstack-edpm-ipam/0.log" Nov 25 12:49:28 crc kubenswrapper[4706]: I1125 12:49:28.842662 4706 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_memcached-0_37118d82-a55d-4a10-8b2c-6e5cf036474c/memcached/0.log" Nov 25 12:49:31 crc kubenswrapper[4706]: I1125 12:49:31.125412 4706 patch_prober.go:28] interesting pod/machine-config-daemon-dhfpm container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 25 12:49:31 crc kubenswrapper[4706]: I1125 12:49:31.125726 4706 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dhfpm" podUID="0930887a-320c-4506-8c9c-f94d6d64516a" 
containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 25 12:49:31 crc kubenswrapper[4706]: I1125 12:49:31.125768 4706 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-dhfpm" Nov 25 12:49:31 crc kubenswrapper[4706]: I1125 12:49:31.126280 4706 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"26d31244857a0be0aea5023b5f648b4e573312d8ff6419d5d6b048bd70f84083"} pod="openshift-machine-config-operator/machine-config-daemon-dhfpm" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 25 12:49:31 crc kubenswrapper[4706]: I1125 12:49:31.126363 4706 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-dhfpm" podUID="0930887a-320c-4506-8c9c-f94d6d64516a" containerName="machine-config-daemon" containerID="cri-o://26d31244857a0be0aea5023b5f648b4e573312d8ff6419d5d6b048bd70f84083" gracePeriod=600 Nov 25 12:49:31 crc kubenswrapper[4706]: E1125 12:49:31.259142 4706 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dhfpm_openshift-machine-config-operator(0930887a-320c-4506-8c9c-f94d6d64516a)\"" pod="openshift-machine-config-operator/machine-config-daemon-dhfpm" podUID="0930887a-320c-4506-8c9c-f94d6d64516a" Nov 25 12:49:31 crc kubenswrapper[4706]: I1125 12:49:31.394074 4706 generic.go:334] "Generic (PLEG): container finished" podID="0930887a-320c-4506-8c9c-f94d6d64516a" containerID="26d31244857a0be0aea5023b5f648b4e573312d8ff6419d5d6b048bd70f84083" exitCode=0 Nov 25 12:49:31 crc kubenswrapper[4706]: I1125 
12:49:31.394124 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-dhfpm" event={"ID":"0930887a-320c-4506-8c9c-f94d6d64516a","Type":"ContainerDied","Data":"26d31244857a0be0aea5023b5f648b4e573312d8ff6419d5d6b048bd70f84083"} Nov 25 12:49:31 crc kubenswrapper[4706]: I1125 12:49:31.394160 4706 scope.go:117] "RemoveContainer" containerID="ec124d7ca75771b4c4c8fe512ca2efc5a14229d016e5175e85c0e297e332d27e" Nov 25 12:49:31 crc kubenswrapper[4706]: I1125 12:49:31.394820 4706 scope.go:117] "RemoveContainer" containerID="26d31244857a0be0aea5023b5f648b4e573312d8ff6419d5d6b048bd70f84083" Nov 25 12:49:31 crc kubenswrapper[4706]: E1125 12:49:31.395154 4706 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dhfpm_openshift-machine-config-operator(0930887a-320c-4506-8c9c-f94d6d64516a)\"" pod="openshift-machine-config-operator/machine-config-daemon-dhfpm" podUID="0930887a-320c-4506-8c9c-f94d6d64516a" Nov 25 12:49:42 crc kubenswrapper[4706]: I1125 12:49:42.922173 4706 scope.go:117] "RemoveContainer" containerID="26d31244857a0be0aea5023b5f648b4e573312d8ff6419d5d6b048bd70f84083" Nov 25 12:49:42 crc kubenswrapper[4706]: E1125 12:49:42.922918 4706 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dhfpm_openshift-machine-config-operator(0930887a-320c-4506-8c9c-f94d6d64516a)\"" pod="openshift-machine-config-operator/machine-config-daemon-dhfpm" podUID="0930887a-320c-4506-8c9c-f94d6d64516a" Nov 25 12:49:45 crc kubenswrapper[4706]: I1125 12:49:45.172399 4706 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_6cf372469a5f9156fbb7e5b80b05d9810593b0772b02df8e6f722f5cd17d8fv_787337fb-0b33-488b-a1b5-c680273f2c5b/util/0.log" Nov 25 12:49:45 crc kubenswrapper[4706]: I1125 12:49:45.371112 4706 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_6cf372469a5f9156fbb7e5b80b05d9810593b0772b02df8e6f722f5cd17d8fv_787337fb-0b33-488b-a1b5-c680273f2c5b/util/0.log" Nov 25 12:49:45 crc kubenswrapper[4706]: I1125 12:49:45.381936 4706 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_6cf372469a5f9156fbb7e5b80b05d9810593b0772b02df8e6f722f5cd17d8fv_787337fb-0b33-488b-a1b5-c680273f2c5b/pull/0.log" Nov 25 12:49:45 crc kubenswrapper[4706]: I1125 12:49:45.434113 4706 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_6cf372469a5f9156fbb7e5b80b05d9810593b0772b02df8e6f722f5cd17d8fv_787337fb-0b33-488b-a1b5-c680273f2c5b/pull/0.log" Nov 25 12:49:45 crc kubenswrapper[4706]: I1125 12:49:45.584444 4706 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_6cf372469a5f9156fbb7e5b80b05d9810593b0772b02df8e6f722f5cd17d8fv_787337fb-0b33-488b-a1b5-c680273f2c5b/util/0.log" Nov 25 12:49:45 crc kubenswrapper[4706]: I1125 12:49:45.608821 4706 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_6cf372469a5f9156fbb7e5b80b05d9810593b0772b02df8e6f722f5cd17d8fv_787337fb-0b33-488b-a1b5-c680273f2c5b/extract/0.log" Nov 25 12:49:45 crc kubenswrapper[4706]: I1125 12:49:45.609279 4706 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_6cf372469a5f9156fbb7e5b80b05d9810593b0772b02df8e6f722f5cd17d8fv_787337fb-0b33-488b-a1b5-c680273f2c5b/pull/0.log" Nov 25 12:49:45 crc kubenswrapper[4706]: I1125 12:49:45.762114 4706 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-86dc4d89c8-jh5hc_23155e14-a775-48c5-adf9-55dcfd008040/kube-rbac-proxy/0.log" Nov 25 12:49:45 crc 
kubenswrapper[4706]: I1125 12:49:45.791006 4706 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-86dc4d89c8-jh5hc_23155e14-a775-48c5-adf9-55dcfd008040/manager/1.log" Nov 25 12:49:45 crc kubenswrapper[4706]: I1125 12:49:45.798027 4706 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-86dc4d89c8-jh5hc_23155e14-a775-48c5-adf9-55dcfd008040/manager/2.log" Nov 25 12:49:45 crc kubenswrapper[4706]: I1125 12:49:45.966758 4706 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-79856dc55c-4bsmv_ee655c82-6748-4bba-9da4-dcf73e0cff37/manager/1.log" Nov 25 12:49:45 crc kubenswrapper[4706]: I1125 12:49:45.972339 4706 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-79856dc55c-4bsmv_ee655c82-6748-4bba-9da4-dcf73e0cff37/kube-rbac-proxy/0.log" Nov 25 12:49:45 crc kubenswrapper[4706]: I1125 12:49:45.974153 4706 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-79856dc55c-4bsmv_ee655c82-6748-4bba-9da4-dcf73e0cff37/manager/2.log" Nov 25 12:49:46 crc kubenswrapper[4706]: I1125 12:49:46.145460 4706 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-7d695c9b56-hqsp5_9fa65252-7bf5-4e83-beb7-dfcfa63db10d/kube-rbac-proxy/0.log" Nov 25 12:49:46 crc kubenswrapper[4706]: I1125 12:49:46.165577 4706 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-7d695c9b56-hqsp5_9fa65252-7bf5-4e83-beb7-dfcfa63db10d/manager/2.log" Nov 25 12:49:46 crc kubenswrapper[4706]: I1125 12:49:46.178512 4706 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_designate-operator-controller-manager-7d695c9b56-hqsp5_9fa65252-7bf5-4e83-beb7-dfcfa63db10d/manager/1.log" Nov 25 12:49:46 crc kubenswrapper[4706]: I1125 12:49:46.354025 4706 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-68b95954c9-t6c78_4857e509-acac-422c-87e8-2662708da599/manager/2.log" Nov 25 12:49:46 crc kubenswrapper[4706]: I1125 12:49:46.368277 4706 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-68b95954c9-t6c78_4857e509-acac-422c-87e8-2662708da599/kube-rbac-proxy/0.log" Nov 25 12:49:46 crc kubenswrapper[4706]: I1125 12:49:46.404023 4706 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-68b95954c9-t6c78_4857e509-acac-422c-87e8-2662708da599/manager/1.log" Nov 25 12:49:46 crc kubenswrapper[4706]: I1125 12:49:46.576270 4706 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-774b86978c-9bz4f_c6de3b19-c207-4c00-8350-de810fb1f555/manager/2.log" Nov 25 12:49:46 crc kubenswrapper[4706]: I1125 12:49:46.605661 4706 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-774b86978c-9bz4f_c6de3b19-c207-4c00-8350-de810fb1f555/kube-rbac-proxy/0.log" Nov 25 12:49:46 crc kubenswrapper[4706]: I1125 12:49:46.617992 4706 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-774b86978c-9bz4f_c6de3b19-c207-4c00-8350-de810fb1f555/manager/1.log" Nov 25 12:49:46 crc kubenswrapper[4706]: I1125 12:49:46.774609 4706 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-68c9694994-zx4v6_72bbe536-121d-47c0-b473-2974b238f271/kube-rbac-proxy/0.log" Nov 25 12:49:46 crc kubenswrapper[4706]: I1125 12:49:46.813710 4706 
log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-68c9694994-zx4v6_72bbe536-121d-47c0-b473-2974b238f271/manager/1.log" Nov 25 12:49:46 crc kubenswrapper[4706]: I1125 12:49:46.814362 4706 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-68c9694994-zx4v6_72bbe536-121d-47c0-b473-2974b238f271/manager/2.log" Nov 25 12:49:47 crc kubenswrapper[4706]: I1125 12:49:47.042347 4706 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-d5cc86f4b-rfz7f_e204aa88-c108-491e-9a73-2fca5c2ef15c/kube-rbac-proxy/0.log" Nov 25 12:49:47 crc kubenswrapper[4706]: I1125 12:49:47.086902 4706 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-d5cc86f4b-rfz7f_e204aa88-c108-491e-9a73-2fca5c2ef15c/manager/1.log" Nov 25 12:49:47 crc kubenswrapper[4706]: I1125 12:49:47.100603 4706 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-d5cc86f4b-rfz7f_e204aa88-c108-491e-9a73-2fca5c2ef15c/manager/2.log" Nov 25 12:49:47 crc kubenswrapper[4706]: I1125 12:49:47.303008 4706 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-5bfcdc958c-l4m6r_9e5a3424-dd89-4411-872f-70447506cf73/manager/2.log" Nov 25 12:49:47 crc kubenswrapper[4706]: I1125 12:49:47.305389 4706 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-5bfcdc958c-l4m6r_9e5a3424-dd89-4411-872f-70447506cf73/kube-rbac-proxy/0.log" Nov 25 12:49:47 crc kubenswrapper[4706]: I1125 12:49:47.388719 4706 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-5bfcdc958c-l4m6r_9e5a3424-dd89-4411-872f-70447506cf73/manager/1.log" Nov 25 12:49:47 crc kubenswrapper[4706]: I1125 
12:49:47.512164 4706 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-748dc6576f-nf6gr_6c41fff9-feeb-4311-a7ce-7da3a71b3e9c/kube-rbac-proxy/0.log" Nov 25 12:49:47 crc kubenswrapper[4706]: I1125 12:49:47.573886 4706 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-748dc6576f-nf6gr_6c41fff9-feeb-4311-a7ce-7da3a71b3e9c/manager/2.log" Nov 25 12:49:47 crc kubenswrapper[4706]: I1125 12:49:47.605768 4706 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-748dc6576f-nf6gr_6c41fff9-feeb-4311-a7ce-7da3a71b3e9c/manager/1.log" Nov 25 12:49:47 crc kubenswrapper[4706]: I1125 12:49:47.712863 4706 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-58bb8d67cc-fslzs_70fa0d16-065a-463f-8198-06a03414a128/kube-rbac-proxy/0.log" Nov 25 12:49:47 crc kubenswrapper[4706]: I1125 12:49:47.809929 4706 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-58bb8d67cc-fslzs_70fa0d16-065a-463f-8198-06a03414a128/manager/2.log" Nov 25 12:49:47 crc kubenswrapper[4706]: I1125 12:49:47.962791 4706 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-58bb8d67cc-fslzs_70fa0d16-065a-463f-8198-06a03414a128/manager/1.log" Nov 25 12:49:48 crc kubenswrapper[4706]: I1125 12:49:48.124484 4706 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-cb6c4fdb7-bpcjw_62e72e86-38e3-4acc-8aa1-664684f27760/kube-rbac-proxy/0.log" Nov 25 12:49:48 crc kubenswrapper[4706]: I1125 12:49:48.201658 4706 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-cb6c4fdb7-bpcjw_62e72e86-38e3-4acc-8aa1-664684f27760/manager/1.log" Nov 25 
12:49:48 crc kubenswrapper[4706]: I1125 12:49:48.207703 4706 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-cb6c4fdb7-bpcjw_62e72e86-38e3-4acc-8aa1-664684f27760/manager/2.log" Nov 25 12:49:48 crc kubenswrapper[4706]: I1125 12:49:48.276313 4706 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-7c57c8bbc4-tfn29_3c582966-ab32-499d-8f1c-95c942dd6bb4/kube-rbac-proxy/0.log" Nov 25 12:49:48 crc kubenswrapper[4706]: I1125 12:49:48.308627 4706 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-7c57c8bbc4-tfn29_3c582966-ab32-499d-8f1c-95c942dd6bb4/manager/2.log" Nov 25 12:49:48 crc kubenswrapper[4706]: I1125 12:49:48.377787 4706 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-7c57c8bbc4-tfn29_3c582966-ab32-499d-8f1c-95c942dd6bb4/manager/1.log" Nov 25 12:49:48 crc kubenswrapper[4706]: I1125 12:49:48.484047 4706 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-79556f57fc-f47gl_1c035858-a349-4415-8a5d-f3f2edb7c84e/kube-rbac-proxy/0.log" Nov 25 12:49:48 crc kubenswrapper[4706]: I1125 12:49:48.495025 4706 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-79556f57fc-f47gl_1c035858-a349-4415-8a5d-f3f2edb7c84e/manager/1.log" Nov 25 12:49:48 crc kubenswrapper[4706]: I1125 12:49:48.506042 4706 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-79556f57fc-f47gl_1c035858-a349-4415-8a5d-f3f2edb7c84e/manager/2.log" Nov 25 12:49:48 crc kubenswrapper[4706]: I1125 12:49:48.657070 4706 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-fd75fd47d-2tmzq_063b2f44-faa1-4a58-b77b-f2140f569b01/kube-rbac-proxy/0.log" Nov 25 12:49:48 crc kubenswrapper[4706]: I1125 12:49:48.673980 4706 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-fd75fd47d-2tmzq_063b2f44-faa1-4a58-b77b-f2140f569b01/manager/1.log" Nov 25 12:49:48 crc kubenswrapper[4706]: I1125 12:49:48.688724 4706 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-fd75fd47d-2tmzq_063b2f44-faa1-4a58-b77b-f2140f569b01/manager/2.log" Nov 25 12:49:48 crc kubenswrapper[4706]: I1125 12:49:48.841134 4706 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-544b9bb9-qg7kk_e318ee27-6b61-4c03-b697-782b25461b09/kube-rbac-proxy/0.log" Nov 25 12:49:48 crc kubenswrapper[4706]: I1125 12:49:48.870011 4706 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-544b9bb9-qg7kk_e318ee27-6b61-4c03-b697-782b25461b09/manager/0.log" Nov 25 12:49:48 crc kubenswrapper[4706]: I1125 12:49:48.904220 4706 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-544b9bb9-qg7kk_e318ee27-6b61-4c03-b697-782b25461b09/manager/1.log" Nov 25 12:49:49 crc kubenswrapper[4706]: I1125 12:49:49.099170 4706 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-9cb9fb586-5854z_2a90e9e4-814b-4c09-a6d3-f7ad3792f6b1/manager/1.log" Nov 25 12:49:49 crc kubenswrapper[4706]: I1125 12:49:49.132587 4706 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-operator-5789f9b844-cfvkd_2df5f121-0564-4647-acf6-d09283ff5a94/operator/1.log" Nov 25 12:49:49 crc kubenswrapper[4706]: 
I1125 12:49:49.243799 4706 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-9cb9fb586-5854z_2a90e9e4-814b-4c09-a6d3-f7ad3792f6b1/manager/2.log" Nov 25 12:49:49 crc kubenswrapper[4706]: I1125 12:49:49.339372 4706 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-operator-5789f9b844-cfvkd_2df5f121-0564-4647-acf6-d09283ff5a94/operator/0.log" Nov 25 12:49:49 crc kubenswrapper[4706]: I1125 12:49:49.346887 4706 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-g64cw_fa3da9d1-2214-4436-951b-2f2ec4c05104/registry-server/0.log" Nov 25 12:49:49 crc kubenswrapper[4706]: I1125 12:49:49.447925 4706 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-66cf5c67ff-nc6f7_61b1ec50-3228-43bc-bb09-d74a7f02be52/kube-rbac-proxy/0.log" Nov 25 12:49:49 crc kubenswrapper[4706]: I1125 12:49:49.541353 4706 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-66cf5c67ff-nc6f7_61b1ec50-3228-43bc-bb09-d74a7f02be52/manager/2.log" Nov 25 12:49:49 crc kubenswrapper[4706]: I1125 12:49:49.574762 4706 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-66cf5c67ff-nc6f7_61b1ec50-3228-43bc-bb09-d74a7f02be52/manager/1.log" Nov 25 12:49:49 crc kubenswrapper[4706]: I1125 12:49:49.649812 4706 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-5db546f9d9-k7crl_eab1279c-c99a-450e-887b-d246a2ff01aa/kube-rbac-proxy/0.log" Nov 25 12:49:49 crc kubenswrapper[4706]: I1125 12:49:49.673496 4706 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-5db546f9d9-k7crl_eab1279c-c99a-450e-887b-d246a2ff01aa/manager/2.log" Nov 25 12:49:49 crc 
kubenswrapper[4706]: I1125 12:49:49.747746 4706 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-5db546f9d9-k7crl_eab1279c-c99a-450e-887b-d246a2ff01aa/manager/1.log" Nov 25 12:49:49 crc kubenswrapper[4706]: I1125 12:49:49.814115 4706 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-668c99d594-x9x4q_5726a389-32eb-4f0c-938b-6f2ddbb762e7/operator/2.log" Nov 25 12:49:49 crc kubenswrapper[4706]: I1125 12:49:49.924448 4706 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-668c99d594-x9x4q_5726a389-32eb-4f0c-938b-6f2ddbb762e7/operator/1.log" Nov 25 12:49:49 crc kubenswrapper[4706]: I1125 12:49:49.936589 4706 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-6fdc4fcf86-rwbvj_a0668604-b184-4265-b9af-fc6f526d8351/manager/2.log" Nov 25 12:49:49 crc kubenswrapper[4706]: I1125 12:49:49.938714 4706 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-6fdc4fcf86-rwbvj_a0668604-b184-4265-b9af-fc6f526d8351/kube-rbac-proxy/0.log" Nov 25 12:49:50 crc kubenswrapper[4706]: I1125 12:49:50.050235 4706 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-6fdc4fcf86-rwbvj_a0668604-b184-4265-b9af-fc6f526d8351/manager/1.log" Nov 25 12:49:50 crc kubenswrapper[4706]: I1125 12:49:50.125957 4706 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-567f98c9d-8p5t2_a7a52f28-6bc4-481d-8513-16dbb7b37ae1/kube-rbac-proxy/0.log" Nov 25 12:49:50 crc kubenswrapper[4706]: I1125 12:49:50.149423 4706 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-567f98c9d-8p5t2_a7a52f28-6bc4-481d-8513-16dbb7b37ae1/manager/2.log" Nov 25 12:49:50 crc kubenswrapper[4706]: I1125 12:49:50.182008 4706 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-567f98c9d-8p5t2_a7a52f28-6bc4-481d-8513-16dbb7b37ae1/manager/1.log" Nov 25 12:49:50 crc kubenswrapper[4706]: I1125 12:49:50.312914 4706 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-5cb74df96-8rlr7_d256078e-afd5-4218-ad5c-d5211eb846a8/manager/1.log" Nov 25 12:49:50 crc kubenswrapper[4706]: I1125 12:49:50.328731 4706 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-5cb74df96-8rlr7_d256078e-afd5-4218-ad5c-d5211eb846a8/kube-rbac-proxy/0.log" Nov 25 12:49:50 crc kubenswrapper[4706]: I1125 12:49:50.348258 4706 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-5cb74df96-8rlr7_d256078e-afd5-4218-ad5c-d5211eb846a8/manager/0.log" Nov 25 12:49:50 crc kubenswrapper[4706]: I1125 12:49:50.463993 4706 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-864885998-9s7hm_6b8e15c0-a70f-4b4c-8836-a2c4e7b23f60/kube-rbac-proxy/0.log" Nov 25 12:49:50 crc kubenswrapper[4706]: I1125 12:49:50.490737 4706 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-864885998-9s7hm_6b8e15c0-a70f-4b4c-8836-a2c4e7b23f60/manager/2.log" Nov 25 12:49:50 crc kubenswrapper[4706]: I1125 12:49:50.528028 4706 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-864885998-9s7hm_6b8e15c0-a70f-4b4c-8836-a2c4e7b23f60/manager/1.log" Nov 25 12:49:57 crc kubenswrapper[4706]: I1125 12:49:57.922509 4706 scope.go:117] 
"RemoveContainer" containerID="26d31244857a0be0aea5023b5f648b4e573312d8ff6419d5d6b048bd70f84083" Nov 25 12:49:57 crc kubenswrapper[4706]: E1125 12:49:57.923428 4706 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dhfpm_openshift-machine-config-operator(0930887a-320c-4506-8c9c-f94d6d64516a)\"" pod="openshift-machine-config-operator/machine-config-daemon-dhfpm" podUID="0930887a-320c-4506-8c9c-f94d6d64516a" Nov 25 12:50:09 crc kubenswrapper[4706]: I1125 12:50:09.637217 4706 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-hhh7q_825f088d-44aa-4f48-b95d-6245da5b1775/control-plane-machine-set-operator/0.log" Nov 25 12:50:09 crc kubenswrapper[4706]: I1125 12:50:09.835719 4706 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-9z28x_ab2dd029-844e-4783-8fda-bfab6a6d9243/kube-rbac-proxy/0.log" Nov 25 12:50:09 crc kubenswrapper[4706]: I1125 12:50:09.843040 4706 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-9z28x_ab2dd029-844e-4783-8fda-bfab6a6d9243/machine-api-operator/0.log" Nov 25 12:50:12 crc kubenswrapper[4706]: I1125 12:50:12.922598 4706 scope.go:117] "RemoveContainer" containerID="26d31244857a0be0aea5023b5f648b4e573312d8ff6419d5d6b048bd70f84083" Nov 25 12:50:12 crc kubenswrapper[4706]: E1125 12:50:12.923429 4706 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dhfpm_openshift-machine-config-operator(0930887a-320c-4506-8c9c-f94d6d64516a)\"" pod="openshift-machine-config-operator/machine-config-daemon-dhfpm" 
podUID="0930887a-320c-4506-8c9c-f94d6d64516a" Nov 25 12:50:22 crc kubenswrapper[4706]: I1125 12:50:22.771056 4706 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-5b446d88c5-qv4vk_a9733b54-d1c6-48b7-9e7f-4c09ed97b604/cert-manager-controller/0.log" Nov 25 12:50:22 crc kubenswrapper[4706]: I1125 12:50:22.902424 4706 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-7f985d654d-8qfjm_96496646-6a16-483a-a71d-c6debd0e44d7/cert-manager-cainjector/0.log" Nov 25 12:50:22 crc kubenswrapper[4706]: I1125 12:50:22.971182 4706 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-5655c58dd6-bk58z_3a171d39-2023-41e0-b928-710c5b9eff19/cert-manager-webhook/0.log" Nov 25 12:50:24 crc kubenswrapper[4706]: I1125 12:50:24.922741 4706 scope.go:117] "RemoveContainer" containerID="26d31244857a0be0aea5023b5f648b4e573312d8ff6419d5d6b048bd70f84083" Nov 25 12:50:24 crc kubenswrapper[4706]: E1125 12:50:24.923340 4706 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dhfpm_openshift-machine-config-operator(0930887a-320c-4506-8c9c-f94d6d64516a)\"" pod="openshift-machine-config-operator/machine-config-daemon-dhfpm" podUID="0930887a-320c-4506-8c9c-f94d6d64516a" Nov 25 12:50:34 crc kubenswrapper[4706]: I1125 12:50:34.754461 4706 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-console-plugin-5874bd7bc5-4k4ff_502cb16b-4f8d-47ba-96a0-41e42768fe63/nmstate-console-plugin/0.log" Nov 25 12:50:34 crc kubenswrapper[4706]: I1125 12:50:34.940927 4706 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-handler-qkksf_2454859f-90ab-4942-a300-36e465597289/nmstate-handler/0.log" Nov 25 12:50:34 crc kubenswrapper[4706]: I1125 12:50:34.982628 4706 
log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-5dcf9c57c5-rd4nq_a206555f-6ea8-4dbc-83db-801c57226c13/kube-rbac-proxy/0.log" Nov 25 12:50:34 crc kubenswrapper[4706]: I1125 12:50:34.994332 4706 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-5dcf9c57c5-rd4nq_a206555f-6ea8-4dbc-83db-801c57226c13/nmstate-metrics/0.log" Nov 25 12:50:35 crc kubenswrapper[4706]: I1125 12:50:35.182323 4706 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-operator-557fdffb88-4wx96_e4a0ddea-a6b5-456d-9243-3a7576fcdac8/nmstate-operator/0.log" Nov 25 12:50:35 crc kubenswrapper[4706]: I1125 12:50:35.192061 4706 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-webhook-6b89b748d8-k7vl7_9220b323-ff51-4a2d-95fc-dc3274e8fbeb/nmstate-webhook/0.log" Nov 25 12:50:37 crc kubenswrapper[4706]: I1125 12:50:37.922465 4706 scope.go:117] "RemoveContainer" containerID="26d31244857a0be0aea5023b5f648b4e573312d8ff6419d5d6b048bd70f84083" Nov 25 12:50:37 crc kubenswrapper[4706]: E1125 12:50:37.923057 4706 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dhfpm_openshift-machine-config-operator(0930887a-320c-4506-8c9c-f94d6d64516a)\"" pod="openshift-machine-config-operator/machine-config-daemon-dhfpm" podUID="0930887a-320c-4506-8c9c-f94d6d64516a" Nov 25 12:50:49 crc kubenswrapper[4706]: I1125 12:50:49.197212 4706 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6c7b4b5f48-5gnwd_67dd43bc-7fe1-4585-8fc3-2d2a52b8c974/kube-rbac-proxy/0.log" Nov 25 12:50:49 crc kubenswrapper[4706]: I1125 12:50:49.360317 4706 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_controller-6c7b4b5f48-5gnwd_67dd43bc-7fe1-4585-8fc3-2d2a52b8c974/controller/0.log" Nov 25 12:50:49 crc kubenswrapper[4706]: I1125 12:50:49.425418 4706 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-gfpwp_4fe1be78-8453-460d-abc1-7c4b89923fe5/cp-frr-files/0.log" Nov 25 12:50:49 crc kubenswrapper[4706]: I1125 12:50:49.576724 4706 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-gfpwp_4fe1be78-8453-460d-abc1-7c4b89923fe5/cp-frr-files/0.log" Nov 25 12:50:49 crc kubenswrapper[4706]: I1125 12:50:49.615480 4706 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-gfpwp_4fe1be78-8453-460d-abc1-7c4b89923fe5/cp-reloader/0.log" Nov 25 12:50:49 crc kubenswrapper[4706]: I1125 12:50:49.624358 4706 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-gfpwp_4fe1be78-8453-460d-abc1-7c4b89923fe5/cp-metrics/0.log" Nov 25 12:50:49 crc kubenswrapper[4706]: I1125 12:50:49.628937 4706 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-gfpwp_4fe1be78-8453-460d-abc1-7c4b89923fe5/cp-reloader/0.log" Nov 25 12:50:49 crc kubenswrapper[4706]: I1125 12:50:49.797951 4706 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-gfpwp_4fe1be78-8453-460d-abc1-7c4b89923fe5/cp-reloader/0.log" Nov 25 12:50:49 crc kubenswrapper[4706]: I1125 12:50:49.797970 4706 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-gfpwp_4fe1be78-8453-460d-abc1-7c4b89923fe5/cp-metrics/0.log" Nov 25 12:50:49 crc kubenswrapper[4706]: I1125 12:50:49.804795 4706 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-gfpwp_4fe1be78-8453-460d-abc1-7c4b89923fe5/cp-frr-files/0.log" Nov 25 12:50:49 crc kubenswrapper[4706]: I1125 12:50:49.867492 4706 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_frr-k8s-gfpwp_4fe1be78-8453-460d-abc1-7c4b89923fe5/cp-metrics/0.log" Nov 25 12:50:50 crc kubenswrapper[4706]: I1125 12:50:50.027772 4706 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-gfpwp_4fe1be78-8453-460d-abc1-7c4b89923fe5/cp-frr-files/0.log" Nov 25 12:50:50 crc kubenswrapper[4706]: I1125 12:50:50.044007 4706 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-gfpwp_4fe1be78-8453-460d-abc1-7c4b89923fe5/cp-metrics/0.log" Nov 25 12:50:50 crc kubenswrapper[4706]: I1125 12:50:50.049727 4706 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-gfpwp_4fe1be78-8453-460d-abc1-7c4b89923fe5/cp-reloader/0.log" Nov 25 12:50:50 crc kubenswrapper[4706]: I1125 12:50:50.123468 4706 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-gfpwp_4fe1be78-8453-460d-abc1-7c4b89923fe5/controller/0.log" Nov 25 12:50:50 crc kubenswrapper[4706]: I1125 12:50:50.280288 4706 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-gfpwp_4fe1be78-8453-460d-abc1-7c4b89923fe5/frr-metrics/0.log" Nov 25 12:50:50 crc kubenswrapper[4706]: I1125 12:50:50.281979 4706 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-gfpwp_4fe1be78-8453-460d-abc1-7c4b89923fe5/kube-rbac-proxy/0.log" Nov 25 12:50:50 crc kubenswrapper[4706]: I1125 12:50:50.363078 4706 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-gfpwp_4fe1be78-8453-460d-abc1-7c4b89923fe5/kube-rbac-proxy-frr/0.log" Nov 25 12:50:50 crc kubenswrapper[4706]: I1125 12:50:50.466761 4706 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-gfpwp_4fe1be78-8453-460d-abc1-7c4b89923fe5/reloader/0.log" Nov 25 12:50:50 crc kubenswrapper[4706]: I1125 12:50:50.621509 4706 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_frr-k8s-webhook-server-6998585d5-9gk5w_d6a1f7a2-b220-49a7-b12a-8cc3cf093dbc/frr-k8s-webhook-server/0.log" Nov 25 12:50:50 crc kubenswrapper[4706]: I1125 12:50:50.792920 4706 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-7d76b4f6c7-xxkgj_cdb2d830-fbc9-4336-83b7-0392051670cb/manager/3.log" Nov 25 12:50:50 crc kubenswrapper[4706]: I1125 12:50:50.853193 4706 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-7d76b4f6c7-xxkgj_cdb2d830-fbc9-4336-83b7-0392051670cb/manager/2.log" Nov 25 12:50:50 crc kubenswrapper[4706]: I1125 12:50:50.922741 4706 scope.go:117] "RemoveContainer" containerID="26d31244857a0be0aea5023b5f648b4e573312d8ff6419d5d6b048bd70f84083" Nov 25 12:50:50 crc kubenswrapper[4706]: E1125 12:50:50.941538 4706 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dhfpm_openshift-machine-config-operator(0930887a-320c-4506-8c9c-f94d6d64516a)\"" pod="openshift-machine-config-operator/machine-config-daemon-dhfpm" podUID="0930887a-320c-4506-8c9c-f94d6d64516a" Nov 25 12:50:51 crc kubenswrapper[4706]: I1125 12:50:51.045230 4706 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-7c9ff6b49c-x86mq_2cb3fa9d-f614-42af-80c5-deb2e1fdb90d/webhook-server/0.log" Nov 25 12:50:51 crc kubenswrapper[4706]: I1125 12:50:51.147154 4706 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-2w52p_5570c11b-30c6-4ba6-adb5-3fc12ca26ae9/kube-rbac-proxy/0.log" Nov 25 12:50:51 crc kubenswrapper[4706]: I1125 12:50:51.844606 4706 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-2w52p_5570c11b-30c6-4ba6-adb5-3fc12ca26ae9/speaker/0.log" Nov 25 12:50:51 
crc kubenswrapper[4706]: I1125 12:50:51.967047 4706 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-gfpwp_4fe1be78-8453-460d-abc1-7c4b89923fe5/frr/0.log" Nov 25 12:51:03 crc kubenswrapper[4706]: I1125 12:51:03.922498 4706 scope.go:117] "RemoveContainer" containerID="26d31244857a0be0aea5023b5f648b4e573312d8ff6419d5d6b048bd70f84083" Nov 25 12:51:03 crc kubenswrapper[4706]: E1125 12:51:03.923316 4706 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dhfpm_openshift-machine-config-operator(0930887a-320c-4506-8c9c-f94d6d64516a)\"" pod="openshift-machine-config-operator/machine-config-daemon-dhfpm" podUID="0930887a-320c-4506-8c9c-f94d6d64516a" Nov 25 12:51:04 crc kubenswrapper[4706]: I1125 12:51:04.128604 4706 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772ewtpqc_05fa0078-a8e0-4b75-a7a8-d5ec5f21e034/util/0.log" Nov 25 12:51:04 crc kubenswrapper[4706]: I1125 12:51:04.355113 4706 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772ewtpqc_05fa0078-a8e0-4b75-a7a8-d5ec5f21e034/pull/0.log" Nov 25 12:51:04 crc kubenswrapper[4706]: I1125 12:51:04.365249 4706 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772ewtpqc_05fa0078-a8e0-4b75-a7a8-d5ec5f21e034/util/0.log" Nov 25 12:51:04 crc kubenswrapper[4706]: I1125 12:51:04.391240 4706 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772ewtpqc_05fa0078-a8e0-4b75-a7a8-d5ec5f21e034/pull/0.log" Nov 25 12:51:04 crc kubenswrapper[4706]: I1125 12:51:04.562800 4706 log.go:25] "Finished 
parsing log file" path="/var/log/pods/openshift-marketplace_5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772ewtpqc_05fa0078-a8e0-4b75-a7a8-d5ec5f21e034/extract/0.log" Nov 25 12:51:04 crc kubenswrapper[4706]: I1125 12:51:04.566261 4706 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772ewtpqc_05fa0078-a8e0-4b75-a7a8-d5ec5f21e034/util/0.log" Nov 25 12:51:04 crc kubenswrapper[4706]: I1125 12:51:04.616939 4706 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772ewtpqc_05fa0078-a8e0-4b75-a7a8-d5ec5f21e034/pull/0.log" Nov 25 12:51:04 crc kubenswrapper[4706]: I1125 12:51:04.718700 4706 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-k7lhm_f25c7d8b-b341-4fb2-bef0-e43d83905a9b/extract-utilities/0.log" Nov 25 12:51:04 crc kubenswrapper[4706]: I1125 12:51:04.912144 4706 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-k7lhm_f25c7d8b-b341-4fb2-bef0-e43d83905a9b/extract-content/0.log" Nov 25 12:51:04 crc kubenswrapper[4706]: I1125 12:51:04.923578 4706 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-k7lhm_f25c7d8b-b341-4fb2-bef0-e43d83905a9b/extract-utilities/0.log" Nov 25 12:51:04 crc kubenswrapper[4706]: I1125 12:51:04.954709 4706 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-k7lhm_f25c7d8b-b341-4fb2-bef0-e43d83905a9b/extract-content/0.log" Nov 25 12:51:05 crc kubenswrapper[4706]: I1125 12:51:05.141117 4706 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-k7lhm_f25c7d8b-b341-4fb2-bef0-e43d83905a9b/extract-utilities/0.log" Nov 25 12:51:05 crc kubenswrapper[4706]: I1125 12:51:05.162489 4706 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_certified-operators-k7lhm_f25c7d8b-b341-4fb2-bef0-e43d83905a9b/extract-content/0.log" Nov 25 12:51:05 crc kubenswrapper[4706]: I1125 12:51:05.411663 4706 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-fq7cn_8e544967-24c9-4190-a1d7-5ed07fdaaeef/extract-utilities/0.log" Nov 25 12:51:05 crc kubenswrapper[4706]: I1125 12:51:05.585844 4706 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-fq7cn_8e544967-24c9-4190-a1d7-5ed07fdaaeef/extract-content/0.log" Nov 25 12:51:05 crc kubenswrapper[4706]: I1125 12:51:05.598645 4706 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-fq7cn_8e544967-24c9-4190-a1d7-5ed07fdaaeef/extract-utilities/0.log" Nov 25 12:51:05 crc kubenswrapper[4706]: I1125 12:51:05.682771 4706 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-k7lhm_f25c7d8b-b341-4fb2-bef0-e43d83905a9b/registry-server/0.log" Nov 25 12:51:05 crc kubenswrapper[4706]: I1125 12:51:05.726273 4706 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-fq7cn_8e544967-24c9-4190-a1d7-5ed07fdaaeef/extract-content/0.log" Nov 25 12:51:05 crc kubenswrapper[4706]: I1125 12:51:05.811815 4706 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-fq7cn_8e544967-24c9-4190-a1d7-5ed07fdaaeef/extract-utilities/0.log" Nov 25 12:51:05 crc kubenswrapper[4706]: I1125 12:51:05.842181 4706 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-fq7cn_8e544967-24c9-4190-a1d7-5ed07fdaaeef/extract-content/0.log" Nov 25 12:51:06 crc kubenswrapper[4706]: I1125 12:51:06.049376 4706 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6dm4cn_8c6ba0d0-db1d-4b2b-8c48-f3d9432a2532/util/0.log" Nov 25 12:51:06 crc kubenswrapper[4706]: I1125 12:51:06.266095 4706 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6dm4cn_8c6ba0d0-db1d-4b2b-8c48-f3d9432a2532/util/0.log" Nov 25 12:51:06 crc kubenswrapper[4706]: I1125 12:51:06.324946 4706 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6dm4cn_8c6ba0d0-db1d-4b2b-8c48-f3d9432a2532/pull/0.log" Nov 25 12:51:06 crc kubenswrapper[4706]: I1125 12:51:06.343222 4706 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6dm4cn_8c6ba0d0-db1d-4b2b-8c48-f3d9432a2532/pull/0.log" Nov 25 12:51:06 crc kubenswrapper[4706]: I1125 12:51:06.569558 4706 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6dm4cn_8c6ba0d0-db1d-4b2b-8c48-f3d9432a2532/extract/0.log" Nov 25 12:51:06 crc kubenswrapper[4706]: I1125 12:51:06.575963 4706 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6dm4cn_8c6ba0d0-db1d-4b2b-8c48-f3d9432a2532/util/0.log" Nov 25 12:51:06 crc kubenswrapper[4706]: I1125 12:51:06.596831 4706 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-fq7cn_8e544967-24c9-4190-a1d7-5ed07fdaaeef/registry-server/0.log" Nov 25 12:51:06 crc kubenswrapper[4706]: I1125 12:51:06.611953 4706 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6dm4cn_8c6ba0d0-db1d-4b2b-8c48-f3d9432a2532/pull/0.log" Nov 25 12:51:06 crc 
kubenswrapper[4706]: I1125 12:51:06.902019 4706 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-vnd8s_57792378-6c0b-415c-aeb2-4cbb2c3c1702/marketplace-operator/0.log" Nov 25 12:51:06 crc kubenswrapper[4706]: I1125 12:51:06.923652 4706 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-q9pfj_ade36961-cf56-40fd-9d5b-202d3e937bfd/extract-utilities/0.log" Nov 25 12:51:07 crc kubenswrapper[4706]: I1125 12:51:07.091277 4706 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-q9pfj_ade36961-cf56-40fd-9d5b-202d3e937bfd/extract-utilities/0.log" Nov 25 12:51:07 crc kubenswrapper[4706]: I1125 12:51:07.158051 4706 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-q9pfj_ade36961-cf56-40fd-9d5b-202d3e937bfd/extract-content/0.log" Nov 25 12:51:07 crc kubenswrapper[4706]: I1125 12:51:07.166398 4706 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-q9pfj_ade36961-cf56-40fd-9d5b-202d3e937bfd/extract-content/0.log" Nov 25 12:51:07 crc kubenswrapper[4706]: I1125 12:51:07.299757 4706 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-q9pfj_ade36961-cf56-40fd-9d5b-202d3e937bfd/extract-utilities/0.log" Nov 25 12:51:07 crc kubenswrapper[4706]: I1125 12:51:07.324948 4706 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-q9pfj_ade36961-cf56-40fd-9d5b-202d3e937bfd/extract-content/0.log" Nov 25 12:51:07 crc kubenswrapper[4706]: I1125 12:51:07.543196 4706 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-q9pfj_ade36961-cf56-40fd-9d5b-202d3e937bfd/registry-server/0.log" Nov 25 12:51:07 crc kubenswrapper[4706]: I1125 12:51:07.549889 4706 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_redhat-operators-hcv5z_3e0ba231-93b2-4bf1-9d67-66b3f2ee62b9/extract-utilities/0.log" Nov 25 12:51:07 crc kubenswrapper[4706]: I1125 12:51:07.698478 4706 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-hcv5z_3e0ba231-93b2-4bf1-9d67-66b3f2ee62b9/extract-content/0.log" Nov 25 12:51:07 crc kubenswrapper[4706]: I1125 12:51:07.710318 4706 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-hcv5z_3e0ba231-93b2-4bf1-9d67-66b3f2ee62b9/extract-utilities/0.log" Nov 25 12:51:07 crc kubenswrapper[4706]: I1125 12:51:07.712973 4706 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-hcv5z_3e0ba231-93b2-4bf1-9d67-66b3f2ee62b9/extract-content/0.log" Nov 25 12:51:07 crc kubenswrapper[4706]: I1125 12:51:07.862027 4706 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-hcv5z_3e0ba231-93b2-4bf1-9d67-66b3f2ee62b9/extract-content/0.log" Nov 25 12:51:07 crc kubenswrapper[4706]: I1125 12:51:07.864851 4706 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-hcv5z_3e0ba231-93b2-4bf1-9d67-66b3f2ee62b9/extract-utilities/0.log" Nov 25 12:51:08 crc kubenswrapper[4706]: I1125 12:51:08.342440 4706 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-hcv5z_3e0ba231-93b2-4bf1-9d67-66b3f2ee62b9/registry-server/0.log" Nov 25 12:51:15 crc kubenswrapper[4706]: I1125 12:51:15.922384 4706 scope.go:117] "RemoveContainer" containerID="26d31244857a0be0aea5023b5f648b4e573312d8ff6419d5d6b048bd70f84083" Nov 25 12:51:15 crc kubenswrapper[4706]: E1125 12:51:15.923316 4706 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-dhfpm_openshift-machine-config-operator(0930887a-320c-4506-8c9c-f94d6d64516a)\"" pod="openshift-machine-config-operator/machine-config-daemon-dhfpm" podUID="0930887a-320c-4506-8c9c-f94d6d64516a" Nov 25 12:51:27 crc kubenswrapper[4706]: I1125 12:51:27.922507 4706 scope.go:117] "RemoveContainer" containerID="26d31244857a0be0aea5023b5f648b4e573312d8ff6419d5d6b048bd70f84083" Nov 25 12:51:27 crc kubenswrapper[4706]: E1125 12:51:27.923394 4706 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dhfpm_openshift-machine-config-operator(0930887a-320c-4506-8c9c-f94d6d64516a)\"" pod="openshift-machine-config-operator/machine-config-daemon-dhfpm" podUID="0930887a-320c-4506-8c9c-f94d6d64516a" Nov 25 12:51:41 crc kubenswrapper[4706]: I1125 12:51:41.931759 4706 scope.go:117] "RemoveContainer" containerID="26d31244857a0be0aea5023b5f648b4e573312d8ff6419d5d6b048bd70f84083" Nov 25 12:51:41 crc kubenswrapper[4706]: E1125 12:51:41.933573 4706 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dhfpm_openshift-machine-config-operator(0930887a-320c-4506-8c9c-f94d6d64516a)\"" pod="openshift-machine-config-operator/machine-config-daemon-dhfpm" podUID="0930887a-320c-4506-8c9c-f94d6d64516a" Nov 25 12:51:56 crc kubenswrapper[4706]: I1125 12:51:56.922145 4706 scope.go:117] "RemoveContainer" containerID="26d31244857a0be0aea5023b5f648b4e573312d8ff6419d5d6b048bd70f84083" Nov 25 12:51:56 crc kubenswrapper[4706]: E1125 12:51:56.922830 4706 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-dhfpm_openshift-machine-config-operator(0930887a-320c-4506-8c9c-f94d6d64516a)\"" pod="openshift-machine-config-operator/machine-config-daemon-dhfpm" podUID="0930887a-320c-4506-8c9c-f94d6d64516a" Nov 25 12:52:08 crc kubenswrapper[4706]: I1125 12:52:08.922574 4706 scope.go:117] "RemoveContainer" containerID="26d31244857a0be0aea5023b5f648b4e573312d8ff6419d5d6b048bd70f84083" Nov 25 12:52:08 crc kubenswrapper[4706]: E1125 12:52:08.923337 4706 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dhfpm_openshift-machine-config-operator(0930887a-320c-4506-8c9c-f94d6d64516a)\"" pod="openshift-machine-config-operator/machine-config-daemon-dhfpm" podUID="0930887a-320c-4506-8c9c-f94d6d64516a" Nov 25 12:52:22 crc kubenswrapper[4706]: I1125 12:52:22.922434 4706 scope.go:117] "RemoveContainer" containerID="26d31244857a0be0aea5023b5f648b4e573312d8ff6419d5d6b048bd70f84083" Nov 25 12:52:22 crc kubenswrapper[4706]: E1125 12:52:22.923230 4706 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dhfpm_openshift-machine-config-operator(0930887a-320c-4506-8c9c-f94d6d64516a)\"" pod="openshift-machine-config-operator/machine-config-daemon-dhfpm" podUID="0930887a-320c-4506-8c9c-f94d6d64516a" Nov 25 12:52:36 crc kubenswrapper[4706]: I1125 12:52:36.922356 4706 scope.go:117] "RemoveContainer" containerID="26d31244857a0be0aea5023b5f648b4e573312d8ff6419d5d6b048bd70f84083" Nov 25 12:52:36 crc kubenswrapper[4706]: E1125 12:52:36.923045 4706 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s 
restarting failed container=machine-config-daemon pod=machine-config-daemon-dhfpm_openshift-machine-config-operator(0930887a-320c-4506-8c9c-f94d6d64516a)\"" pod="openshift-machine-config-operator/machine-config-daemon-dhfpm" podUID="0930887a-320c-4506-8c9c-f94d6d64516a" Nov 25 12:52:47 crc kubenswrapper[4706]: I1125 12:52:47.927479 4706 scope.go:117] "RemoveContainer" containerID="26d31244857a0be0aea5023b5f648b4e573312d8ff6419d5d6b048bd70f84083" Nov 25 12:52:47 crc kubenswrapper[4706]: E1125 12:52:47.928220 4706 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dhfpm_openshift-machine-config-operator(0930887a-320c-4506-8c9c-f94d6d64516a)\"" pod="openshift-machine-config-operator/machine-config-daemon-dhfpm" podUID="0930887a-320c-4506-8c9c-f94d6d64516a" Nov 25 12:52:55 crc kubenswrapper[4706]: I1125 12:52:55.462326 4706 generic.go:334] "Generic (PLEG): container finished" podID="f12cb3ac-00df-48d8-8a57-ab012d97d481" containerID="7c6f480730951901446414868a5e6fbce5374232af68b4256a939265e5a5377c" exitCode=0 Nov 25 12:52:55 crc kubenswrapper[4706]: I1125 12:52:55.462425 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-gdrsm/must-gather-6mkxl" event={"ID":"f12cb3ac-00df-48d8-8a57-ab012d97d481","Type":"ContainerDied","Data":"7c6f480730951901446414868a5e6fbce5374232af68b4256a939265e5a5377c"} Nov 25 12:52:55 crc kubenswrapper[4706]: I1125 12:52:55.464446 4706 scope.go:117] "RemoveContainer" containerID="7c6f480730951901446414868a5e6fbce5374232af68b4256a939265e5a5377c" Nov 25 12:52:56 crc kubenswrapper[4706]: I1125 12:52:56.232462 4706 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-gdrsm_must-gather-6mkxl_f12cb3ac-00df-48d8-8a57-ab012d97d481/gather/0.log" Nov 25 12:52:59 crc kubenswrapper[4706]: I1125 12:52:59.842096 4706 
kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-g6m66"] Nov 25 12:52:59 crc kubenswrapper[4706]: E1125 12:52:59.843163 4706 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="755fd1a7-2b9b-497f-af7d-81ff7b55bceb" containerName="extract-content" Nov 25 12:52:59 crc kubenswrapper[4706]: I1125 12:52:59.843179 4706 state_mem.go:107] "Deleted CPUSet assignment" podUID="755fd1a7-2b9b-497f-af7d-81ff7b55bceb" containerName="extract-content" Nov 25 12:52:59 crc kubenswrapper[4706]: E1125 12:52:59.843233 4706 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="755fd1a7-2b9b-497f-af7d-81ff7b55bceb" containerName="registry-server" Nov 25 12:52:59 crc kubenswrapper[4706]: I1125 12:52:59.843241 4706 state_mem.go:107] "Deleted CPUSet assignment" podUID="755fd1a7-2b9b-497f-af7d-81ff7b55bceb" containerName="registry-server" Nov 25 12:52:59 crc kubenswrapper[4706]: E1125 12:52:59.843265 4706 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="755fd1a7-2b9b-497f-af7d-81ff7b55bceb" containerName="extract-utilities" Nov 25 12:52:59 crc kubenswrapper[4706]: I1125 12:52:59.843390 4706 state_mem.go:107] "Deleted CPUSet assignment" podUID="755fd1a7-2b9b-497f-af7d-81ff7b55bceb" containerName="extract-utilities" Nov 25 12:52:59 crc kubenswrapper[4706]: I1125 12:52:59.843668 4706 memory_manager.go:354] "RemoveStaleState removing state" podUID="755fd1a7-2b9b-497f-af7d-81ff7b55bceb" containerName="registry-server" Nov 25 12:52:59 crc kubenswrapper[4706]: I1125 12:52:59.845881 4706 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-g6m66" Nov 25 12:52:59 crc kubenswrapper[4706]: I1125 12:52:59.857499 4706 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-g6m66"] Nov 25 12:52:59 crc kubenswrapper[4706]: I1125 12:52:59.922440 4706 scope.go:117] "RemoveContainer" containerID="26d31244857a0be0aea5023b5f648b4e573312d8ff6419d5d6b048bd70f84083" Nov 25 12:52:59 crc kubenswrapper[4706]: E1125 12:52:59.922743 4706 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dhfpm_openshift-machine-config-operator(0930887a-320c-4506-8c9c-f94d6d64516a)\"" pod="openshift-machine-config-operator/machine-config-daemon-dhfpm" podUID="0930887a-320c-4506-8c9c-f94d6d64516a" Nov 25 12:52:59 crc kubenswrapper[4706]: I1125 12:52:59.984771 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cb31f3e9-2faa-49f7-9049-19ee1cabe5a1-catalog-content\") pod \"certified-operators-g6m66\" (UID: \"cb31f3e9-2faa-49f7-9049-19ee1cabe5a1\") " pod="openshift-marketplace/certified-operators-g6m66" Nov 25 12:52:59 crc kubenswrapper[4706]: I1125 12:52:59.984872 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dtnzb\" (UniqueName: \"kubernetes.io/projected/cb31f3e9-2faa-49f7-9049-19ee1cabe5a1-kube-api-access-dtnzb\") pod \"certified-operators-g6m66\" (UID: \"cb31f3e9-2faa-49f7-9049-19ee1cabe5a1\") " pod="openshift-marketplace/certified-operators-g6m66" Nov 25 12:52:59 crc kubenswrapper[4706]: I1125 12:52:59.984903 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/cb31f3e9-2faa-49f7-9049-19ee1cabe5a1-utilities\") pod \"certified-operators-g6m66\" (UID: \"cb31f3e9-2faa-49f7-9049-19ee1cabe5a1\") " pod="openshift-marketplace/certified-operators-g6m66" Nov 25 12:53:00 crc kubenswrapper[4706]: I1125 12:53:00.086441 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cb31f3e9-2faa-49f7-9049-19ee1cabe5a1-catalog-content\") pod \"certified-operators-g6m66\" (UID: \"cb31f3e9-2faa-49f7-9049-19ee1cabe5a1\") " pod="openshift-marketplace/certified-operators-g6m66" Nov 25 12:53:00 crc kubenswrapper[4706]: I1125 12:53:00.086569 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dtnzb\" (UniqueName: \"kubernetes.io/projected/cb31f3e9-2faa-49f7-9049-19ee1cabe5a1-kube-api-access-dtnzb\") pod \"certified-operators-g6m66\" (UID: \"cb31f3e9-2faa-49f7-9049-19ee1cabe5a1\") " pod="openshift-marketplace/certified-operators-g6m66" Nov 25 12:53:00 crc kubenswrapper[4706]: I1125 12:53:00.086607 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cb31f3e9-2faa-49f7-9049-19ee1cabe5a1-utilities\") pod \"certified-operators-g6m66\" (UID: \"cb31f3e9-2faa-49f7-9049-19ee1cabe5a1\") " pod="openshift-marketplace/certified-operators-g6m66" Nov 25 12:53:00 crc kubenswrapper[4706]: I1125 12:53:00.087034 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cb31f3e9-2faa-49f7-9049-19ee1cabe5a1-catalog-content\") pod \"certified-operators-g6m66\" (UID: \"cb31f3e9-2faa-49f7-9049-19ee1cabe5a1\") " pod="openshift-marketplace/certified-operators-g6m66" Nov 25 12:53:00 crc kubenswrapper[4706]: I1125 12:53:00.087654 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/cb31f3e9-2faa-49f7-9049-19ee1cabe5a1-utilities\") pod \"certified-operators-g6m66\" (UID: \"cb31f3e9-2faa-49f7-9049-19ee1cabe5a1\") " pod="openshift-marketplace/certified-operators-g6m66" Nov 25 12:53:00 crc kubenswrapper[4706]: I1125 12:53:00.106677 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dtnzb\" (UniqueName: \"kubernetes.io/projected/cb31f3e9-2faa-49f7-9049-19ee1cabe5a1-kube-api-access-dtnzb\") pod \"certified-operators-g6m66\" (UID: \"cb31f3e9-2faa-49f7-9049-19ee1cabe5a1\") " pod="openshift-marketplace/certified-operators-g6m66" Nov 25 12:53:00 crc kubenswrapper[4706]: I1125 12:53:00.183969 4706 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-g6m66" Nov 25 12:53:00 crc kubenswrapper[4706]: I1125 12:53:00.764278 4706 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-g6m66"] Nov 25 12:53:01 crc kubenswrapper[4706]: I1125 12:53:01.521899 4706 generic.go:334] "Generic (PLEG): container finished" podID="cb31f3e9-2faa-49f7-9049-19ee1cabe5a1" containerID="8285802a8abdd0a8408164191848151cd42a40b6ccdc86c89cda82c5c34653c3" exitCode=0 Nov 25 12:53:01 crc kubenswrapper[4706]: I1125 12:53:01.521945 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-g6m66" event={"ID":"cb31f3e9-2faa-49f7-9049-19ee1cabe5a1","Type":"ContainerDied","Data":"8285802a8abdd0a8408164191848151cd42a40b6ccdc86c89cda82c5c34653c3"} Nov 25 12:53:01 crc kubenswrapper[4706]: I1125 12:53:01.522231 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-g6m66" event={"ID":"cb31f3e9-2faa-49f7-9049-19ee1cabe5a1","Type":"ContainerStarted","Data":"5af2a7af6bb05bdd22e41b86add9cb0bb4ea006293434e50f4e397a86ab052d8"} Nov 25 12:53:01 crc kubenswrapper[4706]: I1125 12:53:01.524149 4706 provider.go:102] Refreshing cache for provider: 
*credentialprovider.defaultDockerConfigProvider Nov 25 12:53:02 crc kubenswrapper[4706]: I1125 12:53:02.534968 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-g6m66" event={"ID":"cb31f3e9-2faa-49f7-9049-19ee1cabe5a1","Type":"ContainerStarted","Data":"23e2b62e81cca7a0b78fd925ce9d31d60cd207f332c9ca8d26ed19e5c864d995"} Nov 25 12:53:03 crc kubenswrapper[4706]: I1125 12:53:03.545975 4706 generic.go:334] "Generic (PLEG): container finished" podID="cb31f3e9-2faa-49f7-9049-19ee1cabe5a1" containerID="23e2b62e81cca7a0b78fd925ce9d31d60cd207f332c9ca8d26ed19e5c864d995" exitCode=0 Nov 25 12:53:03 crc kubenswrapper[4706]: I1125 12:53:03.546268 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-g6m66" event={"ID":"cb31f3e9-2faa-49f7-9049-19ee1cabe5a1","Type":"ContainerDied","Data":"23e2b62e81cca7a0b78fd925ce9d31d60cd207f332c9ca8d26ed19e5c864d995"} Nov 25 12:53:04 crc kubenswrapper[4706]: I1125 12:53:04.559286 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-g6m66" event={"ID":"cb31f3e9-2faa-49f7-9049-19ee1cabe5a1","Type":"ContainerStarted","Data":"9650c0f67e9a924b5dbea8d4c05cd503e38b2ec304a197dba0085230aa1beb7a"} Nov 25 12:53:04 crc kubenswrapper[4706]: I1125 12:53:04.580321 4706 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-g6m66" podStartSLOduration=3.111683968 podStartE2EDuration="5.580281386s" podCreationTimestamp="2025-11-25 12:52:59 +0000 UTC" firstStartedPulling="2025-11-25 12:53:01.523908652 +0000 UTC m=+4590.438466023" lastFinishedPulling="2025-11-25 12:53:03.99250606 +0000 UTC m=+4592.907063441" observedRunningTime="2025-11-25 12:53:04.575633359 +0000 UTC m=+4593.490190750" watchObservedRunningTime="2025-11-25 12:53:04.580281386 +0000 UTC m=+4593.494838767" Nov 25 12:53:06 crc kubenswrapper[4706]: I1125 12:53:06.713259 4706 kubelet.go:2437] 
"SyncLoop DELETE" source="api" pods=["openshift-must-gather-gdrsm/must-gather-6mkxl"] Nov 25 12:53:06 crc kubenswrapper[4706]: I1125 12:53:06.714118 4706 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-must-gather-gdrsm/must-gather-6mkxl" podUID="f12cb3ac-00df-48d8-8a57-ab012d97d481" containerName="copy" containerID="cri-o://cd38d7f0eb91fb224087640fc1b4c1c7fff4d9348794934fec3744c855648b1d" gracePeriod=2 Nov 25 12:53:06 crc kubenswrapper[4706]: I1125 12:53:06.723491 4706 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-gdrsm/must-gather-6mkxl"] Nov 25 12:53:07 crc kubenswrapper[4706]: I1125 12:53:07.174753 4706 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-gdrsm_must-gather-6mkxl_f12cb3ac-00df-48d8-8a57-ab012d97d481/copy/0.log" Nov 25 12:53:07 crc kubenswrapper[4706]: I1125 12:53:07.175376 4706 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-gdrsm/must-gather-6mkxl" Nov 25 12:53:07 crc kubenswrapper[4706]: I1125 12:53:07.326678 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mw2xc\" (UniqueName: \"kubernetes.io/projected/f12cb3ac-00df-48d8-8a57-ab012d97d481-kube-api-access-mw2xc\") pod \"f12cb3ac-00df-48d8-8a57-ab012d97d481\" (UID: \"f12cb3ac-00df-48d8-8a57-ab012d97d481\") " Nov 25 12:53:07 crc kubenswrapper[4706]: I1125 12:53:07.327038 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/f12cb3ac-00df-48d8-8a57-ab012d97d481-must-gather-output\") pod \"f12cb3ac-00df-48d8-8a57-ab012d97d481\" (UID: \"f12cb3ac-00df-48d8-8a57-ab012d97d481\") " Nov 25 12:53:07 crc kubenswrapper[4706]: I1125 12:53:07.334458 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f12cb3ac-00df-48d8-8a57-ab012d97d481-kube-api-access-mw2xc" 
(OuterVolumeSpecName: "kube-api-access-mw2xc") pod "f12cb3ac-00df-48d8-8a57-ab012d97d481" (UID: "f12cb3ac-00df-48d8-8a57-ab012d97d481"). InnerVolumeSpecName "kube-api-access-mw2xc". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 12:53:07 crc kubenswrapper[4706]: I1125 12:53:07.430982 4706 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mw2xc\" (UniqueName: \"kubernetes.io/projected/f12cb3ac-00df-48d8-8a57-ab012d97d481-kube-api-access-mw2xc\") on node \"crc\" DevicePath \"\"" Nov 25 12:53:07 crc kubenswrapper[4706]: I1125 12:53:07.476415 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f12cb3ac-00df-48d8-8a57-ab012d97d481-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "f12cb3ac-00df-48d8-8a57-ab012d97d481" (UID: "f12cb3ac-00df-48d8-8a57-ab012d97d481"). InnerVolumeSpecName "must-gather-output". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 12:53:07 crc kubenswrapper[4706]: I1125 12:53:07.532769 4706 reconciler_common.go:293] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/f12cb3ac-00df-48d8-8a57-ab012d97d481-must-gather-output\") on node \"crc\" DevicePath \"\"" Nov 25 12:53:07 crc kubenswrapper[4706]: I1125 12:53:07.587617 4706 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-gdrsm_must-gather-6mkxl_f12cb3ac-00df-48d8-8a57-ab012d97d481/copy/0.log" Nov 25 12:53:07 crc kubenswrapper[4706]: I1125 12:53:07.588039 4706 generic.go:334] "Generic (PLEG): container finished" podID="f12cb3ac-00df-48d8-8a57-ab012d97d481" containerID="cd38d7f0eb91fb224087640fc1b4c1c7fff4d9348794934fec3744c855648b1d" exitCode=143 Nov 25 12:53:07 crc kubenswrapper[4706]: I1125 12:53:07.588117 4706 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-gdrsm/must-gather-6mkxl" Nov 25 12:53:07 crc kubenswrapper[4706]: I1125 12:53:07.588116 4706 scope.go:117] "RemoveContainer" containerID="cd38d7f0eb91fb224087640fc1b4c1c7fff4d9348794934fec3744c855648b1d" Nov 25 12:53:07 crc kubenswrapper[4706]: I1125 12:53:07.608860 4706 scope.go:117] "RemoveContainer" containerID="7c6f480730951901446414868a5e6fbce5374232af68b4256a939265e5a5377c" Nov 25 12:53:07 crc kubenswrapper[4706]: I1125 12:53:07.711045 4706 scope.go:117] "RemoveContainer" containerID="cd38d7f0eb91fb224087640fc1b4c1c7fff4d9348794934fec3744c855648b1d" Nov 25 12:53:07 crc kubenswrapper[4706]: E1125 12:53:07.711819 4706 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cd38d7f0eb91fb224087640fc1b4c1c7fff4d9348794934fec3744c855648b1d\": container with ID starting with cd38d7f0eb91fb224087640fc1b4c1c7fff4d9348794934fec3744c855648b1d not found: ID does not exist" containerID="cd38d7f0eb91fb224087640fc1b4c1c7fff4d9348794934fec3744c855648b1d" Nov 25 12:53:07 crc kubenswrapper[4706]: I1125 12:53:07.711858 4706 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cd38d7f0eb91fb224087640fc1b4c1c7fff4d9348794934fec3744c855648b1d"} err="failed to get container status \"cd38d7f0eb91fb224087640fc1b4c1c7fff4d9348794934fec3744c855648b1d\": rpc error: code = NotFound desc = could not find container \"cd38d7f0eb91fb224087640fc1b4c1c7fff4d9348794934fec3744c855648b1d\": container with ID starting with cd38d7f0eb91fb224087640fc1b4c1c7fff4d9348794934fec3744c855648b1d not found: ID does not exist" Nov 25 12:53:07 crc kubenswrapper[4706]: I1125 12:53:07.711886 4706 scope.go:117] "RemoveContainer" containerID="7c6f480730951901446414868a5e6fbce5374232af68b4256a939265e5a5377c" Nov 25 12:53:07 crc kubenswrapper[4706]: E1125 12:53:07.712373 4706 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = 
NotFound desc = could not find container \"7c6f480730951901446414868a5e6fbce5374232af68b4256a939265e5a5377c\": container with ID starting with 7c6f480730951901446414868a5e6fbce5374232af68b4256a939265e5a5377c not found: ID does not exist" containerID="7c6f480730951901446414868a5e6fbce5374232af68b4256a939265e5a5377c" Nov 25 12:53:07 crc kubenswrapper[4706]: I1125 12:53:07.712423 4706 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7c6f480730951901446414868a5e6fbce5374232af68b4256a939265e5a5377c"} err="failed to get container status \"7c6f480730951901446414868a5e6fbce5374232af68b4256a939265e5a5377c\": rpc error: code = NotFound desc = could not find container \"7c6f480730951901446414868a5e6fbce5374232af68b4256a939265e5a5377c\": container with ID starting with 7c6f480730951901446414868a5e6fbce5374232af68b4256a939265e5a5377c not found: ID does not exist" Nov 25 12:53:07 crc kubenswrapper[4706]: I1125 12:53:07.931956 4706 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f12cb3ac-00df-48d8-8a57-ab012d97d481" path="/var/lib/kubelet/pods/f12cb3ac-00df-48d8-8a57-ab012d97d481/volumes" Nov 25 12:53:10 crc kubenswrapper[4706]: I1125 12:53:10.184131 4706 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-g6m66" Nov 25 12:53:10 crc kubenswrapper[4706]: I1125 12:53:10.185202 4706 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-g6m66" Nov 25 12:53:10 crc kubenswrapper[4706]: I1125 12:53:10.244932 4706 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-g6m66" Nov 25 12:53:10 crc kubenswrapper[4706]: I1125 12:53:10.672264 4706 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-g6m66" Nov 25 12:53:11 crc kubenswrapper[4706]: I1125 12:53:11.415546 4706 
kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-g6m66"] Nov 25 12:53:12 crc kubenswrapper[4706]: I1125 12:53:12.638818 4706 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-g6m66" podUID="cb31f3e9-2faa-49f7-9049-19ee1cabe5a1" containerName="registry-server" containerID="cri-o://9650c0f67e9a924b5dbea8d4c05cd503e38b2ec304a197dba0085230aa1beb7a" gracePeriod=2 Nov 25 12:53:13 crc kubenswrapper[4706]: I1125 12:53:13.652612 4706 generic.go:334] "Generic (PLEG): container finished" podID="cb31f3e9-2faa-49f7-9049-19ee1cabe5a1" containerID="9650c0f67e9a924b5dbea8d4c05cd503e38b2ec304a197dba0085230aa1beb7a" exitCode=0 Nov 25 12:53:13 crc kubenswrapper[4706]: I1125 12:53:13.652821 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-g6m66" event={"ID":"cb31f3e9-2faa-49f7-9049-19ee1cabe5a1","Type":"ContainerDied","Data":"9650c0f67e9a924b5dbea8d4c05cd503e38b2ec304a197dba0085230aa1beb7a"} Nov 25 12:53:13 crc kubenswrapper[4706]: I1125 12:53:13.922743 4706 scope.go:117] "RemoveContainer" containerID="26d31244857a0be0aea5023b5f648b4e573312d8ff6419d5d6b048bd70f84083" Nov 25 12:53:13 crc kubenswrapper[4706]: E1125 12:53:13.923045 4706 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dhfpm_openshift-machine-config-operator(0930887a-320c-4506-8c9c-f94d6d64516a)\"" pod="openshift-machine-config-operator/machine-config-daemon-dhfpm" podUID="0930887a-320c-4506-8c9c-f94d6d64516a" Nov 25 12:53:14 crc kubenswrapper[4706]: I1125 12:53:14.420247 4706 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-g6m66" Nov 25 12:53:14 crc kubenswrapper[4706]: I1125 12:53:14.566525 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dtnzb\" (UniqueName: \"kubernetes.io/projected/cb31f3e9-2faa-49f7-9049-19ee1cabe5a1-kube-api-access-dtnzb\") pod \"cb31f3e9-2faa-49f7-9049-19ee1cabe5a1\" (UID: \"cb31f3e9-2faa-49f7-9049-19ee1cabe5a1\") " Nov 25 12:53:14 crc kubenswrapper[4706]: I1125 12:53:14.566747 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cb31f3e9-2faa-49f7-9049-19ee1cabe5a1-catalog-content\") pod \"cb31f3e9-2faa-49f7-9049-19ee1cabe5a1\" (UID: \"cb31f3e9-2faa-49f7-9049-19ee1cabe5a1\") " Nov 25 12:53:14 crc kubenswrapper[4706]: I1125 12:53:14.566787 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cb31f3e9-2faa-49f7-9049-19ee1cabe5a1-utilities\") pod \"cb31f3e9-2faa-49f7-9049-19ee1cabe5a1\" (UID: \"cb31f3e9-2faa-49f7-9049-19ee1cabe5a1\") " Nov 25 12:53:14 crc kubenswrapper[4706]: I1125 12:53:14.567790 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cb31f3e9-2faa-49f7-9049-19ee1cabe5a1-utilities" (OuterVolumeSpecName: "utilities") pod "cb31f3e9-2faa-49f7-9049-19ee1cabe5a1" (UID: "cb31f3e9-2faa-49f7-9049-19ee1cabe5a1"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 12:53:14 crc kubenswrapper[4706]: I1125 12:53:14.572203 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cb31f3e9-2faa-49f7-9049-19ee1cabe5a1-kube-api-access-dtnzb" (OuterVolumeSpecName: "kube-api-access-dtnzb") pod "cb31f3e9-2faa-49f7-9049-19ee1cabe5a1" (UID: "cb31f3e9-2faa-49f7-9049-19ee1cabe5a1"). InnerVolumeSpecName "kube-api-access-dtnzb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 12:53:14 crc kubenswrapper[4706]: I1125 12:53:14.612987 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cb31f3e9-2faa-49f7-9049-19ee1cabe5a1-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "cb31f3e9-2faa-49f7-9049-19ee1cabe5a1" (UID: "cb31f3e9-2faa-49f7-9049-19ee1cabe5a1"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 12:53:14 crc kubenswrapper[4706]: I1125 12:53:14.663792 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-g6m66" event={"ID":"cb31f3e9-2faa-49f7-9049-19ee1cabe5a1","Type":"ContainerDied","Data":"5af2a7af6bb05bdd22e41b86add9cb0bb4ea006293434e50f4e397a86ab052d8"} Nov 25 12:53:14 crc kubenswrapper[4706]: I1125 12:53:14.663859 4706 scope.go:117] "RemoveContainer" containerID="9650c0f67e9a924b5dbea8d4c05cd503e38b2ec304a197dba0085230aa1beb7a" Nov 25 12:53:14 crc kubenswrapper[4706]: I1125 12:53:14.663821 4706 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-g6m66" Nov 25 12:53:14 crc kubenswrapper[4706]: I1125 12:53:14.668568 4706 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cb31f3e9-2faa-49f7-9049-19ee1cabe5a1-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 25 12:53:14 crc kubenswrapper[4706]: I1125 12:53:14.668598 4706 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cb31f3e9-2faa-49f7-9049-19ee1cabe5a1-utilities\") on node \"crc\" DevicePath \"\"" Nov 25 12:53:14 crc kubenswrapper[4706]: I1125 12:53:14.668609 4706 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dtnzb\" (UniqueName: \"kubernetes.io/projected/cb31f3e9-2faa-49f7-9049-19ee1cabe5a1-kube-api-access-dtnzb\") on node \"crc\" DevicePath \"\"" Nov 25 12:53:14 crc kubenswrapper[4706]: I1125 12:53:14.699166 4706 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-g6m66"] Nov 25 12:53:14 crc kubenswrapper[4706]: I1125 12:53:14.700395 4706 scope.go:117] "RemoveContainer" containerID="23e2b62e81cca7a0b78fd925ce9d31d60cd207f332c9ca8d26ed19e5c864d995" Nov 25 12:53:14 crc kubenswrapper[4706]: I1125 12:53:14.709102 4706 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-g6m66"] Nov 25 12:53:14 crc kubenswrapper[4706]: I1125 12:53:14.725114 4706 scope.go:117] "RemoveContainer" containerID="8285802a8abdd0a8408164191848151cd42a40b6ccdc86c89cda82c5c34653c3" Nov 25 12:53:15 crc kubenswrapper[4706]: I1125 12:53:15.935225 4706 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cb31f3e9-2faa-49f7-9049-19ee1cabe5a1" path="/var/lib/kubelet/pods/cb31f3e9-2faa-49f7-9049-19ee1cabe5a1/volumes" Nov 25 12:53:27 crc kubenswrapper[4706]: I1125 12:53:27.922320 4706 scope.go:117] "RemoveContainer" 
containerID="26d31244857a0be0aea5023b5f648b4e573312d8ff6419d5d6b048bd70f84083" Nov 25 12:53:27 crc kubenswrapper[4706]: E1125 12:53:27.923194 4706 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dhfpm_openshift-machine-config-operator(0930887a-320c-4506-8c9c-f94d6d64516a)\"" pod="openshift-machine-config-operator/machine-config-daemon-dhfpm" podUID="0930887a-320c-4506-8c9c-f94d6d64516a" Nov 25 12:53:41 crc kubenswrapper[4706]: I1125 12:53:41.929203 4706 scope.go:117] "RemoveContainer" containerID="26d31244857a0be0aea5023b5f648b4e573312d8ff6419d5d6b048bd70f84083" Nov 25 12:53:41 crc kubenswrapper[4706]: E1125 12:53:41.929946 4706 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dhfpm_openshift-machine-config-operator(0930887a-320c-4506-8c9c-f94d6d64516a)\"" pod="openshift-machine-config-operator/machine-config-daemon-dhfpm" podUID="0930887a-320c-4506-8c9c-f94d6d64516a" Nov 25 12:53:52 crc kubenswrapper[4706]: I1125 12:53:52.924516 4706 scope.go:117] "RemoveContainer" containerID="26d31244857a0be0aea5023b5f648b4e573312d8ff6419d5d6b048bd70f84083" Nov 25 12:53:52 crc kubenswrapper[4706]: E1125 12:53:52.925414 4706 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dhfpm_openshift-machine-config-operator(0930887a-320c-4506-8c9c-f94d6d64516a)\"" pod="openshift-machine-config-operator/machine-config-daemon-dhfpm" podUID="0930887a-320c-4506-8c9c-f94d6d64516a" Nov 25 12:54:04 crc kubenswrapper[4706]: I1125 12:54:04.923049 4706 scope.go:117] 
"RemoveContainer" containerID="26d31244857a0be0aea5023b5f648b4e573312d8ff6419d5d6b048bd70f84083" Nov 25 12:54:04 crc kubenswrapper[4706]: E1125 12:54:04.923852 4706 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dhfpm_openshift-machine-config-operator(0930887a-320c-4506-8c9c-f94d6d64516a)\"" pod="openshift-machine-config-operator/machine-config-daemon-dhfpm" podUID="0930887a-320c-4506-8c9c-f94d6d64516a" Nov 25 12:54:17 crc kubenswrapper[4706]: I1125 12:54:17.922638 4706 scope.go:117] "RemoveContainer" containerID="26d31244857a0be0aea5023b5f648b4e573312d8ff6419d5d6b048bd70f84083" Nov 25 12:54:17 crc kubenswrapper[4706]: E1125 12:54:17.923347 4706 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dhfpm_openshift-machine-config-operator(0930887a-320c-4506-8c9c-f94d6d64516a)\"" pod="openshift-machine-config-operator/machine-config-daemon-dhfpm" podUID="0930887a-320c-4506-8c9c-f94d6d64516a" Nov 25 12:54:28 crc kubenswrapper[4706]: I1125 12:54:28.923286 4706 scope.go:117] "RemoveContainer" containerID="26d31244857a0be0aea5023b5f648b4e573312d8ff6419d5d6b048bd70f84083" Nov 25 12:54:28 crc kubenswrapper[4706]: E1125 12:54:28.924219 4706 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dhfpm_openshift-machine-config-operator(0930887a-320c-4506-8c9c-f94d6d64516a)\"" pod="openshift-machine-config-operator/machine-config-daemon-dhfpm" podUID="0930887a-320c-4506-8c9c-f94d6d64516a" Nov 25 12:54:38 crc kubenswrapper[4706]: I1125 12:54:38.562270 
4706 scope.go:117] "RemoveContainer" containerID="5e099d2ca034c736e522c65f7fd2981ea02baf10f16322960b6d60756eb95235" Nov 25 12:54:43 crc kubenswrapper[4706]: I1125 12:54:43.922357 4706 scope.go:117] "RemoveContainer" containerID="26d31244857a0be0aea5023b5f648b4e573312d8ff6419d5d6b048bd70f84083" Nov 25 12:54:44 crc kubenswrapper[4706]: I1125 12:54:44.520507 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-dhfpm" event={"ID":"0930887a-320c-4506-8c9c-f94d6d64516a","Type":"ContainerStarted","Data":"c3bc9e81b5ea17934f8e25547de622220f3a35a5215466283d617c6fcc5cb452"} Nov 25 12:55:04 crc kubenswrapper[4706]: I1125 12:55:04.816087 4706 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-2jm44"] Nov 25 12:55:04 crc kubenswrapper[4706]: E1125 12:55:04.818018 4706 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cb31f3e9-2faa-49f7-9049-19ee1cabe5a1" containerName="extract-content" Nov 25 12:55:04 crc kubenswrapper[4706]: I1125 12:55:04.818057 4706 state_mem.go:107] "Deleted CPUSet assignment" podUID="cb31f3e9-2faa-49f7-9049-19ee1cabe5a1" containerName="extract-content" Nov 25 12:55:04 crc kubenswrapper[4706]: E1125 12:55:04.818084 4706 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cb31f3e9-2faa-49f7-9049-19ee1cabe5a1" containerName="extract-utilities" Nov 25 12:55:04 crc kubenswrapper[4706]: I1125 12:55:04.818096 4706 state_mem.go:107] "Deleted CPUSet assignment" podUID="cb31f3e9-2faa-49f7-9049-19ee1cabe5a1" containerName="extract-utilities" Nov 25 12:55:04 crc kubenswrapper[4706]: E1125 12:55:04.818139 4706 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f12cb3ac-00df-48d8-8a57-ab012d97d481" containerName="copy" Nov 25 12:55:04 crc kubenswrapper[4706]: I1125 12:55:04.818149 4706 state_mem.go:107] "Deleted CPUSet assignment" podUID="f12cb3ac-00df-48d8-8a57-ab012d97d481" containerName="copy" Nov 25 12:55:04 crc 
kubenswrapper[4706]: E1125 12:55:04.818171 4706 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f12cb3ac-00df-48d8-8a57-ab012d97d481" containerName="gather" Nov 25 12:55:04 crc kubenswrapper[4706]: I1125 12:55:04.818182 4706 state_mem.go:107] "Deleted CPUSet assignment" podUID="f12cb3ac-00df-48d8-8a57-ab012d97d481" containerName="gather" Nov 25 12:55:04 crc kubenswrapper[4706]: E1125 12:55:04.818230 4706 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cb31f3e9-2faa-49f7-9049-19ee1cabe5a1" containerName="registry-server" Nov 25 12:55:04 crc kubenswrapper[4706]: I1125 12:55:04.818242 4706 state_mem.go:107] "Deleted CPUSet assignment" podUID="cb31f3e9-2faa-49f7-9049-19ee1cabe5a1" containerName="registry-server" Nov 25 12:55:04 crc kubenswrapper[4706]: I1125 12:55:04.818645 4706 memory_manager.go:354] "RemoveStaleState removing state" podUID="cb31f3e9-2faa-49f7-9049-19ee1cabe5a1" containerName="registry-server" Nov 25 12:55:04 crc kubenswrapper[4706]: I1125 12:55:04.818681 4706 memory_manager.go:354] "RemoveStaleState removing state" podUID="f12cb3ac-00df-48d8-8a57-ab012d97d481" containerName="copy" Nov 25 12:55:04 crc kubenswrapper[4706]: I1125 12:55:04.818721 4706 memory_manager.go:354] "RemoveStaleState removing state" podUID="f12cb3ac-00df-48d8-8a57-ab012d97d481" containerName="gather" Nov 25 12:55:04 crc kubenswrapper[4706]: I1125 12:55:04.820968 4706 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-2jm44"
Nov 25 12:55:04 crc kubenswrapper[4706]: I1125 12:55:04.846443 4706 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-2jm44"]
Nov 25 12:55:04 crc kubenswrapper[4706]: I1125 12:55:04.860756 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/713907e0-63aa-44f5-a8a2-816ca482ac29-utilities\") pod \"redhat-marketplace-2jm44\" (UID: \"713907e0-63aa-44f5-a8a2-816ca482ac29\") " pod="openshift-marketplace/redhat-marketplace-2jm44"
Nov 25 12:55:04 crc kubenswrapper[4706]: I1125 12:55:04.861040 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hkfsx\" (UniqueName: \"kubernetes.io/projected/713907e0-63aa-44f5-a8a2-816ca482ac29-kube-api-access-hkfsx\") pod \"redhat-marketplace-2jm44\" (UID: \"713907e0-63aa-44f5-a8a2-816ca482ac29\") " pod="openshift-marketplace/redhat-marketplace-2jm44"
Nov 25 12:55:04 crc kubenswrapper[4706]: I1125 12:55:04.861148 4706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/713907e0-63aa-44f5-a8a2-816ca482ac29-catalog-content\") pod \"redhat-marketplace-2jm44\" (UID: \"713907e0-63aa-44f5-a8a2-816ca482ac29\") " pod="openshift-marketplace/redhat-marketplace-2jm44"
Nov 25 12:55:04 crc kubenswrapper[4706]: I1125 12:55:04.963052 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/713907e0-63aa-44f5-a8a2-816ca482ac29-catalog-content\") pod \"redhat-marketplace-2jm44\" (UID: \"713907e0-63aa-44f5-a8a2-816ca482ac29\") " pod="openshift-marketplace/redhat-marketplace-2jm44"
Nov 25 12:55:04 crc kubenswrapper[4706]: I1125 12:55:04.963164 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/713907e0-63aa-44f5-a8a2-816ca482ac29-utilities\") pod \"redhat-marketplace-2jm44\" (UID: \"713907e0-63aa-44f5-a8a2-816ca482ac29\") " pod="openshift-marketplace/redhat-marketplace-2jm44"
Nov 25 12:55:04 crc kubenswrapper[4706]: I1125 12:55:04.963328 4706 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hkfsx\" (UniqueName: \"kubernetes.io/projected/713907e0-63aa-44f5-a8a2-816ca482ac29-kube-api-access-hkfsx\") pod \"redhat-marketplace-2jm44\" (UID: \"713907e0-63aa-44f5-a8a2-816ca482ac29\") " pod="openshift-marketplace/redhat-marketplace-2jm44"
Nov 25 12:55:04 crc kubenswrapper[4706]: I1125 12:55:04.964246 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/713907e0-63aa-44f5-a8a2-816ca482ac29-utilities\") pod \"redhat-marketplace-2jm44\" (UID: \"713907e0-63aa-44f5-a8a2-816ca482ac29\") " pod="openshift-marketplace/redhat-marketplace-2jm44"
Nov 25 12:55:04 crc kubenswrapper[4706]: I1125 12:55:04.964336 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/713907e0-63aa-44f5-a8a2-816ca482ac29-catalog-content\") pod \"redhat-marketplace-2jm44\" (UID: \"713907e0-63aa-44f5-a8a2-816ca482ac29\") " pod="openshift-marketplace/redhat-marketplace-2jm44"
Nov 25 12:55:04 crc kubenswrapper[4706]: I1125 12:55:04.993258 4706 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hkfsx\" (UniqueName: \"kubernetes.io/projected/713907e0-63aa-44f5-a8a2-816ca482ac29-kube-api-access-hkfsx\") pod \"redhat-marketplace-2jm44\" (UID: \"713907e0-63aa-44f5-a8a2-816ca482ac29\") " pod="openshift-marketplace/redhat-marketplace-2jm44"
Nov 25 12:55:05 crc kubenswrapper[4706]: I1125 12:55:05.145249 4706 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-2jm44"
Nov 25 12:55:05 crc kubenswrapper[4706]: I1125 12:55:05.594804 4706 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-2jm44"]
Nov 25 12:55:05 crc kubenswrapper[4706]: I1125 12:55:05.740101 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2jm44" event={"ID":"713907e0-63aa-44f5-a8a2-816ca482ac29","Type":"ContainerStarted","Data":"1e13923f16bedea07252c5c80365ed052d14a5ca9c61f4f653a68de408a18641"}
Nov 25 12:55:06 crc kubenswrapper[4706]: I1125 12:55:06.751527 4706 generic.go:334] "Generic (PLEG): container finished" podID="713907e0-63aa-44f5-a8a2-816ca482ac29" containerID="b2f2a24587574b8311fa3743ca34d4630bb835e4598672b57f263192927102fe" exitCode=0
Nov 25 12:55:06 crc kubenswrapper[4706]: I1125 12:55:06.751608 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2jm44" event={"ID":"713907e0-63aa-44f5-a8a2-816ca482ac29","Type":"ContainerDied","Data":"b2f2a24587574b8311fa3743ca34d4630bb835e4598672b57f263192927102fe"}
Nov 25 12:55:08 crc kubenswrapper[4706]: I1125 12:55:08.770512 4706 generic.go:334] "Generic (PLEG): container finished" podID="713907e0-63aa-44f5-a8a2-816ca482ac29" containerID="50c9a7670fc56f506ba2044e55f6599f871c8bca70492e705e0491e448c79bac" exitCode=0
Nov 25 12:55:08 crc kubenswrapper[4706]: I1125 12:55:08.770659 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2jm44" event={"ID":"713907e0-63aa-44f5-a8a2-816ca482ac29","Type":"ContainerDied","Data":"50c9a7670fc56f506ba2044e55f6599f871c8bca70492e705e0491e448c79bac"}
Nov 25 12:55:09 crc kubenswrapper[4706]: I1125 12:55:09.782572 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2jm44" event={"ID":"713907e0-63aa-44f5-a8a2-816ca482ac29","Type":"ContainerStarted","Data":"f666f962658c0b2068b948c92f96b96915b2fdbb2103076a0fa3493eb77deae2"}
Nov 25 12:55:09 crc kubenswrapper[4706]: I1125 12:55:09.806408 4706 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-2jm44" podStartSLOduration=3.353583442 podStartE2EDuration="5.806375643s" podCreationTimestamp="2025-11-25 12:55:04 +0000 UTC" firstStartedPulling="2025-11-25 12:55:06.753450645 +0000 UTC m=+4715.668008026" lastFinishedPulling="2025-11-25 12:55:09.206242846 +0000 UTC m=+4718.120800227" observedRunningTime="2025-11-25 12:55:09.801744336 +0000 UTC m=+4718.716301717" watchObservedRunningTime="2025-11-25 12:55:09.806375643 +0000 UTC m=+4718.720933024"
Nov 25 12:55:15 crc kubenswrapper[4706]: I1125 12:55:15.145520 4706 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-2jm44"
Nov 25 12:55:15 crc kubenswrapper[4706]: I1125 12:55:15.146101 4706 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-2jm44"
Nov 25 12:55:15 crc kubenswrapper[4706]: I1125 12:55:15.199495 4706 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-2jm44"
Nov 25 12:55:16 crc kubenswrapper[4706]: I1125 12:55:16.559037 4706 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-2jm44"
Nov 25 12:55:16 crc kubenswrapper[4706]: I1125 12:55:16.608379 4706 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-2jm44"]
Nov 25 12:55:17 crc kubenswrapper[4706]: I1125 12:55:17.856258 4706 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-2jm44" podUID="713907e0-63aa-44f5-a8a2-816ca482ac29" containerName="registry-server" containerID="cri-o://f666f962658c0b2068b948c92f96b96915b2fdbb2103076a0fa3493eb77deae2" gracePeriod=2
Nov 25 12:55:18 crc kubenswrapper[4706]: I1125 12:55:18.633096 4706 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-2jm44"
Nov 25 12:55:18 crc kubenswrapper[4706]: I1125 12:55:18.769609 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hkfsx\" (UniqueName: \"kubernetes.io/projected/713907e0-63aa-44f5-a8a2-816ca482ac29-kube-api-access-hkfsx\") pod \"713907e0-63aa-44f5-a8a2-816ca482ac29\" (UID: \"713907e0-63aa-44f5-a8a2-816ca482ac29\") "
Nov 25 12:55:18 crc kubenswrapper[4706]: I1125 12:55:18.769676 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/713907e0-63aa-44f5-a8a2-816ca482ac29-utilities\") pod \"713907e0-63aa-44f5-a8a2-816ca482ac29\" (UID: \"713907e0-63aa-44f5-a8a2-816ca482ac29\") "
Nov 25 12:55:18 crc kubenswrapper[4706]: I1125 12:55:18.769751 4706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/713907e0-63aa-44f5-a8a2-816ca482ac29-catalog-content\") pod \"713907e0-63aa-44f5-a8a2-816ca482ac29\" (UID: \"713907e0-63aa-44f5-a8a2-816ca482ac29\") "
Nov 25 12:55:18 crc kubenswrapper[4706]: I1125 12:55:18.770814 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/713907e0-63aa-44f5-a8a2-816ca482ac29-utilities" (OuterVolumeSpecName: "utilities") pod "713907e0-63aa-44f5-a8a2-816ca482ac29" (UID: "713907e0-63aa-44f5-a8a2-816ca482ac29"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 25 12:55:18 crc kubenswrapper[4706]: I1125 12:55:18.775486 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/713907e0-63aa-44f5-a8a2-816ca482ac29-kube-api-access-hkfsx" (OuterVolumeSpecName: "kube-api-access-hkfsx") pod "713907e0-63aa-44f5-a8a2-816ca482ac29" (UID: "713907e0-63aa-44f5-a8a2-816ca482ac29"). InnerVolumeSpecName "kube-api-access-hkfsx". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 25 12:55:18 crc kubenswrapper[4706]: I1125 12:55:18.792596 4706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/713907e0-63aa-44f5-a8a2-816ca482ac29-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "713907e0-63aa-44f5-a8a2-816ca482ac29" (UID: "713907e0-63aa-44f5-a8a2-816ca482ac29"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 25 12:55:18 crc kubenswrapper[4706]: I1125 12:55:18.866822 4706 generic.go:334] "Generic (PLEG): container finished" podID="713907e0-63aa-44f5-a8a2-816ca482ac29" containerID="f666f962658c0b2068b948c92f96b96915b2fdbb2103076a0fa3493eb77deae2" exitCode=0
Nov 25 12:55:18 crc kubenswrapper[4706]: I1125 12:55:18.866869 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2jm44" event={"ID":"713907e0-63aa-44f5-a8a2-816ca482ac29","Type":"ContainerDied","Data":"f666f962658c0b2068b948c92f96b96915b2fdbb2103076a0fa3493eb77deae2"}
Nov 25 12:55:18 crc kubenswrapper[4706]: I1125 12:55:18.866919 4706 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2jm44" event={"ID":"713907e0-63aa-44f5-a8a2-816ca482ac29","Type":"ContainerDied","Data":"1e13923f16bedea07252c5c80365ed052d14a5ca9c61f4f653a68de408a18641"}
Nov 25 12:55:18 crc kubenswrapper[4706]: I1125 12:55:18.866930 4706 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-2jm44"
Nov 25 12:55:18 crc kubenswrapper[4706]: I1125 12:55:18.866941 4706 scope.go:117] "RemoveContainer" containerID="f666f962658c0b2068b948c92f96b96915b2fdbb2103076a0fa3493eb77deae2"
Nov 25 12:55:18 crc kubenswrapper[4706]: I1125 12:55:18.872239 4706 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hkfsx\" (UniqueName: \"kubernetes.io/projected/713907e0-63aa-44f5-a8a2-816ca482ac29-kube-api-access-hkfsx\") on node \"crc\" DevicePath \"\""
Nov 25 12:55:18 crc kubenswrapper[4706]: I1125 12:55:18.872333 4706 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/713907e0-63aa-44f5-a8a2-816ca482ac29-utilities\") on node \"crc\" DevicePath \"\""
Nov 25 12:55:18 crc kubenswrapper[4706]: I1125 12:55:18.872359 4706 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/713907e0-63aa-44f5-a8a2-816ca482ac29-catalog-content\") on node \"crc\" DevicePath \"\""
Nov 25 12:55:18 crc kubenswrapper[4706]: I1125 12:55:18.888260 4706 scope.go:117] "RemoveContainer" containerID="50c9a7670fc56f506ba2044e55f6599f871c8bca70492e705e0491e448c79bac"
Nov 25 12:55:18 crc kubenswrapper[4706]: I1125 12:55:18.913071 4706 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-2jm44"]
Nov 25 12:55:18 crc kubenswrapper[4706]: I1125 12:55:18.913350 4706 scope.go:117] "RemoveContainer" containerID="b2f2a24587574b8311fa3743ca34d4630bb835e4598672b57f263192927102fe"
Nov 25 12:55:18 crc kubenswrapper[4706]: I1125 12:55:18.922682 4706 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-2jm44"]
Nov 25 12:55:18 crc kubenswrapper[4706]: I1125 12:55:18.965397 4706 scope.go:117] "RemoveContainer" containerID="f666f962658c0b2068b948c92f96b96915b2fdbb2103076a0fa3493eb77deae2"
Nov 25 12:55:18 crc kubenswrapper[4706]: E1125 12:55:18.965863 4706 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f666f962658c0b2068b948c92f96b96915b2fdbb2103076a0fa3493eb77deae2\": container with ID starting with f666f962658c0b2068b948c92f96b96915b2fdbb2103076a0fa3493eb77deae2 not found: ID does not exist" containerID="f666f962658c0b2068b948c92f96b96915b2fdbb2103076a0fa3493eb77deae2"
Nov 25 12:55:18 crc kubenswrapper[4706]: I1125 12:55:18.965913 4706 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f666f962658c0b2068b948c92f96b96915b2fdbb2103076a0fa3493eb77deae2"} err="failed to get container status \"f666f962658c0b2068b948c92f96b96915b2fdbb2103076a0fa3493eb77deae2\": rpc error: code = NotFound desc = could not find container \"f666f962658c0b2068b948c92f96b96915b2fdbb2103076a0fa3493eb77deae2\": container with ID starting with f666f962658c0b2068b948c92f96b96915b2fdbb2103076a0fa3493eb77deae2 not found: ID does not exist"
Nov 25 12:55:18 crc kubenswrapper[4706]: I1125 12:55:18.965950 4706 scope.go:117] "RemoveContainer" containerID="50c9a7670fc56f506ba2044e55f6599f871c8bca70492e705e0491e448c79bac"
Nov 25 12:55:18 crc kubenswrapper[4706]: E1125 12:55:18.966331 4706 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"50c9a7670fc56f506ba2044e55f6599f871c8bca70492e705e0491e448c79bac\": container with ID starting with 50c9a7670fc56f506ba2044e55f6599f871c8bca70492e705e0491e448c79bac not found: ID does not exist" containerID="50c9a7670fc56f506ba2044e55f6599f871c8bca70492e705e0491e448c79bac"
Nov 25 12:55:18 crc kubenswrapper[4706]: I1125 12:55:18.966365 4706 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"50c9a7670fc56f506ba2044e55f6599f871c8bca70492e705e0491e448c79bac"} err="failed to get container status \"50c9a7670fc56f506ba2044e55f6599f871c8bca70492e705e0491e448c79bac\": rpc error: code = NotFound desc = could not find container \"50c9a7670fc56f506ba2044e55f6599f871c8bca70492e705e0491e448c79bac\": container with ID starting with 50c9a7670fc56f506ba2044e55f6599f871c8bca70492e705e0491e448c79bac not found: ID does not exist"
Nov 25 12:55:18 crc kubenswrapper[4706]: I1125 12:55:18.966389 4706 scope.go:117] "RemoveContainer" containerID="b2f2a24587574b8311fa3743ca34d4630bb835e4598672b57f263192927102fe"
Nov 25 12:55:18 crc kubenswrapper[4706]: E1125 12:55:18.966683 4706 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b2f2a24587574b8311fa3743ca34d4630bb835e4598672b57f263192927102fe\": container with ID starting with b2f2a24587574b8311fa3743ca34d4630bb835e4598672b57f263192927102fe not found: ID does not exist" containerID="b2f2a24587574b8311fa3743ca34d4630bb835e4598672b57f263192927102fe"
Nov 25 12:55:18 crc kubenswrapper[4706]: I1125 12:55:18.966719 4706 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b2f2a24587574b8311fa3743ca34d4630bb835e4598672b57f263192927102fe"} err="failed to get container status \"b2f2a24587574b8311fa3743ca34d4630bb835e4598672b57f263192927102fe\": rpc error: code = NotFound desc = could not find container \"b2f2a24587574b8311fa3743ca34d4630bb835e4598672b57f263192927102fe\": container with ID starting with b2f2a24587574b8311fa3743ca34d4630bb835e4598672b57f263192927102fe not found: ID does not exist"
Nov 25 12:55:19 crc kubenswrapper[4706]: I1125 12:55:19.935779 4706 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="713907e0-63aa-44f5-a8a2-816ca482ac29" path="/var/lib/kubelet/pods/713907e0-63aa-44f5-a8a2-816ca482ac29/volumes"